20.12.05
Because I had actual work to do, I took some time to post something on "activism and academia" over at Duck of Minerva. Might be of interest to some of my readers, and will probably rile up some of my students…which is often the point…
:-)
And now back to grading. Happy Holidays, everyone.
20.11.05
More on the role and character of IR theory
In a discussion on Dan Drezner's blog (a discussion spawned by the Marc Lynch posting that I reference here), "Steve" posted a comment about the nature and deficiency of IR theory in general. In part, he comments:
My assumption is that IR is generally a field more akin to philosophy than to, say, country studies (or history, or whatever). We don't look to philosophers to find out what to do about Al Quaida, or Bosnia, or XX, why would we look to IR scholars? (frankly, this strikes me as a bit strange, but it seems to be about right...)
Two things leapt to mind for me pretty much right away:
1) looking to IR for specific policy advice would be like looking to physics to find out how to build a bridge. Physics is a self-consistent, deductively rigorous portrait of the physical world that permits us to explain phenomena in a distinct manner. But it doesn't tell you how to build a bridge. For that, you want engineering and architecture, fields that take some of the insights into the world that are provided by physics and combine them with a lot of empirical practice to produce phronesis -- practical wisdom -- which can directly guide action. Theorists rarely develop phronesis, and they rarely even inform it directly. Instead, their influence is at a further remove than that -- although it is there, especially in the long term.
2) what strikes Steve as "strange" strikes me as perfectly comprehensible. What strikes me as more than a bit strange is that anyone would actually look to IR theory to find out what to do about Bosnia or Iraq or anything else. This doesn't mean that I don't think that policymakers should consult theorists and examine social-scientific accounts of things, but it does mean that I am wholly unconvinced that doing so will lead either to policy success or to an unequivocal justification for action. As "the slow boring of hard boards," politics is all about trying to bring incommensurate things together, and proceeds via the temporary forging of "good-enough" settlements. Science isn't like that, theory isn't like that -- and to the extent that IR is a social science, IR isn't like that either.
Engineers have to take physics. Practitioners should have to take theory courses, and politicians should draw on the insights that they have gained from scientific accounts. But in the end, practice necessarily escapes (abstract) theory, social life necessarily escapes our social-scientific accounts of it, and an engineer who pleads that the math worked out properly when the bridge collapses should probably be fired for not having consulted sufficiently with the folks who build actual bridges.
That's my reaction, at any rate.
The Irrelevance of IR Theory
In response to Marc Lynch's posts (here and here) about whether IR theory has anything to tell us about al-Qaeda, I posted a brief sociology of IR knowledge-production over at Duck of Minerva. I'm not going to cross-post it, but you're welcome to click on over and have a look . . .
Professor In Your Pocket
Newsweek
</shameless self-promotion>
I especially like how I'm the "solution" to the supposed problem. Can't say that I see a problem with students not attending lectures if the lectures are available in a different (downloadable) format; why waste time lecturing if you could put the classroom to much better uses?
8.11.05
Academic Record Labels
As I make my merry way through the brave new world of podcasting (and am getting some press coverage, certainly more so than my substantive/empirical work ever gets -- I wonder what, if anything, that says about me as a scholar), something that is either profound or silly has occurred to me. This is based on my off-the-cuff comment to an MTV reporter who asked me to speculate about how this technology would change the university, and whether it would eventually make universities obsolete since all of the content produced by professors could in principle be downloaded for free off the 'Net. My reply: podcasting, or any other way of making lectures available publicly (i.e. outside of the classroom proper -- I think that you could simulate the same effect by broadcasting over radio/television, or video-taping and making the tapes freely available to all; mp3 compression simply makes it a lot easier to do what could have been done before, and in some cases was done before), kills the university only in the way that recording kills music.
Let me elaborate a bit. I have always (and I do mean always) used music-language to describe what I do professionally; together with baseball-language, music-language (and specifically music-industry-language) is a vital part of my metaphorical vocabulary for apprehending my academic ( = scholarly and pedagogical) practice. I have always thought of articles as "singles," chapters in edited volumes as songs that were part of a compilation, the process of revising an article as "remixing," books as albums, and so forth. Sure, part of this has to do with the fact that if I were not an academic I'd love to have been Tony Banks or Vangelis or Neal Morse (bonus points to any reader who can identify all three of these folks, with or without a Google search ;-) or any one of a number of musicians whose music I find fascinating and compelling. [And yes, I'd probably compose very elaborate prog-rock opuses, because that's the kind of geek I am, but that's neither here nor there.] But it also fits, to a certain degree: we academics think about things, compose various ditties that capture some piece of what we're thinking about, utilize the formal requirements of a particular publication or subdisciplinary language-game as a way of giving structure to our musings, and hope for nothing more than readers/listeners. "Nice song" and "good article" have the same value, I think.
Of course, the more scientistically inclined in my discipline would presumably be horrified that I am making aesthetic criteria so central to my metaphor, but I'm not too sanguine about the possibility of "progress" in knowledge anyway, so that doesn't bother me much. After all, as Weber pointed out in "Science as a Vocation," there isn't really any way to justify the notion of science-as-progress except as a value-commitment, which I'd consider to be basically an aesthetic criterion itself. [Wittgenstein mused that aesthetics and ethics were the same, and I think he was right, on that as on so many things.] So yes, social-scientific articles have to participate in the language-game of improving our knowledge of the world, as though it were actually possible to comparatively assess books and articles according to such a metric; personally, I'd say that the most important component of any such assessment would have to be the book or article's systematic exploration of the implications of some given set of value-commitments -- which would be little different from assessing how a piece of music disclosed certain sonic possibilities that other songs hadn't. So it's all aesthetics, in the end. But I know that this places me in a distinct minority.
So, if books and articles can be songs, why not lectures? In fact, lectures are even more clearly performances, and hence assessable on performative criteria. The problem with a lecture is that it's somewhat ephemeral, since it's a live performance; in that way, the podcasts I've been capturing and disseminating are little different than recordings of a concert -- albeit with less cheering and clapping. But the recordings serve the same purpose: making the ephemeral performance available to be re-experienced at some later date. Students in my methodology class have told me that they re-listen to some of my more directive lectures when they are sitting down to draft their prospectuses, which is precisely the use to which I hoped that these lectures would be put.
However, there's one major problem with what I've been doing thus far -- what I've been capturing are live performances, which are time-bound and contextually specific. There are a lot of references in my lectures from this semester to the ongoing situation with the ex-president of our university, references that I hope will not be as current next time I teach this course. Similarly, in other lectures I make reference to current news events, specific comments that came up in class discussion, and so forth. What this all means is that the lectures I've recorded are of limited utility, and won't really allow me to do what I'd been planning to do: record a set of lectures, and then stop using classroom time to lecture. Period. The recordings from this semester are valuable for the current class, and are a useful record of what I did in the classroom this semester, but in order to really make a set of recordings that will be useful in future semesters I need to decontextualize the lectures a bit, which is an artificial thing to do -- but a justified one in this case. It's like going into the recording studio to record a song that you play live, inasmuch as the studio is by definition antiseptic and purified and detached; cutting a studio record is not the same activity as performing live. And while there is value (especially for obsessives like me) in collecting a number of live performances of the same song, because each performance has different nuances, I don't think that there's as much value in collecting different performances of the same lecture. Hence I need to record an abstract, de-contextualized set of lectures, a studio version that will travel across the semesters better. Then class time can be used to engage in q&a, hands-on mentoring, discussion, and so forth.
So how does this make universities like record labels? Well, under ideal circumstances what a record label does is to a) find artists and b) give them the opportunity to make studio versions of their songs so that c) people can connect to the artists' music both through the existence of albums and through d) the promotional and legitimation function that the label performs. This last one works better for niche labels; if I see something that is released by Radiant Records, I know basically what kind of music it is going to be: contemporary progressive rock. And the fact that Radiant releases something means that it's probably a good bet that I'll like it. Ideally, labels would do this in general; in practice, major labels (whose days are probably numbered -- they're largely living on royalties from their back-catalogues at the moment anyway) skimp on this last function in favor of just trying to sell stuff. But these functions are precisely what labels are for.
Now, I think that universities are for pretty much the same things: they give artists (scholars) the opportunity to produce recordings of their songs, and help get those songs into the hands of listeners. In addition, universities are in the business of staging live concerts ( = courses), and that's what people are paying for. You want to hear my take on something, or know what I think? Read what I write, or listen to what I say. But if you want to come and join the learning community that I construct in my courses and more generally at the university as a whole, you need to pay up and commit to the process.
So yes, universities have to change, and the pedagogical practices that I am pursuing with podcasting (and in other ways) may be hastening that process. But universities are far from obsolete. I have a hard time thinking of any other place to engage in my craft.
28.10.05
Practitioner testimonies
Through the magic of podcasting, I've been able to listen to a number of pre- and post-game interviews with baseball players and managers as I go for my morning run -- subscribe to the daily MLB podcasts through iTunes, plug in my iPod, and when I get up in the morning there they are, ready to go. (Of course, now that the baseball season is officially over -- congratulations, White Sox, even though I wanted the Astros to force a game 5 so that we could see Roger Clemens try to pitch despite his hamstring injury -- I'll need to find some new podcasts to listen to on those runs. I wonder if MLB will start podcasting winter ball reports. Hmm.)
In any event, one of the things I have been noticing a lot lately is the fact that sports reporters invariably ask players and managers to produce causal speculations. How much did the condition of the field affect your performance? Does having the stadium's retractable roof open affect your play? How much difference does it make that you have a well-rested bullpen on which to draw? The sheer frequency of the questions leads to the unavoidable conclusion that reporters ask managers and players about these issues because of a presumption that players and managers have some kind of privileged authority to pronounce about these matters.
The general principle here seems to be: practitioners know about causal relationships specific to their particular field of activity. And this principle extends well beyond sports: players and managers know about baseball, diplomats know about diplomacy, activists know about activism, government officials know about government. In fact, the notion that participants have special access to knowledge about their domains is often elevated to a methodological principle in the social sciences: "we may not care at all about the views of revolutionaries, but if their answers to our questions are consistent with our theory of revolutions, then the theory itself will be more likely to be correct," as a recent methodological manual has it.
Pardon my bluntness, but I think that this kind of reasoning is downright absurd. The idea that practical experience gives one some kind of unmediated access to causal knowledge strikes me as misleading in at least two respects: it misstates the relationship between knowledge and practice, and it forecloses the possibility of generating new knowledge(s) by taking a step away from personal experience.
Allow me to elaborate.
1) there's a long-standing error (well, I think it's an error, but lots of people disagree and have done so over the years) in conflating practitioner-knowledge with detached-social-scientific-observer knowledge. Practitioner-knowledge is all about how to accomplish something, and consists of rules of thumb, operational practices, and in general a "feel" for situations. Bent Flyvbjerg draws on Aristotle in characterizing this as phronesis: practical wisdom, a kind of context-specific local understanding that produces results (in the form of successful, even virtuoso, performances of some task or tasks). Phronetic knowledge is neither rule-based nor reducible to rules, since much of it is tacit and experiential.
Social-scientific knowledge, on the other hand, is by definition not primarily about how to accomplish something; it's about how things are/were accomplished. Social science is all about systematizing practice, even for those of us who are quite enamored of messy contingent styles of explanation; by contrast to those outside of the social science fraternity, we're all inveterate systematizers. Systematizing practice means stepping away from it, and not practicing the activity in question, so that one can get a different kind of grip on it -- a theoretical grip, a conceptual grip, a necessarily more abstract grip than a practitioner has. The drawback is that the social scientist loses the immediacy of practice; the advantage is that they gain a broader and more general point of view.
Think about it like this: I know a fair bit about baseball because I've studied it, and I know that in most situations it makes no sense to attempt a stolen base -- a runner on first is worth more than the expected value of stealing second (unless you have Rickey Henderson, the all-time major league leader in stolen bases, on first base). But I couldn't for the life of me actually steal a base, which some people can. Ditto for plate appearances: I know that getting on base is the most valuable thing that a batter can do. But put me in uniform and give me a bat and I can virtually guarantee you that I won't get on base against any professional baseball pitcher.
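To put a rough number on that stolen-base claim, here's a back-of-the-envelope sketch of the expected-value comparison in Python. The run-expectancy figures are illustrative assumptions on my part (ballpark values for a modern run environment), not numbers derived from actual play-by-play data:

```python
# Back-of-the-envelope version of the stolen-base argument above.
# The run-expectancy values are rough, assumed figures for illustration;
# the real numbers vary by era and come from play-by-play data.

RE_FIRST_NO_OUTS = 0.90    # expected runs: runner on first, nobody out
RE_SECOND_NO_OUTS = 1.15   # expected runs: runner on second, nobody out
RE_EMPTY_ONE_OUT = 0.28    # expected runs: bases empty, one out (caught stealing)

def break_even_rate(success_re, failure_re, status_quo_re):
    """Steal-success probability at which the attempt neither gains
    nor loses expected runs relative to staying put."""
    return (status_quo_re - failure_re) / (success_re - failure_re)

p = break_even_rate(RE_SECOND_NO_OUTS, RE_EMPTY_ONE_OUT, RE_FIRST_NO_OUTS)
print(f"Attempting the steal only pays above a {p:.0%} success rate")  # ~71%
```

Since typical success rates hover right around that break-even point, the attempt is usually not worth the risk -- which is precisely the systematized, spectator's version of the knowledge, not the base-stealer's.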
So there's knowledge and there's knowledge. They're not the same. And just because I know when to take a pitch does not mean that I know that on-base percentage is the biggest component of run production. Nor does it follow that a good hitter is the best source to talk to when trying to see whether on-base percentage matters so much, because he may not know. And that's okay. Batters are paid to bat, not to analyze at-bats in a systematic way. Asking them to speculate on causal relationships is really asking them to pontificate on something that they have no special expertise in.
We get this in IR scholarship all the time, usually in a form like this: "my theory is that X matters, practitioner Y said in his memoir/speech/letter/interview that X matters, hence X matters and my theory is a good one." Non sequitur, in the precise and technical sense: the parts of the claim have nothing, logically speaking, to do with one another. A very good negotiator may not be able to articulate what it is about her negotiating style that is so effective, and if you ask her she may say something -- and even earnestly believe something -- that has no bearing whatsoever on the positive outcomes that she continues to produce.
Oddly enough, although social scientists and sports beat reporters don't seem to get this, Claire Danes does:
"More people see movies with men in them than they do with women in them."Wisdom can be found in the most interesting places. Maybe that Ivy league education did precisely what it was supposed to do, and prevented Ms. Danes from speculating on things that she doesn't really have any expertise in.
Right. So -- why is that ?
"I'm not in a position to say," she snaps. "I mean, I'm just not."
Danes spent two years at Yale. Where's the conversational brio? Where's the analytical discourse that even half of an Ivy League education would seem to confer? Do we really have 35 minutes left of this? This is, after all, her career we're talking about. Her milieu. Her life. She can't "say" anything about the gender breakdown of filmgoers? If she can't say, who can?
"Someone who really studies that," she says. "Or thinks about that."
2) so what's the point of social-scientific knowledge, if it's not practitioner-knowledge and doesn't directly tell you how to do something? I am skeptical about the version of the "social science project" that would try to discipline practice by forcing it to conform to the systematic knowledge that detached observers produce; in fact, I think it's the opposite relationship, and systematic knowledge should be (with apologies to David Hume) the slave of practice. All of us producing our systematic analyses are dependent on practitioners to, you know, do stuff, so that we can observe what they do at a (relative) distance, systematize it, and perhaps reveal aspects of practice that are obscure to the practitioners themselves. Perhaps. Even if not, there's a value to systematicity for its own sake, since the world's messy and obscure and doesn't always simply tell us how it works. Being systematic allows us to generate detailed accounts of situations based on particular value-commitments, and thus to ground our critiques in something other than mere partisan opinion. And that strikes me as a good idea.
But it still won't help me raise my batting average. That takes a whole different kind of knowledge.
[cross-posted at Duck of Minerva]
[Posted with ecto]
27.10.05
Small n
After updating my post-season analysis with the results of last night's game, so that we now have a World Series victor for 2005, it appears that:
1) the wild card team is still not significantly more likely to make it to the World Series or to win it (chi-squares: .727 and .866).
2) the team with the best record in its respective league is significantly more likely to make it to the World Series (chi-square: 3.96, significant at the 5% level) . . .
3) . . . but is not significantly more likely to win the World Series (chi-square: .866).
4) wild card teams are not significantly more likely to win a division series, but the team with the best record in its league is (chi-squares: .97 and 3.88).
Of course, none of this tells us how much more likely these outcomes are. That would need a degree of analysis that I haven't the time to undertake at the moment.
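For anyone who wants to check the arithmetic, here's a minimal sketch of the updated test. The 2x2 tables are my reconstruction from the 1995-2005 record (the Astros reached the 2005 Series as the wild card; the champion White Sox won their division with the best record in the American League), and scipy's uncorrected chi-square reproduces the first two values quoted above:

```python
# Updated 1995-2005 contingency tables (my reconstruction) for the
# wild-card analysis; correction=False gives the plain Pearson statistic.
from scipy.stats import chi2_contingency

# Rows: wild card teams, division champions; columns: yes, no
in_world_series = [[7, 15], [15, 51]]    # reached the World Series
won_world_series = [[4, 18], [7, 59]]    # won the World Series

for label, table in [("reached", in_world_series), ("won", won_world_series)]:
    chi2, p, dof, expected = chi2_contingency(table, correction=False)
    print(label, round(chi2, 3))  # reached 0.727, won 0.866
```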
The punchline: n=80 is not a very big number at all, and n=88 isn't either. We'll need several more years of the modern post-season before we can actually draw any really important conclusions about the wild card's effect on baseball's seasonal results.
[Posted with ecto]
26.10.05
Call the Groundskeepers
The lesson for the day is that equipment matters. Not something I didn't know already, but something that I was rather forcibly reminded of last night at the end of class.
My "Masterworks" course is taught in an evening time-slot on Tuesdays that runs from 8:10-10:40pm. because of the fact that we are covering a lot of ground quickly, and because of the fact that I run the class largely as a free-form discussion but still want to give out some information to contextualize the readings, I divide each class session into two pieces: a discussion that runs until about 10pm or so (sometimes longer), and then a brief lecture setting up the next week's readings. This means that the class switches from low-tech to high-tech in a flash: discussion (old school, with books open on desks and me walking around managing the speakers' queue and trying to press points and pull out parallels and controversies) is followed by break which is followed by presentation (new school, Keynote slides, digital projector, wireless USB click-y thing so I can walk around the room and talk, iPod in pocket recording the lecture for podcasting the next day).
Of course, this also means that if there's any problem with the tech in the room I won't find out about it until about 10:10pm -- far too late to really do anything about it. Usually this isn't a problem; my Apple PowerBook doesn't break (unless I mess it up myself by mucking about with low-level geekery), and I just changed the battery in my wireless click-y thing…but the one thing that I am dependent on is the room's digital projector. We're fortunate that almost every classroom on campus is equipped with a digital projector, but sometimes they fail to work…which is what happened last night. Open PowerBook; plug in adapter (DVI -> VGA) and VGA cable; and -- well, first a distorted image of my desktop picture, then a lot of green flickering, and finally blackness. On the wall screen, mind you, not on my computer's screen; that was fine. I even restarted my computer (first time this month!) on the off-chance that something had gotten frelled up with my continual unplugging and reconnecting of external displays as my PowerBook journeys from home to office and back. No luck.
During the 2002 ALDS, the Yankees were in Anaheim playing the Angels; it was game three, I think, and Mike Mussina was on the mound. He didn't look comfortable at all, and started giving up hits and runs almost immediately. Not a good performance. Afterwards, he indicated that he felt uneasy on the mound, since it was packed a bit strangely and was, although within regulations, a little flatter than normal. Hence, a poor pitching performance. That's about how I felt last night: my rhythm was off because of the technical glitch, and not having the slides and not being able to walk around (I use the slides as projected to keep me on point; I had to look at my laptop screen to see them, which basically wedded me to the desk in the front of the room) meant that I was not pitching anywhere near the way that I am accustomed to pitching.
I was not pleased.
This morning, I e-mailed the office in charge of maintaining the projectors -- the groundskeepers for our peculiar baseball fields -- and reported the problem. They were quick and professional, getting back to me almost at once and proposing an alternative solution (a portable projector to replace the installed one) as well as promising to check out the faulty projector as soon as possible. This is encouraging; a good groundskeeping crew is downright essential to my work in the classroom, and having such a quick response spoke well of their commitment and competence. Hopefully the problem will be cleared up by next week; I'd hate to have to go and pitch off such a mound again.
My "Masterworks" course is taught in an evening time-slot on Tuesdays that runs from 8:10-10:40pm. because of the fact that we are covering a lot of ground quickly, and because of the fact that I run the class largely as a free-form discussion but still want to give out some information to contextualize the readings, I divide each class session into two pieces: a discussion that runs until about 10pm or so (sometimes longer), and then a brief lecture setting up the next week's readings. This means that the class switches from low-tech to high-tech in a flash: discussion (old school, with books open on desks and me walking around managing the speakers' queue and trying to press points and pull out parallels and controversies) is followed by break which is followed by presentation (new school, Keynote slides, digital projector, wireless USB click-y thing so I can walk around the room and talk, iPod in pocket recording the lecture for podcasting the next day).
Of course, this also means that if there's any problem with the tech in the room I won't find out about it until about 10:10pm -- far too late to really do anything about it. Usually this isn't a problem; my Apple PowerBook doesn't break (unless I mess it up myself by mucking through with low-level geekery), I just changed the battery in my wireless click-y thing…but the one thing that I am dependent on is the room's digital projector. We're fortunate that almost every classroom on campus is equipped with a digital projector, but sometimes they fail to work…which is what happened last night. Open PowerBook; plug in adapter (DVI -> VGA) and VGA cable; and -- well, first a distorted image of my desktop picture, then a lot of green flickering, and finally blackness. On the wall screen, mind you, not on my computer's screen; that was fine. I even restarted my computer (first time this month!) on the off-chance that something had gotten frelled up with my continually unplugging and reconnecting of external displays as my PowerBook journeys from home to office and back. No luck.
During the 2002 ALDS, the Yankees were in Anaheim playing the Angels; it was game three, I think, and Mike Mussina was on the mound. He didn't look comfortable at all, and started giving up hits and runs almost immediately. Not a good performance. Afterwards, he indicated that he felt uneasy on the mound, since it was packed a bit strangely and was, although within regulations, a little flatter than normal. Hence, a poor pitching performance. That's about how I felt last night: my rhythm was off because of the technical glitch, and not having the slides and not being able to walk around (I use the slides as projected to keep me on point; I had to look at my laptop screen to see them, which basically wedded me to the desk in the front of the room) meant that I was not pitching anywhere near the way that I am accustomed to pitching.
I was not pleased.
This morning, I e-mailed the office in charge of maintaining the projectors -- the groundskeepers for our peculiar baseball fields -- and reported the problem. They were quick and professional, getting back to me almost at once and proposing an alternative solution (a portable projector to replace the installed one) as well as promising to check out the faulty projector as soon as possible. This is encouraging; a good groundskeeping crew is downright essential to my work in the classroom, and having such a quick response spoke well of their commitment and competence. Hopefully the problem will be cleared up by next week; I'd hate to have to go and pitch off such a mound again.
22.10.05
The Jeter Hypothesis
Derek Jeter, Yankee icon extraordinaire, once (I have not been able to find the original source) made a comment about how the wild card team is more likely to make it through the post-season since they're "hot" going into the playoffs. I think he made that comment after the Yankees were eliminated by some wild card team. Sour grapes or perceptive observation by an elite practitioner? Because I have nothing better to do with my time (okay, not really), let's put it to the test. To the statistical test.
A little background: since 1995, major league baseball's post-season has included not just the teams that amassed the best record in their divisions, but also a "wild card" team defined as the team with the best regular-season record that did not win its division. This means that the post-season includes eight teams: six division champions (three American League, three National League) and two wild cards. We're now in the eleventh post-season under these rules; since it's still ongoing I have not included it in the analysis that follows, but even so we have ten post-seasons of data to use in constructing a test of the Jeter Hypothesis.
To give away the punchline: there's no support for the notion that the wild card team is either significantly more likely to make it to the World Series (the end of the post-season) or to win the World Series. Of course, this hasn't stopped popular commentators (including the folks at Fox Sports, purveyors of baseball misinformation since 1996, and Joe Morgan, Hall of Fame player and ESPN commentator who is overly enamored of "small ball" and "post-season experience" and "clutch players" and other such statistically unsound silliness) from deploying the claim, perhaps reading too much into the fact that the 2002, 2003, and 2004 World Series were won by wild-card teams. In fact, if my analysis is correct, they definitely are reading too much into this.
How did I arrive at this conclusion?
The basic idea was to look at the ten full post-seasons to see whether any systematic relationships emerged. Step one was to simply collect the numbers and make two tables: one showing how many wild card teams made it to the World Series, and another showing how many wild card teams won the World Series.
|                         | Wild Card | Division Champion |
| In World Series         | 6         | 14                |
| Not in World Series     | 14        | 46                |

|                         | Wild Card | Division Champion |
| Won World Series        | 4         | 6                 |
| Didn't Win World Series | 16        | 54                |
On their own, these tables don't tell us much. In order to see whether or not there's any significant relationship involved, I ran a chi-square test on the matrices. [You can actually run this test using a web-based interface like this one, but being something of a baseball stats geek I coded my own Excel spreadsheet to do it for me.] The chi-square tests how far the observed values in a matrix differ from the values you'd expect if there were no systematic relationship between the categories; in order to do this it basically corrects the values by taking the proportions of the population in each category. So the fact that there have been 20 wild card teams, as compared to 60 division champion teams, in the post-season over the past ten years means that if there were no systematic relationship we'd expect that wild card teams would be in the World Series, and win the World Series, about one-fourth of the time. Wild card teams have been in and won the World Series slightly more than this.
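Here's a minimal sketch of that computation in plain Python -- an illustration on my part, since the Excel spreadsheet itself isn't posted -- run against the two tables above:

```python
# Pearson chi-square for a 2x2 table, exactly as described above:
# compare observed counts against the counts expected if the row and
# column categories were unrelated. No Yates continuity correction,
# matching the statistics quoted in the post.

def chi_square(table):
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand_total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand_total
            stat += (observed - expected) ** 2 / expected
    return stat

in_series = [[6, 14], [14, 46]]    # rows: in / not in the World Series
won_series = [[4, 6], [16, 54]]    # rows: won / didn't win

print(round(chi_square(in_series), 3))   # 0.356
print(round(chi_square(won_series), 3))  # 1.371
```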
But significantly more often? The chi-square value for the first matrix is .356; it would have to reach 3.841 in order to be significant. The second matrix has a value of 1.371; again, it would have to reach 3.841 to be statistically significant at the 5% level -- that is, for us to be reasonably confident that the pattern isn't sheer random chance.
Just for the sake of completeness, I also ran a series of tests to see whether the wild card team was more likely to make it to the second round of the playoffs -- perhaps the "hotness" of the wild card team only lasts through the short first-round series. No dice there either: chi-square was 1.067.
What does this all mean? Ultimately, I think it means that there haven't been enough post-seasons under the contemporary arrangement for us to really know whether the wild card makes that much of a difference. At the moment, there doesn't appear to be any significant relationship, but if the Astros win the World Series this year (which they won't -- White Sox in seven, I say), the population size is small enough that it might make a significant difference. In a week or so I'll update the numbers and we'll see then. Sorry, Derek.
More interesting, I think, is the fact that there's almost a significant relationship between having the best regular-season record and appearing in the World Series (chi-square: 3.2), even though the wild card system introduces more noise into the system by making it possible that a team that finished well behind in its division might win the championship. Maybe the best will out after all…but I'm not quite prepared to make that claim without further analysis.
[Posted with ecto]
19.10.05
Time and timing
It seems that class discussions inevitably follow an ornery pattern: as time gets short, they get really, really interesting. This places me in a bit of a bind: shut things down so that we end on time, or let the discussion run even at the cost of exceeding our scheduled boundaries? This gets even more pressing when the course in question is a night class, and "running over time" means that students are filing out of the classroom close to 11pm.
Okay, that doesn't happen every time. But it did on Tuesday evening, when we had the best class discussion we'd had thus far this semester -- and about Kant, of all people. Maybe it was the week off for Fall Break, or maybe it was my repeated warnings that Kant is Very Heavy Drugs and has to be consumed slowly with lots of food and water and really deeply pondered, or maybe it was the excellent student presentations that began the class session -- or maybe it was all three -- but for whatever reason the discussion was going very well indeed. We didn't have to spend too much time on first-order ("what is the author saying? what's the argument?") questions, but were able to leap fairly quickly into the second-order ("does the author's claim make sense? do I buy it? what's at stake in accepting or rejecting it?") issues that are the really fun part about discussing philosophical works. Relativism vs. universalism; idealism vs. pessimism; whether there is a real conflict between theory and practice -- heavy stuff for a Tuesday evening, but precisely where I would have wanted us to go had I been able to choose it.
So we were bopping along, collectively wrestling with core issues, when BANG -- 10pm. I know that we need a break (we've been going since 8:10), and after the break I need to talk for a few minutes about next week's reading. So going to break means the end of the discussion. But it's going so well…so I let it go. Until about 10:15. Then we break, and then I give my mini-lecture…and then it's 11:00pm. Oops.
Part of my rationale for making podcasts out of all of my mini-lectures for that course is that next year I won't have to spend scheduled class time delivering the lectures -- I can just post them someplace, let students download them, and use the time saved to really allow the discussion to flourish. In my experience it takes about an hour or so for things to really get revved up, and a well-timed break about an hour and a half in can give people a chance to recharge and regroup -- making the second half of the session even better. Dropping the mini-lectures, or rather, externalizing them onto a website someplace, makes it possible for me to actually do that in the future. And then I can facilitate those great discussions without holding people on campus until 11pm.
17.10.05
Management Lessons
Phil Garner, manager of the Houston Astros, gave a press conference the day after the epic 18-inning game that the Astros won last week over the Atlanta Braves. During the course of the questions, he was asked about his managerial style, and in particular whether his making of particular decisions was "a seat of the pants kind of thing . . . a gut feeling." Garner's response is instructive, both because of the ground that it covers and because of the place where it ends up. I think that there are some lessons to be learned here, lessons with applicability beyond baseball -- dare I say it, lessons about what social science is for and how it relates to social practice.
"Without knowing exactly which moves you're talking about it'd be hard to say," he began, noting from the outset that particular moves have particular rationales and implying that no single kind of decision-making captures or explains all of the choices that one makes in the course of managing a baseball game.
"I do -- sometimes by the gut, but most everything's calculated." Here Garner introduces two specific grounds for making particular decisions: gut instinct, and rational calculation. He is honest about the fact that some moves that he makes are simply the product of instinct; presumably this instinct stems from experience, so it's not entirely random or mysterious, but it's not directly capturable in rules or procedures. Contrast this to rational calculation, which is a type of decision-making based explicitly on rules and procedures -- what works in situation X and what doesn't.
If everything stemmed from rational calculation, then there'd be no point in having a manager at all; a book of procedures would suffice, and if there were someone called a "manager" involved at all his job would just be to look up the situation in the book and apply the appropriate response. Any agency that the manager has would disappear, as he would become merely the throughput for a set of factors over which he exercised no influence or control. Indeed, in such a situation the manager's only effective exercise of agency would involve ripping up the book, going off course -- acting on instinct rather than according to the rational plan.
The conventional Social Science Project, I think, involves making as much as possible calculable, eliminating the element of uncertainty that comes from not really knowing what the result of some move or decision will be. Garner's abstract comments to this point remain within that conventional project: social science would be the "calculating" part, leaving only the manager's "gut instinct" to serve as a ground for going off the planned course.
But when Garner starts to describe concrete particular moves that he made during the course of the game, this absolute opposition between instinct and calculation vanishes in favor of something else: Garner's specific knowledge of what his team can and cannot do.
Could this decision-process be automated, and replicated by a machine -- or by an abstract decision-making procedure, in accord with the conventional Social Science Project? It could probably be simulated, certainly. And we could probably build a baseball-managing machine that would apply formal rules to situations and arrive at outcomes similar to those that Garner arrived at. But I'm not at all certain that doing so would prove that Garner was really making rational calculations all the time that he was on the field. Just because I can retroactively narrate some situation in particular terms doesn't prove that the situation was really that way; it just demonstrates that we can describe that situation in certain ways. Period. We could describe Garner's decisions as entirely calculated, as he seems wont to do himself, but in so doing we'd miss a lot of the way that those decisions are being produced -- and the subtle ways that Garner keeps redefining "calculated" as he discusses situations.
This is nowhere clearer than in his conclusion, in which he explicitly denies having put Chris Burke in the game in order to hit the game-winning home run that he did hit in the bottom of the 18th inning:
So: three grounds for making decisions, and no simple formula for integrating the three in specific situations. In the gap between these different logics we have agency, understood as the capacity of the manager to have done otherwise than he in fact did. We have responsibility, because Garner can't deflect what he did onto any more solid or objective grounds than his own actions. And we have the subtle combination of art and science that makes baseball such fun to watch.
In this case, what is true of baseball is most likely true of other areas of human social action too. Rules, instincts, and identities concatenate in unique ways in every instance to produce particular decisions; reducing a decision to any one of these three deprives the actor in question of effective agency, as well as making it impossible to meaningfully attribute responsibility to her. If one is going to analyze decisions -- which is not something that I usually do in my work -- or if one is going to make decisions -- which is something that I like everyone else do all the time -- one should keep Garner's comments in mind, and work to develop all three of these capacities.
"Without knowing exactly which moves you're talking about it'd be hard to say," he began, noting from the outset that particular moves have particular rationales and implying that no single kind of decision-making captures or explains all of the choices that one makes in the course of managing a baseball game.
"I do -- sometimes by the gut, but most everything's calculated." Here Garner introduces two specific grounds for making particular decisions: gut instinct, and rational calculation. He is honest about the fact that some moves that he makes are simply the product of instinct; presumably this instinct stems from experience, so it's not entirely random or mysterious, but it's not directly capturable in rules or procedures. Contrast this to rational calculation, which is a type of decision-making based explicitly on rules and procedures -- what works in situation X and what doesn't.
If everything stemmed from rational calculation, then there'd be no point in having a manager at all; a book of procedures would suffice, and if there were someone called a "manager" involved at all his job would just be to look up the situation in the book and apply the appropriate response. Any agency that the manager has would disappear, as he would become merely the throughput for a set of factors over which he exercised no influence or control. Indeed, in such a situation the manager's only effective exercise of agency would involve ripping up the book, going off course -- acting on instinct rather than according to the rational plan.
The conventional Social Science Project, I think, involves making as much as possible calculable, eliminating the element of uncertainty that comes from not really knowing what the result of some move or decision will be. Garner's abstract comments to this point remain within that conventional project: social science would be the "calculating" part, leaving only the manager's "gut instinct" to serve as a ground for going off the planned course.
But when Garner starts to describe concrete particular moves that he made during the course of the game, this absolute opposition between instinct and calculation vanishes in favor of something else: Garner's specific knowledge of what his team can and cannot do.
We have interchangeable parts, moreso, I think, than most of the teams in the playoffs. We have a guy -- Bruntlett -- that can play every position, so if I take any one guy out, and I want to maneuver around the lineup in some way, I have him that I can just keep moving to any position on the field. And he'll play a great defense. And he's actually probably saved three games for us because he's made phenomenal defensive plays. Not to mention the couple he's won with his bat for us in the course of the season.

Here we see a subtle combination of calculation and instinct at work, as Garner is able to derive possibilities from his experience with particular players on the team and to formulate different strategies as the game unfolds and the situation changes. The relevant basis here is not rule-based knowledge of which move belongs where, but the kind of practical wisdom that allows the expert manager to grasp conditions of possibility and then work to actualize them. And doing this doesn't mean that one abandons calculation or advance preparation, as Garner's discussion of one of his other moves makes clear:
The other day in our 18-inning game, [Brad] Ausmus goes out from behind the plate because I wanted to keep both catchers in because, in case we get into a drawn-out affair and I had to bring Clemens in I want Ausmus to catch Clemens, so it predicated a move to keep him in the ballgame at first base and put Chavez in and then flip-flop the two when Clemens came in. But that's not a difficult decision. I'm confident with both of them playing out there because both of them do a good job, and they practice it all the time: they take ground balls, all year long. And I even put Brad at shortstop one game, and first base -- or second base one game I believe it was -- in preparation for something just like this. So they're mostly all calculated.

"Calculated" has subtly shifted its meaning by the end of this example, and now means "based on experience" rather than "in accord with an abstract specification of rules." Because Ausmus and Chavez have practiced taking ground balls at first base, Garner knows that he can put either one of them there and expect that they'll do a good job, but there's no necessary line from this fact about the two of them to the observed outcome. Instead, something else intervenes: Garner's experience-based sense of what he wants to accomplish and how he might best accomplish it, given the resources available to him.
Could this decision-process be automated, and replicated by a machine -- or by an abstract decision-making procedure, in accord with the conventional Social Science Project? It could probably be simulated, certainly. And we could probably build a baseball-managing machine that would apply formal rules to situations and arrive at outcomes similar to those that Garner arrived at. But I'm not at all certain that doing so would prove that Garner was really making rational calculations all the time that he was on the field. Just because I can retroactively narrate some situation in particular terms doesn't prove that the situation was really that way; it just demonstrates that we can describe that situation in certain ways. Period. We could describe Garner's decisions as entirely calculated, as he seems wont to do himself, but in so doing we'd miss a lot of the way that those decisions are being produced -- and the subtle ways that Garner keeps redefining "calculated" as he discusses situations.
This is nowhere clearer than in his conclusion, in which he explicitly denies having put Chris Burke in the game in order to hit the game-winning home run that he did hit in the bottom of the 18th inning:
I would say that there are some things that I feel good about, some things that I don't feel good about but I feel like I have to do -- I didn't want to take Berkman out of the game the other night, but figure this one out for me: Everybody in the world would say, why would you take Berkman out 'cause he's the guy that's gonna win the ballgame for you, and the guy I replace him with is the guy that hits a home run to win the ballgame! Now I didn't figure that was gonna happen. But I figured, based on Atlanta's outfield, if we get a base hit there, they're aggressive, they all have good arms and they're accurate, and if we don't take a chance to score I'm not gonna like myself very much the next day, so you gotta do things, in my opinion, that way. So most of it's calculated.

Note the sudden emergence of a third ground on which to place decisions: a logic of identity, based neither on instinct nor on calculation. "Because I am X, or because I want to be X -- because I think of myself as X and want to continue doing so tomorrow -- I need to make certain moves, take certain chances, go in certain directions." Here we have a kind of active self-crafting, in which Garner basically produces his identity and his team's identity by going in one direction rather than another. The removal of Berkman from the game flies in the face of rational calculation, but it isn't an instinctual move -- it is instead based on Garner's sense of who he is and who he wants to be. "I'm not gonna like myself very much the next day" isn't a preference held by a fully-formed rational actor; it's what John Shotter would call "knowing from within," sensing the potentials inherent in a situation and acting on those potentials in such a way as to craft a certain sense of oneself. I get the feeling that if the Astros had lost, Garner would be defending the decision in very similar terms: it may not have worked out quite right but at least we gave it a shot, played our kind of baseball, kept our integrity intact.
So: three grounds for making decisions, and no simple formula for integrating the three in specific situations. In the gap between these different logics we have agency, understood as the capacity of the manager to have done otherwise than he in fact did. We have responsibility, because Garner can't deflect what he did onto any more solid or objective grounds than his own actions. And we have the subtle combination of art and science that makes baseball such fun to watch.
In this case, what is true of baseball is most likely true of other areas of human social action too. Rules, instincts, and identities concatenate in unique ways in every instance to produce particular decisions; reducing a decision to any one of these three deprives the actor in question of effective agency, as well as making it impossible to meaningfully attribute responsibility to her. If one is going to analyze decisions -- which is not something that I usually do in my work -- or if one is going to make decisions -- which is something that I, like everyone else, do all the time -- one should keep Garner's comments in mind, and work to develop all three of these capacities.
11.10.05
Why I blog, and why I'm not about to stop
Dan Drezner, a political science professor at the University of Chicago and one of the best-known of the current crop of academic bloggers, was denied tenure last week. His very dignified and professional reporting of the fact has apparently touched off a minor panic in the blogosphere, and even made it into the mainstream print media. Suddenly many academic bloggers seem to be wondering whether they should keep doing what they're doing -- whether blogging is, in effect, a bad career move.
The debate is perennial, and it isn't just about blogging. Academia is an odd profession in that there are few if any clear expectations about how one should allot one's time on the job; there are also very few clear signals about whether one is doing a good job or not. Tenure decisions are cloaked in mystery, and pre-tenure reviews are most often written in language that is suggestive but not binding (perhaps so as to prevent lawsuits if someone expected tenure and didn't get it). I have no idea what things are like on the other side of tenure -- I am myself in the same boat as Dan Drezner was until last week, anxiously awaiting a judgment on the massive tenure file that my graduate assistant and I assembled and submitted for review back in September -- but speaking from the Assistant Professor position I can certainly say that I think I've been doing a good job but I have less confidence in that assessment of my performance than some of my friends in non-academic jobs do in theirs.
So the question "should I blog?" is in that way little different than "should I consult?" or "should I protest?" or "should I make a lot of media appearances?" The answer is: I don't know, nobody knows, and it all depends. Depends on what? Well, to be blunt, it depends on your research productivity in the first instance, and on your not pissing people off too much in the second instance. Publishing books and articles in highly-rated places is the central factor in just about every tenure decision at every research university (teaching colleges are different, by definition -- teaching matters much, much more than it does in research universities, even relatively teaching-centric ones like mine). So the basic decision rule for a tenure-track Assistant Professor goes something like this: does doing X take away time from my getting another article or book out there? If yes, then don't do it.
[Obviously, that rule isn't always absolutely followed, since taken to an extreme it means that one shouldn't eat or sleep or go to a baseball game…but the categorical character of the basic rule probably helps to explain why Assistant Professors as a group walk around with very high anxiety levels and nagging feelings of guilt when they're doing almost anything but publishing. It's like being a graduate student, but you get paid more and you get/have to teach classes too. I never maintained that going into academia was a rational decision.]
As for the second decision-rule, well, let's just say that Assistant Professors have to be kind of careful not to really annoy senior colleagues. It's no different than any other profession in that respect; what is different is that there are so many more opportunities to do something that confounds someone's expectations, because those expectations are often idiosyncratic and almost invariably tacit rather than explicitly articulated.
How does blogging stack up in terms of these two criteria? I think that Dan Nexon got it right when he pointed out that blogging doesn't take away from research-and-writing time, but constitutes a part of recreation-and-relaxation time. And lately my wife and I have been sitting down after the kids are in bed to watch the Yankees play, computers on laps, surfing the 'Net, trading observations, and blogging; that time wouldn't be spent in research-and-writing anyway. In addition, there is the fact that the things that I post to my blogs [yes, plural -- this one, Duck of Minerva, and more rarely on Progressive Commons; still trying to figure out exactly what kinds of posts belong where] are both extensions of and elaborations on my scholarly research and my pedagogical practice. Ideas that inform my scholarly writing get floated and debated (for example here and here), so at least some of my blogging is directly related to my research productivity. And I often blog here about science fiction, which is directly related to the "Social/Science/Fiction" seminar I offer once every couple of years.
So blogging doesn't detract from my output as a teacher and a scholar, and might actually enhance it. How about the other criterion? Obviously a blog gives one more of an opportunity to say something publicly that annoys someone else; the blunt nature of the blogging genre, combined with the lack of editorial screening, makes such an occurrence perhaps pretty likely. But it's possible to annoy someone in a faculty meeting, or a public lecture, or while chatting in the hallway, to say nothing of the potential for annoyance involved in allowing oneself to be publicly identified with a political program or agenda. Once again, blogging is no different.
So why are people so concerned about it? Might it be the generally technophobic character of academic practice? As we can see from the resistance to the use of digital projectors, e-mail, IM, and even computers in general by many many members of many many university faculties around the country, academics are a pretty conservative bunch when it comes to their research and teaching practices. (The politics of academics themselves are a wholly separate issue.) Blogging might be getting caught up in a general skepticism about information technology. I can't tell you how many times I've been told that I should cut out the flashy tech and just make presentations the standard way, me alone in front of the room speaking from notes…no, literally, I can't tell you that, because I've only ever heard those comments from senior colleagues and as I said before, rule #2 of getting tenure is Don't Piss Anyone Off Too Much. [I will only say that "colleague" is an expression that refers to any academic, not just to those at one's own institution. And my university is big on teaching with technology, so I face few if any obstacles around here. You do the math.]
So I'm going to keep on blogging. And I'm even going to try to blog in this space more regularly, so keep a watch -- or, better yet, subscribe to the RSS feed.
10.10.05
Serenity
On Friday evening my wife and I went to see the best science fiction film I have seen in a movie theatre in a very long time -- possibly since the first Matrix film came out in 1999. [Yes, I am well aware that Revenge of the Sith came out this summer; no, I don't consider any of the Star Wars films science fiction, for reasons detailed here.] The film is a spectacular piece of storytelling, much as you'd expect from Joss Whedon, and I'm happy to report that the film also extends the television series (Firefly) on which it is based in intriguing ways. After the shoddy treatment that the series received from the idiots at Fox who ran the episodes out of sequence and then refused to even air them all, it's nice to see a little vindication.
[Of course, the vindication is more artistic than commercial, since the film has only made about $18 million and is currently at #8 on the box office gross chart, having been at #2 the previous weekend but losing about 47% in the intervening period. Why more people aren't flocking to see this film is completely beyond me.]
Whedon is a magnificent writer and director -- he can tell a story primarily through dialogue without getting wrapped up in needless explication or didactic delivery. He's not really a visual director, although there are some visually stunning sequences (particularly the close combat episodes). Instead, what is most compelling about Whedon's art is the way that he develops his characters; he draws you in to the lives of the quirky people he depicts, and manages to sketch their portraits quickly with just a few turns of phrase. The early exchange between Mal and Jayne about taking grenades on a job, combined with the later allusion to that situation near the end of the film, tells you almost everything you need to know about their relationship. Zoe's professionalism and her firm deference to Mal suggest the back-story without having to explicitly go into it. And so on.
The most interesting thing for me -- besides the fact that the movie was basically a chance to go and visit with some people on screen I hadn't seen for a while, and whose lives and fates had become quite important to me -- was watching Whedon basically wrap up the unfinished first season of the series with the budget and script control that he really needed. Given Whedon's penchant for single-season arcs, I suspect that he would have done something like this if the series had been able to continue, answering some of the questions about River and leaving a few things to be developed later on. I found it pretty satisfying, abrupt deaths of main characters aside . . .
It's also interesting to me that both Whedon and Lucas were grappling with similar themes (the problems of a politics oriented towards an absolute ideal) in their films this year, but going about it in almost opposite ways. Lucas, telling an epic, mythic story, focuses on the fall of a particular individual; Whedon, telling a character-driven story largely about a small group's struggle to survive, ends up focusing on a large impersonal set of forces and institutions. We meet none of the perpetrators of the great idealistic project in Whedon's universe; we get an impassioned speech from Mal, but there is no corruption arc in the same way that we see in Lucas' epic. In the end both films take the same (anti-idealist) side, but they get there in very different ways.
There are rumors of more films set in the Serenity universe. I really hope that they get made; I want to see what happens next!
Sympathetic magic
The fact that today is Columbus Day, so the university is closed -- coupled with the fact that today the Board of Trustees is meeting to (hopefully) make a decision on the fate of President Ladner -- has allowed/encouraged me to stay at home today. (Untenured junior people should not, IMHO, be on the front lines of faculty protests involving the Board or the higher levels of the administration. Self-interest? You betcha. One of the things that goes along with tenure is a certain freedom and flexibility, and with those a certain responsibility, to take the lead on things that one's junior colleagues cannot. Bravo to those of my colleagues who have really stepped up to the plate over the past couple of weeks.)
One of the nice things about staying home is that I can try my damnedest to do my part in helping the Yankees win in Anaheim tonight, and thus advance to the ALCS against the White Sox. What's my part? Well, like any fanatic, I believe (with the completely a-rational part of my brain) that by dressing the part I can help to pull my team to victory. I mean, why else would they sell all that merchandise? It's for focusing the sending of positive energy to the guys on the field, right? ;-) So my wife and I are dressed to the hilt in Yankees gear today; I'm typing in a Mike Mussina jersey (Moose pitches tonight, and he's one of my favorite modern Yankees and favorite modern pitchers, so my clothing choice is completely overdetermined today), ball-cap firmly on head, while my wife puts away groceries while wearing a "2003 AL Champions" t-shirt and a blue Yankees hooded sweatshirt.
Yes, even professors (perhaps especially professors) have their irrational streaks.
And the worst part about the whole thing is that we're not even really in the baseball season anymore. It's the "post-season" now, and the modern post-season is nothing but an emotionally manipulative carnival in which our loyalty to particular teams, nurtured and tested and developed over the course of a long season, is mercilessly exploited for a series of contests that bear very little resemblance at all to regular-season baseball. I mean, the regular season is long enough that random fluctuations, by and large, get filtered out, but the post-season is so short and each individual game matters so much that a pair of errors and a bad start by a pitcher can pretty much doom you.
If regular-season baseball is a marathon, the post-season is a long-distance sprint. But a fan can't not watch, can't not care how her or his team is doing; team loyalty doesn't just turn on or turn off as the conditions of the game change. So even though rationally, intellectually, I know full well that the post-season is basically a crap shoot and the best team rarely actually wins in the end, it'd still be terrible to be eliminated by the Angels and not go on to play the White Sox -- and ultimately to play in the World Series. (Because, you know, I'm somehow a part of whatever happens to the team. So "we" are playing in Anaheim tonight, and "we" will go on to play in Chicago tomorrow, and so on…)
I feel emotionally exploited, but you can bet I'll be glued to the television tonight. First pitch, 8:19 EST.
5.10.05
Bang-ZOOM go the fireworks
At the end of each home game that the Washington Nationals won this year, fireworks were set off at RFK Stadium. Charlie Slowes, one of the Nationals' radio broadcasters, invariably let loose with a dramatic "bang-ZOOM go the fireworks!" call, which just punctuated the excitement of the Nats having emerged victorious.
I feel like setting off some fireworks and having that call applied to me right now, as I just performed the academic equivalent of pitching a perfect game. It was one of those presentations where everything just worked, from the tech accompaniment to the delivery to the audience response. I imagine it's what a pitcher feels while throwing a perfect game, when all the pitches are just working and you can't seem to make a mistake: the curveball curves, the cut fastball darts, the change-up fools batters, and that marginal pitch right on the outside corner of the plate? You get the call, and the batter strikes out.
Obviously, a presentation isn't precisely the same as pitching a baseball game. It's less explicitly oppositional, for one thing; the goal is to reach the audience, not to retire them in order. But there's a similar sense of "being in the zone" when all the pitches are working, so that when questions (which might be thought of as swings of the bat) are tendered, they aren't anything that derails you.
I've only given presentations like that very rarely. One of the most memorable was my job talk back in November 1999, when I was campaigning for the job that I currently hold; another was the talk I gave to the Council on Comparative Studies about "civilizations" maybe four years ago. In all of these cases, once I start into the presentation, it's as if something other than myself takes control of the performance, and I'm just sitting back and watching what happens, as amazed as anyone else at how well things are working. It's an experience of being in form, performing well enough that everything seems both inevitable and wholly contingent at the same time: it felt as though I could have done anything and it would have worked out just as well, as though it had been destined all along.
In that way I can't completely claim credit for the performance. Yes, I practice presenting; yes, I prepared this talk; yes, I went out there ready to give it my all. But the results -- those I feel like I can only be grateful for, grateful that everything went so well and that I was able to deliver a compelling performance.
And there were Apple people in the audience, several of whom said that they enjoyed what I did and that they were interested in having me do it again in the future. Me? Pitch for Apple? Hell yes. Sign me up. No compunctions whatsoever -- Apple is a company and obviously is interested in selling products and making profits, but I like what they produce and I'm more than happy to tell others and to show them what I have been able to do with Apple hardware and software. If my telling people produces more sales for the company, so be it. And it's not like pitching in that way would require me to make any compromises or to alter the content of what I'm doing, so why the heck not?
I'm not sure how well my perfect game translated to audio, but you can (if you're interested) download the podcast I recorded during the event here.
[Posted with ecto]
7.9.05
It's a marathon, not a sprint
At the risk of being accused of stretching my metaphors a bit -- and what metaphor isn't stretched when applied? isn't that the very character of metaphors, which by definition transfer elements from one domain to another? -- it seems to me that the course of a semester is in many ways like that marvel of modern athleticism, the regular season of organized professional Major League baseball. One always hears that the regular season is not a sprint (at least, not until the last few weeks if division races are tight), but a marathon; the test of a good baseball team is not whether it can win a few games, even a few games in a row, but whether it can consistently perform at a high level over the course of a grueling six-month season and a schedule of 162 games.
Why is the semester like this? Because the true test of whether you're doing a good job is not if one particular day or one particular week goes well, but if the effect of the whole achieves the desired level of engagement. In the end it is the long term that matters far more than any individual session or encounter.
One can think of the regular season of Major League baseball as, in effect if not in deliberate design, a giant machine for discounting random fluctuations.[1] Yes, a team may win any given game because of a bad call by an umpire or a ball taking a bad hop in front of the shortstop which leads to a two-run error, but over the course of the entire season such random happenstances will cancel one another out, and we will be left with some teams having a record better than others -- and we can say with a high level of confidence that they have a better record because they simply played the game better over the course of the season. The same cannot be said of sports with absurdly short regular seasons, or of competitions where the winner is basically chosen at random.
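To see the filtering at work, here's a quick Monte Carlo sketch in Python. The "true" winning percentages of .550 and .450 are made-up numbers for illustration, not real teams: over 162 games the genuinely better team almost always finishes ahead in the standings, while over a seven-game stretch the outcome is close to a coin flip.

```python
import random

def wins(true_pct, games):
    """Simulate a stretch of games as independent coin flips."""
    return sum(random.random() < true_pct for _ in range(games))

def better_team_finishes_ahead(games, trials=20_000):
    """Share of trials in which a .550 team out-wins a .450 team."""
    ahead = sum(wins(0.550, games) > wins(0.450, games) for _ in range(trials))
    return ahead / trials

print(better_team_finishes_ahead(162))  # roughly 0.96
print(better_team_finishes_ahead(7))    # only a little over 0.5
```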
Now, classroom teaching is not competitive in the same way. One does not accumulate a record of success by defeating other classes; instead, one simply tries to go out and have a good season by performing well, and there's no reason why every class can't be succeeding at once. In a way, I think of it as playing against myself, or against other classes that I've taught -- can this semester of this course go in the record-books as roughly comparable to other instances? Is the experience rewarding for all concerned? Could it be better -- not necessarily more enjoyable in the short-term, which is a marketing question in which I am generally not interested, but richer, thicker, more challenging and generally more complex?
The semester is also like regular-season baseball in that the early days of both are largely about figuring out what you have to work with. In my classes this semester I tend to oscillate between the "pitching" and "batting" roles; I pitch when I lecture, and I bat when I am facilitating a discussion and basically taking critical swings at whatever someone chooses to toss out in an effort to get them to specify and defend their position a bit better. I find that I spend the first couple of class sessions just trying to get the feel of the class, to see who's a "free swinger" willing to take a crack at whatever I toss out there, who has "perfect-pitch-itis" and refrains from speaking up until they think that they can make the most exquisite point, who's aggressive, who's cautious, and so forth. Once we start to develop some sense of one another, it becomes easier for us to engage in a good discussion or other positive pedagogical interactions. But first we have to get through the initial couple of weeks.
Thank goodness it's a marathon.
[1] Don't get me started on how the scheduling in contemporary Major League Baseball is completely messed up and unbalanced. Largely because of interleague play, each team faces a different combination of opponents within a given year, and faces them under different conditions -- one can't be assured of having the same number of games against each opponent as the other members of one's division or league, and certainly can't be assured of having an equal number of games against each opponent at home and away. So in a certain sense things aren't fair by design . . . but it's still better in MLB than in other sports.
6.9.05
Masterworks blogs
As promised, here are the urls for the group blogs for this semester's "Masterworks" course:
- A Bookish Affair
- Evening Erudition
- Fight Blog
- H.M.S. Blogty
- Masterworks of International Relations Group Blog
[Posted with ecto]
Traditions
Amidst the continuing stories of tragedy and survival emanating from the Gulf Coast, this article in this morning's Washington Post caught my eye: Ursinus College, a small liberal arts school in Pennsylvania, has turned the higher education clock back by requiring all freshmen to take a common philosophy and literature course that covers what we might think of as the Usual Suspects in the Western Canon: the book of Genesis, Plato, Descartes, Marx, Darwin, Nietzsche, and the like.
I think that this is a good move, and I wish that more colleges and universities would adopt requirements like this. I say this not because I am any great believer in the integrity of the Western Canon -- especially since the best historical research shows us that the precise contours of what counts as a "canonical" work varies widely over time, sometimes fluctuating wildly from decade to decade. Nor am I convinced of the "enduring value" or "timeless wisdom" contained in any of the works that one commonly finds on lists for courses like this; I'm particularly skeptical that Plato or Aristotle have much of substance to teach us, given the differences between their worlds/ways of life and ours.
Instead, I'm in support of this kind of a required course for a few different reasons. The first is basically autobiographical: I taught Contemporary Civilization as part of Columbia University's Core Program for two years, and it was, if not the absolute best course I have ever taught, certainly very close to the top of the list. CC met twice a week, two hours per session, for an entire year, and we read a series of complex and challenging works running up and down and sometimes outside of the traditional list of dead white guys. And that stuff is fun to teach, so much fun that my "Masterworks of International Relations" syllabus looks in no small way like my old CC syllabus. Spend a week on the mechanics of voting at the UN or spend a week reading and discussing Kant's proposal for a perpetual peace? Kant wins, hands-down.
[Full disclosure: I interviewed at Ursinus six years ago, and at that time some of the faculty were just beginning to put a new course together. One of the things I liked the most about the place was that, in fact, they were heading in that direction and that I'd be able to teach in their new program, and basically keep teaching CC…but I didn't get that job, and the rest is history.]
Second, there's the historical dimension. The "Western Civ" course played certain specialized functions in its heyday (roughly 1920-1965, give or take), and the disappearance of that course as a required element of college and university education has made it harder to fulfill those functions. I'll highlight two functions: providing a common vocabulary, and socializing students into a multi-generational conversation. The "Western Civ" course fulfilled these functions more or less by design: it started as the "War Issues" course at Columbia during the First World War, transmuted into the "Peace Issues" course afterwards, and then spread throughout American higher education with more of an emphasis on a set of classical primary-source readings, but the basic point remained pretty much the same -- to bring students into dialogue with the stuff that, for better or for worse (probably both), formed the intellectual foundation of the world in which they lived. Like it or not, we live in a world that has been produced by people who read and discussed and thought in terms promulgated by a lot of white European guys, and if one wants to understand that world one has to read their writings. American politics is basically incomprehensible without Locke; debates about evolution are incomprehensible without Descartes and Darwin; discussions of religion in public life effectively require familiarity with Mill, Nietzsche, Machiavelli, and so on.
Of course, people carry on conversations and debates in ignorance of this heritage all the time. The result, whether one agrees with it or not, is -- I'd submit -- thinner, poorer, and uglier. And the absence of a set of common philosophical and literary references makes it harder to articulate a robust conception of American identity; if everybody has their own heroes and their own gods, what becomes of the whole?
Now, I am in no way advocating that we only read "the classics" or that we uncritically accept the ideas that may be found in any such set of readings. But I am suggesting that it's important that we know where we came from, and that the critical conversations that we have be reliably able to depart from a common set of themes, tropes, commonplaces, images, and so forth. Struggling about the precise meaning of that common core is what a robust political and social life ought to be about -- and we can't have that unless we first have the basic set of stuff to argue about.
Third, although I don't think that we should be designing required readings because of any "timeless wisdom" that they supposedly contain, I do think that there's something to be said for thinking through issues in the company of people like Hobbes and Thucydides and Kierkegaard and Freud. It's not that they "got it right," but instead that they spent a lot of time working through to some sort of a stance on some of those perennial issues that keep coming up again and again throughout the course of recorded human history. [There may be a causal loop at play here, of course: perhaps part of the reason that those issues keep surfacing is that people have read certain works and engaged with them in trying to order their experiences, and then their records of those experiences become part of the canon, which people read subsequently, etc.…] I can think of no better way to wrestle with some thorny issue than by discussing it, either with live persons or with dead ones whom you only know through books -- but you can't use the book route unless you have been exposed to the books, and exposed to them in their proper historical and intellectual context. Which is what a required college course on the Columbia/Ursinus model provides.
For example: Augustine's central problem in his magisterial Concerning the City of God Against the Pagans is the question of how God could have permitted Rome to fall -- how such a magnificent city's ruin at the hands of barbarians could be squared with the notion of an infinitely good, just, and compassionate supreme ruler of the universe. Contemporary relevance, anyone? Augustine is the perfect source for trying to work through the theological or cosmic meaning of a disaster, to try to put it all into some kind of context -- even if one whole-heartedly rejects Augustine's conclusions. The point is that engaging what he is doing in the text, confronting his arguments and responding to his logic, helps to develop the habit of mind that can come to some kind of resolution about big incomprehensible things.
Enduring value? In that sense, you betcha. Worth the possible grumbling of freshmen seeking more choice and more control over their educations from the get-go? I think so. And, if these weren't enough, think also of the benefits for higher-level college courses, when one can presume a familiarity with certain works and authors and themes, the better to appreciate the critiques of them that are advanced by more contemporary authors…
Yes, I'm a CC junkie. And I think that you should be, too.
[cross-posted on Duck of Minerva]
2.9.05
Courses and classes
Ah, the start of a new semester -- immediately interrupted, as happens almost every Fall, by the annual American Political Science Association conference, which invariably happens over Labor Day Weekend. This year it's a bit odd, as the conference is in downtown DC; since I live here, I am both serving as a local agent for arranging dinners and such and splitting my time between conferencing (yes, it's a verb, and it indicates a whole different form of life than that ordinarily found outside of professional conferences -- more on that in another entry, if I feel so inclined) and being at home to do family things like put the kids to bed.
Besides the general craziness of early September, I am always struck at this time of year how different a course can be from iteration to iteration. I distinguish between courses and classes: courses are defined by a number in the catalog, a syllabus, and a title, and perhaps by a basic core set of readings and activities. [Precisely how many of the core readings and activities can change before the course transmutes into another course is a matter of some dispute, kind of like Wittgenstein's question in Philosophical Investigations about how many houses it takes to make something a "city" rather than a "town" -- the answer, of course, is that neither term has an absolute and fixed meaning, so that any attempt to answer the question takes its bearings not from a determinate distinction but from the local rules of the language-game governing the use of the terms in practice. In other words: it's a new course when I say it's a new course, and when I can substantiate that claim within some given language-game and convince other participants to accept my characterization.]
Classes, by contrast, are the collections of people who participate in the course when it is offered; a particular class is a particular group of people, including myself, who undertake the semester-long journey that is shaped and structured by the course. But not determined: I can promote the same discussions, assign the same readings, in a course year after year, but the result is somewhat different for each class. I traditionally do the "what do you know?" drill at the beginning of my 206 course, but each class reacts differently and we generate a slightly different conversation every time. The conversations -- like the classes as a whole -- bear a family resemblance to one another, but each takes unique twists and turns and displays a different character.
It's important to keep in mind that classes are different not merely because the students (and my assistant(s), if any are involved in the course that semester) are different, but because I am also different from semester to semester. What I stress, what I downplay, what waves I send out into the shared pedagogical space of the classroom are never quite the same from instance to instance, even though I work from a similar script (and often a set of slides, when I am lecturing) each time. I regard the slides as sheet music, though, and I improvise around them, playing off the students in the class just as they play off of me. And class discussions are even wilder, since we may read the same words on the page in, say, Hobbes or Thucydides, but I have no idea where we are going to go from semester to semester in our collective consideration of that material. I'm always surprised, which is part of the fun of the whole exercise.
This semester I have two courses running, one of which (research methodology) I've taught many, many times before, and the other of which ("Masterworks," which is basically a political philosophy of international relations course) I've only taught once before. But I have no clearer idea of where the former is going than I do of the latter.
If I ever knew precisely what a class was going to do, I'd think that it was time to ditch the course or radically rework it -- largely because I'd have no idea how to participate in such a class. A good course permits and furthers a kind of joint action conversational process, a distinguishing characteristic of which is that none of us individually have complete control over it. In that sense, a certain amount of ignorance -- or, perhaps, humility -- on my part opens the space for a much more interesting class to take root and flower.
Since this is a "course diary," expect notes from the road as we find ourselves meandering along it.
29.8.05
And so it begins, again
Today is the first day of school for a lot of people, including for me. Running this morning was odd, in that I kept passing groups of kids standing on corners with backpacks waiting for buses; after a summer of not seeing anyone around in the early morning hours, it felt a bit strange. Plus, a gaggle of kids were blocking one of my usual paths, so I had to cross the street and get a fresh perspective on the trees and houses that I've become used to seeing in a particular way almost every time I run.
This morning I ran my A route, which features lots of varied terrain and a curvy course, in and out of side-streets and up and down parts of the neighborhood. (Technically, I ran A3, which cuts back at the end instead of making the long loop of A2 or tackling the major hill of A1, but the A routes are the same for most of the way…and you have no idea what I'm talking about, so I'll stop blabbering.) About two-thirds of the way through the A route is the local elementary school, whose grounds I traditionally cut through during my circuit: down the stairs, around the building, past the construction site where they're building what looks like a new wing to replace the temporary trailers and huts that used to house the overflow students, and back onto the street. They've been at work on that new wing all summer, and running past it every couple of days (I pass it when I run the B route too, only backwards) I have been watching them work for several months. Last week I noticed that there were more workers around, seemingly making a last push to get the thing ready for the opening of the new school year -- as though someone had suddenly looked at the calendar and realized "oh, crap, school starts in a week and we aren't done with this fool thing yet!"
So I wasn't sure what to expect this morning: would they have finished? Would the new wing be ready to hold the expected teeming throngs of students?
Nope. The area was cordoned off with an impromptu fence and some thick ribbon, and fewer people than last week were at work, apparently continuing to dig out part of the foundation. The outline of the new wing was set on the ground in concrete, and there were some structural elements in place -- a few beams, part of a wall. Piles of materials were stacked neatly, but obviously the construction hadn't gotten to the point where those elements could be used quite yet. There was potential, but the summer had just run out on the project, leaving the workers scrambling…or perhaps simply resigning themselves to the fact that the work would take a little longer than initially expected.
I'm not sure where the overflow students will go. Neither of my kids attends that elementary school, so I'm not in the know about the planning. But the school officials apparently have some improvising to do in order to keep things together.
I know how they feel.
Welcome back.
[Posted with ecto]
18.7.05
That whole public intellectual thing
This afternoon I was interviewed by ARD German television for a news spot of some kind. Like most news organizations and savvy observers, they expect the CDU to win the German elections in the fall, which will make Angela Merkel the new German chancellor. The folks from ARD wanted to know whether I thought that Merkel's election would change things in US-German relations; I told them no, because the issues that have cropped up in US-German relations have more to do with a general reconfiguration of transatlantic relations in the past couple of decades than they have to do with any particular individual. The shift in US foreign policy/grand strategy from collective solidarity to "coalitions of the willing" and from multilateral negotiations to unilateral assertions is not just a function of the Bush Administration, or of Bush not getting along on a personal level with Gerhard Schroeder. It has to do with a more fundamental reorientation of how the United States exists in the world, and with the implications that this reorientation has for Europe's role in official US thinking. No one person is going to either hinder or single-handedly advance this shift.
But I don't want to talk about that substantive argument here. Instead, what is on my mind after the interview is a more searching question:
What is the point of doing interviews like this?
Obviously, the point can't be the same as the point of doing serious and systematic scholarship. There isn't the time to produce an analysis of the proper nuance; the audience constraints are different; and the format isn't conducive to really laying out a complex analysis or engaging in debate about the particulars. But that's what we academics do, as a rule; it's our job to produce theoretically sophisticated analyses of things by consistently applying analytical frameworks. It's Weberian science: the systematic application of theoretical apparatuses to data to generate "facts." But that's not what can be done during a television interview -- nuance and qualification are pretty much swept away in favor of blunter assertions.
So what are we academics doing when we give television interviews? My fear is that we're perhaps unintentionally trafficking on a myth, the myth of Answers: the idea that we academic analysts have absolutely secure bases on which to advance interpretations and make predictions. Speaking for myself, I know that I certainly do not have any such Answers; if I am asked (as I was today) what effect the election of Angela Merkel will have on US-German relations, all I can do is to offer a more or less well-grounded perspective on the issue. And by definition, that perspective is contestable -- something of which academics are well aware. I'm not sure how much of that awareness of contestability makes its way out into the general viewing public.
Should this bother us? I think it should, for one major reason: if people start thinking that we academic analysts have Answers, then we are placed into the position of having to make political pronouncements to the potential detriment of our scholarly integrity. Also, we could simply be wrong, but if people forget that, they might end up simply implementing policies based on a consensus and ignoring potential problems with that consensus as a whole. [Actually, in the current political climate, consensus among academics doesn't seem to have much of any effect on governmental policies, but I'm not convinced that the answer to this problem is to become more willing to make policy pronouncements -- since doing so would exacerbate the problem by further propounding the myth of Answers.]
So I feel a bit caught between Scylla and Charybdis here: act like a talking head and contribute bits of "erudition" to the popular media when they come calling, or retreat completely into the world of abstruse scholarship. Neither solution seems completely correct or comfortable.
People toss around terms like "public intellectual" sometimes when I pose this problem to them, but I can't for the life of me figure out what that actually means. Public debates aren't ever as subtle and as rigorous as academic debates, and in order to participate in such discussions one necessarily has to alter one's style of presentation and play by a different set of rules. Trying to elevate the level of public discussion strikes me as a losing proposition, given the various demands on people's attention with which one has to compete, and the differences in vocabulary and conceptual tools that exist between the sphere of academia and the public square. And since I'm not convinced that an intellectual discussion is actually going to lead to Answers anyway, I'm not entirely certain what the purpose of trying to raise the level of public debate would be -- to give partisans better weapons to fight with?
I don't know what a public intellectual is, and I don't know what it would mean to be one in the present media and political environment. Are bloggers public intellectuals? Are this blog and others like it the proper venue for academics trying to debate issues, or is this just a form of "infotainment" like the episode of the History Channel show Deep Sea Detectives that I was interviewed for a couple of months back -- fun, a little informative, but not the sort of thing that anyone would or should mistake for the actual practice of historical scholarship? [In my more pessimistic moods I wonder whether the general public does in fact conflate The History Channel and the actual practice of historical scholarship.]
I'm not sure, but I thought I'd throw the issue out there for discussion.
[cross-posted at Duck of Minerva]
4.6.05
(Baseball) Statistics Never Lie
[cross-posted at The Duck of Minerva]
One of the things that I find the most fascinating about the game of baseball is the fact that statistical data about the performance of players and teams is actually meaningful. This is so largely because the kinds of things being measured -- whether a team wins or loses, how many balls and strikes a pitcher throws, what percentage of the time a player gets on base as opposed to making an out -- involve a repetition of the same basic actions a sufficient number of times that random fluctuations cancel out. Players do the same basic things enough times that over the course of a season their ability to do those things (like get the bat on the ball, throw a strike, and so forth) will be reflected in their numbers.
For example, every "plate appearance" that a batter has over the course of a season is roughly similar to every other plate appearance in its basic contours, and over the course of a 162-game season the average player can expect to come to the plate about 500 times -- a sufficiently "large n" that saying that a batter has an on-base percentage of .446 and a slugging percentage of .536 is a meaningful statement.1 Contrast this to "football statistics," which are based on a regular season of only 16 games; players rarely get sufficient chances to do things like catch passes and rush for yardage to make meaningful quantitative comparisons possible. This doesn't stop people from making those comparisons, and playing "fantasy football" based on them, but I'll keep my statistics operating in realms where they make some sense, thank you very much.
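For concreteness, here is what those two statistics actually compute, sketched in Python; the stat line in the example is invented for illustration and is not any actual player's numbers.

```python
def on_base_percentage(hits, walks, hbp, at_bats, sac_flies):
    """OBP: times on base (H + BB + HBP) over plate appearances that count (AB + BB + HBP + SF)."""
    return (hits + walks + hbp) / (at_bats + walks + hbp + sac_flies)

def slugging_percentage(singles, doubles, triples, homers, at_bats):
    """SLG: total bases divided by at-bats."""
    total_bases = singles + 2 * doubles + 3 * triples + 4 * homers
    return total_bases / at_bats

# An invented stat line, purely for illustration:
print(f"OBP: {on_base_percentage(hits=48, walks=35, hbp=3, at_bats=160, sac_flies=2):.3f}")
print(f"SLG: {slugging_percentage(singles=28, doubles=12, triples=1, homers=7, at_bats=160):.3f}")
```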
The fact that baseball statistics are meaningful allows observers of the game to conduct very precise analyses of how well their teams and players are doing. Just for kicks, this morning I plugged some numbers into a spreadsheet to make a rudimentary calculation not about which teams were doing the best in terms of wins and losses -- that information is readily available in any major newspaper, and all over the web (for instance, here) -- but about which teams were performing most efficiently. I took information about the 2005 payrolls of all 30 major league teams from this site, and had Excel calculate approximately how much money each team was paying for each of the wins it had thus far achieved this season.2
The results are interesting, although I won't bore you with all of the details. The important results are these:
- the Yankees have the most expensive wins, at $2,571,689.10 per win; the Devil Rays have the cheapest, at $498,447.13. This is not a major surprise, since the Yankees' overall payroll is about seven times as large as the Devil Rays' payroll.
- what is surprising is that the Yankees are paying almost twice as much for a win as the next team in the list, the Boston Red Sox. And the Red Sox are doing better than the Yankees in the overall win-loss standings.
- of the teams whose winning percentage is .500 or greater, the three teams paying the least for their wins thus far this season are the Toronto Blue Jays ($535,243.19), the Washington Nationals ($568,748.94), and the Minnesota Twins ($574,432.48). The payrolls for all three teams are in the bottom third of all major league teams.
The analysis also demonstrates, pretty concretely, that simply spending money on a baseball team doesn't guarantee you success. You also have to use that salary efficiently, and get sufficient bang for your buck. The Yankees are spending about $208 million and thus far have a 27-27 win-loss record; the Nationals' total payroll is about $48.5 million, and their record is 29-26. Put another way, the Nationals are spending 1.171% of their salary for each win, while the Yankees are spending 1.235%; the difference may not look like much, but over a 162-game season, minor variations between teams and players translate into major differences.
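For those who would rather see the arithmetic than the spreadsheet, here is footnote 2's calculation as a few lines of Python, using the rounded payroll and win figures quoted above -- which is why the output won't exactly match the per-win prices and percentages given in the post.

```python
# Rounded figures quoted above: (team, approximate 2005 payroll, wins so far)
teams = [
    ("Yankees",   208_000_000, 27),
    ("Nationals",  48_500_000, 29),
]

FRACTION_OF_SEASON = 1 / 3  # roughly a third of the 162-game schedule played so far

for name, payroll, wins in teams:
    price_per_win = payroll * FRACTION_OF_SEASON / wins  # footnote 2's "price per win"
    pct_of_payroll = price_per_win / payroll * 100       # salary share paid per win
    print(f"{name}: ${price_per_win:,.2f} per win ({pct_of_payroll:.3f}% of payroll)")
```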
The fascinating thing is that in baseball we can determine precisely how much difference these things make. I'd be very resistant to running numbers like this in most other situations, but in baseball, bring on the quantitative analysis!
1 On-base percentage measures how often a batter reaches base successfully, whether through getting a hit or drawing a walk or being hit by a pitch; slugging percentage measures the total number of bases that a player reaches in all of his at bats; details on basic baseball stats can be found here. The specific numbers that I used for this example are Nick Johnson's stats for the present season. The "500 at-bats" figure is approximately the minimum number of at-bats required to qualify for a batting title under the present 162-game regular-season schedule.
2 We're approximately 1/3 of the way through the season at the moment, so if we divide each team's salary by 3 and then divide that number by the number of wins, we get the effective "price per win" that each team is paying -- note that this does not include the salaries for managers, coaches, etc., but is limited to the combined salaries of all the players on the team's payroll.
28.5.05
"Momentum"
Posted some of my thoughts on the role of prediction in social life, through an extended baseball metaphor, over at Duck of Minerva -- the latest collaborative project between myself and Dan Nexon, plus other friends/colleagues. That, plus Progressive Commons, is where I'll be posting substantive things in the future. I want to reserve this blog for daily course diary stuff, as the name would imply, although I'll also post links here to other, more substantive things that I post elsewhere.
On and on and on the blogosphere grows…
[Posted with ecto]
27.5.05
Returns
Since this is a "course diary" blog, I suppose that I should be better about posting course diary kinds of things on it. And I will try to do so more in the future.
On the other hand, I do teach a science fiction and world politics class, so I guess that longish reflections on Star Wars (even if I don't think it's science fiction, exactly) sort of fit the bill.
In any event, next week, I'll try to return to making this blog be about the goings-on in my courses. I have other spaces for other kinds of reflections.
[Posted with ecto]
The turn of a friendly Card
I really like a lot of Orson Scott Card's books. Ender's Game and Speaker for the Dead remain firmly on my list of Best Science Fiction Novels Ever Written, and Card's "Hot Sleep" stories and novellas (eventually gathered up into The Worthing Saga) were very formative for my own adolescent musings about the value of suffering, the nature of freedom, and the like. But things like this -- Card's review of the new Star Wars film -- really help to convince me that he's completely gone off the deep end, especially when read alongside the increasingly bizarre novels he's turning out these days. It seems that Card has decided to use his popularity among science fiction aficionados to evangelize for a particular and peculiar form of Christianity, even and perhaps especially when that evangelism requires him to trash other popular works. Often on very silly grounds.
Where to begin to rebut Card's inaccuracies? Well, for one thing, he suggests that Star Wars is "manichaean…evil is simply another way of using the Force. Only not as nice." There are two problems here: first, as Dan Nexon already pointed out over on his blog, the essence of a Manichaean position is that good and evil are distinct substances with distinct essences. The orthodox Christian position that opposes this, most commonly associated with Augustine of Hippo (who, unlike any of us, actually was a follower of Mani for a portion of his life), is that evil is not a positive substance, but simply a privation or corruption of good. Hence evil beings are fallen or sinful as opposed to simply and intrinsically evil, and as such can be redeemed…which is the plot of both Star Wars and of the New Testament. Evil and good being linked, connected, interwoven such that people can be tempted by evil even when trying to do good: such things cannot happen in a Manichaean universe where good is good, evil is evil, and never the twain shall meet.
Second, if there's a Manichaean in Card's piece, it's Card himself, inasmuch as he keeps insisting on an absolute difference between good and evil, and keeps looking for the kind of clearly-defined clash that would allow him to comfortably stand on the side of Right. Such populist Manichaeanism might be thought of as the peculiar heresy of (much of) the Christian religious right, much as populist Arianism is the peculiar heresy of (much of) the Christian religious left. [Arianism was the belief that the Son, Jesus Christ, was lesser than the Father rather than being co-equal with the Father in substance; in practice, this meant that Arians held that Jesus Christ was a divinely inspired human being rather than being himself divine. Many left-leaning Christian churches these days make a similar (heretical) mistake, essentially relating to Jesus Christ as a good man with good things to say, kind of like Gandhi or Martin Luther King Jr. Regardless of the fact that this is a theological heresy, it is the common currency of many "liberal Christians," informing their daily practices in profound ways.] Manichaeanism is the popular heresy that informs many conservative Christians' daily practices, and leads to an abandonment of humility before God in favor of a rather smug sense of being In The Right and thus empowered to vanquish Evil.
In effect: there's Good, there's Evil, and we aren't the latter, which makes us the former. Oh, and God is on our side, and will eventually vanquish the servants of Evil, or at least make sure that they get what's coming to them, as their immortal souls burn in Hell. Regardless of its theological status as a Christian heresy, such popular dualism makes up the common currency of a lot of everyday conservative Christianity. As Nietzsche pointed out in The Genealogy of Morals, this idea of a war between equally-matched good and evil forces allowed those who couldn't take revenge on their foes in real life to content themselves with the knowledge that they would be avenged in the afterlife. It's Christianity without forgiveness and redemption, in which the believer arrogates to her- or himself the authority to determine the final disposition of immortal souls -- something Card, not surprisingly, does when he questions Anakin's redemption and appearance with Yoda and Obi-Wan at the end of Return of the Jedi. We know the Good, this popular Manichaeanism declares, and we reject the Evil; we are pure, they are fallen, we are saved, they are damned.
This is precisely the error that the Jedi make throughout the first three Star Wars films -- the error that leads to their downfall.
Card, like a number of commentators, seems to have missed the fact that the six-part Star Wars saga is not about good Jedi vs. evil Sith, but about a decaying Jedi order that is swept away by the (admittedly, evil) Sith. Yes, some Jedi say that they are "good," but since when do we regard lines spoken by a character as the author's point? If the author is present in the text, she or he is present in the whole course of events that make up the plot, and we have to take that into account. Regarding something that one Jedi says to be the point of the saga would be like misreading the comments of the Athenian representative during the Melian dialogue ("the strong do what they can, and the weak suffer what they must") as though those comments represented Thucydides' whole argument, which would be a major mistake. And when that Jedi is Anakin Skywalker, who is being seduced by the Dark Side of the Force throughout many of the films, we have to take the declaration with an even bigger grain of salt.
Let's look at this a bit more closely. The Jedi order as it is portrayed in the prequel trilogy (Episodes I, II, and III) has all the characteristics of a mature, complacent bureaucracy that has virtually eliminated mystery in favor of administration. Originally established to study the Great Mystery that is the Force, the Jedi by the time we meet them in The Phantom Menace have been reduced to doing a few simple techno-magic tricks as instruments of the Chancellor of the Republic; they also have a clear internal hierarchy and a rigid Code to which adherents have to conform. Non-conformists like Qui-Gon Jinn aren't on the Council precisely because they won't conform, and it's no accident that Qui-Gon finds the boy Anakin and starts to appreciate what he represents -- unlike most of the other Jedi, Qui-Gon is still open to wonder and awe (even if that wonder is sometimes expressed in the techno-instrumentalist language of "midi-chlorian counts" and the like). The other Jedi can't place Anakin in their worldview very easily, since he doesn't fit the system (too old, too passionate, too powerful). We are not looking at a Jedi order at its height, but a Jedi order that is stultified and narrow -- and one that has virtually absented itself from the world, cloistering itself away from politics and ordinary people in a vain effort to keep itself pure from the temptations of the Dark Side.
And what are those temptations? The Sith and the Jedi differ largely in that the Sith embrace the struggle for ever greater power as a necessary component of political action, while the Jedi as portrayed in the prequels have somehow managed to delude themselves (at least organizationally, if not in every specific individual instance) into thinking that power struggles can be avoided by a monastic withdrawal from the world. Where the Sith and the Jedi do not differ is in their insistence that their actions are ultimately justified by a higher transcendental principle; the Jedi have their Code, and the Sith have their endless quest to prolong life (whether their own, or the lives of others). And it is this essential point of similarity that makes individual Jedi, like Anakin, corruptible: Palpatine is able to skillfully play on Anakin's desires and goals by promising him the power that he thinks he needs in order to achieve those goals, and once one starts down the path of acquiring more and more power the means to the goal becomes a goal in itself. [There's a very clever visual reference near the end of Revenge of the Sith before Anakin and Obi-Wan fight; the camera does a close-up on Anakin's face, but uses the extreme depth-of-field camera technique most famously used by Orson Welles in Citizen Kane to keep both Anakin and Obi-Wan (who is standing in the background) in focus simultaneously; Citizen Kane is, of course, a film about precisely this kind of corruption of a young idealist who becomes so wrapped up in the process of acquiring power that he forgets his original aims.]
Card is quite mistaken when he equates Palpatine's line "Good is a point of view" with Obi-Wan's later insistence that "only a Sith deals in absolutes." Palpatine's line plays on Anakin's idealism and helps to turn him to the position that anything is justified in pursuit of Anakin's selfless, idealistic goal of saving Padmé's life -- power is required for that end, as it is for the ending of the war and the bringing of peace and security, and Palpatine is offering Anakin a means to achieve those goals which purports to be much more effective than anything that the Jedi are offering him. If Anakin didn't already deal in absolutes, then Palpatine's offer wouldn't be so tempting, as the means offered would have to be weighed on their merits instead of being ultimately justified by the goodness of the goal towards which they point and the purity of the motives with which the wielder deploys them. The Dark Side of the Force is the temptation to believe that one's means are ultimately justified, that one is ultimately good and pure and right, regardless of what one does. (Sith don't appear to believe themselves evil; Anakin believes he is advancing the causes of peace and security, and Palpatine seems mostly concerned with prolonging life -- mainly his own, but still, he's not a pulp fiction villain who appears to love evil for its own sake. This is further evidence of how little Star Wars is Manichaean, since no one really thinks of themselves as "evil.")
Obi-Wan's declaration is the last best defense against this kind of Dark Side corruption, a defense that unfortunately the Jedi order didn't and couldn't accept. After all, when the Jedi do decide to re-enter the world and try to affect political change, the best they can come up with is "kill Palpatine and take over the Republic." Which would be in many ways just as bad as the Empire, featuring the same absolutist logic and the same smug purity displayed by the Sith.
I could go on. I probably will at some later point. But for now I think I've said enough. Although Card is correct that the Jedi order as seen in the prequels is not a model worth emulating, he seems to entirely miss that the whole point of the Star Wars saga is to show that the old model of being a Jedi wasn't up to the challenge of the Sith, and that something new was required. That new thing, I think, was faith: Luke, not being constrained by the old bureaucratic manner of training and thus lacking the smug confidence of the old Jedi, is able to prevail over the Sith by paradoxically refusing to eliminate them himself and leaving the ultimate disposition of things in the hands of the Force. The message of Star Wars is not "be a Jedi according to the old model." The message is "avoid the Dark Side temptation of thinking that you are pure and holy and justified; do not place overmuch confidence in your knowledge and your ethics and your traditions; have faith, like Luke did, and let the Force work as it will."
All of which sounds intimately Christian to me. I have no idea what Card's been smoking.
[Posted with ecto]
Where to begin to rebut Card's inaccuracies? Well, for one thing, he suggests that Star Wars is "manichaean…evil is simply another way of using the Force. Only not as nice." There are two problems here: first, as Dan Nexon already pointed out over on his blog, the essence of a Manichaean position is that good and evil are distinct substances with distinct essences. The orthodox Christian position that opposes this, most commonly associated with Augustine of Hippo (who, unlike any of us, actually was a follower of Mani for a portion of his life), is that evil is not a positive substance, but simply a privation or corruption of good. Hence evil beings are fallen or sinful as opposed to simply and intrinsically evil, and as such can be redeemed…which is the plot of both Star Wars and the New Testament. Evil and good being linked, connected, interwoven such that people can be tempted by evil even when trying to do good: such things cannot happen in a Manichaean universe where good is good, evil is evil, and never the twain shall meet.
Second, if there's a Manichaean in Card's piece, it's Card himself, inasmuch as he keeps insisting on an absolute difference between good and evil, and keeps looking for the kind of clearly-defined clash that would allow him to comfortably stand on the side of Right. Such populist Manichaeanism might be thought of as the peculiar heresy of (much of) the Christian religious right, much as populist Arianism is the peculiar heresy of (much of) the Christian religious left. [Arianism was the belief that the Son, Jesus Christ, was lesser than the Father rather than being co-equal with the Father in substance; in practice, this meant that Arians held that Jesus Christ was a divinely inspired human being rather than being himself divine. Many left-leaning Christian churches these days make a similar (heretical) mistake, essentially relating to Jesus Christ as a good man with good things to say, kind of like Gandhi or Martin Luther King Jr. Regardless of the fact that this is a theological heresy, it is the common currency of many "liberal Christians," informing their daily practices in profound ways.] Manichaeanism is the popular heresy that informs many conservative Christians' daily practices, and leads to an abandonment of humility before God in favor of a rather smug sense of being In The Right and thus empowered to vanquish Evil.
In effect: there's Good, there's Evil, and we aren't the latter, which makes us the former. Oh, and God is on our side, and will eventually vanquish the servants of Evil, or at least make sure that they get what's coming to them, as their immortal souls burn in Hell. Regardless of its theological status as a Christian heresy, such popular dualism makes up the common currency of a lot of everyday conservative Christianity. As Nietzsche pointed out in The Genealogy of Morals, this idea of a war between equally-matched good and evil forces allowed those who couldn't take revenge on their foes in real life to content themselves with the knowledge that they would be avenged in the afterlife. It's Christianity without forgiveness and redemption, in which the believer arrogates to her- or himself the authority to determine the final disposition of immortal souls -- something Card, not surprisingly, does when he questions Anakin's redemption and appearance with Yoda and Obi-Wan at the end of Return of the Jedi. We know the Good, this popular Manichaeanism declares, and we reject the Evil; we are pure, they are fallen, we are saved, they are damned.
This is precisely the error that the Jedi make throughout the first three Star Wars films -- the error that leads to their downfall.
Card, like a number of commentators, seems to have missed the fact that the six-part Star Wars saga is not about good Jedi vs. evil Sith, but about a decaying Jedi order that is swept away by the (admittedly, evil) Sith. Yes, some Jedi say that they are "good," but since when do we regard lines spoken by a character as the author's point? If the author is present in the text, she or he is present in the whole course of events that make up the plot, and we have to take that into account. Taking something that one Jedi says as the point of the saga would be like reading the comments of the Athenian representatives during the Melian dialogue ("the strong do what they can, and the weak suffer what they must") as though they represented Thucydides' whole argument, which would be a major mistake. And when that Jedi is Anakin Skywalker, who is being seduced by the Dark Side of the Force throughout many of the films, we have to take the declaration with an even bigger grain of salt.
Let's look at this a bit more closely. The Jedi order as it is portrayed in the prequel trilogy (Episodes I, II, and III) has all the characteristics of a mature, complacent bureaucracy that has virtually eliminated mystery in favor of administration. Originally established to study the Great Mystery that is the Force, the Jedi by the time we meet them in The Phantom Menace have been reduced to doing a few simple techno-magic tricks as instruments of the Chancellor of the Republic; they also have a clear internal hierarchy and a rigid Code to which adherents have to conform. Non-conformists like Qui-Gon Jinn aren't on the Council precisely because they won't conform, and it's no accident that Qui-Gon finds the boy Anakin and starts to appreciate what he represents -- unlike most of the other Jedi, Qui-Gon is still open to wonder and awe (even if that wonder is sometimes expressed in the techno-instrumentalist language of "midi-chlorian counts" and the like). The other Jedi can't place Anakin in their worldview very easily, since he doesn't fit the system (too old, too passionate, too powerful). We are not looking at a Jedi order at its height, but at a Jedi order that is stultified and narrow -- and one that has virtually absented itself from the world, cloistering itself away from politics and ordinary people in a vain effort to keep itself pure from the temptations of the Dark Side.
And what are those temptations? The Sith and the Jedi differ largely in that the Sith embrace the struggle for ever greater power as a necessary component of political action, while the Jedi as portrayed in the prequels have somehow managed to delude themselves (at least organizationally, if not in every specific individual instance) into thinking that power struggles can be avoided by a monastic withdrawal from the world. Where the Sith and the Jedi do not differ is in their insistence that their actions are ultimately justified by a higher transcendental principle; the Jedi have their Code, and the Sith have their endless quest to prolong life (whether their own, or the lives of others). And it is this essential point of similarity that makes individual Jedi, like Anakin, corruptible: Palpatine is able to skillfully play on Anakin's desires and goals by promising him the power that he thinks he needs in order to achieve those goals, and once one starts down the path of acquiring more and more power, the means to the goal becomes a goal in itself. [There's a very clever visual reference near the end of Revenge of the Sith before Anakin and Obi-Wan fight; the camera does a close-up on Anakin's face, but uses the deep-focus technique (extreme depth of field) that Orson Welles famously used in Citizen Kane to keep both Anakin and Obi-Wan (who is standing in the background) in focus simultaneously. Citizen Kane is, of course, a film about precisely this kind of corruption: a young idealist becomes so wrapped up in the process of acquiring power that he forgets his original aims.]
Card is quite mistaken when he equates Palpatine's line "Good is a point of view" with Obi-Wan's later insistence that "only a Sith deals in absolutes." Palpatine's line plays on Anakin's idealism and helps to turn him to the position that anything is justified in pursuit of his selfless, idealistic goal of saving Padmé's life -- power is required for that end, as it is for ending the war and bringing peace and security, and Palpatine is offering Anakin a means to achieve those goals that purports to be much more effective than anything the Jedi are offering him. If Anakin didn't already deal in absolutes, then Palpatine's offer wouldn't be so tempting, as the means offered would have to be weighed on their merits instead of being ultimately justified by the goodness of the goal towards which they point and the purity of the motives with which the wielder deploys them. The Dark Side of the Force is the temptation to believe that one's means are ultimately justified, that one is ultimately good and pure and right, regardless of what one does. (Sith don't appear to believe themselves evil; Anakin believes he is advancing the causes of peace and security, and Palpatine seems mostly concerned with prolonging life -- mainly his own, but still, he's not a pulp-fiction villain who loves evil for its own sake. This is further evidence of how little Star Wars is Manichaean, since no one really thinks of themselves as "evil.")
Obi-Wan's declaration is the last best defense against this kind of Dark Side corruption, a defense that unfortunately the Jedi order didn't and couldn't accept. After all, when the Jedi do decide to re-enter the world and try to effect political change, the best they can come up with is "kill Palpatine and take over the Republic" -- which would be in many ways just as bad as the Empire, featuring the same absolutist logic and the same smug purity displayed by the Sith.
I could go on. I probably will at some later point. But for now I think I've said enough. Although Card is correct that the Jedi order as seen in the prequels is not a model worth emulating, he seems to entirely miss that the whole point of the Star Wars saga is to show that the old model of being a Jedi wasn't up to the challenge of the Sith, and that something new was required. That new thing, I think, was faith: Luke, not being constrained by the old bureaucratic manner of training and thus lacking the smug confidence of the old Jedi, is able to prevail over the Sith by paradoxically refusing to eliminate them himself and leaving the ultimate disposition of things in the hands of the Force. The message of Star Wars is not "be a Jedi according to the old model." The message is "avoid the Dark Side temptation of thinking that you are pure and holy and justified; do not place overmuch confidence in your knowledge and your ethics and your traditions; have faith, like Luke did, and let the Force work as it will."
All of which sounds intimately Christian to me. I have no idea what Card's been smoking.
[Posted with ecto]