
My Kantian Ways

Ermanno Bencivenga

UNIVERSITY OF CALIFORNIA PRESS
Berkeley · Los Angeles · Oxford
© 1995 The Regents of the University of California


Preferred Citation: Bencivenga, Ermanno. My Kantian Ways. Berkeley: University of California Press, 1995. http://ark.cdlib.org/ark:/13030/ft9j49p35z/

Preface

The early 1990s will forever be, for me, the Hegel years, the years of Kierkegaard and Nietzsche and Heidegger. And, of course, of Anselm. Years in which I stretched my resources and taxed my energy to the limit trying to cover more ground—and not minding how thinly. Yet, as I look over the musings and reveries and venoms collected here, I find Kant's majestic figure imposing itself once more. I can't get away from him; nor, for that matter, do I want to. The challenge of coping with him without being swallowed whole is enough of a motivation to develop a vaster array of tools than I ever dreamed I needed; the depth into which he throws the most marginal, humble occurrences is more of a reason for thinking that the world around me matters than I ever thought was possible.

Some of these pieces deal with Kant directly; others are written in his wake. In all of them, my greatest debt is to him; if such are indeed my ways, profound gratitude is felt for the one who marked them.

IRVINE, JUNE 1994



Introduction

I've had my share of abuse over the years. It comes with the territory. In this racket (maybe in every racket) you have your basic choice to be either a good boy or a power player, and if you choose to be neither you're going to make some people mad. So they will send you "messages," or make you offers they couldn't refuse, or, again, abuse your work—usually under the hypocritical guise of anonymously refereeing it. It didn't prevent me from publishing anything I wanted, but it sure was entertaining. It is comical to see these stuffed animals catch fire and be stupefied by their own rage, and stutter their awkwardness as they find themselves in an unfamiliar passionate mood—one for which they have developed no new moves since high-school dances, and those old ones are rusty. More comical even than watching them trot out their formless grey suits, legs hopelessly short, during the morning session of some silly conference, and then sport their flaccid bodies in Hawaiian shirts at night, when it's time to socialize and have fun, and maybe do some power playing too—for, after all, that is the most fun.

So I amused myself discovering that my "reading [of Kant] . . . is in such direct defiance of the plain sense of the text that [the referee could] only attribute it to willful perversity," or that my "complaints and observations . . . add up to a jumble that points to no plausible, specific conclusion of the slightest novelty." Amused myself lightly, for this was no Cervantes or Molière, and then tried to see if anything could be learnt from it—which sometimes was the case, when fury had not struck the unlucky reviewer totally dumb.

One of these enlightening cases had to do with my Anselm book, and with a particularly unappreciative judge of it who "would have [had] difficulty approving it if it were submitted as a Master's thesis." In a suggestive passage of his/her/its scathing indictment, this individual described me as "playing a kind of self-indulgent intellectual peekaboo with the subject, and with [my] readers." I've been reflecting on this remark ever since (it's been well over two years now), and I think the time has come to articulate my reflections in public.

The remark was supposed to be a harshly critical one, and, as I see it, there are three possible elements of criticism voiced. There is play, peekaboo, and self-indulgence. I will begin with the last one.

In my Webster's New Collegiate Dictionary, self-indulgence is defined as "excessive or unrestrained gratification of one's own appetites, desires, or whims." (A whim, incidentally, is "a capricious or eccentric and often sudden idea or turn of the mind.") Which, in general and in the present context, is supposed to be a bad thing. But it will take some work to understand what is supposed to be bad about it.

The present context is one in which I write—"creatively," of course, since it's from "original work" that you are to get your mileage here. And everybody knows that creation most often issues from gratifying one's most capricious or eccentric or sudden turns of mind—if one is fortunate enough to have any. So is it my appetites and desires that I should leave at the entrance of my study—and my balls too, maybe? Should I forget my passions: what gives me joy, or frightens me, or makes me want to cry, what defines me as this pained or elated body and soul? Should I pass in silence my daughter's slender figure—how warm and pretty it is as she drapes and layers it with stockings and socks and skirts and jackets and ribbons, how elegantly she pushes it forward on those long legs of hers? Should I swallow my tears for The Siege that will not be, because Sergio Leone was betrayed by a failing heart at age sixty and no one is left with the courage and vision to attempt his impossible project, and I will never sit in a corner of a dark theater and forget my misery for three hours watching it? I don't think I should; I don't think anybody would recommend that. What would be left of me, of my "creative powers," if I forgot everything I am, the terrain on which I grow, the aromas brought by my winds?

I shouldn't forget it, the answer will go, but I should "restrain" it. It's unproductive to let oneself loose. One should draw whatever force or motivation one can from this humus but then use it to address common issues, matters generally recognized as important. One should look at a picture of one's family and smile reassuringly and then plunge into a corporate raid, or a committee meeting, or whatever other ambush or make-work counts as respectable these days.

There is a hell for people corrupting intellectual work like that, torturing it on the Procrustean bed of professional competence, freezing it with the stiff iciness of their blank stares. In hell, they are forced to spend what time they have reading and praising other people who didn't want to leave their lives at the door, and chanced their irrelevance, and exposed them in sordid detail. Hell, of course, is where the corrupters do time now, before their afterlives, occasionally consoling themselves with the suggestion that they are oh-so-much-smarter than those other shameless shams, so much more in command of an established Lebensform—but still nailed to the shams' splintery cross, still bleeding on the splinters and the nails, and still getting no wiser for it.

If I had wanted to be established and respectable, I would have gone into banking, I guess, or law or something of the sort. Done my quota of slavery on borrowed money to earn the appropriate degree and then made others pay for it, with interest. Bought myself a mansion, cash, some place nice, and parked my boat at the local harbor. But I decided I would do philosophy, you see: the sort of thing people were executed for, the "business" that makes housemaids laugh and institutions tremble. I decided I would question standards and challenge received opinions and be a gadfly, and he/she/it who wrote the merciless indictment I'm debating decided that, too, I take it, so he/she/it knows that you can only do this thing with full commitment, throwing all of your weight on the scale, appetites and desires and whims and everything else, and if there is nothing there then there is nothing there and that's that, but what in the world is he/she/it doing accusing me of lack of restraint? Who would have wanted Plato or Spinoza or Hume or Sartre to restrain himself? Where would this "profession" be without their unrestrained appetites and whims? And, if I'm not like them, why doesn't he/she/it just say so? Why this detour through self-indulgence? What exactly is wrong with self-indulgence anyway?

The other words of condemnation may be of help here, so let's turn to them. I'm not just being self-indulgent, my critic says. Self-indulgence comes in various forms: I could, for example, make a scene, or regurgitate for the nth time a joke no one finds funny, or vent my personal irritation under the convenient guise of an anonymous referee. None of that, however, is what my critic accuses me of; he/she/it says rather that I play a self-indulgent game (peekaboo is "a game for amusing a baby in which one repeatedly hides his face or body and pops back into view exclaiming 'Peekaboo!' "—more about these details later). So let me rephrase my question: what's wrong with playing a self-indulgent game? A game that gratifies (in an excessive, unrestrained way) one's appetites, desires, or whims?

What self-indulgent play adds to self-indulgence, period, cannot be play's gratuitousness, as contrasted with a "serious" activity. For any self-indulgent practice is already gratuitous: performed not so much for its own as for one's own sake to be sure, but still not performed for extrinsic utilitarian reasons, not aimed at some extrinsic utilitarian goal. Not the sort of thing a good family man would do. So what a reference to play must be doing here is bringing out the pleasure that accompanies this particular kind of self-indulgence. Maybe all self-indulgent behavior is pleasurable: "pleasure" is plastic enough to cover even morbid nostalgia or temper tantrums. But, with play, pleasure tends to be closer to the surface, unmistakable, apparent. Play is an activity one indulges in because, by explicit admission, one enjoys it. With everything else one can give oneself the (often disingenuous) alibi of being forced into it and waiting for the weekend, but play is the weekend, so that alibi won't work. You must like it, or you wouldn't be caught in its snares: you would find better (more profitable, more valuable, more fun) ways of spending your time.

The suggestion, then, is that what is disreputable with the specific brand of self-indulgence my writing amounts to is the obvious pleasure I derive from it. I don't know that Plato was especially sad when writing dialogues, but put that aside now: the image conjured here is that of an irresponsible, overgrown child who turns to his private delight the sacred tools of the trade, what others took pains to develop and practice, their labor and sweat, all those days of their lives wasted in self-sacrifice and self-denial adding precious bits of knowledge to the common store.

The image is a realistic one; the suggestion makes sense. For, isn't the rite of passage into the profession supposed to be a painful one? Aren't we supposed to take bright, enthusiastic kids and drive them away from anything they were enthusiastic about, anything that brightened their lives—make them proficient at a tedious routine they might not even have associated with their calling, let alone embraced freely? Tell them: never mind the meaning of life or the ultimate foundations or a just society, or any of the other crap we threw at you in your intro courses, never mind the excitement, the ambition, the vertigo that got you in here in the first place. We want you to write footnotes of footnotes for a dozen years, and if you do that you'll become like us, and be able to attend conferences and get tenure and sit on boards and sport a Hawaiian shirt over an extended belly. And entrap your own share of bright kids, and avenge your ruined life by ruining theirs.

Here I am overdoing it again, you will say: letting my passions run unrestrained, getting lost in my own rhetoric. Why should I call this "entrapping" and "ruining"? All it is is the channeling of intellectual and emotional resources toward the production of something that students themselves can be proud of. Unless they acquired the proper discipline, unless they were taught some hard, stern lessons, their "enthusiasm" would dissipate and leave them empty-handed, frustrated, and cheated of a future. As is always the case when one listens too closely to a demagogue, and follows him all the way to the gingerbread house.

The problem with this rhetoric is the superficial sense it makes, how it uses all the right words to get you to nod, and when you have nodded yourself to sleep you will not even know what damage you suffered, how much was stolen from you. Yes, I'll say then, it takes a tremendous amount of discipline, it takes teaching oneself the hardest lessons, to be able to keep one's excitement and ambition and vertigo, and face the solitude of a page that is not always already written because you've decided you don't want to be a footnote, you want to fight, however hopelessly, "language writing you." That's what it takes to refuse the easy allurements called discipline in Newspeak, the seductive "work" forever available to fill any time you might be troubled enough to have, the "commitments" that will nicely fit in your calendar, expropriating it with a smile, leaving you pointlessly tired, comfortably drugged. That's what it takes to play both ends against any middle, any dead center, any quiet settling down; to betray any expectations, others' or your own; to force all issues until they break. And to do all this with merriment and love, because that's when the real discipline shows: when your feet or fingers or words can jump effortlessly, gayly, when you can afford to stick your tongue out at the "clumsy, gloomy, creaking machine" intellect is for most,[1] at that caricature of seriousness which will be ridiculed by the energy, the concentration, and the joy of any child intent on beating a videogame.

[1] Nietzsche, The Gay Science, 257.

I'm not going to let these poor excuses for humans get away with their cheap preaching. I'm not going to stand for their puffing up their chests and uttering wartime calls for blood and tears. There is nothing especially excruciating to what they offer, which is precisely why they have to look downcast as they make their move: they just want you to fit the crowd, to take the path of least resistance, to do what's easiest anyway. Busywork will be your passport to mindlessness, and if there is nothing thrilling about that, surely it's not painful either.

Of course this is hardly an "argument": it doesn't proceed by repeated applications of modus ponens to prove what was already obvious. But we don't have time for that sort of thing: our time is finite, and we won't waste it. We will rather use it to make a clear, biased, controversial point. Any goody-goody references to the profession's unpleasant necessities, any self-righteous disparagements of those who would rather "play" far from the beaten tracks, or maybe even with them, any dismal calls for the "family values" of accountability and thrift are but stratagems to mask a reality of irresponsible squandering, of futile expenditure of precious resources—including young people's enthusiasm and ambition. The reality of an easy life for the few who jumped on the right train and held tight to the rails: verbiage sold as research, arrogance as expertise, complacency as achievement, Sunday supplement puzzles as depth, indoctrination as teaching. If indeed none of this privilege gives the cheats joy, that serves them right and gives them no moral edge: their gloominess, real or pretended, will add no special value to their sorry lives. Nor will it authorize them to call self-indulgent play any names, or to think that calling it that is calling it names. It isn't: self-indulgent play is what philosophy is all about, what they should be doing—pursuing whims along meandering paths, toward uncertain destinations, probing the robustness of one's appetites, the extremeness of one's desires. And loving it, one knows not why. One doesn't have to.

But enough of that. It wasn't any kind of game I was blamed for playing with my subject and readers, remember: it was a game of peekaboo. We need to get to that last word of invective. To begin with, one thing deserves notice: peekaboo games are highly pleasurable for both parties. There are all sorts of dirty, perverse games one can play with babies. Some will scar them for life; most of them they won't like at all. But peekaboo they enjoy a lot, no less than the adults treating them to it. They enjoy it so much that they go on to play it themselves, with people and even with objects: Freud is our witness here.[2] And, if it does leave a scar, it's one that is only going to worry a Humean: it's going to be hard to unconvince them that objects continue to exist when they're not looking. So, if it is this kind of game that I play, it sounds like I'm not only self-indulging: it sounds like my readers and (who knows?) maybe even my subject might find themselves happily indulged. Not all readers, of course; not the ones who have bowed out, and clearly not the one who brought up this issue. But my readers, yes, however few of them there are: the people for whom I write, those who are going to take my hand and try this dance, however much trouble it is to figure out what dance it is, what genre this piece belongs to. Those who are ready to put out all the effort, enforce all the discipline, undergo all the training needed to make sure that, whatever dance we decided it was, at any time, the next step might always contradict our decision.

[2] Beyond the Pleasure Principle, 14ff.

When you look at it that way, it does feel as if something dirty were going on. As if some foreign, slimy creature were watching in, on a rainy night, and mumbling to itself, "Look at the self-indulgent game of peekaboo that father is playing with his little child!" You do feel the evil radiating from this remark, its spoiler's tone, its envious anger. You do wish you could send the creature on its way and be left alone with your joy, the joy of all of you.

But we won't, not just yet. First we need to excavate the remark further, to learn some further evil twist from it. For peekaboo is a (mutually enjoyed, self- and other-indulgent) game of hiding and then showing oneself. So a very specific intimation is given here of the silly charade I'm trying to get my readers and subject involved in. I'm going to take cover behind my words and not reveal what I'm up to, and then suddenly flash it in people's faces, and then go secret again, and repeat this trick over and over. I'm not going to play a fair and square deal, all cards on the table; in fact, I myself will often be under the table, and only surface unexpectedly, unpredictably, in defiance of rules.

The question, now as a few times before, is what sort of criticism this is: where the punch is supposed to hit. It might be that the hiding part is a problem; that indeed would fit the general propaganda well—the emphasis on truth and transparency and illumination professionals are so fond of. Then I would be depicted as some sort of fraud, forever attempting to deprive others of a clear view into his shabbiness, if for nothing else than to make himself look more interesting.

Except that peekaboo is also a game of showing. So if I do hide it's not to keep anything for myself, to deny anyone access. Eventually, everything is going to appear, and the hiding may be part of an effort to make the reader more involved in the search, more of a participant in it. It may be that I really want my readers to get what I'm up to, if (as is often suggested) this kind of active involvement is essential to pedagogical (and, in general, communicative) success.[3]

Indeed, more than just "my claims" are going to appear, since peekaboo is a game of showing oneself; so it may be that I also want to (playfully, indulgently) remind my readers that under any philosophical "point" I happen to make there is this person making it, and the point means a lot to him—it connects not only with many other points but also with the juices of his life, with the scares that humbled him, the mirages that delighted him, the sins that stained him. It may be that I don't want to deny the point its chance, just because I happened to make it: I want to phrase it in such a way that you can find it attractive, take it, and run with it—and forever forget my importunate presence. But that I don't want to fool anybody either: whether or not you take it and run with it—and I hope you do, for its sake—it didn't just "come" to me, in a "disinterested" way. It feeds on what I feed on, it warms itself at my fire, it uses my own lymph to grow. And I rejoice in it, I bring it out once in a while and play with it, and this play is fun, and the fun will continue whatever you do with it. It may be cancerous, but it's part of me, a welcome part, and removing it would make me a lesser being.

Could it be that that's what my critic is so upset about: the revealing, not the hiding? It's not implausible: it would fit with the earlier suggestions of excessiveness, of lack of restraint. I'm not showing any taste, the complaint would go now; I don't know how to make myself inconspicuous, how to hide behind impersonal words—words that could belong to anybody and hence do not especially belong to me.

[3] See chapter 11 below.

The fact is that "peekaboo" is a very economical term of abuse. If you don't put pressure on it, you come away with a general suggestion of concealment, of trickery, of swindle. Like in that other game played with three tablets, in bus terminals or subway stations, where one is going to lose one's money betting on something one thought one saw. If, on the other hand, you do put some pressure on it, you realize that no money is involved here, that the object of this game is to eventually show one's face, in full light—which may well be what my critic is afraid of. So, by moving very gently around the issue, one gets in fact to voice one's real anxieties, to blame what one really intends to blame, while giving the impression of blaming something else—indeed, of blaming the opposite.

What so-called philosophers are engaged in, in this day and age (and country), is largely a disappearing act. They look more and more like Matt Groening's "Gone Mom"[4]—that is, like nothing at all. They have no personal histories, display no emotions, take no risks. Even the hottest topics, even the matters of life and death, they address in a corporate, neutered language. They make cozy remarks in the preface—a luminary has even theorized that, said that he's "always disappointed when a book lacks a preface: it is like arriving at someone's house for dinner, and being conducted straight into the dining-room."[5] But then, when one does get to the dinner table and faces the coldness of the food, the stiffness of the movements, the inexorability of the ritual, one realizes that the earlier chattering was just as glacial, just as mandatory. Just as devoid of any humanity, of anything that might make it (whatever it is—a dinner, a book) worthwhile. So, if that's your life, if that's who you are, why would you want to show it to anybody? Why wouldn't you rather resist showing it, being found out? Peekaboo, then, call it peekaboo: this inconsiderate, indecent exhibition of one's private parts, this awful stew, this incestuous flirting. Call it that and establish your distance from it, and so much the better if it sounds like the distance is established because of how much this other guy is trying to hide: that way you can continue to believe that you have nothing to hide. Which, ironically enough, is indeed the case.

I like to have dinner in the kitchen. If that is impossible, because there are too many people, I like to do my cooking with them, and munch something in the process, and have a beer or a glass of wine, and set the dinner table next door, and slowly spill that way. I'd like to think that there is no time when "greetings" are over and "dinner" begins, and depending on how close I am to the people there, how much we resonate with each other, I can get more or less of what I want.

I'm going to give myself what I want here; I'm going to assume that you and I resonate together, and give you the sort of dinner I would enjoy. The ingredients, I'm going to get from my own life: it will be things that puzzled me, or intrigued me, or irritated me. Things I needed to come to terms with—to simmer slowly sometimes, and sometimes instead to scald. I'm going to try to serve them in such a way that you might want to make them part of your metabolism, that they could be to your liking. But that's not the reason I cooked them, and I wouldn't want you to think otherwise. However much pleasure we can all derive from playing little games of peekaboo, from having me hide behind abstract themes—briefly, of course—and then show up again, however far this joyful practice may take those themes away from me, and prove them to have a life of their own, the cooking needed no such motivation. Because, see, I do enjoy making dinner for myself too.

[4] See Childhood Is Hell, chap. 3.

[5] Dummett, Frege: Philosophy of Language, ix.



Chapter One—
The Electronic Self

Recent literature on the social effects of computer networks[1] has brought out the following features:

(a) Communication through computers tends to become "unregulated."[2]

(b) The behavior of people involved in it "becomes more extreme, impulsive, and self-centered." These people "become, in a sense, freer."[3]

(c) As opposed to face-to-face situations, computer-mediated decision making is "slightly risk seeking no matter what the choice was."[4]

(d) Electronic mail "could contribute to organizational strength," but it "may also contribute to organizational instability."[5]

[1] See, for example, Kiesler, "Thinking Ahead"; Kiesler and Sproull, "Response Effects in the Electronic Survey"; Sproull and Kiesler, "Reducing Social Context Cues"; and McGuire, Kiesler, and Siegel, "Group and Computer-Mediated Discussion Effects in Risk Decision Making." I should emphasize from the beginning that, in connection with the phenomena discussed here, uses of computer networks can be arranged in a spectrum, going from the quite conservative to the radical (the latter is especially apparent with bulletin boards). It is the radical consequences that I am interested in, but it is still fair for me to refer to computer networks at large because it is the availability of this general tool that makes the radical consequences possible.

[2] Kiesler, "Thinking Ahead," 48. See also Kiesler and Sproull, "Response Effects," 405; and Sproull and Kiesler, "Reducing Social Context Cues," 1495.

[3] Kiesler, "Thinking Ahead," 48. See also Kiesler and Sproull, "Response Effects," 405, 411; Sproull and Kiesler, "Reducing Social Context Cues," 1497, 1500; and McGuire, Kiesler, and Siegel, "Group and Computer-Mediated Discussion Effects," 927.

[4] Kiesler, "Thinking Ahead," 52. See also McGuire, Kiesler, and Siegel, "Group and Computer-Mediated Discussion Effects," 927.

A general conclusion of a leading researcher in the field is that these social effects "may be far greater and more important than you imagine,"[6] and the point is convincingly illustrated by reference to the surprising social impact of such other technological advances as the elevator and the telephone. Still, these results and examples per se do not help us stretch our imagination and conceive of what exactly the ultimate effects of computer networks might be. This chapter represents a highly (some might say, wildly) speculative effort to chart such effects.

The background to the effort is constituted by a theory of subjectivity that I sketched out in my Montaigne book. The basic tenets of the theory are as follows:

(1) Originally, the distinction between physical and mental activities is nothing but the distinction between customary, automatic concatenations of moves and the exploring, playful, transgressive forcing of such routine practices away from their structure, as a possible preliminary to the establishing of new automatic concatenations (at which point "the mental" will already be somewhere else, forcing the new structure away from itself).

(2) Because this mental play undermines the blind efficiency that is associated with unreflective, automatic behavior, thereby generating a destabilizing situation of anxiety and guilt, an act of repression is in order to keep it separated ("abstracted") from that behavior, until the time when it might be of use. This political act (that is, this exercise of power) is what establishes the difference "in principle" (the dualism, if you will) between the physical and the mental, breaking the continuity between custom and innovation, disconnecting the nursery where things are first tried out from the rest of the house, where people ("adults") do things for real.

(3) Among the strategies deployed to implement the repression of play, the most effective and conspicuous utilizes the "lucky hit"[7] that made available to our species the very articulate medium of language, in which complex models for (that is, substitutes of) ordinary practices can be constructed and then violence be done to these models instead of to the practices themselves. There is no guarantee that the consequences of such substitute violence (of this play with words) will be at all informative if and when the violence occurs of which it is a substitute, but if the risks involved in real violence are too great, a "language lab" may be the best we can do.

[5] Kiesler, "Thinking Ahead," 54. See also Sproull and Kiesler, "Reducing Social Context Cues," 1498.

[6] Kiesler, "Thinking Ahead," 46.

[7] This expression is self-consciously Nietzschean.

(4) Since the concern of mental activities is with what is not (yet) the case, it is improper to think of these activities as constituting an object—a "mind," a "subject"—belonging to the same world as what is the case. Still, such an improper constitution has a fundamental political significance: by reifying transgression, indeed attributing to it as much "substantial" weight as to the whole world of ordinary practices and things (a "thinking thing" materializes side by side with the "extended" one), it counteracts the consequences of the abstraction process. Mind and body are not just two; they are also interconnected, and the problem is posed of understanding how. An ideological problem if there ever was one: a problem that can only arise at the shaky point of equilibrium between rashness and fear where our form of life is situated.[8] But still a problem that we might want to keep on having as a critical reminder so long as the Übermensch is out of the picture.

With this background in mind, return to (a) through (d). There are interesting similarities between a computer network and the "privacy" of a human being[9]—between computer-mediated activities and mental ones. Both are less regimented than ordinary practices, both favor higher risk taking and the contemplation of more extreme options, and both may spread into the "public" sphere with destabilizing effects. One might say that such similarities are a natural consequence of a point explicitly brought out in (a) through (d): that the behavior of people involved in electronic communication is more "self-centered." And one might even find an obvious explanation for this point: when social reminders are lacking, people tend to concentrate more on themselves, to overemphasize their importance, to bring out more of their personalities.[10] This is an empirical explanation, that is, one that accounts for the data by mobilizing a spatiotemporal statistical regularity: if x happens, then y tends to happen. And there is nothing wrong with it, as far as it goes. But what I am interested in is a conceptual explanation, that is, one that accounts for the data by raising issues of legitimacy, by inquiring into the conditions of application of key terms: in the present case, at least, the term "self." In the course of this account, we may be able to throw a different light on the empirical explanation, too.

[8] It is only because of the rashness with which our species indulges in experimentation that dangers arise, and hence fear of those dangers, and hence the necessity to cope with fear by (among other things) dividing and conquering, and hence the necessity somehow to relate the things we divided.

[9] A large part of what I do here may be thought of as reconceptualizing privacy and extending it beyond human beings. Any such operation is bound to bring forth conflicting statements, expressive of the conflicting points of view involved. So I will say that something is private, and then that it is not, and then again that it is, when seen in the proper (that is, the new) way. Hopefully, by the end things should get clear.

[10] This explanation is proposed, for example, in Sproull and Kiesler, "Reducing Social Context Cues," 1495. See also notes 13 and 18 below.

One fundamental feature of (1) through (4) is that they are not couched in the realist vocabulary of things. I do not start out by saying, that is, what kind of thing (object) the self is and then proceed to derive from this categorization some notion of how that particular thing might act and react.[11] Mine is a vocabulary of moves, of transformations of a manifold: such are my primitive terms, and things only enter the picture later, if at all. Some moves are repeated, some transformations become standardized and hence recognizable; when this happens, when a behavioral sequence crystallizes, it is usually because the sequence tends to produce a predictable, advantageous outcome. It then often becomes possible to describe the outcome as the result of a set of interactions among relatively stable objects, but there would be no such projection of an ontological stability if it weren't for a deeper, more basic behavioral stability. And the same is true for the quasi-thing that goes under the name of "self": in this case, too, what there is first is moves—novel, revolutionary ones, moves that have not been repeated yet—and it is only (logically) later that such moves are made to cluster around a "substantial" term.

[11] For this notion of realism and its idealist counterpart, see my Kant's Copernican Revolution.

Because of this general feature of my framework, there is no necessary connection in it between the notion of a self and that of an individual of the biological species Homo sapiens. It just so happens that the behavior of individuals of this species shows more "freedom"—that is, more occasional irresponsibility, more risk taking, more playing with fire—than the behavior of other organisms or entities, and that humans have available a linguistic medium in which to indulge such freedom without suffering too much damage. But in principle there could be a self wherever the same conditions arose; more precisely, under these conditions it would be appropriate to bring out the same contrast between mental and physical activities, and then (if the political motivation were there) to phrase the contrast in terms of an opposition between two kinds of things.

Enter electronic mail. At least three features of it play an important role here. First, it is quick. It is certainly possible to play with alternative states of affairs and try out various resolutions of them at any speed, but the faster the game is, the more potential impact it will have. Any time a "deviant" move is suggested, a battery of defenses is called in place to neutralize it and save traditional routines,[12] but placing defenses is a process that takes time and effort, and if deviant moves pile up quickly, they may eventually get ahead of the opposition. Second, electronic mail is text-based, thus sharing in the liberating effect that language ordinarily has: for reasons mentioned above, more things can be said than can be done. Third, it happens in private (but see below), each individual human facing a screen and nothing else, much like each human occasionally faces his own unspoken words and nothing else. Of course, a community is connected through the network, but a conscious effort must be made to remind oneself of it: the situation emphasizes loneliness, separation, and silence.[13] With all these conditions in place, it is no wonder that the activities involved exhibit the characteristics that I decided to call "mental"—that is, that they are transgressive and innovative. Once this is clear, however, our fundamental problem is finally allowed to emerge: whose activities are these, if anybody's? It is not true here, as it is for the realist, that activities must be conceived as somebody's activities: things are no longer primitive here. So, if a thing is to be reached at all, it will take work, and the process of reaching it might involve some surprises.

[12] Some of this defensive process is realized at the linguistic level (as are most of the attacks). See chapter 3 of my Looser Ends.

[13] If I were trying to defend the empirical explanation suggested above (see note 10 and the attending text), it would be natural for me to insist on these features. Then my argument would be that there are so many similarities between mental and computer-mediated activities because of how "private" (in an ordinary, uncontroversial sense) they both are. But, as I pointed out earlier, my goal here is more radical. I want to account for the similarities by challenging, and ultimately rewriting, what "private" is—and related matters. If my account is accepted, the features brought out here will become irrelevant; it will be irrelevant, for example, whether or not computer-mediated activities are, for the individual participants, their most intense social experiences. (Alternatively, it will be possible to rewrite these features, too, and no longer attribute them to the participants; see note 18 below.)

It is certainly possible that the subjects of these activities are the individual humans constituting the network. After all, using the network is something they do, and an activity during which they display more freedom than when they do other things: things like driving and shopping and mailing conventional letters to each other. But one aspect of the situation gives me pause here, and makes me wonder whether a different choice might be more reasonable: contra the appearances, the situation in which an individual faces his screen is not, for him, a private one—that is, it is not neatly separated from the activities of other individuals. There may be few reminders of this fact, and it might take an effort to keep it in focus, but it is a fact nonetheless. Individual users are connected: that is what the network is all about, and the playful, transgressive (occasionally quite inflamed) experimentation that takes place involves, potentially at least, all of them.

Clearly, when it is a matter of translating some of that experimentation into action of sorts—of taking risks instead of toying with the idea of doing so—it will be individual people who do it, but this case is not essentially different from the following one: after a lot of play with the consequences of pressing a certain button (say, the one causing a bomb to go off), play that involves various components of an individual human being, it will be his right index finger that actually does the pressing, but this is not to say that the action belongs to the finger, and the reason why it is not is precisely that the moves preparing that event involved so many other organs. Thus in the present case, too, the fact that some human in particular plays out the consequences of the mental game is not decisive in establishing that it is his game.

Whose game is it then? An alternative suggestion might be "the network's." The whole network is private: either you are a member of it or you are not, and if you are not, you are not in the game.[14] You may be experiencing some of the impact of the game, but only as an outsider, much in the way another human being might experience some of the impact of my mental game without being a player in it. And the network's behavior may be more transgressive and destabilizing than that of any of its members (or even the collection of them). A chain-reaction effect may be realized in a network that would be impossible if the resources available were merely those of the individual users (or the collection of them): one small deviant move brings another, one abusive word a more vivid profanity, and soon you might be out of control (out, that is, of the control that could be exercised by any individual). And the decisions that occasionally surface within this activity—those decisions that involve more risk taking than in conventional cases—are a joint expression of the wisdom (or lack thereof) of several members, so they, too, do not belong to any one of them in particular. In conclusion, it seems that the "self" manifested here has no special association with a human body, or indeed with a biological organism of any kind: it is a "thing" of a higher order of magnitude, an electronic self.

[14] One might object that the network is not private because its members are not exhausted by it; they have other connections and dealings, which "mess up" the neat separation between inside and outside the network. But this objection presupposes that a human being's privacy is conceptually basic, and that the privacy of anything else must be defined in terms of it. I am denying this presupposition: I am defining privacy in terms of transgression and political separation. So the conceptual landscape is being reshaped, and new aggregates and "essential" distinctions may issue from this operation. In particular, it is possible that (different?) moves of one and the same body may be attributed to different private spheres. See also below, note 18 and the attending text.

The same issue could be phrased and addressed in different terms. Return to the individual user, and to the empirical account of his "self-centeredness." That was a consequence—the account claimed—of the lack of social, public reminders. And it sounded right at the time. Now, however, things are no longer so easy, since my new understanding of what it is to have (or to be) a self brings out a conflict that was not visible before: whose self is the user centered on? Or, more precisely, at what level shall we understand the deviance he is instrumental in realizing?

The crucial point to be kept in mind in answering this question is that a self is not just a way of releasing excess energy: it is a lab. When a child vents its drive to transgression by throwing a tantrum, we don't call that play, and appropriately so: nothing is tried out then, nothing that could possibly be used under the right circumstances. (Or maybe something is, and this is play, but then it's not the tantrum itself that is that: it is the way the child uses it, which is a different set of moves.) In other words, there must be a learning element for us to say that somebody is playing,[15] and the same is true with the self: there must be potential returns for the deviance, ways in which the deviance can impact public, observable, real behavior and make it into something new, into a set of new routines. This is the connection between a human self and a human body; this is the reason why the former is not just lodged in the latter "as a pilot in his ship": what the self experiments, inquires into, thinks may become part of a future stage of that body, make it a different body, a differently behaving structure. When this point is appreciated, the conflict I was hinting at may be recognized. To make it clearer, I will now introduce two opposite scenarios, each the consequence of one of the two parties in conflict emerging as the clear winner. The empirical situation has no such clarity, and will be seen in the end as a (temporary?) compromise between the two theoretical pictures.

[15] Though not all the playful moves practiced and learned during childhood end up contributing to one's adult form of life—and the same is true for intellectual "play." See my Philosophy in Play.



In the first scenario, each member of the network plays for his own sake. There is a common area of play, to be sure, as there is a common field every time a group of track athletes come together to practice, but, as with track, each player will profit individually from the practice.[16] A new running or jumping technique may be thrown in the next time a trophy is at stake, just as a new response or mannerism may be staged at the next staff meeting after being tried out in the "privacy" of the network. The game is (distributively) everybody's, and so is the self (or rather, so are the selves).

[16] For this example to work well, one must of course forget about relays and team spirit in general.

In the second scenario, the play has no recognizable impact on the participants (for reasons already suggested, it would be wrong to call them "the players"). When they flame their anger by posting outrageous abuse, when they follow up the wildest associations, when they elaborate the most intricate stories, they are behaving like King Midas's barber: digging a hole in the ground and whispering in it, only to then close it and bury all that the whispers implied or promised. When all is said and done with such nonsense, they go back to their traditional moves purged and relieved, ready to put up with an even greater amount of mindless orthodoxy than they would be without this outlet. But, against their best intentions, some of their stories and associations occasionally ignite, some of their abuse gets out of hand, and such things spread outside the network: they become new policy, a new fad, or a riot. These outcomes may surprise the participants and even be resisted by them, felt like an extraneous body trying to invade their lives and ruin their "ordinary" efficiency, so if any learning ever follows for them from activities of this sort it will be the painful discipline of those who are trying to cope, on whom moves are imposed "from the outside," not the breathtaking, anxiety-ridden, but possibly exhilarating attempt at generating a new creature, a new behavioral mutation. The community that expresses the network, on the other hand, can be sensibly credited with such an attempt: at that level, the configurations explored in private are in fact occasionally tried out in public, and get a shot at impacting upon customary behavioral patterns. The network is (collectively) the community's lab, the mental activities displayed in front of a screen are the community's, and the self on which each individual participant centers is the electronic one.[17] Those who think otherwise are being led astray by the "natural" association which I am resisting here: that between mental activities on the one hand and the kind of being to which so far such activities have been ordinarily attributed on the other.[18]

[17] This sentence might be misunderstood. I might be taken as saying that the new emerging self is the collection of all individual users of the network. But this would be a mistake. As I pointed out in note 14 above, those individual users are not exhausted by their network-related activities, and the relevant private-public distinction in no way coincides with the distinction between the aggregate of them and the rest of society. The emerging self is a "collective" one, or is "the community's," only in the sense that it is brought about by activities performed by various members of that community, but such expressions are to be understood at best as colloquial approximations to an adequate description of the new quasi-thing—which may be the best we can get at the present stage, when the proper vocabulary has not yet been developed.

There have always been holes in the ground where people vented their frustration, "gratuitous" trips that they took with no desire of ever crossing paths with the established course of their lives, fantasies whose purpose was to minimize, not to increase, the potential revolutionary significance of the human tendency to displacement. In this sense, flaming in a network represents nothing new. Nothing newer, that is, than a chance of eavesdropping on somebody's daydreaming aloud, or than the painful spectacle of a fit of rage: those people will come back to their senses, and the episode will be "boxed" somewhere by all involved, remembered as an embarrassing intrusion on the part of something that should have been kept inside. If they never come back to their senses, they will be boxed in a transcendental cage of insanity. On the other hand, there is something new in a network: we have a third way in addition to mental experimentation and stupid discharge—"stupid" because nothing is changed by it, learned through it. We have the possibility that the stupid discharge by all individual participants will result in mental experimentation on the part of the network itself, that by yelling in holes without hope of ever hearing an echo—indeed, even with the hope of never hearing one—the participants will be working for somebody else's "benefit," that is, for developing the structure of somebody else's behavior. Which is not to say that this possibility will be realized but is to say that it must be considered when one sets out to imagine the potential significance of the phenomenon.

[18] It may be useful to point out explicitly how matters stand in this second scenario with the empirical explanation of notes 10 and 13 above (and the attending text)—how, that is, my conceptual account does in fact shed a new light on the empirical one. Within the new understanding of privacy, the situation of a human facing his screen can indeed be seen as one of loneliness, separation, and silence, but then such features would not belong to the human: they would rather belong to the network.

Those who take this story seriously might come to see the current situation as one in which we are moving from scenario one to scenario two. A fundamental driving force in this move is inertia. Stupid daydreaming has always been more common than mental experimentation: the latter requires more "wiring," more control, and more energy. On the other hand, because the former never came to anything much, the importance of the latter was preponderant: whatever little was achieved that way, it was far more than the nothing one could usually hope for when going the other way. The new possibility now created will bring to fruition all the untapped resources of laziness, the inarticulate mumbling of choler, and the adventurous cravings of timidity, thus eventually burying the dinosaurs who still want to play it out in their heads before going public on anything. There are still many such dinosaurs around, and they are putting up a brave, honest fight against windmills—I mean "giants": it's just that they will look like windmills if the fight is lost, as it might well be. They want to regulate the network, to inject an ethical code into it, to make sure that people feel observed all the time just as they do in ordinary social situations, and hence treat this situation, too, as a public, not a private one.[19] But their position is weak, they have no bargaining power. They used to, when experimentation and play could only happen inside individual minds, so theirs were the only resources available for initiating the changes that after-the-fact rationalizations usually label "progress": their minds were the only ones that tried to make contact with what is not mind, hence the only ones that had a credible chance of establishing such contact. Now, however, it doesn't take that kind of self-conscious effort: rambling will do, and self-consciousness may go the same way mortars and pestles did when blenders entered the kitchen.[20]

[19] Such is the goal of many attempts to convince us to use networks "more effectively." See, for example, Bishop, "How To Use USENET Effectively."

[20] Individual human self-consciousness, at least; networks may have to find their own (analogous?) control mechanisms. But the important point is that these new mechanisms may be entirely orthogonal to the distinction between one human and another. For example, even assuming that one such mechanism manifests itself by some individual member of the network entering some state analogous to current human consciousness as a consequence of a "move" (say, a dangerous one) made by the network (and a state that has that move as its intentional object), it is not necessary that the member entering this state be the one originating the move, or that he be in any other way connected with it—other than by becoming aware of it, I mean. In this case it would be unreasonable to attribute the state to him as a manifestation of his self-consciousness.

The kind of picture that emerges when scenario two is pushed to its ultimate consequences reminds one of Invasion of the Body Snatchers: after the last dinosaur has died, there will be no more learning from one's mind at an individual level; in fact, there will be no more individual minds at all. There will be a bunch of automata reiterating perfectly predictable moves and occasionally screaming into their computers just as others might take a sleeping pill. When enough screams have circulated, enough nightmares have been entertained, a pattern will appear, of an order of magnitude far too great for the brutes handling the keyboards, and that pattern might then be acted out. If it is, the brutes will be retooled and retrained, and possibly think that God has spoken, and indicated a new way. Some might see this analogy as a reductio of my conjecture, or even a way of ridiculing it: what is expressed in it is not a subtle, provocative extrapolation from the available data, but rather a regurgitation of old, stale mythologies, an unfortunate outgrowth of too much familiarity with B movies. But I would like to turn such a judgment on its head and use the conjecture to look at this antique piece of science fiction with new eyes, instead of the other way around.

What Invasion of the Body Snatchers and similar productions brought out in the fifties was the anxiety generated by the cold war: the aliens invading the States were fictional representatives of an all-too-real "evil empire."[21] Now anxiety is a signal that your individuality is at stake and on the verge of crumbling, so clearly those aliens did more than threaten our shores: they challenged the particular kind of anthropological construction that went with our form of life. How? I claim that my discussion of the electronic self suggests an answer for this question.

This is not the first time in history that collective minds have appeared over the threshold, poised and ready to take over.[22] I don't know that they ever did take over. I am no historian. What I am is a teller of conceptual stories, and as such I can now explain to myself why I never saw the threat as clearly as when I began to think about computers.[23] It is just that the process was too slow: it took too long for the "neurons" of those other gigantic brains to respond to one another for me to appre-

[21] Of course, one might want to go deeper and think that the evil empire itself was an excuse: that at some level people sensed (with anxiety) the similarity between the enemy outside and the one that was already within, oozing out of Trojan horses like appliances and tract houses and monthly payments—and Un-American Activities Committees. (For some account of these alternative readings, and of how they might well coexist in the movie, see the introduction to LaValley's Invasion of the Body Snatchers .) I would in general sympathize with this deeper analysis, but it makes no difference to my present point.

[22] I must insist on what "taking over" means here. It means fragmenting the current association between selves and human bodies, disqualifying humans as the empirical carriers of privacy, and finding some other arena for playing out the public consequences of "the mental." It does not just mean the coexistence of play at different levels, for that coexistence might well be a peaceful one (see the following note), and hence might well go unnoticed.

[23] In fact, since seeing this phenomenon in the case of computers, I happened to see it all over the place: in the ever-growing graffiti on a bathroom wall, for example, or in the secret records of Dead Poets Societies. But I doubt that I would have seen it at all unless such an effective (and hence, dangerous) case as that of computers had presented itself.



It is possible that unless the process is fast enough nothing will happen, but it is certainly true that with a slower process I could not see what, if anything, was happening. Now, however, as I bring my reflections to bear upon past occurrences, I realize that most of the traits that give a collective self a fighting chance in the computer case were there in other cases, too: all of them indeed, except a higher level of speed and efficiency. The separation between a public and a private sphere, the releasing of all deviance into the "network" (through confession, interrogation, or whatever) before it could be put to individual use (or, even better, so that it could not be put to such a use), the appropriation and capitalization of all that deviance by a larger body, which then often proceeded to impose it on its members as the next orthodoxy. Probably, no scenario two has ever developed yet, but there has often been a struggle, the same struggle being fought now. And if there is a struggle, and my individuality is at stake, I will have to get involved. Not that this is right in any absolute sense: my survival is going to be detrimental, if in a small way, to the new form of life, which has as much (or as little) of a right to subsist as mine. But you can't ask dinosaurs to just roll over, and it won't do to ask me. My first act of war will be—in fact, is—the identification of the enemy. I said earlier that when mental activities emerge a self may also arise if the political motivation is there to phrase the contrast in terms of an opposition between two kinds of things. Well, at least this much political motivation is there, for me: I need an enemy to fight against. As I perceive that my structure is challenged, I need to think that somebody is working at it, not necessarily with a malicious intent: perhaps only in the way in which a more successful species undercuts the livelihood of a less successful one. So I will call this "thing" a name, and thereby make it a "thing"; so I will talk about, and puzzle over, and fear the coming of, the electronic self.



Chapter Two—
On the Electronic Self Again:
An Interview

A: So you are saying that a computer network—more specifically, a bulletin board—constitutes something closely analogous to a human self. How is this different from simply saying that there is a structure to it, much like there is to any other thing, including, but not limited to, human selves?

B: What I am saying is actually the opposite of this. Things have a structure, of course. They are in some definite way. Which also means that we have definite expectations about them. Knowing the structure of this table means knowing what I can use it for, how it can help me, or hurt me. But a self, for me, is not a thing. It is the negation of things, of thinghood. It is the contradicting of any expectations, the calling in question of any structure, the capacity of doing violence to it, of forcing it away from itself. It is, if you will, the negative image of things: that background of play against which things emerge, and which continues to oppose whatever emerges.

A: But "self"—the word, I mean—is a common noun, so it is supposed to refer to a species of things.

B: I don't deny that, but that's because of the repression of the nothingness that the self is: because this repression begins with the very language we use, leaving nothingness unsaid, speechless, inarticulate.

A: So you would need another language.

B: I'm not sure. I don't think any crystallized means of expression would do. I don't think what I am looking for is reentering the Garden of Eden. It seems best to me, most faithful to the vocation that I am trying to voice, simply to accept the means of expression I find already in use and put constant pressure on them, explode them from within, reveal their limits and inadequacies: specifically, here, talk about a "thing" that contradicts our ordinary conception of a thing.

A: So, as you said elsewhere, you prefer to think of this as guerrilla warfare rather than as an all-out war.

B: Precisely.

A: Your position is clearer now, but I wonder whether it makes sense. After all, when I think of myself, I think of a specific individual, from whom others may expect this and that, and not expect something else—just as they would in the case of a table.

B: What makes you think that way is the repression I was talking about: the objective repression that will only acknowledge things. It is because of this repression that your subjectivity, your selfhood, is exposed to this most extreme, ultimate abuse, that its nature is violated and turned into its very opposite, that your indictment of any fixity, any determinacy, your transgressive, irresponsible playfulness get translated into the horror of a fixed and determinate character. But a self is not a character, and is not a role; if anything, it is a theater—or, indeed, a play.

A: But this seems to be quite arbitrary. People use the word "self" with a certain meaning, and now you come along and decide to use it with an entirely different one. What authorizes you? Is this nothing but an act of violence on your part?

B: That it is an act of violence I will be the first to admit. But we have to get clear as to where exactly the violence is situated. There are a number of experiences that people are used to describing by using the word "self": experiences of role-playing, say, or of consciousness, or responsibility, or intimacy. Also, I would think, experiences of subversion: of feeling that others' perception of oneself is intrinsically inadequate, not even in the ballpark, to be discarded, to be rebelled against. Now I would not want to give up any of these experiences as relevant to the self; I would want to capture them all. I would consider my position a failure if any of these experiences were not "covered" by it. But the question is: how exactly does the "covering" work? And here is where the violence occurs. To begin with, I refuse to deal with the problem by means of the ordinary, analytic, Aristotelian logic. It's nothing specific to this problem: I simply find analytic logic a very poor tool. What it tells me is that the semantics of "self" is supposed to be given by a collection of traits, inclusive of everything that I would want to be part of a self and common to everything that is to count as one. But then many of these traits would seem to contradict each other: the role-playing and the subversiveness, for example. So I can either let any two contradictory traits cancel each other out, in which case I end up most likely with no interesting collection of traits—nothing that distinguishes a self from a number of other things I do want to keep distinct from it. Or I privilege some traits as giving me the essence of a self and adopt what Quine would call an invidious attitude toward some others: specifically, think that any trait contradicting the essential ones doesn't really belong there, and it's confusing to think that it does.

A: So how would you think of a self instead?

B: The way I would think of the semantics of any other word, and certainly of any philosophically important, complicated word. I would think of it in a dialectical, Hegelian way. I would think of the meaning of such a word as provided not by a collection of traits at all, but rather by a narrative, a story, and one that in its development captures everything one associates with a self—including all the conflicting, contradictory features one associates with it. Indeed, a story that would have to declare bankruptcy if any such feature were missed along the way, if it were not reached by the plot.

A: I can certainly see an element of violence in your refusing to adopt the same logic as most of the opposition.

B: Yes, but I insist that there is nothing specific to this level of violence. Nothing that has to do, specifically, with the self. The specific violence comes about when I select the beginning of my narrative, when I decide to assign the origin of the self to transgression and subversiveness, and to see other uses of the word—those referring to a consistent structure, say, or to a definite role, or a clear-cut set of expectations—as the outcome of a defense process, of an act of denial consequent upon the anxiety generated by that transgression and subversiveness. I can certainly explain, in my story, how it is that "self" came to be used, often, to refer to something quite disparate from what its logical origin is, but the opposition is not going to like that. The opposition is not going to like being inscribed in my story as one of its many tortuous, intricate developments, very far from original clarity and simplicity, deep down the line of epicycle building—though of course for a Hegelian such epicycles are just as "necessary" as the origin itself is.

A: OK. Suppose I buy all of this. Then subjectivity is transgressiveness, however much this transgressive nature is covered up, hidden, or even turned into its contradictory nature by a powerful and suspicious "objective" opposition. And transgressiveness/subjectivity plays itself out freely and infectiously, the more so when it can evade the opposition—which it will be easier to do in a medium less filled with anxiety: a linguistic medium of substitutes of things instead of a substantive medium of "real" things. Couldn't I still ask essentially the same question I asked at the beginning? That is: What's so special about a bulletin board? Why wouldn't the postal service work just as well? Can't people freely communicate with each other by writing conventional letters, and take up each other's suggestions that way, and develop them beyond what is possible to any one of them in isolation? Shall we talk about a postal self as well then?

B: Why not? There are empirical characters to the electronic situation—its speed and its efficiency, primarily—that made it possible for me to identify its relevant similarities to a human self. But, once the idea has come together, I don't see why it shouldn't be applied elsewhere. The general moral emerging then would be the following: there is play going on at different levels and involving this body ("mine," that is) at all these different levels. And it's possible that there are structures at all these levels, or at least at more than one of them, that profit from the play, that "learn" from it and consequently grow and develop, so that the various levels of play belong to their respective structures. Or it's also possible that one level of play may simply take over, that one structure may capitalize all the creativeness of play and stunt the growth of every other structure. And what makes the difference between these two cases may be the inordinate amount of power—that is, once again, of speed, efficiency, and the like—possessed by the "winning" level.

A: So it's possible for the "postal self" to evolve side by side with its users, and also for it, after going electronic, say, to continue to evolve while inhibiting the evolution of the users.

B: Yes, and in this connection it helps to point out that peaceful coexistence is not the most natural outcome. There is potential competition between any two levels of organization of the same materials: their goals might be perfectly consonant, but that would be a contingent matter, and one that can easily turn around. So, if two levels coevolve for a while, even a long while, this is best read as a compromise, an equilibrium of opposing forces. Anything that destabilizes the situation might make it impossible to reach any other equilibrium point.

A: And electronic communication is a destabilizing factor.

B: Precisely. But note that I make no empirical predictions as to how in fact it will turn out. I don't say that in this case an equilibrium point will not be reached—or, for that matter, that it will. Mine is a conceptual story: not one that tells me what will happen as people use computers more and more, but one that provides me with an interpretive scheme within which to read whatever happens. If the interpretive scheme makes sense, then when computer communication becomes, say, regulated and dull, people might read that as showing that the new emerging subjectivity was successfully resisted. And, of course, people might also begin to look at other forms of communication (conventional mail, for example) in the same way, and pose questions like: what is it in these situations that makes it easier to control subjectivity? Questions that only make sense once the scheme is accepted, and questions such that making sense of them is the main point of accepting the scheme in the first place.

A: But your original statement contains more than this scheme: it contains anxiety, the fear of this particular, emerging subjectivity, the sense that this is an enemy worth fighting against.

B: It does, but that's not part of its contribution to conceptual development. My anxiety is more like a piece of data that I am trying to understand. And of course, mine is not the only possible explanation. Others might say that I feel challenged by a new generation of computer whizzes who are quickly making me obsolete, and that I am myself playing a repressive role as I try to put them back in their place. Or that I suddenly lost my familiar pathological projection—the demonized "evil empire"—and I am frantically working to fill that empty space with a new demon, instead of accepting the challenge of restructuring my form of life altogether.

A: And beginning to play, rather than taking yourself so seriously?

B: Maybe, except that for me play is a very serious thing—the most serious there is. Have you ever seen good chess or card players really involved in their game? Or a child, for that matter, really involved with her bricks? And have you ever tried to distract them from it? To make light of what they are doing? Have you noticed their reaction when somebody does that?

A: That seems to be a limit case. Lots of people take themselves lightly when they play; they are not so intense and passionate. They laugh a lot.

B: That's not play. Or, I should say in accordance with my Hegelian strategy, it's not original play. It's a compromise of original play with the defensive structures play evokes, and one that emerges much later in the narrative of what "play" means.

A: I see. But now let's go back to the anxiety.



B: Actually, we never left it. The defensive structures I was referring to are evoked precisely by anxiety: the anxiety that accompanies any subversive—that is, ultimately, destructive—move. Such as play is.

A: Wait a minute. I need to think this through. If anxiety is a response to play, and you feel anxious about bulletin boards, then your moves to fight them are indeed repressive ones; you are indeed taking a reactionary stand against the "infection" of play.

B: No question about it. The objective structure that has come to be identified with my social persona, and has been able, so far, to profit maximally from the subjective play that goes through this body, feels threatened by a quantitative leap in the scope and power of play that it does not feel capable of controlling.

A: But if playfulness and subversion are your values, you should resist this feeling: applaud the quantitative leap, happily let yourself—sorry, your objective structure—be swept away in the process.

B: You don't seem to understand that values are just as much in question here as everything else. Each party in this confrontation has its values. And there is no place to stand to decide which ones are right. My objective structure has a concern for its self-preservation, of course, and this is neither right nor wrong. It simply is, and in this conversation I have no comment to make on it. I take it as given—literally, as part of my data. What I am interested in here is the light it throws on the meaning of such key philosophical words as "self," "privacy," and the like.

A: But then it is from the perspective of this structure that you tell your "story" about the electronic self.

B: Yes, and if you mean to add that this makes the story an unlikely candidate for some sort of absolute truth, I would agree entirely.

A: Easy to do, for you, since you have no commitment to any such absolute truth.

B: I find that there are stories where the phrase "absolute truth" makes for interesting developments, and stories where it doesn't.

A: Again I think we are digressing.

B: And again you want to enforce consistency on our conversation. Which, incidentally, I'm perfectly happy to see: this sort of give-and-take is precisely what play is all about.

A: So, going back to the point of view expressed in the story. . . .

B: It is, quite plausibly, a paranoid one: the point of view of a structure that feels on the verge of collapse and finds something to blame for it. Which, of course, doesn't in and by itself make the story less interesting.

A: I think I understand your general framework now, and I must say I don't find it terribly interesting. So let's concentrate on the particular story you are telling within this framework. And, more specifically, let me bring out for consideration a word you've used before, in passing—the word "privacy." I have a hard time fitting together all your various claims about, and involving, this word. Let me review the main ones. Bulletin boards have a lot in common with private human experiences. Bulletin boards may be the most intense public experiences their participants have. Bulletin boards are private. Privacy is a safety valve, a barrier against the subversiveness of play. Bulletin boards are very dangerous. Can you help me make sense of all of this?

B: I'll try. What you've offered me is something that looks like a straight contradiction together with something that looks like an inconsistent triad, and I find it easier to begin with the latter, since I can deal with it along the general lines of the dialectical logic I am recommending. To begin with, privacy is indeed a defense against subversiveness: subjectivity is literally deprived of direct access to reality, and hence of a direct way of playing itself out. Confines are drawn for this activity, and its results are admitted into the "public" arena only with great caution and after long testing. If you reason in an Aristotelian way, this is it: privacy and subversiveness are in opposing camps. But that's not a productive way of reasoning. Once the confines are drawn, and the barriers erected, subversiveness will find ways of allying itself with them, of becoming stronger through them. Let me give you an example of what I mean. I've been told that the drug AZT works by inhibiting the replication of the AIDS virus. But the virus is very resourceful, so for a while it tries various ways around the drug: it mutates in search of a reproductively advantageous variety. When it finds it, its attack on the organism can be more catastrophic than it would have been otherwise. The drug was originally opposed to the virus, but in the end it creates an environment where the virus's destructive capacity is exalted. And it does so by letting the virus do its thing, play with itself as it were, away from any immediate attention, letting it experiment until it's come up with something deadly enough—indeed, and this is the relevant point here, deadlier than what was there before. Much the same is true of subjectivity and privacy. The latter is a way of fighting the former—worse still, of making it subservient to the opposition. Subjectivity is now supposed to be used only to make objects more powerful. But this also means that there will be a space for subjectivity, that transgression will not be brutally denied and canceled right away, wherever and as soon as it emerges, that it will be allowed to do its thing in that space. Until, maybe, it becomes too strong for the space to contain it, and then privacy will reveal itself as an unwitting vehicle for the very party it was supposed to control.

A: So the triad is not inconsistent after all.

B: Not in a dialectical sense. Bulletin boards play much the same game as individual people: there are private dealings that are carried out inside them and are not supposed to emerge in the public awareness. For a while, this may be an effective way of controlling transgression: anyone who feels frustrated and rebellious can discharge his feelings into the network and go to bed happy. But after enough back-and-forth inside, this process may find strategies deadlier than anything the current form of life has defenses for.

A: You mean the form of life that writes this story.

B: Exactly. The one that feels anxious about its impending demise.

A: It looks like this line of thought might help you address the other apparent contradiction as well. For you are focusing not so much on the empirical phenomenon of privacy as on privacy as a general conceptual strategy, and one that can find application at different ontological levels—indeed, one whose various applications may be in conflict with one another. Here, for example, it seems that the privacy of the network may be synergic to its subversiveness, while it's working as a roadblock for the subversiveness of its members.

B: That's right, and the roadblock can work in a couple of different ways. It can let the individual participants lead a perfectly conservative public life, once purged of their subversive tendencies in the privacy of the network.

A: As people have done for centuries in the "privacy" of carnivals and the like.

B: Yes, a certain day of the year is "separated out" and people are allowed to indulge in "insane" behavior then. Until midnight, of course, when one is supposed to switch back to normalcy.

A: OK, and what's the other way the roadblock works?

B: It's the one that was suggested more directly by the "contradiction" you brought up. People, that is, may be simply swallowed by the network. Whatever their individual, private craziness, they will have no place to "infect" others with it except the network itself. Which means that, if "the system" can control the network effectively, it will also control its individual members—those of this latter kind, at least.

A: There is a certain perverse—or, if you prefer, Hegelian—consistency to your position. But are you sure you don't want to venture any prediction on how it will turn out? For example, will this electronic self come to look more like our selves? Will it develop a consciousness?

B: If you mean something that "feels" like the consciousness I have, I don't have a way of even addressing that question. If you mean something that fulfills the functions my consciousness does, it's certainly one possible result of this play of forces.

A: As I understand it, you don't identify with your consciousness.

B: What I think is that consciousness is primarily, originally, an institution of control, a public eye that exposes, and often denies, the reality of subjective subversiveness. And I think that considering consciousness essential to the self is a classic case of the defense mechanism known as "identification with the aggressor"—one that is instigated at least as much by the aggressor as by the victim. With that in mind, it can certainly happen that a similar "watchman" will be injected into the network, and even that the network will be ideologically identified with it—with the consistency it can distill out of the subject's messy activity by various selective, repressive moves. Once again, I'm not so interested in whether it does, but in how entertaining this possibility tells me something about the concept of consciousness—or, more accurately, confirms something about that concept that I believed all along.

A: It's a peculiar sort of "confirmation": one that is obtained by telling a story.

B: I'm not sure it's very different from any other case of confirmation, though certainly it doesn't fit the ideology usually superimposed on all such cases. The way it confirms my beliefs is by showing their resourcefulness, their wide applicability, their capacity to survive in very different conceptual environments.

A: Do your beliefs have enough resourcefulness to account for the situation you and I are currently in? Is this private, or what?

B: Its content is such that it's usually only played out in private. You and I, of course, are characters animating the same body. And ordinarily, these various characters are entertained in thought, and their dialogue developed there, before the body decides, in light, among other things, of its previous related moves, which party to side with. This, at least, is what is ordinary in philosophy these days. It wasn't always like this, of course: there were times when multiplicity and dialogue were more publicly displayed. And even today other social agents (artists, say) are freer to spread the virus of dissension by fragmenting in public. So what we are doing is using the infection of some traditional and some contemporary examples to provide ourselves with a behavioral genre where "siding with a party" is not a stylistic constraint.

A: Which, if I understand you correctly, is just as well for you.

B: Of course, talking about dissension and fragmentation is not going to get us anywhere—or it will take a long time before it does, before dissension and fragmentation become real. So realizing them by one's actual behavior is quite an improvement. But note that this "example," like all the other structures we've considered, can cut two ways. Once the behavior is actualized and played out in public, its subversiveness will no longer operate along mysterious paths, and may end up being more easily defended against. The first thing you need to do with an enemy, after all, is bring him out in the light.



Chapter Three—
The Metaphysical Structure of Kant's Moral Philosophy

For some time now I have been working on the following project: how to understand Kant's moral philosophy within the general framework presented in my Kant's Copernican Revolution . Eventually, I would hope to produce a monograph on the subject; the present chapter is intended as a prolegomenon to this effort. As such, it will limit itself to the most basic—indeed, metaphysical—aspects of my understanding of Kant's moral works, and will not engage the secondary literature. My goal is that of providing a sketch, clear as far as it goes and promising as a research program.

Two Notions of Cause

The fundamental problem left open by the first Critique is that of understanding the notion of action, and it is a formidable problem. Chapter 7 of my book argues that, according to Kant, a conceptualization of cognitive contact with the world (and hence, ultimately, of the world itself: in the Copernican paradigm "objects conform to knowledge") requires bringing in the concept of an act of synthesis: an element of choice is inextricably linked with the selection of the ontological level at which to "read" experience. But this choice seems to have no place in the world as reconstructed conceptually: what makes the world one world (and makes experience one experience) is the connectedness of events, the fact that all of them can in principle be accounted for as necessary consequences of their antecedents, thereby justifying that precisely that event had to be part of this world, that one should not have expected any other, that not only is the event no disruption of the identity of the world, but in fact it is an integral part of that identity. Therefore, the arbitrariness that is prima facie associated with the notion of a choice, the idea that a course of events is thereby initiated, on no other sufficient ground than the choice itself, appears to be an absurd one. And so is the notion of action, insofar as it is dependent on that of choice.

Therefore, if one attributes freedom to a being whose existence is determined in time, it cannot be excepted from the law of natural necessity of all events in its existence, including also its actions. Making such an exception would be equivalent to delivering this being to blind chance.[1]

So it is an intrinsic development of Kant's own views that makes it mandatory for him to face the perplexing cluster of concepts action-freedom-choice, and to acknowledge all of its perplexing character. He could not escape into an easy determinism because (as a consequence of the antinomies) some sort of free, active choice had to be postulated to explain the possibility that there are any objects at all. And he could not escape into an easy admission of freedom either, because reality had come to mean for him necessary integration into a unique spatiotemporal structure, and hence what by definition is not so integrated could not, by definition, be real.

But if it is some of Kant's own views that create this problem, and make it as much of a problem as it is, it is also other views of his that make a solution seem possible. I argued in my book (chapters 4 and 5) that two notions of necessitation (or cause)[2] surface in the first Critique: that of imposition (an event literally forcing another to come to pass, thereby manifesting its "causal efficacy"), and that of regularity or rule-directedness (events of certain kinds following one another in predictable ways, according to patterns that can be recognized). And I also argued that a large part of what Kant is doing in the Analytic of the first Critique is rewriting the more "naive" notion of imposition as regularity—that is, establishing that, whatever empirical content there is to a claim of causal efficacy, it is to be found in the bringing out of regularities of various sorts.

[1] Kant, Critique of Practical Reason, 98.

[2] I take causes and effects to be events, and necessitation (or determination) to be a key characteristic displayed by their relation (but not only by it: I want to allow for the logical possibility of an internal sort of necessitation). Thus different construals of the cause-effect relation (more loosely, of the notion of cause) will often issue in different construals of necessitation. I take explanation to be, first, an activity that one performs on events (another phrase for it is "accounting for" events), and that amounts to showing how they are necessitated (possibly by other events). Second, I also use "explanation" for any propositional outcome of this activity.



The relevance of these points will begin to appear when one realizes that, whereas causality as imposition makes an at least prima facie claim to uniqueness,[3] causality as rule-directedness makes no such claim. If we conceive of an event being necessitated in terms of its being kicked into being (note the strong agonistic resonances of the metaphors used), and find that there is more to what brought event a about than just, say, the previous occurring of b, it will be natural to think that whatever other c we find it useful to refer to acted in conjunction with b to produce a. In other words, either b was sufficient to make a happen, or one was just wrong in calling b the cause of a and one would have to think of something else instead (b and c, perhaps). If, on the other hand, no such kicking plays any conceptual role, and we just think in terms of the emerging of regular patterns, then there is no problem in principle in thinking that one has fully explained a by reference to b (because the pattern consisting of b followed by a is a regular one), and then turned around and proceeded to equally fully explain a by reference to something else. In this scheme of things, overdetermination would not have to reduce to several causal factors jointly determining an event: it would be perfectly legitimate to allow for several factors each independently and completely determining an event. Then, of course, it would make little sense to speak of the cause of a in general, though it might make perfectly good sense to speak of the cause of a within a specific explanatory context (where one concentrates on regularities of a specific sort).

It may be useful to insist on this crucial point, and articulate it by way of an example—which will also bring out the sort of explanation (and regularity) that is relevant, according to Kant, to moral contexts. So suppose that a game of chess is played, and at some point the black queen is moved from D8 to E7. Suppose we are asked to account for this event. We could answer by referring to electric impulses firing in the player's nervous system, muscles contracting, a hand moving and grasping the black queen, and so on. We could also answer in terms of the player's psychology, of his aims and strategies, of his understanding of his opponent, of his competence and skill.

[3] The qualification "prima facie" is essential here. A number of philosophers, of course, have brought out an element of multiplicity within causal explanation—for example, on pragmatic grounds—without self-consciously and deliberately abandoning the imposition reading. From my point of view, they are trying to introduce a Kantian element within a structure that is still non- or even anti-Kantian: they are stretching their conceptual tools (often beyond recognition, and with awkward results) instead of simply changing them. I, on the other hand, need no such stretching, so I can face the imposition reading in its most natural and plausible form. I make a similar point about the relation between Kant and the rationalist tradition in my book (pp. 102–103).



And we could also answer in terms of the game itself, by pointing out that the move is the rational one, the one one would have to make under the circumstances. In preparation for things to come, note the following feature of the last answer. Both the preceding alternatives have a potential for spreading indefinitely far from the present context, one explanation always leading to another and implacably extending the range of our concerns. The physics and physiology of the player's nervous system are in a relation of continuous interaction with the physical environment and with the rest of the player's physiology: innumerably many stimuli impinge upon (here come the agonistic resonances again—language has a way of resisting conceptual reform) that nervous system at any one time, and all contribute to the outcome. And of course all those stimuli are themselves effects of physical or physiological causes. Similarly, the player's psychology is not exhausted by this particular game: for one thing, his attitude—whether aggressive or cautious, solid and firm or wildly imaginative—has been shaped by his innate resources and by innumerably many outside influences (education, society, and so on). The explanation in terms of the game, on the other hand, need not spread outside the game itself: more generally (and relevantly) it is at least possible to think of it as ending somewhere. If a move is indeed the rational one under the circumstances, we need only reason about the game to come up with this sort of explanation of the move. Interestingly, one would feel the need of going outside if a move was not rational; then one would think that some disturbing factor had intervened (lack of attention, fatigue, or whatnot) and would be looking for an explanation that is not entirely in terms of the game.[4]

Question: Which of the three explanations mentioned above is the correct one, the one that brings out the true causal factors of the event? This question is (in the present framework) based on a misunderstanding. When causal necessitation is construed as regularity, each regular pattern provides an equally legitimate causal account.

Question: In how many regular patterns would Kant say that our behavioral moves fall? A first answer is that they must fall in a natural pattern (be integrated in the one spatiotemporal nature), or they would not count as real. It seems also possible, however, that—just as the move in the chess game—they fall in a pattern of rationality: that they are the moves one (could conclude by reasoning one) would have to make under the circumstances.

[4] Here I won't be pursuing further the analogue this point has within Kantian philosophy (but see note 9 below); so let me just note in passing what the analogue is. Kant's account of freedom has the consequence that either one acts rationally or one does not act at all. Therefore, if one does not act rationally, a certain sort of explanation of his moves is simply inapplicable—just as in the chess case.



At which point the conceptual analysis in the Groundwork of the Metaphysic of Morals becomes relevant: the "positive concept" of freedom turns out to be autonomy—that is, being guided by an inner, intrinsic law—and autonomy turns out to coincide with rationality. In conclusion, a behavioral move that, besides exhibiting the connectedness that accounts for its reality, was also such that reason could recognize itself in it—such that, abstracting from all external, contingent features of one's physical and psychological makeup and basing oneself only on one's capacity to recognize universal and necessary connections, one could establish it as the move to make—well then, a move that exhibited that character too could legitimately be called an autonomous, and hence a free one. It could be called not just a move, or an event, but an action. We cannot call it moral yet, because we have not (and will not here) introduce any evaluative words and show the connection between such evaluations and the metaphysics articulated so far. But once that connection is made, and freedom is proved to be equivalent to morality, we will be able to call it that, too. If, that is, we can find a move like that—a move that is also an action. Which, as it turns out, is more than we can hope to find.

But to the law of freedom (which is a causality not sensuously conditioned), and consequently to the concept of the absolutely good, no intuition and hence no schema can be supplied for the purpose of applying it in concreto.[5]

Objects of Thought

Return to the move in the chess game and suppose you want to provide the "rationality" explanation of it. I said earlier that in principle you could accomplish that without going outside the game, by just reasoning about it. Therefore, I also said, this explanation appears to have a feature that the other ones lack: it seems that here, as opposed to the other cases, we can get to a point where we need no further explanation, where we have an answer that raises no further question. But now let us ask ourselves how in fact this sort of reasoning would go.[6]

[5] Kant, Critique of Practical Reason, 71.

[6] As we proceed to articulate the answer to this question, and then to show its relevance within the metaphysics of morals, it will become less and less likely that the rationality explanation of a move can really be as conclusive as is suggested here. Still, this unwelcome development will be the consequence of other factors not yet uncovered. In the interest of clarity, it is essential to emphasize that at this stage the rationality explanation has a definite and promising distinctive feature—and one that will remain a distinctive feature of it even after those other factors have largely undermined the promises made here.



We would consider the present position on the chessboard, map out all the possible moves one could make, and follow out all their possible consequences. There may be staggeringly many of them, but not infinitely many, so after following out all of them and comparing the outcomes, we could prove that a certain move is the best (the rational) one.[7] With this example in mind, turn now to an analysis of human behavior in general—of any human behavioral move a.

Suppose you want to prove that a is the rational move to make. The example suggests that you would have to consider all the possible alternatives, follow out their consequences, compare them, and so forth. But this, by itself, won't do. A move in a chess game has a definite goal: that of bringing the mover closer to winning the game. So a move in a chess game is the rational one to make if it best approaches that goal. But applying this notion of rationality to a Kantian analysis of human behavior would at least have the effect of turning Kant into some sort of consequentialist. There would have to be a winning of sorts to be striven after: some sort of final state of affairs which is to be approached, and one's approaching which is crucial in establishing the moral significance of one's behavior. And it would be hard to reconcile all of this with Kant.

This remark . . . explains once and for all the reasons which occasion all the confusions of philosophers concerning the supreme principle of morals. For they sought an object of the will in order to make it into the material and the foundation of a law (which would then be not the directly determining ground of the will, but only by means of that object referred to the feeling of pleasure or displeasure); instead, they should have looked for a law which directly determined the will a priori and only then sought the object suitable to it.[8]

Still, something like the above following out of consequences is going on here. To understand how, I must bring out a crucial difference between chess and the Kantian analysis of human behavior. No one move, in and by itself, could according to Kant even conceivably be proved the rational one. It is only the (presumed) law of a move (in Kantian jargon, its maxim)[9] that is a plausible candidate for any such proof.

[7] It is of course possible that, in some circumstances, no single move will emerge from this analysis as the rational one—that is, that two or more moves will be proved equally rational. But I will disregard this possibility. Mine is, after all, only an example, and when we turn from the example to the real thing it will become apparent that (as was suggested in the previous note) the uniqueness of the rational move is the last of our problems.

[8] Kant, Critique of Practical Reason, 66.



Philosophers of a mentalistic orientation would look for this law in the mind of the (presumed) agent, but Kant, despite his occasional slipping into a compromising mentalistic jargon, sees it very differently. Intentions play no role for him, except insofar as the word "intention" is understood as shorthand for the ways in which one's moves come together—for the pattern, once more, that they draw.

[W]e cannot base such confidence upon an immediate consciousness of the unchangeableness of our disposition, for this we cannot scrutinize: we must always draw our conclusions regarding it solely from its consequences in our way of life.[10]

One's move of, say, donating money to charity will be a generous or a self-interested one depending on what one's other moves are, so there is no way that any move can—in contrast with the chess case—be judged in isolation. Its rationality can only be judged—if at all—within the context of one's whole career.

[A] man . . . can gain . . . confidence [in his moral disposition] . . . without yielding himself up either to pleasing or to anxious fantasies, by comparing the course of his life hitherto with the resolution which he has adopted.[11]

By a strange dialectical twist, this disanalogy with the chess case forces an analogy in how to deal with both cases. For, once again, we are down to considering alternatives: not as to how one could maximize a certain outcome, but as to how a move could be placed in context. Say that p makes move a. In and by itself, a cannot be judged rational: such a judgment only applies to a in conjunction with all the other moves b, c, d, . . . that p also makes. And how would this sort of judgment come about? One might, for example, point out that the succession a, b, c, d, . . . involves an egoistic element—an influence on the part of p's psychological makeup—that might at first have gone unnoticed but is clearly revealed once the succession is compared with the alternative a, b', c, d, . . . One might have thought that all those acts of donating money displayed a law of, say, generosity, but actually, when you consider the way p answered his neighbor yesterday, you are rather inclined to a less favorable reading of them.

[9] When somebody's behavior is taken to be an action, a maxim is a principle that is supposed to explain that (alleged) action, and may be offered as an explanation by the (alleged) agent. But this principle only acquires independent explanatory value (it is a law) if it is rational; otherwise, whatever the "agent's" subjective persuasion, his "action" is one more case of nature working itself out. In line with what I said in note 4, I will be leaving this issue aside here, and hence I will be freely talking of the law of a move, when in fact I could only talk of its presumed law (that is, of its maxim).

[10] Kant, Religion Within the Limits of Reason Alone, 65.

[11] Ibid., 62.



He could have answered differently—indeed, come to think of it, he should have answered differently—which throws an entirely new light on whatever else he has done or will do.

So the following has surfaced as a program for how to account for the rationality of human behavior. You cannot judge the rationality of a move a but only that of the character displayed by a.[12] And judging of such a character involves comparing the succession of which a is a part with all possible alternatives also including a and deciding through this comparison what the law of a was. An asymmetry threatens here, since establishing the rationality of a move might be an open-ended task (it might require comparing a sequence with indefinitely many others), whereas establishing its irrationality appears not to be (one piece of contrary evidence will be enough). And this asymmetry will eventually play an important role.[13] But, before we even worry about that, we are going to bump into another of Kant's conclusions, which makes this program unfeasible—just as it had made his epistemological program unfeasible.[14]

There may be no problem in talking about all the possible moves in a chess game. After all, one might say, any such move corresponds to a definite sequence of positions on the chessboard, and if one had enough time one could give physical representations of all such sequences; then talking about "possible moves" would be tantamount to talking about these representations. But (whatever the case ultimately is with chess) there are problems—big ones—in talking about possible behavioral moves, indeed about possible events in general, when by event one does not mean something as abstract and sterilized as a move in a game but a full-blooded, concrete sort of happening. For talk about such possibilities may turn out to be just that—talk.

The only events or things whose possibility we can assert, Kant has concluded in the first Critique, are the actual ones.

[12] Note how the word "character," used near the end of the first section in an ordinary colloquial sense, has now come to be used as a Kantian technical term. (For Kant, the "character" of an "efficient" cause is "a law of its causality, without which it would not be a cause." See the first Critique, 468.) The slippage between the two uses is a good example of the sort of "rewriting" (keeping many of the same terms, but giving them a different semantics) that Kant's revolution consists of, in my interpretation.

[13] The asymmetry will surface again in the suspicious attitude that, as I indicate later, comes to coincide for Kant with a moral stand. See notes 16 and 17 below, and the attending text.

[14] This point is argued in chapter 1 of my book, where one can also find an articulation of the points made below about real possibility.



We can, of course, presume to extend that range by introducing this variation and that, and playing with how things could be different, but such an extension is delusive: we will never be in a position to tell that things really could be that way. Real possibility (that is, possibility period—something that is more than an appearance of possibility) collapses for us into reality. However many words we use to attempt to describe alternative possible situations (worlds, behavioral moves), and however plausible the descriptions sound, we are never going to be able to produce a conclusive proof that these plausible descriptions do not hide deep-seated inconsistencies. Such a proof could only come from being given an actual example satisfying the description; short of that, there is never going to be any establishing that our words in fact describe anything (anything possible, that is).

Everything actual is possible; from this proposition there naturally follows, in accordance with the logical rules of conversion, the merely particular proposition, that some possible is actual; and this would seem to mean that much is possible which is not actual. It does indeed seem as if we were justified in extending the number of possible things beyond that of the actual, on the ground that something must be added to the possible to constitute the actual. But this [alleged] process of adding to the possible I refuse to allow.[15]

This conclusion has devastating consequences for philosophical "knowledge," insofar as the latter is supposed to be concerned not just with what there is, or what happens, but with what must or could be or happen. There comes to be no conceptual room for this activity, or for any outcome of it: rational inquiry is reduced to a play with words. Not surprisingly, the same problem shows up in the present context, too; after all, this is the context in which reason inquires into how it itself "can be practical," that is, can issue in action, can be the law of some happening, can find its own traces out there. And it shows up with the same devastating effects: reason will never in fact find those traces.

Say that I have just done something a, and I am trying to satisfy myself that it was or was not the rational thing to do. So I put a in the context of my other behavior and try to understand how I work, what my character is. I see a lot of what I have done, of course, maybe even all that is relevant, but what does that tell me? I need to bring some generality into the matter, to say things like, "I am this way, because if I were not I would have behaved differently in such and such circumstances." And, as I say this, I find myself not making sense.

[15] Kant, Critique of Pure Reason, 250; my italics, translator's brackets.



Could I have behaved differently then? What does that mean? I know of one sense in which I could not have: what I did was, like everything else that happens, naturally determined. Is there any other sense? Maybe I am describing a possible world just like this one, except that there I did this other thing instead. But is this world really possible? How would I know that? And if there is no way that I can ever know that, what am I talking about? I did what I did, and in some sense (the natural one) I had to. As for this other sense I am trying to capture, there is no way to cash it out.

All I am left with is the possibility of dealing with my behavior as if a judgment of rationality were possible. In the next section I will address the question of what makes it legitimate to adopt such an attitude, in the face of the conceptual limitations just uncovered; for the moment, I am interested in saying more about what sort of attitude it is.

Return to my analysis of my move a and think of the dialectic that must be going on in me as I carry out this analysis. I will be my own prosecutor and my own defense lawyer, of course, but note how unfair the setting is for these two roles. Whatever the circumstances, the lawyer in me has it easy: there is no way anybody can establish that I could have behaved differently, and hence no way anybody could have sensibly asked me to behave differently. The lawyer might want to engage the prosecutor in a debate and try to rationalize what I did, but that would just be a concession on his part, and one that he could always take back if things got out of hand. The prosecutor, however, faces an impossible task. He is trying to give content to something that is intrinsically empty, to stretch my sense of what I can or cannot do in ways that are beyond anybody's capacity for conceptual control. On the other hand, if he were ever to stop, this whole enterprise would fold: just because of how unfair the setting is, the defense need not even worry about any of this. It has its own ready-made answer, and it can well be content with it, unless somebody insists on making a fuss.

The field of the possible must be kept open if moral concerns are to be an issue, but keeping it open is entirely the burden of the party that wants to prove me wrong. The other party[16] must establish a universal statement (no possible sequence of moves is more rational than the one I realized), and a universal statement is more easily established the fewer elements there are in the domain—and becomes trivial if there is only one element.

[16] I cannot quite refer to this party as the one that wants to prove me right. In a way, it does, but so as to make the very notion of "right" lose any content. See also the following note.



Proving me wrong, on the other hand, requires establishing an existential statement (the sequence of moves I realized is less rational than some possible ones), and hence fighting to extend the domain.

If a significant range of possibilities could be determined once and for all, one would have a neutral ground on which to sit and happily "calculate." But since this is not an option, since all we have is verbal descriptions of possibilities—descriptions that might, of course, make sense, but then again, might not—the asymmetry that threatened earlier surfaces again, with a vengeance. It is not just that proving rationality is an open-ended task: one has to work to keep it that way. It won't be enough to sketch out another way things might have gone, because we may well decide in the end that it is impossible for them to go that way; we will have to try harder and sketch out yet another possibility, and yet another one.[17]

So, in essence, the attitude I am describing—that of proceeding as if morality were an issue—comes to be a critical one, an attitude of suspicion, of trying to find fault with my behavior. As a pattern seems to emerge in some things I do, I will test it against my understanding of what other patterns might be relevant, I will look for clues that might give away the self-centered nature of the pattern. I will play devil's advocate and refuse to take any of my own words, any of my own sincerest pronouncements, at face value; I will want to see how those words fit my other moves, what picture all of these data draw, and what other pictures I can come up with. I will keep on worrying, combing my behavior patiently, looking for tensions, for lapses: if I could ever convince myself that I have found one, I would know that my behavior then was not rational. I know that I cannot so convince myself but I do it anyway, for this is what a moral stand toward one's own behavior is, according to Kant.

It is indeed at times the case that after the keenest self-examination we find nothing that without the moral motive of duty could have been strong enough to move us to this or that good action or to so great a sacrifice; but we cannot infer from this with certainty that it is not some secret impulse of self-love which has actually, under the mere show of the Idea of duty, been the cause genuinely determining our will.

[17] One might argue that this is not the same asymmetry noted earlier, since that asymmetry favored irrationality (irrationality was easier to prove), whereas this one seems to favor rationality (rationality now seems easier). But the lawyer's "ready-made answer" in the present dialectic only establishes rationality in a trivial sense—a sense that makes the whole enterprise worthless. Thus, ultimately, the party that wants to prove me wrong—and consequently tries to open up the field of the possible—also acts in the interest of proving me right, insofar as proving one right is more than proving the insignificance of saying he is wrong. And, of course, when the field of the possible is kept indefinitely open, it is in general easier to argue for an existential statement on that field than for a universal one.



We are pleased to flatter ourselves with the false claim to a nobler motive, but in fact we can never, even by the most strenuous self-examination, get to the bottom of our secret impulsions.[18]

[I]t does not . . . seem advisable to encourage . . . a state of confidence; rather it is advantageous (to morality) to "work out our own salvation with fear and trembling."[19]

[19] Kant, Religion Within the Limits of Reason Alone, 62.

As for giving an object to my suspicions, substantiating them or resolving them, any of that is just an object of thought, something I can contemplate within the delusive realm of philosophical reflection—that silly realm where I detach things from some of their traits, and recombine these abstracted traits in ways that I find entertaining, and can see nothing wrong with the outcome, and then feel like I just proved something to be. I cannot claim that these objects of thought will ever become objects of experience, not even of possible experience.

Room for Faith

But then, why don't I just give up? Why don't I admit that I am not an active character in what I may have thought was my own story? Why don't I settle for being one of the many ways in which nature works itself out? Isn't determinism right, and shouldn't any claim of responsibility, any praise or blame of human behavior, be ridiculed as conceptually confused? Not necessarily—in fact, not at all.

From Kant's point of view, the incompatibilist's determinism is a conjunction of two assertions: that everything occurring in nature is naturally necessitated and that it is not necessitated otherwise (by a free choice, for example). Anybody giving the imposition reading of necessitation, of course, would find the second assertion redundant, since in that reading there is only one way anything can be necessitated. But Kant gives the regularity reading of necessitation, and within this reading both assertions must be proved if the incompatibilist's determinism is to be established. The first assertion Kant would have to accept, as a simple consequence of what "nature" has come to mean for him, but what about the second one?

We got disappointingly negative results in the previous section, and we concluded that one can never know that one's behavior displays autonomy (in other than a vacuous sense). But the one negative result we did not get is that one can know that one's behavior does not display autonomy. For all we know, it might: we simply have no way to tell. Our results were negative in the sense in which undecidability results, not inconsistency ones, are. Because we could not conceptualize the comparison class needed to flesh out our rationality explanation of a move, we could not assert that explanation. But, for the same reason, we cannot deny it, either. And this is not just an empirical matter: it is not that we have not decided the issue yet but might do so later. The impossibility we found was a conceptual one, and one that we share with the incompatibilist. We had to give up any hope of ever concluding that our freedom is real, not just because of practical limitations but because of the very nature of the case. However, in the process of thus giving up hope we gained the right to shut up our opponent, since he will never be able to conclude that our freedom is unreal either.

Thus the Idea of freedom can never admit of full comprehension, or indeed of insight, since it can never by any analogy have an example falling under it. . . . But where determination by laws of nature comes to an end, all explanation comes to an end as well. Nothing is left but defence—that is, to repel the objections of those who profess to have seen more deeply into the essence of things and on this ground audaciously declare freedom to be impossible.[20]

[20] Kant, Groundwork of the Metaphysic of Morals, 127.

"I have . . . found it necessary to deny knowledge , in order to make room for faith ," Kant says in a celebrated passage of the first Critique .[21] But in that work we find only, in effect, a metalinguistic version of this claim. Knowledge of tables and chairs, or, for that matter, of mathematical and physical laws, is not denied, and where knowledge is denied—knowledge of things or principles that are unconditioned in a way that would satisfy reason—faith is not invoked as a substitute for it. We are certainly not invited to have faith in the possibility of things in themselves. What happens there, instead, is that philosophical reflection comes to see its own cognitive limitations, and hence how much of a fideistic element there is in any attempted rational reconstruction of knowledge or anything else, how much of a leap one ultimately has to accept within the texture of any project of making the world intelligible to oneself. In the moral works, as I noted earlier, reason is doing more than reconstructing something else: it is looking for itself as a possible

[20] Kant, Groundwork of the Metaphysic of Morals , 127.

[21] Critique of Pure Reason , 29.


46

agent in the real world. So, not surprisingly, we end up with an object-language version of the same claim about knowledge and faith. Because of its intrinsic limitations, reason will never know that indeed it is—or even can be—an agent in the real world. Faith will be more than an inevitable component of any rational reconstruction: it will be the essence of the attitude with which an imperfectly rational being of the sort we are must live its own experience. I know that the table is brown, and that two plus two equals four. It is only when I try to understand what that means that I end up invoking things I cannot be said to know. But I will never know that my behavior is free or moral; I will never even know that it makes sense to say that. If I have good, practical reasons to believe it, on the other hand, no rational argument can rob me of this belief. I will trust that my suspicious attitude toward my "motivations" is not groundless, that my sense that I could be doing better is not absurd, that my efforts in that direction are not entirely delusive, and I will know —this, at least, I will—that my despairing opponent is in no better shape than I am, that his is as much of a leap of faith as mine, if a suicidal leap.



Chapter Four—
Kant Is on My Side:
A Reply to Walker

In his review of my Kant's Copernican Revolution, Ralph Walker raises a number of important stylistic, methodological, and substantive issues. Discussion of these issues will allow me to highlight and develop both my position in the book and my more general philosophical stance.

Style

For some forty years now, analytic philosophy has dominated the Anglo-American academic world, and shown some strength on the European continent as well. To some extent, the secret of its success must be found in the looseness of its defining features. A tight, ambitious philosophical program such as that of the Vienna Circle (from which analytic philosophy somewhat loosely derives) was bound to fail quickly, but analytic philosophers have no such recognizable programs. The ways they identify themselves are largely sociological: they go to the same conferences, publish in the same journals, refer to one another in their work, are members of the same associations, and suchlike. They also have, of course, some sort of general training in (usually infant) logic, a penchant for clarity of exposition and commonsensical views, and a conception of their discipline as cumulative, professional, and scholarly. But such generic qualifications cannot bring into relief a philosophical school any more than a scientific project or a political party, and hence within the wide—and monetarily rich and academically powerful—world of analytic philosophy you can find pretty much whatever you want: from metaphysics to theology, from aesthetics to animal rights. An aggregate of this kind will not die a quick, merciful death: there are no standards by which it can fail. It will rather fade away, as old soldiers do—drown in its own boredom. To slow that painful process there is only one strategy at hand (the same one soldiers, young and old, are limited to): conquer, consume, and devour more and more land. So it's Husserl today and Hegel tomorrow, Nietzsche here and Heidegger there, and—who knows?—one day it might be Paracelsus or Schelling. All, of course, streamlined and normalized to make them sound like the latest issue of the Philosophical Review, and never mind if what they say is crazy, provided one paraphrases it in the right manner and gives their "arguments" some standard form. Which is sometimes difficult to bring about, and involves doing some violence to the texts, but when Sherman said that war is hell, he was not thinking of life in the barracks.

The sad reality of this race toward the abyss has begun to sink in with some members of the aggregate. New jargons are becoming fashionable, new journals emerge, new associations are formed, and the threat is clear: soon the wise guys might have no market for their wisdom. The reactions, again, are obvious. It is silence or derision until the new thing works; then, with some, it is negotiations (doesn't Derrida, after all, talk much like Wittgenstein?) and, with others (usually those perceived to be weaker), it is open confrontation. When the latter happens, the amusing paradox is realized: this cocktail-party sort of network suddenly acquires a strong sense of its identity and values.

It is this mythical battle of gods and giants that is fought in the background of the modest occurrence of Walker addressing himself to my Kant book. And, in all of its modesty, the occurrence is a paradigmatic one. In his first paragraph, Walker suggests the threat: "[Bencivenga] believes that the analytic philosophy of the present day remains grounded in pre-Revolutionary habits of thought, and that Kant's Copernican Revolution is as much needed as ever." There follows an accurate reconstruction of some of the problems I find in "pre-Revolutionary habits of thought," but when it comes to the way I treat "modern attempts," readers are reassured that there is nothing to worry about: my treatment "betrays an insecurity of touch which recurs whenever [I attempt] to tackle contemporary analytic thought." How that insecurity is displayed readers are not told: the only thing close to a criticism here is that I deal with such modern attempts very briefly. Nor are readers told that my treatment has only purposes of illustration, or that when I claim that my dismissal is not definitive I also explain why nobody could do any better, given the kind of commitment one has to a paradigm (incidentally, Walker puts quote marks around the word "definitive," which makes it sound much less serious; here is a good example of doing maximal damage with minimal effort); but readers need not know that. What they need is some little words to exorcise the threat, and those words are provided: "extremely brief section," "insecurity of touch."

After these soothing words, Walker almost gives himself away when he acknowledges that "one can hardly dispute Bencivenga's claim that . . . these are serious and difficult problems." But by now he must feel that he has a firm grip on his readers: it is enough, he thinks, to remind them that this is so "for most of us." By this and his previous moves, he clearly thinks he has managed to divide the world into "us" and "them," assigning me, of course, to the foreign camp. To make this foreign character more apparent, he goes on to point out that he has a hard time following me, and that I develop my approach "from a side of Kant's thought that some writers in the analytic tradition have found unfortunate and have tended to play down."

Why do I want to beat this dead horse? Why do I not let Walker and his acolytes go the way dinosaurs did? Because I find it fascinating to explore how rhetorical means operate within a philosophical form of life that has made formal logic its flagship discipline. It teaches me something about how words are in fact used to change the world, or to leave it as it is: to do things, in sum. Up to this point, about one-third of the way through his review, Walker has not said a word about my interpretation of Kant. He is going to, in a minute, and the things he will say are of sweeping and destructive generality. But, before saying them, he must establish his credibility, and what better way of doing that than by having readers take sides, indeed, an opposite side from me? After that, it will be easier to make them swallow the idea that Kant, too, is on Walker's side.

As it turns out, I think that taking sides is doing a disservice to philosophy. In some of my other works I insisted that philosophy is promoted by foreign insemination and cultural transplants.[1] It is not promoted, of course, by simple anarchy: to transplant a tradition onto another, one needs to be proficient in both, and that requires more discipline and training than being proficient in one. Philosophy is an interminable—and interminably destabilizing—schooling and experimenting, and taking sides is a banal attempt to find a shortcut, by putting your value judgments where your sweat should be. So when it comes to analytic philosophy, or any other such movement, more or less vaguely characterized, it is not "us" and "them" for me: it is rather what I can learn from it, and how I can play with what I learn. And I will not enter a battle whose use or legitimacy I don't recognize.

[1] See, for example, my Looser Ends and The Discipline of Subjectivity.

Method

Walker has just acknowledged (and apparently endorsed) the fact that there are parts (or "sides") of Kant that the analytic tradition has tended to play down. But later, after some reconstruction of my interpretation, he claims that the position I describe "is hardly Kant's." I will have something to say in the next section about the substance of his claim; for the moment, I am rather interested in what seems to be a tension in his conception of the enterprise of Kant interpretation—or, for that matter, of the history of philosophy in general. The tension is this. Clearly, Walker thinks that it is not profitable to take into consideration all that Kant said, and that some things are profitably left aside. But then, given that we can so pick and choose, how is one to decide what counts as Kant and what "hardly" (which is, I admit, less serious than hardly) does so?

Walker himself provides the beginning of an answer later. To excise transcendental idealism from Kant, he claims, "left us with what was still recognizably Kant; for it left us with a major philosopher, fighting Kant's battles with Kant's weapons." This is not going to help if we take the last two occurrences of "Kant" as proper names, since Walker and the tradition he belongs to have already decided to discount some of Kant's battles and Kant's weapons as "unfortunate," and hence we wouldn't know how many (or which ones) we can discount and still have something that is "recognizably Kant." We can go a little further, I think, if we see those last two occurrences of "Kant" as doing the work of common nouns, and being largely synonymous with the previous expression "major philosopher." This way, something will count as recognizably Kant if it fights worthy battles with formidable weapons, or something of the sort. But then, of course, the value-ridden nature of this judgment becomes apparent: to be recognizably Kant is (at least) to fight what Walker (or the tradition he identifies with) conceives as worthy battles with what he (or it) conceives as formidable weapons. So, once again, we are back to a confrontational mode, this time applied to a historical and interpretive task.

Is there any alternative to such a confrontation? I think there is. If we decide that a certain author or text is worth looking at, we look at the whole corpus or the whole text and try to explain its occurrence: to explain why those words were written—those we find fortunate and those we find unfortunate, those we judge illuminating and those we judge blunders. In fact, explaining the blunders will often be more instructive (if, that is, we do more than "play them down"), because our explanation is likely to reveal some underlying tensions or frictions, and those tensions and frictions will offer precious glimpses into how the underlying conceptual machinery works or doesn't work. Through an operation of this nature, we really get the most out of an author or a text: not just their superficial consistency or inconsistency, but the options that they were facing, the complications into which each option led, the strategies by which they tried to handle, or sometimes deny, those complications. And by going through these options and complications and strategies we learn to appropriate that author or text, to make them participants in our internal dialogue, to know not just what they said but what they would have said if . . . , and when we face our next theoretical issue their voice will be one more to listen to, their journey one more example to consider, as we painstakingly try to find our bearings within the matter at hand.

It is this kind of operation that I tried to perform in my book, and that is why I had to concentrate on some of the most obscure and controversial passages—not to emend them, as Walker suggests at one point, but rather to learn from Kant's own awkward handling of them through the two editions of the first Critique. The outcome of this operation is a theory of the text or of the author, and of course the theory can be wrong, but at least there is a definite sense in which it is a theory of that text or that author. With Walker's (and the analytic tradition's) pick-and-choose attitude, on the other hand, I don't see what ground there can be—apart, again, from simple value judgments—for saying, as he does, that what I come up with "is no longer Kant at all."

Substance

There are two substantive objections Walker makes to my account, and they strike at the very heart both of my book and of Kant scholarship. So they need to be considered with some care. The first one has to do with transcendental arguments. Here Walker claims that I "systematically [play] down [!] the role of arguments of this kind in Kant," though I "cannot deny that Kant uses them." Now this is confused (perhaps it is one of those points where Walker had a hard time following me), so let us try to set the matter straight. A transcendental argument for Kant is, as I show in the book, simply a conceptual argument: an argument that requires nothing but the mobilization of concepts and depends on no appeal to experience. That philosophy is limited to arguments of this kind is a consequence of its purely conceptual status. Of course, some such arguments are better than others; some Kant endorses and uses, and some he does not. Among those he uses, some attempt to establish the possibility of some synthetic knowledge, but that there is such synthetic knowledge is no philosophical matter: it is an empirical matter and hence does not belong in Kant's transcendental concerns. So no transcendental argument can, by Kant's own lights, have premises like those mentioned by Walker—that there is experience, or spatiotemporal experience[2]—and Kant's occasional suggestions to the contrary are evidence of some of the tensions I intimated above, and as such are discussed in some detail in my book. In conclusion, I don't deny that Kant uses what he calls transcendental arguments; I deny that he uses what his analytic critics (including Walker) call transcendental arguments. It is these critics, not Kant, that I have a tendency to "play down."

[2] In his own Kant, Walker points out (p. 15) that with these premises Kant is not going to go very far in what Walker conceives as Kant's task, and in fact Walker's judgment on what he calls transcendental arguments in the Critique is ultimately quite negative. The best he can say about them is that "[i]f they can be made to work . . . [they] are a particularly satisfactory way of replying to sceptics" (p. 14), but the fact of the matter is that, in his view, Kant's transcendental arguments don't work. He does not, however, ask whether Kant might possibly have had some other weapons (and battles) in mind.

The second criticism is that in my view "it is facts about human thoughts and experiences, and these alone, which determine the truths about objects in the world—insofar as there are determinate truths about these at all. Nor is there anything that is wholly independent of us which our thoughts or experiences have to match, or to which they are answerable in any way." This, again, is confused. It is clearly facts about objects that in general determine (that is, cause) facts about human thoughts. Facts are the matter of experience, and Kant is an empirical realist: for him, things (res) are what constitutes the empirical realm. Within transcendental reflection, however, we ask ourselves how the empirical realm is possible, and try to provide a conceptual account of this possibility. It is in such an account that thoughts (or rather, representations) become the dominant factor: the legitimacy of calling something an object is accounted for by bringing out certain properties representations have and certain patterns they follow. And note: not human representations, or my representations, but representations, period. Humans (including me) are as much in need of a philosophical account here as anything else. Even with myself, I will have to invoke the same patterns and the same properties to make philosophical sense of the claim that I am an object at all, and only when that is done will I be in a position to justify the attribution of mental occurrences to this object which is myself. And yes, this "leaves no place for the thing in itself, except as an Idea—the conception of something objective and independent." But that is exactly the kind of place the thing in itself can have within transcendental philosophy, which has no things to deal with, but only conceptions, and is not interested in what things there are, but in how the concept of a thing is related to other concepts.[3]

Clarifying these substantive matters is helpful, but once again I must emphasize that Walker's contentions turn on taking philosophical sides. While formulating his second criticism above, he throws at me a couple of menacing references to Fichte—one of those authors who have not yet been successfully conquered by analytic philosophers. And, by driving a wedge between Kant and Fichte, he extends that wedge to include me: "[Kant] thought Fichte had no workable account of the given, and he would have said the same of Bencivenga." As for transcendental arguments, the issue is a fundamental one. Analytic philosophers think of arguments necessarily establishing their conclusions as the ideal philosophical tool, and therefore a "major philosopher" for them is one who has found a more clever way than most to force his opponents to accept a given claim. That is why suggesting that for Kant theories (or, more provocatively, stories) are at least as important as arguments, and creativity and imagination at least as valuable as logical cogency, is equivalent for them to attempting to deprive Kant of the "major philosopher" status. That is why "Kant's weapons" must be arguments of some sort, and "Kant's battles" issue in proving somebody definitively (or "definitively") wrong.

[3] Of course, the conception of a thing in itself will turn out to be one that can have no experiential realization for Kant. But this claim does not belong to the initial characterization of Kant's transcendental idealism; it is rather the conclusion of a complicated conceptual path—and one to which Walker devotes no attention here.



Chapter Five—
Rorty and I

In this chapter, I use the following abbreviations: P for Marasco's Child's Play, N for Orwell's 1984, C for Rorty's Contingency, Irony, and Solidarity, E for Rorty's Essays on Heidegger and Others, and O for Rorty's Objectivity, Relativism, and Truth.

The other one, the one named Rorty, is a happy fellow. He "frankly recognizes" his ethnocentrism (O 30n), but is not worried by it. He thinks that a reference to us Americans may be "much more persuasive, morally as well as politically," than one to humankind (C 191), and the thing doesn't seem to touch him in the least. He lives in a world dominated by contingency, where "how we get from here to there . . . is just the way things happen to have fallen out" (C 182), but finds it totally comfortable and relaxing. In fact, he criticizes others for "their failure to take a relaxed, naturalistic, Darwinian view of language" (E 3) and other matters (O 60). He has no explanation for anything much, "no deep premises to draw on" (O 110) to establish the superiority of his beliefs to those of others, no story to tell to reconcile the various conflicting aspects of his or anyone's personality, but somehow feels no urge to come up with any of that, "no special duty to construct" the relevant idealizations (O 68). He thinks that philosophy—his line of work—is no big deal, that his own efforts "may by now be irrelevant to contemporary high culture" (E 101), and that one should shrug off "the idea that there is something called 'philosophy' or 'metaphysics' which is central to our culture" (E 104), but is not impressed by this outcome: for him, it merely proves "that one of the less important sideshows of Western civilization—metaphysics—is in the process of closing down" (O 218). Indeed, he contributes enthusiastically to the process by adopting a "therapeutic" approach that denies most traditional philosophical problems instead of addressing them, and thinks of this liquidation and of the "[l]ovably old-fashioned [metaphysical] prigs" (E 86) who are to suffer from it as "the only excuses which [he has] for staying in business" (E 86). In sum, his is "a philosophy of solidarity rather than of despair" (O 33), a philosophy devoid of tension and pain, where "[o]ne will be content to use lots of different vocabularies for one's different purposes, without worrying much about their relation to one another" (E 127, my italics), where "society as a whole asserts itself without bothering to ground itself" (E 176, my italics), and whose heroes "would happily grant that a circular justification of our practices . . . is the only sort of justification we are going to get" (C 57, my italics).

I am afraid I am much more troubled than that. I realize that self-referentiality may be the (insoluble) problem of contemporary philosophy, and that in some sense there may be "no conditionless conditions" (E 55), but this suggestion hurts me: the idea that I will never be able to argue for the correctness of my views in any noncircular way is one that I cannot comfortably stare at. I understand that the systematic unity my philosophical heroes worked for is at best a focus imaginarius, and yet I can't quite let different spheres "coexist uncompetitively" (E 170): I feel inevitably led to bring them together, to make some common sense out of them. I agree that a theory is not a way of representing the world but a way of coping with it, but I can't avoid making this statement part of another theory, my theory, and I can't help feeling that this theory is right and the others aren't, that this theory tells me how things are and the others don't. I know that philosophers have mostly provided just a minor source of entertainment, by falling in wells and stuff, and that when they announced the end of this or that, everybody else kept looking the other way, but I am strongly tempted to say that there is something wrong with this lack of interest, that people ought not to have laughed at the philosophers' doom and gloom. I accept the view that "the only notion of 'object' we need is that of 'intentional object'" (O 106) and that any more realist construal of what is out there is bound to fail, yet I still find myself attempting elaborate distinctions between some intentional objects and others, say, between neutrinos and winged horses. I still try, if not to climb out of my own mind, at least to describe what that would be like. I share de Man's and Sartre's "sense of human life as a perpetual oscillation . . . [at one pole of which] is the desire to attain a God's-eye view . . . [and at the other] the thought that this attempt is impossible" (E 131). Mine is, I guess, a philosophy—or at least a condition—of despair.

Maybe my problem is, as with other, more distinguished colleagues, "a failure of nerve" (E 63). My "desperate anxiety" (E 63) matches Heidegger's as he wants to think of himself as offering more than "simply a history of the alterations in human beings' self-conceptions" (E 63): we don't have the guts to look into the void we have discovered, the maturity to accept a groundless form of life, the intellectual honesty to admit that we are socialized through and through, that there is no residue to the conditioning process to which we have been subjected. We are still longing for fetal repose, for parental guidance, for God-given bliss. We haven't woken up to the reality of an adult world—not the sad reality of it, because it is only sad for those who have our childish expectations. If we could get rid of those expectations and have therapy come to its happy end, not only would we be able to stare into the void, we would do it with a smile. We would "no longer hope for world-historical greatness" (E 81), and would let "transcendence go" (E 181). The myths of the past—of the metaphysical infancy of our species—would be forgotten and, finally, we would have grown up.

The more I think about it, the more it seems that this guy Rorty is made of different stuff altogether, possibly the Übermensch's ironic, joyful stuff. He can stand a conflict between private and public matters without being bothered by it, pursue the search for his own perfection without any attempt at proselytizing, live in the midst of irreconcilable differences and be perfectly tolerant of them, ask no question unless he can find an answer for it, think that there is nothing ultimately wrong with Hitler and still be ready to die for his own views. "[A] lot of small contingent facts" (C 188) must have brought about this fortunate mutation, and so much the worse for those of us who find ourselves competing with it.

But wait. There may be trouble in paradise. It comes in three steps. First, with characteristic geniality and untroubledness, the mutant acknowledges quite another excuse for "staying in business"—other than "the only" one of ganging up on old metaphysical prigs, I mean. "All we philosophers," we are told, "have at least a bit of the ascetic priest in us" (E 71). And ascetic priests are always after what cannot yet be said; they are not content with regurgitating—sorry, apprehending—their time in thought (in fact, they don't seem to be content with too much at all). They want "a language entirely disengaged from the business of the tribe, irrelevant to the mere pursuit of pleasure and avoidance of pain" (E 71).



And lo and behold, this misguided activity pays some dividends. "For the result of trying to find a language different from the tribe's is to enrich the language of later generations of that tribe. The more ascetic priests a society can afford to support, the more surplus value is available to provide these priests with the leisure to fantasize, the richer and more diverse the language and projects of that society are likely to become" (E 72). You might say that this is just an unimpressive bit of doing Nietzsche on Aristotle, but it sure forces me to reconsider the agenda of my formidable adversary in the struggle for philosophic (and academic) life. Now it looks like traditional theorizing (recycled as storytelling) is not to be terminated, but rather encouraged. Theorizers are not to be taken too seriously, but are to be kept around, fed, and sheltered from the storm. We might even think of doing something with those wells for which they seem to have a leaning.

The natural way to have your cake and eat it, too, as seems to be the alien creature's new goal, is by resurrecting old foci imaginarii, and in fact this Kantian relic, still chastised in its Habermasian version on C 67 and wearily accorded a courteous bow on O 100, gets a new lease on respectability on C 195 and C 196—though with sundry qualifications, intended to keep some distance from its scary historical source. And so does the more realistic, no-nonsense, utopian version of it—a focus imaginarius is, after all, nowhere to be found—which is now supposed to "lift our spirits" (O 212), to "persuade" us (O 220), and to give us "visions of glorious new institutions" (E 121). Consistently with this picture, the creature considers it one of his aims "to suggest the possibility of a liberal utopia" (C xv), and later he provides a powerful summary of what the citizens of this dreamy, inspiring world would look like: "They would be liberal ironists . . . people who combined commitment with a sense of the contingency of their own commitment" (C 61).

The problem with this summary (and here comes the second station of the Calvary) is that it is too powerful—that is, too concise. "The important thing about novelists as compared with theorists is that they are good at details" (E 81). Since novelists are role models here, and theorists at best an antiquarian curiosity (and at worst an embarrassment), we might expect to get a lot of details about the liberal utopia; after all, "[i]f Freud [another role model] had made only the large, abstract, quasi-philosophical claim . . . he would not have startled. . . . What is new in Freud is the details he gives us" (C 31). But no details are forthcoming: only large, abstract, quasi-philosophical claims. Only "amateurish" guesses (O 53), "spiritually comforting" fuzziness (O 44), a blurring of distinctions (O 83), a "thinking of the entire culture, from physics to poetry, as a single, continuous, seamless activity in which the divisions are merely institutional and pedagogical" (O 76). Which means that divisions and distinctions and details—the sorts of things that make a story worth reading, or a speech inspiring—are to come from the outside: sociology, history, politics, or whatever. When they come from inside philosophy, they come from somebody else: "[T]o spell out my fantasy in detail, I shall use strategies suggested by my favorite contemporary antiessentialist, Donald Davidson" (O 103). As far as the creature is directly concerned, there are only platitudes, everyday common sense, "old, familiar, inconclusive" arguments (O 67), and occasional "persuasive" moralizing matched with the conviction that "revolutionary politics in [North Atlantic] countries can be no more than intellectual exhibitionism" (O 221). There is, in other words, a vast landscape of boredom. Not even in regard to his own source of livelihood does the creature have anything detailed or inspiring to say. He knows, of course, that one of the main consequences (and goals) of a utopia is the sense it makes of the past; but for him, even when it comes to philosophy, it's contingency all the way down. "In time it may seem merely a quaint historical accident that [these] institutions bear the . . . name [of philosophy]" (E 23). Or it's the same sort of trivial generalities as everywhere else: "In such a [utopian] community, all that is left of philosophy is the maxim of Mill's On Liberty, or of a Rabelaisian carnival: everybody can do what they want if they don't hurt anybody else while doing it" (E 75).

If statements like this sound disappointing, it's because they are, not just for the occasional reader but for their own author. After some devastating criticism of what he calls "the School of Resentment," he says, "If my criticism of this School seems harsh, it is because one is always harshest on what one most dreads resembling" (E 184). And then he adds, a bit later on, "[T]he only difference between us and the Resenters is that we regret our lack of imagination, whereas they make a virtue of what they think a philosophico-historical necessity" (E 84). Construed unsympathetically, this might be taken to mean: They at least have a metastory concerning why they have no story to tell. We don't even have that. We only feel bad about it.

As it turns out (so goes the third movement of our sonata), the creature does have a metastory after all. It's not an inspiring one, not for his own ethnos at least—the audience he is supposed to address and persuade—but it provides an explanation of sorts. The metastory says that American "tragic liberals" have gotten complacent and lazy, and content themselves "with saying that, as institutions go, [theirs] are a lot better than the actually existing competition" (E 179). They have come to the end of their day; theirs is an Alexandrian culture that can only cling to its privileges. The sort of preposterous romanticism that makes stories intricate and surprising is no longer accessible to them: "[W]e Alexandrians no longer have the strength [for it]" (E 192). It is these liberals—people like our formerly happy mutant—who are now to be "berated" for their "failure of nerve" (E 180). Richer, more detailed, more inspiring narratives must come from somewhere else, from places like Brazil, where there is more at stake. "Being a political romantic is not easy these days. Presumably it helps a lot to come from a big, backward country with lots of raw materials and a good deal of capital accumulation—a country that has started to lurch forward, even though frequently falling over its own feet" (E 180). Which reminds me of when I was unemployed in Italy, in my middle twenties, and decided to take a chance and go to Canada on a fellowship, and my (tenured) professor told me that I was lucky to have nothing, that he could not have taken that step. Somehow, I didn't like that, but that's another story.

Now this is a curious reversal. By the end of this three-step process, our mutant has turned into an endangered, decrepit species, desperately defending its ecological niche. Or, maybe better, our childlike, joyful, happily schizoid god has turned into an announcing angel, the prophet of a new millennium, the guy who must die minutes before the promised land is reached. He's a dinosaur like all of us, but a more perceptive specimen; his "nerve" is not enough to make him enter the Garden of Eden, but enough to admit what it takes to get there, and that we haven't got it. He may not be inspiring, but he sure has a point.

Or does he? Brazilian neoromantic politician-philosophers like Roberto Unger may well stir people's juices more than yet another analytic philosopher's analysis of the paradox of analysis. They do so because they make bold claims and sustain them by clever, detailed narratives. Of course, they could be wrong—that is, their community (or ours) might end up having no use for them—but none of this detracts from how exciting their stories are, from how good they are at doing their job (the ascetic priest's or whatever else we want to call it). That is all well and good, but what about some old-fashioned romantics, such as Hegel, for instance? They made bold claims and sustained them by clever, detailed narratives; but somehow our fellow dinosaur thinks that there is something deeply wrong there. They thought they had the definitive truth, believed that long-lasting, intricate problems were finally being resolved by their novel, "preposterous" approaches. But does Unger perhaps think that what he says is wrong, or is just another dream? It is the others, the Alexandrians, who refer to his proposals as preposterous; as far as I can tell, he considers them wise and rational.

It comes down to a matter of nerve, once more (there must be a reason for courage being so central here—maybe later I will understand it). For, suppose you have "persuaded" yourself of the optional character of all your "basic" beliefs, of the irredeemable quality of your ethnocentrism; what comes next? Do you then have to go on reiterating the same boring, general, quasi-philosophical claims, or expressing your regret, or writing footnotes on those others who still (delusionally?) believe they have stories to tell? No, you don't. You can still tell a story, your story, tell it straight, with no qualifications, no tongue in cheek, no cautionary side remarks. You can tell it as if it were the ultimate story, and let others take it apart. It is for them to keep you honest: if you do it yourself, you will end up stuttering to no purpose.

On C 104, the enlightened dinosaur considers a similar suggestion: "It would be charitable and pleasant, albeit unjustified by the evidence, to believe that Hegel deliberately refrained from speculating on the nation which would succeed Germany, and the philosopher who would succeed Hegel, because he wanted to demonstrate his own awareness of his own finitude through what Kierkegaard called 'indirect communication'—by an ironic gesture rather than by putting forward a claim. It would be nice to think that he deliberately left the future blank as an invitation to his successors to do to him what he had done to his predecessors, rather than as an arrogant assumption that nothing more could possibly be done." There is a wonderful rhetoric to this passage; it must be the way "persuasion" works. The two sentences are built in such a way as to strongly suggest that the hypothesis they entertain is false—"it would be charitable," "it would be nice," and all that. Reference to some unspecified, majestic "evidence" is thrown in to scare you out of your wits. One can only surmise what the "evidence" might be, and that makes it more convincing. Is it things that Hegel said? But why should we not interpret the things he said so that most of them turn out true for us—to the exclusion, perhaps, of most or all of the meta things he said, the things he said about other things he said (for example, about the definitive character of the latter)?

I will put it in a different way: "Because the theorist wants to see rather than to rearrange, to rise above rather than to manipulate, he has to worry about the so-called problem of self-reference—the problem of explaining his own unprecedented success at redescription in the terms of his own theory" (C 104). Maybe so; maybe this is one of the many problems the theorist must face. But suppose now that he has no solution for it: he can't think of a reason for "his own unprecedented success." Then what? Does that mean that his theory has no value? For the theorist, the answer is "no": it just means that his theory is not complete, as most of our theories are not. For the (pragmatically) enlightened dinosaur, the answer is still "no," since a theory is a tool, and a tool may well be good for some purposes and not others. For whom, then, is this problem as devastating as the dinosaur makes it sound? Only, I guess, for some sort of hypertheorist who claims that, unless a theory is entirely self-contained and self-standing, it is as good as nothing. But this position is foolish, so why should we read anybody—Hegel, say—as committed to it? Why should we not read them, instead, as just taking their best shot and challenging others to do better, if they can? Of course most people don't say that, but this is just as it should be. When I was younger, I used to preface lots of things I said with "in my opinion" or suchlike, until somebody older and wiser told me, "It's obvious it's your opinion, so just say it."

What this comes down to is a peculiar situation. The dinosaur thinks that society can profit immeasurably from diverse, imaginative, audacious tales, and encourages getting acquainted "with strange people (Alcibiades, Julien Sorel), strange families (the Karamazovs, the Casaubons), and strange communities (the Teutonic Knights, the Nuer, the mandarins of the Sung)" (C 80). He recognizes that all those strange people (or, in case they are fictional, their authors) may have been entirely deluded in their pursuits, yet still thinks that we have a lot to learn from looking at them. Nabokov, for example, is one guy who's done a lot of good with his fiction though creating a crazy "private mythology" (C 168) to make sense of it. Since philosophers (or theorists, if you prefer—if you do, make the appropriate substitutions below) are as good an example as any of deluded seers who came up (usually for the wrong reasons) with very strange, imaginative, audacious views, one would think that they can be of as much help as anybody in weaning people who are "stuck in the vocabulary in which they were brought up" (C 80). But, somehow, the dinosaur thinks otherwise. Philosophy is out; it's closing down; it's going out of business. Why? Because each philosopher thought of himself as the last one and couldn't explain why. Now, you smart dinosaur, tell me another one. Why are these delusions to be treated so much more harshly than any others?



Maybe the answer is to be found in a remark I already quoted: one treats most harshly what one most dreads resembling. The dinosaur is a philosopher, so he beats on philosophers with a stick a thumb wide; if he were a poet, he would laud Plato and Kant, and urge the old prigs who still read and value Catullus or Shakespeare to close shop for good. But this is too boring, too abstract, too quasi-philosophical; there must be interesting details to our invidious distinction somewhere.

Consider this: Nabokov "was the son of a famous liberal statesman who was assassinated when his son was twenty-two" (C 156). Possibly as a result of the assassination, the son abandoned all "hope for future generations" (C 156). On the other hand, he retained the more negative side of his father's concerns: if he could do nothing good for humankind, at least he wanted to do nothing evil. "It is clear from his autobiography that the only thing which could really get Nabokov down was the fear of being, or having been, cruel" (C 157). Part of the strategy by which he defended himself from this fear was the claim that literature can do no evil: "[A]s [his character] Humbert says, 'poets never kill'" (C 159). But Nabokov is more lucid than that, in spite of himself: "He would like to see all the evil in the world—all the failures in tenderness and kindness—as produced by nonpoets, by generalizing, incurious vulgarians. . . . But he knows that this is not the case" (C 159–160). So he goes on to write his best novels about poets who kill, about delicate, sensitive, gifted people like Humbert who end up wasting and abusing others. Nabokov's practice disqualifies his belief that curiosity, tenderness, kindness, and ecstasy come together naturally and inseparably in art; his characters show how "writers can obtain and produce ecstasy while failing to notice suffering, while being incurious about the people whose lives provide their material" (C 159). The moral of this intellectual journey is obvious: Nabokov was afraid of betraying his father twice, first by abandoning his ideals and then by being cruel to others in the name of his art. So he concocted a bit of doublethink to convince himself that the latter was impossible; fortunately for all of us, in practice he was to give out a lot of details of how it is possible.

Keep all that in mind and turn to the following: "Concepts do not kill anything, even themselves; people kill concepts" (C 134). This statement, of course, is just as bad as the one quoted earlier about poets, but now we can do more than stigmatize it as one of the many strange things our dinosaur is happy (?) to live with. We can provide a charitable reading of it, look at it as an ironic gesture of the kind Hegel was denied: an invitation to do to its author what he does to Nabokov.



The clues are there. The book is dedicated to lots of relatives: parents and grandparents—all liberals, that is, all "people who are more afraid of being cruel than of anything else" (C 192). The book sketches, in a very powerful summary, a utopia whose structure resembles Nabokov's quite a bit: a world where things come together which it is not clear can come together. If they did, one could be a philosopher without hurting, just as in Nabokov's case one could be a gentle artist; but no details are given, just as in Nabokov, of how the thing is supposed to work. So one ends up with the impression that this, too, is doublethink, hallucinatory wish fulfillment, hypnotic comfort, and that it serves the same purpose: that of convincing a beautiful, warmhearted, somewhat weakly soul that everything is going to be all right, that nobody is going to get hurt.

What seems to be missing here, of course, is the lucidity Nabokov had in spite of himself, the details he gave us about how poets do kill. But I can't help sympathizing with the weakling, and seeing the point of his inconclusiveness: he is so afraid of hurting that he will do nothing at all, will tell none of those brave stories that might very well bring about a brave new world (here, then, is where the issue of courage might come to a head). In fact, he will try to "persuade" us to close down this whole enterprise once and for all. He knows that it can do some good—that indeed it has done some—but can I blame him if he can't stand the evil it has also done, if he'd rather have somebody else (poets, writers, whoever) do this sort of dirty work? I am certainly in no position to throw any stones here. Why should I mess up the weakling's "private mythology," which he might need in order to reconcile himself with the memory of his liberal ancestors? It's his own business, isn't it? So let's tiptoe out of here, and blow out the candle before closing the door.

And yet, things do not add up. For this object I am looking at—this book, I mean—is not private at all: it carries a public message; and what sort of message is that? That there are questions for which it would be a mistake to think that there could be answers, facts for which it would be a mistake to think that there could be any explanation. ("A pang of pain had shot through his body. O'Brien had pushed the lever of the dial up to thirty-five. 'That was stupid, Winston, stupid!' he said. 'You should know better than to say a thing like that. . . . The object of persecution is persecution. The object of torture is torture. The object of power is power. Now do you begin to understand me?'" [N 216–217].) That there is no such thing as unsocialized human nature, and humans will turn into whatever they happen to be conditioned to be. ("'You are imagining that there is something called human nature which will be outraged by what we do and will turn against us. But we create human nature. Men are infinitely malleable'" [N 222].) That, insofar as there are stories still to be told, they will have to be one's own, liked for no other reason than that; that there will never be any ground on which to argue that one story is better than another, that any of them is more than arbitrary or "contingent." ("[T]he aim of this [torture] was simply to humiliate him and destroy his power of arguing and reasoning" [N 199].) That all the multifarious private stories must "coexist uncompetitively" not only outside but also inside one—insofar as one reads, or writes, many books—and no attempt must be made at bringing them together. ("In the end the nagging voices broke him down more completely than the boots and fists of the guards" [N 200]; "'Power is in tearing human minds to pieces'" [N 220].)

"The point that sadism aims at humiliation rather than merely at pain in general," the weakling says, "has been developed in detail [!] by Elaine Scarry. . . . It is a consequence of Scarry's argument that the worst thing you can do to somebody is not to make her scream in agony but to use that agony in such a way that even when the agony is over, she cannot reconstitute herself " (C 177). So the weakling is aware of the horrid reality that is emerging here, and faces it: "Ironism, as I have defined it, results from awareness of the power of redescription. But most people do not want to be redescribed. They want to be taken on their own terms—taken seriously just as they are and just as they talk. The ironist tells them that the language they speak is up for grabs by her and her kind. There is something potentially very cruel about that claim" (C 89). Two moves are made to defuse this potential. First, we are told that the metaphysician redescribes just as the ironist does, and hence "possible humiliation [is] no more closely connected" with the latter than with the former (C 90). This move, however, misses the mark by a mile: the metaphysician takes his opponent at face value, tries to prove him wrong, and so also shows him respect, takes him seriously. It's a fair game: there may be a loser, but there is not the sort of thing that "happens when [a child's] possessions are made to look ridiculous alongside the possessions of another, richer, child" (C 89–90). The weakling eventually connects with this point, and phrases it by saying that, though the ironist has no special inclination to humiliate, he can be blamed for "an inability to empower" (C 91). Upon recognizing that, he brings out his other ploy: liberal ironist philosophers know, just know , what pain and cruelty are, so they are just going to refrain from it, period. They are going to do their irony in private and, as for public purposes, they will acknowledge being "of no use to


65

liberals qua liberals" (C 95). Which may well be all right, if ironist philosophers were to now withdraw to a cave; but it leaves untouched the problem of what to do with this document, this public object which I have in my hands. What ironist philosophers are supposed to do—not bother others, just entertain themselves, look after their own happiness and perfection—this thing doesn't do. What it does instead is spoil all hope and break down all unity, cut our minds to pieces and leave them that way. Is this, perhaps, the last (ironist) philosophy book? But, if it is, and given the sort of book it is, wouldn't it have been better to leave it unwritten?

I watched a play long ago, so long ago that it seems to have happened to somebody else, in another life. The title was Child's Play, and it was pretty scary. I didn't sleep that night, but then, I am an impressionable guy. There was this schoolteacher in it, by the name of Joseph Dobbs. He loved his students and remembered all their names, all two thousand of those whom he'd taught: "I've always valued it, the affection of all those boys, their friendship . . . years of it. You know me, you boys. It's you, you I trust. Not myself, but you, all those boys" (P 87). Dobbs is a "kindly, paternal" figure (P 49), "rumpled and comfortable" like his "old corduroy jacket" (P 6). He excuses everybody—every one of his boys, I mean. He wants to give them another chance, wants others to go easy on them, not to "push them too hard" (P 21), to regard their misdeeds as innocent adolescent pranks. As far as he's concerned, he's happy if he teaches them "[o]bviously nothing" so long as they don't suffer (P 11). And then there is also something else in the school. "Something's come into this place" (P 87). It is something evil, something that has "[k]ids fighting for bits of broken glass . . . to tear themselves" (P 106–107), plucking their eyes out, "going at one another" (P 20). What this something is, it turns out, is Dobbs's hate, his malevolence, well covered under his genial affection: the malevolence of his unspoken acts, never mind what he says. "[T]his sort of leniency has a way of getting back to the boys" (P 47).

So I watched this play, way back then in another life, and things were never the same again. When another gentle, affectionate soul presents itself as too weak or good to stand anybody else's pain, when another avuncular character refuses to play the game and resorts to deciding issues "on a high metaphilosophical plane" (O 146)—"You're still a freshman to me, young man. . . . You're all freshmen to me" (P 83)—I look for malevolent Mr. Dobbs, and I often find him. Did I find him once more? Is the practice of this Protean creature I've been examining with so much passion, of this half-mutant, half-dinosaur that disconcerted me so much, of this liberal who will sacrifice his own form of life for fear it might hurt—is this the practice of a cruel torturer after all? Of a fragmenter, a tearer-apart, someone who will never let us reconstitute ourselves? Ironically, if that were the case, the creature would be one step ahead of Nabokov, one step truer to himself: his practice would amount not to an unwitting description of the threat he has in store for us, but to an unwitting execution of it.

I am going to close this book now, as in the end one closes all books. And, as I close it, I know it's done something to me: it's made an issue of my cruelty ("Is that what I tried to tear out of myself. The hate?" [P 108]). It's made it an issue for me, at least; but is this, could this be, only a private matter?


Chapter Six—
The Irony of It

In his first meditation, Descartes faces an embarrassing situation:

As if I did not remember other occasions when I have been tricked by exactly similar thoughts while asleep! As I think about this more carefully, I see plainly that there are never any sure signs by means of which being awake can be distinguished from being asleep. The result is that I begin to feel dazed, and this very feeling only reinforces the notion that I may be asleep.[1]

Note two features of this situation. First, Descartes's embarrassment has a number of empirical consequences and manifestations. He feels dazed, is in a state of confusion, and (we may imagine) is also likely to be pragmatically quite ineffective. Second, the embarrassment is caused, at least in part, by the empirical presence of dreams: by the empirical fact that dreams occur in Descartes's experience. There is also, of course, the additional fact that he is unable to tell dreams and waking life apart. And by working on this aspect of the situation he will eventually be able, in the sixth meditation, to resolve his embarrassment:

I now notice that there is a vast difference between [being asleep and being awake], in that dreams are never linked by memory with all the other actions of life as waking experiences are. If, while I am awake, anyone were suddenly to appear to me and then disappear immediately, as happens in sleep, so that I could not see where he had come from or where he had gone to, it would not be unreasonable for me to judge that he was a ghost, or a vision created in my brain, rather than a real man. But when I distinctly see where things come from and where and when they come to me, and when I can connect my perceptions of them with the whole of the rest of my life without a break, then I am quite certain that when I encounter these things I am not asleep but awake.[2]

[1] Meditations on First Philosophy, 13.

He "notices" that, as a matter of fact , dreams and waking life can be told apart. And, of course, a conceptual argument has convinced him that facts will have to continue to be so favorable. His confidence is thus restored: he is back on his feet and can dismiss "as laughable" his former "exaggerated doubts."[3] Indeed, he is so confident now that some of his current statements sound like exaggerations: for example, the one he seems to be making in correspondence that he is never deceived by dreams.[4]

Keep all of this in mind and turn to the following passage from Kant's first Critique:

The empirical truth of appearances in space and time is, however, sufficiently secured; it is adequately distinguished from dreams, if both dreams and genuine appearances cohere truly and completely in one experience, in accordance with empirical laws.[5]

This passage gives us pause, if compared with Descartes's earlier resolution of his problem. That resolution consisted of bringing out the disconnected, incoherent nature of dreams, as compared with waking experience. No matter how perceptually vivid a dream is, we can always call its bluff by asking a few pointed questions. Where does this man come from? Did I see him before? How come he looks familiar? What is his address, his occupation? And so on. No answers to such questions are forthcoming for the "people" or "things" that populate a dream, and that is precisely how we decide that they are not (really) people or things. In less fortunate circumstances (with a less benevolent God), we might find ourselves in much bigger trouble. What if we went to sleep and dreamed a perfectly reasonable continuation of our waking experience, perfectly coherent with it? How would we then tell that it was a dream we were having? How would we then resolve our embarrassment? But now, what can Kant possibly mean by saying that "the empirical truth of appearances is . . . adequately distinguished from dreams" if the two cohere with one another? Isn't the hypothesis of such coherence the most dreaded one, the one that we must hope and pray will never be realized?

[2] Ibid., 61–62.

[3] Ibid., 61.

[4] See my "Descartes, Dreaming, and Professor Wilson."

[5] Critique of Pure Reason, 440.

Forget about dreams for a moment and turn your attention to the surprise examination paradox. The teacher announces in class on Friday that next week an exam will be given, and that, when given, the exam will come as a surprise. Little Tom spends the weekend convincing himself that no such exam is possible. It cannot be given on Friday because it would not be a surprise, but then it cannot be given on Thursday either. . . . On Monday Tom is as unprepared as he is unfazed; the teacher comes to class, gives the exam, and Tom is very surprised.

What this paradox does for us is illustrate forcefully how whatever use we make of our conceptual (Kant would say transcendental) tools is itself part and parcel of our empirical life, and often interacts with other aspects of that life with significant consequences. It would be nice if the various levels of our activity were neatly separated in the way in which Tarski separates his language levels; then little Tom could establish his conceptual conclusion at a metaexperiential level (an exam of this sort is impossible) and return to his ordinary experience in an enlightened state. As it turns out, however, Tom's establishing his conceptual conclusion belongs with that very ordinary experience, together with studying (or not studying) and preparing (or not preparing) for the exam; the teacher knows all about that (and about Tom) and uses this knowledge to pull a fast one on Tom. There may be some other student waiting in the wings, ready to use this very knowledge the teacher has to pull a fast one on him.

But the interplay between the conceptual and the empirical need not be so disconcerting. Sometimes having certain concepts available, and using them in certain ways, makes our life (our empirical life, I mean) easier. One way in which this can happen was illustrated by the example of Descartes: one convinces oneself, by a conceptual argument, that certain difficulties simply will not arise (there will not be coherent dreams). Which conclusion makes us feel more decisive, maybe even elated. What Kant is doing, in the passage quoted earlier, is indicating that we may get empirical help from our conceptual friends also in another, and more basic, way.[6]

Consider a situation in which you face the totality of your experience and find it to be very confusing. Some patterns can be identified, but they don't last forever. There is all this other stuff that doesn't fit, that has maybe little patterns of its own, but broken ones, and in any case disconnected from the larger picture. Because of this confusion, you also don't quite know how to move. By and large, you would want to follow the more general patterns and develop responses on the basis of them, but such responses, by and large adequate as they are, are always at risk of becoming dysfunctional when the "general" patterns explode.

[6] This interpretation was first suggested, in passing, in a note to chapter 5 of my Kant's Copernican Revolution, 241–242.

A "Cartesian" analysis of this situation might be the following: you have not learned to distinguish dreams from waking. You don't know when dreams stop and waking begins, so you don't know what to take seriously—what requires a response and what doesn't. To repair the confusion, a criterion such as coherence must be provided. But this analysis leaves out an important aspect: the person who wonders whether his is waking experience, and finds himself "dazed" by such reflections, can only do so because he has the concept of a dream (and the related one of being awake) available. And having this concept is the solution of a problem, not (only) the origin of one. This is what Kant has in mind, this is how he would analyze the situation. Calling some parts of our experience dreams is an effective strategy for restoring (not challenging) the coherence of that experience. The empirical presence of dreams may well give rise to an empirical difficulty: dreams are a mess, and it is difficult to tell them apart from waking life, so they make everything a mess and get us very confused. On the other hand, the conceptual presence of dreams (or more explicitly, the presence of the concept of a dream in our logical space) makes it possible to resolve the confusion we might be involved in. For, of any confusing element, we can always say, "It is only a dream." And by this maneuver we can turn the conditional, tentative adoption of some behavioral strategies ("This is how it pays to respond most often, as far as we can tell") into an unconditional, self-assured endorsement ("This is how one must respond whenever a response is called for").

When one already has the concept of a dream, of course, one may get into the second-order problems Descartes is concerned with: second-order in the sense of using a tool that was originally of help in resolving a confusion as an occasion for generating some new confusion ("I now know that I can use the word 'dream' to label whatever doesn't fit with the rest, but is it legitimate to do so? How do I know that dreams are incoherent? [And that waking experience is not?]").[7] But here I am interested in the first-order, more fundamental, use of this concept, and of similar ones: their pacifying, resolving, and confidence-building use. I am interested in it because I find it instructive to unearth some related paradoxes—not unlike that of the surprise examination, though of the opposite sign, as it were.

[7] Such second-order problems are not foreign to Kant, of course, as is indicated by his requirement that the coherence of dreams and appearances be realized "according to empirical laws." But the passage from Kant also lets us bring out the other, more basic role of the concept of a dream.

In the surprise examination paradox, little Tom is made weaker by his conceptual abilities. His dumber friends will take the teacher's statement at face value, study hard for the exam, and possibly do well in it (if they are not too dumb). Not he: his argument is his downfall. There are cases, however, where the ability to use a concept makes one a lot stronger, a much more formidable competitor, however troubled the reality designated by the concept is supposed to be, and even if the user finds himself well within the range of this trouble.

Take, for example, the concept of irony. Richard Rorty makes substantial use of it, indeed names his ideal humans "liberal ironists."[8] Irony, on the other hand—the real thing, I mean, not the concept of it—is conspicuously absent from his work, which sounds indeed like the very opposite of anything even remotely tongue-in-cheek. Try "deadly serious" instead, or even "preachy." One might blame this earnestness on a constitutional defect: maybe the guy just doesn't have it in him, and regrets it. He's not like his ideal. But this explanation doesn't seem to work, since no regret ever surfaces: on the contrary, we are constantly surrounded by self-satisfaction, contentedness, appeasement.[9] So one wonders: what exactly is going on here?

Our experience is fragmented, the postmodern credo goes. The comprehensive metanarratives that took the place of old, derelict God have themselves gone out of business and left us without final, reassuring answers. Science will not harmonize nature with our needs, the proletariat will never come of age, and liberal democracy is at best a tiny bit less rotten than any alternative system. After giving up God, we had to give up His point of view, too, submit ourselves to our "thrown" character, our limited, situational perspective, fully aware that other similarly limited perspectives are just as much (that is, as little) "right" as ours, that there is no common ground on which we could rationally prove the legitimacy of the latter—indeed, not even any uncontroversial notion of rationality that we could use to attempt such a proof. The world we live in is one of empirical subjects, embodied and local, groundless and finite, constantly negotiating with each other about, among other things, the rules for negotiation. And, of course, with so much fragmentation going on around each subject, there will have to be fragmentation in it, too: different personas regulating the proceedings at different times, different positions available in existential space. It will be possible to assume any such position and look at the others, detach oneself from the others, expose and exploit their vacuity, their arbitrariness, their ultimate lack of justification. Irony will be a natural outcome in these circumstances: a merciless, lucid insight into the reality of our disseverance, a cruel surgical probe painstakingly and painfully testing the extent of our wounds, revealing the thinness of the scar tissue (of our pathetic accommodations, that is), illuminating the points where the tissue will break again. Self-inflicted pain, mostly: self-directed irony, the black humor of those who know that the enemy is inside, and there to stay ("We have met the enemy and he is us," as Pogo said once).

[8] See the previous chapter.

[9] The significance of this attitude was examined (from a different but related point of view) in the previous chapter. Note also that, occasionally, statements of regret surface (indeed, one of them was quoted on p. 58 above), but, just as with irony, the real thing seems to be absent.

A clear and credible picture, isn't it? Yes, maybe, except that it doesn't have a place for itself, which once more reminds one of little Tom, of how he satisfied himself with his logical argument and conclusion, and didn't see the necessity of making room for that very argument and conclusion within the argument itself, didn't "notice" how much the arguing and concluding fed on each other, and ended up being only the first one to laugh. Similarly, now, we may be talking about fragmentation for days on end, but does this mean that our experience while we so talk is indeed fragmented? Does it mean that we painfully feel the fault lines, the tensions, the imminent collapse? That part of us is looking at some other parts of us and finding them gross and stupid and perverted, and issuing an ironic smile as a result of this unsympathetic realization? Far from it. The very concept of fragmentation is working here as the best connecting tissue plastic surgeons could dream of. Having that concept makes it possible to generate as unified and connected a picture of our experience as the most extreme "modern" thinker could ever hope.

Habits die hard, we know. Or maybe they never die: it's just people who do. There was a habit once of systematic philosophy; such a distinguished representative of modernity as Kant even made it the characteristic feature of human reason. In a system, everything has a place, and every question has an answer—potentially, at least. But the important thing here is that the places are not necessarily distinct, and the answers not necessarily interesting. A paranoid schizophrenic who blames everything happening in the world on the fact that people hate him has a system: a perfectly comprehensive structure encompassing the whole of his experience. You might not like it, you might not share it, but you can't deny that, as systems go, few are as powerful and all-inclusive as his. Indeed, come to think of it, the more elementary and unsophisticated a system is, the more of a chance it has not to be challenged by recalcitrant data, not to have its systematic character called in question. Maybe it's small children or, indeed, psychotics who are most proficient at system building.[10]

Consider now the postmodern thinker. Would you deny that everything has a place for him, and every question an answer? The places tend to look a whole lot alike, and the answers tend to be quite repetitive, but answers and places they certainly are. Most of the answers are corrective, of course, not direct: if you ask what policy is right, or what statement is true, the postmodern thinker will tell you that it depends on your point of view, on your situation. If you ask him how any particular thing will turn out, he will invariably answer that we should wait and see. But these are answers nonetheless. Indeed, they are usually given without hesitation, without qualifications, without shame. Poor and uninformative they may well be, but they certainly come quickly, by return mail.

Here, then, is the irony of it. The empirical phenomenon of irony may well be the consequence, and the expression, of a fragmentation of our culture, of its objects, and of its subject. It may very well be the case that, when we feel like we no longer have any ground to stand on, and shifting ground has become a fact of life, one major (maybe even the only) advantage we gain is the comic relief we experience when seeing how seriously some of us (including some of each of us, some of what each of us is) take whatever we are doing. But it is also the case that the empirical phenomenon of having concepts like those of irony and fragmentation available is a major remedy against the fragmentation, just as in the case of dreams. For we can always call upon these concepts to do the unifying for us: the unification of a fragmented field.

[10] Consistently with a suggestion by Freud, Totem and Taboo, 73.

To see more specifically how the ploy works, let us ask ourselves: who exactly sees the field, our experience, as fragmented, and how? The answer may be surprising, but seems inescapable. If you are the fragmented counterpart of a fragmented world, you won't be able to describe yourself as such. If you do anything, if the fragmentation does not destroy all of your efficiency, you will feel like you are doing the right thing whatever it is you are doing—except that, of course, at other times you will feel differently, and act just as decidedly on this different basis (where your "act" may well be smiling ironically at what you just did). Alternatively, you may become entirely ineffective and spend your time in a perpetual state of puzzlement, unable to favor, even temporarily, any of the conflicting claims being made on your time, energy, and commitment. If this is indeed your state, interestingly enough, then bringing fragmentation into the picture, describing the situation as a fragmented one, would be a way of beginning to deal with your predicament. There would then be something you do: describing the situation as fragmented. And this might be a useful first step and help you to break out of the impasse and, eventually, enable you to do other things as well.

Most often, your state will be a mixture of all of these possibilities: you will usually act "in a situation," and feel the pressure of your other selves, and relieve that pressure by vaguely bringing in your situational being, and maintain balance thereby. When the pressure is too strong, a crack in your precarious composure may develop and you may slip into bottomless anxiety and lack of decisiveness, which indicates that the token reference to fragmentation is not enough and more expert, specific help is needed. In any case, seeing the situation as fragmented is possible only for somebody who sees it from the outside, and might utilize that outside view within the situation to repair whatever disturbance arises there, not for somebody who experiences either the alternation of equally legitimate points of view (each legitimate when assumed) or the painful incapacity of assuming any (and hence doing anything) which the fragmentation is. To put it in extreme but, I think, fair terms, it is possible only for those who observe the situation from a God's-eye point of view.

As I suggested earlier, the concepts of fragmentation and irony are not alone in playing this ambiguous role, expressing a disconcerting, weakening content while at the same time performing a reassuring, strengthening function. Another example is the concept of ethnocentrism. There are those—among them, once again, Rorty—who claim that each of us is inevitably stuck with the perspective and values of his culture, that there is no way out of them. And, when you talk like that, it sounds like a limitation. But then you ask yourself: who is saying this, and where is he situated? And, with this question, the whole position unravels. Being ethnocentric includes believing in the absoluteness of your perspective and the definitiveness of your values. When you describe yourself as ethnocentric, on the other hand, you have already taken a point of view outside your ethnos, outside all ethnoses—one from which your ethnocentric character and that of others can be seen (and disarmed). And your emotional state, once you have taken this point of view, will be very different from that of an ethnocentric person. This person will experience the other ethnoses as wrong and as a challenge, will be harshly critical of them, occasionally will go to war against them. Whereas you will be entirely relaxed about the whole issue: the smile with which you look at the nice checkerboard the world has become for you will be not an ironical but a contented one.

But Rorty has learned a trick or two from Jacques Derrida. So he refuses to label his position ethnocentrism; it is rather, he insists, an anti-anti-ethnocentric one. Which means (and for this, maybe, an assist from Derrida might be unnecessary: old Pyrrho was more than enough) that you don't say anything positive about where you stand, but simply wait for others to say something and then show their statements to be unwarranted. Specifically, you don't say that you are ethnocentric because the logic of that statement might get you into trouble, but the moment your anti-ethnocentric opponent makes his statement, you have a ball revealing how much trouble he is in. Which strategy has all the features needed to become an industry: it is simple, looks clever, and, most important, has an unlimited range of application (think of how many things people have said by now, and of how long it will take to show their unwarrantedness one by one, in painstaking detail). And, if there can be such an industry, there will be: intellectuals must keep themselves busy. Indeed, they must be kept busy; otherwise, they might get involved in something that matters, and even propose one or two social changes. So, right at the time when the penultimate deconstructive movement—analytic philosophy, that is—begins to be out of breath, when its clever-looking, hair-splitting, simple-minded strategy of turning philosophers against themselves begins to lose the attractiveness it briefly had for young, bright men and women, the dernier cri in this self-destructive, reactionary business takes hold of a new unfortunate generation. And raises a lot of dust by being very confrontational with the fading fad, vehemently so—always an effective tactic to conceal deep-seated synergisms.

When the dust is cleared, the logic of this move becomes apparent. If the new poor wrecks are smart enough, they will never say anything as bold and stupid as "Everything is language." Indeed, they will say nothing at all—bold or otherwise. But their practice and their attitude will reveal, better than any words they might utter (or, God help us, write), the self-assurance and stability that accompany their anti-anti-whatever, that indeed are a consequence of their smart references to their anti-anti-whatever—that make them, in the end, anything but poor wrecks, but make the rest of us so much more so.

Jean-François Lyotard thinks of the modern and the postmodern as permanent tendencies of the human mind, not to be identified with any particular historical age.[11] I have been arguing that these two postures are not just always there: they also have a way of dialectically involving one another. The modern thinker will live in a fragmented, postmodern world: he will make his own statements and face the opposition, and often find that he has no conclusive argument against it. And he will occasionally change his mind, and make a new statement, and find that he has no conclusive argument for it, either—that his new self is just as unwarranted as his old one was. The postmodern thinker, on the other hand, will be perfectly at home in his "fragmented" experience: the very notion of fragmentation will be of great help in achieving this comfortable stance—that is, in making the experience (as Kant suggested in the passage quoted earlier) no longer fragmented. He will, therefore, live in a modern world: one in which fragmentation and irony simply are the metanarrative ruling the field, uncontested.

Both positions have, as I suggested, important political implications. The postmodern metanarrative, in either its analytic or its "French" variants, is an uninspiring, deflationary one. It says very little, and what little it says can at best encourage intellectuals to become critical of each other: certainly not to devise absurd utopias to expose the absurdity of everydayness. It makes its adherents a perpetual source of self-generated and self-enjoyed, cheap entertainment: no longer a challenge for anybody, and no longer of interest. Which may be just as well: maybe there can't be any more myths after Auschwitz. Maybe we'd better give it up. Or maybe, just maybe, this is Auschwitz; we have fallen into it without "noticing," without wanting to notice the systematic slaughter that happens away from sight, in countries not quite good enough to play the game.[12] And maybe, just maybe, one or two myths might help: it might help to talk less about our limitations and feel them more—feel them as we painfully work against them.

[11] See The Postmodern Condition.

[12] I realize that, in spite of all my qualifications, some may find this statement excessive and disturbing. But I must insist that, in my view (and one that I cannot defend here), a world that includes (at least) what is happening today in the former Yugoslavia is just as bad as one including Auschwitz.


Chapter Seven—
Kant's Revolutionary Reconstruction of the History of Philosophy

Consider the following three facts:

(1) Kant describes what he does in the first Critique as transcendental philosophy. And he insists that this discipline is entirely new, that something like it was never tried before: "[I]t is a perfectly new science, of which no one has ever even thought, the very idea of which was unknown, and for which nothing hitherto accomplished can be of the smallest use, except it be the suggestion of Hume's doubts."[1]

(2) Within this new discipline, a distinction is made between transcendental realists and idealists: "By transcendental idealism I mean the doctrine that appearances are to be regarded as being, one and all, representations only, not things in themselves. . . . To this idealism there is opposed a transcendental realism which . . . interprets outer appearances (their reality being taken as granted) as things-in-themselves, which exist independently of us and of our sensibility, and which are therefore outside us—the phrase 'outside us' being interpreted in conformity with pure concepts of understanding."[2]

(3) Most of Kant's predecessors in the history of philosophy, and specifically all those who either accepted or seriously entertained a skeptical outcome of their philosophy, are regarded by Kant as transcendental realists—in fact, he thinks, this is why they ended up accepting or seriously entertaining a skeptical outcome: "Since, so far as I know, all psychologists who adopt empirical idealism are transcendental realists, they have certainly proceeded quite consistently in ascribing great importance to empirical idealism, as one of the problems in regard to which the human mind is quite at a loss how to proceed."[3]

[1] Prolegomena, 9–10.

[2] Critique of Pure Reason, 345–346.

The conjunction of (1) through (3) generates a prima facie perplexity. For how can Kant blame other authors for taking a questionable position on an issue that, by his own admission, they did not even address? If indeed "we have hitherto never had any transcendental philosophy,"[4] then how could Kant's predecessors have gone wrong in it? Any further fleshing out of the perplexity is bound to be controversial, since the exact nature of transcendental philosophy, transcendental realism, and transcendental idealism is a highly controversial issue. But probably it won't be too controversial to say at least the following. Many of Kant's predecessors would have had no conceptual room for the distinction that he wants to make here between something being real ("their reality being taken as granted") and it also being a thing-in-itself, existing "independently of us and of our sensibility"—a distinction on which the very definition of transcendental realism seems to depend. They would have seen themselves as addressing the issue of whether something specific, or something in general, was real—that is, whether it existed independently of us and of our sensibility. And they might or might not have come up with a satisfactory resolution of that issue, but they certainly cannot be criticized for a stand they did not take on an issue they could not even phrase, let alone deal with.

It seems that Kant should make up his mind. If he chooses to emphasize the extraordinary novelty of his philosophical concerns, he depicts himself as playing an entirely different philosophical game from the traditional one. But then he should give up calling the tradition to task on any specific point and simply develop his new game side by side with the tradition's, hoping that it will eventually catch on and supplant, not correct, the tradition's. The problem is a general one, well illustrated by Thomas Kuhn's notion of incommensurability between different paradigms.[5] A revolutionary scientist's "new vision" cannot be compared with the old one, since the bearers of the two paradigms see altogether different worlds, their standards of accuracy are different, and there is no notion of success or failure that applies within both their visions. Analogously, if Kant is a revolutionary philosopher, then he has no relevant vocabulary in common with his "normal" predecessors, and a decision between his vision and theirs can only come, if at all, on the basis of global considerations—that is, on the basis of how much better or worse a system based on his vision works. In any case, it is unfair of him to question the attitude of previous philosophers regarding issues that make sense only within his vision.

[3] Ibid., 347.

[4] Prolegomena, 26.

[5] See The Structure of Scientific Revolutions.

The more an interpreter of Kant stresses the revolutionary aspect of Kant's thinking, the more the interpreter will have to face this problem. In my Kant book I have stressed the revolutionary aspect a whole lot, referring explicitly to Kuhn's work; so, as some have said, that book "shouts for [the problem] to be raised."[6] I intend to raise it and resolve it here, and in the process shed some additional light on Kuhn's position—do some Kant (as I understand him) on Kuhn, as it were, after doing Kuhn on Kant in the book.[7]

One of my conclusions in the book is that Kant rewrites the notion of necessitation as regularity or rule-directedness, that is, as events of certain kinds following one another in predictable ways, according to patterns that can be recognized. This rewriting has an important, indeed startling, consequence: the same event may be fully necessitated in more than one way. One could conceivably give a complete causal story accounting for the event and then turn around and give another one, equally complete. Kant makes use of this multiple causality in only one context: when presenting his peculiar form of compatibilism between physical and rational determination. But multiple causality is there, in his conceptual repertory, in a way in which it cannot be for those who do not accept that conceptual priority of experiences (or representations) over objects which constitutes, in my reading, his transcendental idealism. (The significance of this "presence" for the very operation I am now involved in will become apparent by the end of the chapter.)

"By nature, in the empirical sense," Kant says in the first Critique , "we understand the connection of appearances as regards their existence according to necessary rules, that is, according to laws."[8] Even more directly, the Prolegomena inform us that nature, in the formal sense, is "the totality of the rules under which all appearances must come in order to be thought as connected in experience."[9] This totality, of course, is

[6] Mark Glouberman, personal communication.

[7] In chapter 3 of my book (pp. 76–80) I point out that many of Kuhn's points can be made in Kant's own language, and then proceed to do some Kant on Kant .

[8] Critique of Pure Reason , 237.

[9] Prolegomena , 65.


81

never given: it is an object of thought that guides our understanding as it painstakingly establishes local connections within experience. Because it is never given, we do not even know that it is really possible—as opposed to being something whose contradictory character we have not yet been able to expose. But, in the sense in which (the concept of) nature does play a role for us—as a noumenon that organizes our limited, contextual research projects—there is no denying the logical possibility (and remember, logical possibility is in any case the best we can get here) of several different natures—that is, several different systems of regularities including all phenomena. Clearly, the unity of nature (that nature be one) is required for the unity of the knowing, experiencing subject. But we should not take this Kantian thesis to imply (or presuppose) what it cannot possibly imply (or presuppose)—that is, that the subject is one and therefore nature is one. There may be one empirical object that we identify with the carrier of subjectivity, but such empirical matters are out of order within the present, transcendental inquiry. It is the transcendental subject that is in question here—that is, the concept of the subject. So all Kant means—and can possibly mean—is that, if the empirical carrier of subjectivity has access to more than one nature (in the sense in which one accesses something like that—that is, in the sense of acting in the wake of a total structure which is never directly accessible), then it carries more than one subject—and may or may not have a fragmented experience as a result, depending on whether or not it makes empirical use of various transcendental maneuvers which are constantly available to save superficial consistency.[10] Similar maneuvers may convince distinct empirical carriers of subjectivity that they share the same nature—whether or not a less sympathetic transcendental construal of the data is logically possible.

Within the logical space I just sketched, Kuhn's position finds a natural place. In fact, it finds a much more natural place and a much clearer formulation than in Kuhn's original language, which is highly ambiguous between transcendental realism and idealism. Aristotelian and Newtonian physicists, in this logical space, do not just have different world views: they literally inhabit different worlds (a conclusion that Kuhn suggests at times, but always somewhat reluctantly).[11] For inhabiting a world here means acting (specifically, doing research in physics) in the wake of a certain idea of the total structure of experience, and the world you then inhabit is the noumenal object of that idea. So any suggestion that there be a distortion involved in going from a world, period, to a world as viewed is to be firmly rejected, and blamed on a persistent attachment to realism. And any claim that I cannot intelligibly think or talk about the different "conceptual schemes" involved here (or even about my own)[12] is just the result of a misunderstanding—that is, of a basic confusion between transcendental and empirical subjects. It may just happen to me (to the empirical me, I mean) that I access two different natures, just as I may speak two different languages. Whether my empirical psychology will then go to pieces is an empirical problem, but clearly, as long as it does not, I may go back and forth between the two natures, perhaps even at will. After going back and forth for a while, I may get to the point of finding a place for one nature "inside" the other, that is, of coordinating in a way that I judge (temporarily and contextually) adequate one whole nature with part of the other—adequate in the sense that within the latter it helps me account for the behavior, including intellectual behavior, that I display when I move in the former's wake. It will then become possible for me (the empirical me) to think or talk about a nature which, in its totality and uniqueness, determines the uniqueness and totality of a subject which, among other things, I (the empirical I) am.

[10] I mean maneuvers like "It's only a dream" or "It's only a hallucination." See the previous chapter.

[11] Part of Kuhn's problem, as he himself notes, is that he is writing in the middle of what he perceives as a paradigm shift. So he ends up saying things like the following: "Though the world does not change with a change of paradigm, the scientist afterward works in a different world" (p. 121). And he comments that "we must learn to make sense of statements that at least resemble these" (p. 121). Most often, however, he dodges the issue of the revolutionary character of his own discourse and reverts to the familiar viewing metaphor, inclusive of suggestions of distortion: "Rather than being an interpreter, the scientist who embraces a new paradigm is like the man wearing inverting lenses. Confronting the same constellation of objects as before and knowing that he does so, he nevertheless finds them transformed through and through in many of their details" (p. 122).

What about the incommensurability thesis, then? The best way to see it, from this perspective, is as a strategic tool to undermine and explode the realist notion of what it is to understand something. If understanding is grasping an objective meaning, and words are tools for crystallizing such grasp and allowing others to share it, then words will turn out to be systematically ambiguous in different paradigms and there will be no hope of using them to communicate any understanding across such paradigms. If pushed to an extreme, this is going to imply that no two people can understand each other. If, on the other hand, understanding something is explaining it, reconciling oneself with its consistency, proving it to be possible,[13] then no such problem arises. Some things will be easier to understand than others; specifically, it will be hard to account for a whole form of life very distant from ours. But that is all the problem there is in this area: an empirical problem.

[12] The obvious reference here is to the debate initiated in Davidson's "On the Very Idea of a Conceptual Scheme."

Within this logical space, let us now return to Kant's predecessors and to his criticism of them. Those predecessors performed various empirical activities, as part of an empirical enterprise that they called "philosophy" (or "metaphysics," or whatever). They, for example, uttered sentences and wrote texts. They had their own understanding of what they were doing (of what philosophy is), and Kant shares that understanding, since he's been trained that way himself. He has also gained, however, quite a different concept of philosophy. On the basis of this concept, and within the general conceptual framework of which the concept is part and parcel, he can find room for a different understanding of his predecessors' practices. He can explain those practices, in his world, by deriving them from the hypotheses of transcendental realism, and there is no way that, in his world, anybody could object to this practice in principle.[14] The story Kant tells may be more or less convincing, intricate, and connected, but that is as far as we can go in judging it: there is no delegating a final word here to some "objective matter of fact" that supposedly decides the issue. Because of this feature of the situation, the understanding Kant reaches of previous philosophical practices does not have to do violence to the understanding the previous practitioners had of them: the legitimacy of either understanding will have to be defended in positive terms, by bringing out its relevant structural features, and will not necessarily involve a delegitimation of the opposition. For the realist, on the other hand, a unique matter of fact does decide the issue, and hence arguing for an understanding is automatically also arguing against the other. In conclusion, within Kant's liberating idealist perspective there is room for a sense in which one can, to use his own phrase, "understand [an author] better than he has understood himself,"[15] and yet this does not entail denying that the author did indeed (in however limited, shallow, unproductive a way) understand himself.

[13] This Kantian notion of understanding is articulated in chapter 5 of my Looser Ends. For a Kantian passage that brings out clearly the connection implied here between understanding something, explaining it, and establishing its possibility, see Kant's Groundwork of the Metaphysic of Morals, 127.

[14] As will become apparent shortly, one could definitely raise such objections in the world the transcendental realist lives in.

[15] Critique of Pure Reason, 310.


But now a complication arises. Suppose you take the Aristotelian world and find room for it within a Newtonian framework. This will amount to reinterpreting various key statements about material objects, and we may take it that, once the idealist position is accepted, such a reinterpretation can raise nothing but empirical trouble. It's a different story, it seems, when it comes to reinterpreting the Aristotelian philosopher himself—that is, accounting for his philosophical practice in terms of concepts that were not available to him. It seems that here there should be a unique matter of fact that decides the issue: that this philosopher should know whether or not he has certain concepts available, is prepared to make certain distinctions, can use the distinctions to articulate certain definitions, and so on and so forth. It seems that we might be able to understand what Plato said about ideas differently (and maybe, in some sense, better) than he himself did, but what about Plato's understanding of his own relation to his assumptions and tenets? Can we understand that relation better than he did? Can, indeed, anything different from his own understanding of it count here as understanding at all?

Clearly, these questions raise the issue of the privileged access one allegedly has to one's own mental life. In the context of interpreting a text, such privileged access surfaces by locating "meaning" at the level of the author's "intentions." In practice, the intentions are most often entirely mythical: most often, what one does is interpret the text in a way that one finds satisfying and then project the interpretation onto the author's intentional state, claiming it to be faithful to "what the author really meant."[16] But here I am concerned with the theory of it (though see below), that is, with how attributing to intentions this decisive conceptual role bars the way to multiple interpretations of anything that we construe as a voluntary performance on the part of an intelligent being. Specifically, I am concerned with whether or not Kant can call his predecessors transcendental realists once we assume (for the sake of argument, at least) that they would not have described themselves this way, or maybe even that they would have rejected the description if it was proposed to them.

Here another aspect of Kant's picture becomes relevant. I have argued in chapter 3 that intentions cannot play for him the decisive role mentioned above. They are, of course, constantly referred to, but they are also inaccessible: a pure object of thought. So, just as with that other object of thought which nature is, talking or thinking about intentions, and even acting in the wake of such talk or thought, is perfectly compatible with there being several legitimate intentional accounts of what we do.

[16] Within the enterprise of Kant interpretation in the Anglo-American community, this move has a long history, ranging (at least) from Kemp Smith's subjectivist/phenomenalist readings of the first Critique to Guyer's uncovering of Kant's "intentions" in the Refutation of Idealism.

The basic point here is this. What we say to others or ourselves concerning our intentions is part of what we do, and is to be taken as no more revealing or transparent than anything else we do. It expresses at best a subjective maxim for our behavior, that is, a proposal for a possible law of it. Whether the maxim is really a law—that is, whether it necessitates (in the Kantian sense) our behavior—we can never establish. Our best bet is to put that talk in the context of everything else we do and see how it fares. As I noted in chapter 3, Kant says that "we cannot scrutinize [our disposition]: we must always draw our conclusions regarding it solely from its consequences in our way of life."[17]

In the light of these considerations, how Kant's predecessors would have described themselves, or even whether or not they would have accepted a given description if presented with it, is certainly relevant but by no means decisive. How they would have described themselves, or whether they would have accepted somebody else's description, is (if we take such counterfactuals seriously) part of the data: it is not to be discounted, but it is not especially fundamental either.

The most general sense of this discussion is that, because of the particular kind of revolution Kant realized (because, that is, of the transcendental idealist outcome of it), he did not have a problem accounting for the possibility of establishing a meaningful dialogue with the tradition (meaningful for him and from his revolutionary perspective, of course). A revolution going in the opposite direction, or maybe remaining within the general scope of what Kant would call transcendental realism, would indeed have to face the problem formulated at the beginning of this chapter, and would have no natural solution for it. Within transcendental realism, there is only one way things can be, and that includes how different people see things. If an author sees things a given way, and if that way of seeing them makes it impossible to even phrase a given issue, you cannot call the author to task on that issue. How you would see things, or what sort of sense that author's practice makes in your way of seeing things, is beside the point.

As I already noted, the practice of most of the interpreters who would be regarded by Kant as transcendental realists is far more liberal than these conclusions suggest. Such interpreters feel perfectly entitled to make their way of seeing things relevant to an interpretation, occasionally in direct contradiction with the author's own statements concerning his intentions. Various trivial maneuvers are used for this purpose. Sometimes, one invokes deceit.[18] More often, however, the deceit is self-inflicted, in which case one ends up using, in a more or less explicit form, the notion of a fragmentation of the author's personality, and possibly that of the unconscious quality of some of the fragments.[19] Even more often, finally, the issue is not faced, and "one"[20] relies on the common practice of not facing it as a justification for an additional example of the same practice. In Heidegger's terms, "what has thus been covered up gets passed off as something familiar and accessible to everyone."[21]

[17] Religion Within the Limits of Reason Alone, 65.

The trivial character of these maneuvers cannot, of course, provide the basis for a refutation of realism. In general, as I pointed out in Kant's Copernican Revolution, nothing could provide such a basis. More specifically, criteria like connectedness, articulation, and detail, which would make one favor transcendental idealism in this case, are only going to matter to an idealist. For a realist, it is only how things stand that matters, and things might well stand in a totally trivial or disconnected way; so, if the realist is convinced that he has got hold of the Truth, no such criteria are going to impress him. Which in turn indicates one more difference between him and the idealist—a difference of strategy this time.

I pointed out earlier that the realist framework is committed to uniqueness, and as a consequence tends to produce confrontational attitudes. When I first made this point, I emphasized that, for the realist, arguing for a position is automatically arguing against all alternatives. There is also, of course, the opposite side of the coin. Arguing against an alternative position, that is, is one way of arguing for your own. Things are a certain way, one way, so narrowing down the possibilities makes it more likely that you hit the bull's-eye. This accounts for the deconstructive, negative slant of a lot of realist philosophy: its emphasis on what is impossible—or, more often, to make it sound less negative, on what is necessary.[22] What a realist is most concerned with establishing is not that his position is interesting, or comprehensive, or deep, or in any way attractive, but that its opposite could not be. For the idealist, on the other hand, deconstructing somebody else's position is going to have little or no significance. Reality, for him, is not a matter of matching some external standard, but rather a matter of displaying certain structural characters, so whatever mistakes others may have made in coming up with their pictures of reality will be irrelevant to whether or not your picture has any credibility. You will have to work in a constructive vein to articulate the picture in detail: the more articulation you carry out, and the more you maintain logical consistency throughout, the closer you will get to the ultimately unreachable ideal of truth.

[18] Most notably, in the so-called esoteric-doctrine interpretations. See, for example, Caton's The Origin of Subjectivity.

[19] In "Kant's Intentions in the Refutation of Idealism," Paul Guyer lumps together deception and self-deception as providing motivations for a Kantian statement in this remarkable passage: "Kant's well-attested desire to appear consistent could easily have led him to use the rhetorical context of a preface to attempt to persuade his reader (or, for that matter, himself) that there had been no change in his view when in fact there had been" (p. 331n).

[20] The significance of this Heideggerian expression emerges in the next sentence.

[21] Being and Time, 105.

Clearly, this constructive work will have a secondary deconstructive effect: by placing your picture side by side with the alternative ones, you will weaken the hold that they might have had when they were the only ones around. But it will not be the disheartening deconstruction of Socrates or Derrida: it will not amount to infiltrating the other's position to make it fall of its own weight, and ultimately leave the reader with one less thing to work with. It will be an empowering deconstruction that substitutes free competition for tyranny, and leaves the reader with one more thing to work with.

So it is indeed true that the realist will not (necessarily)[23] be impressed by the idealist's invocation of criteria of connectedness, detail, or whatnot, and that he would only be impressed by an apparently successful refutation of his own position. It is equally true, however, that none of this is going to impress the idealist: he doesn't care about impressing anybody, or refuting them. He might, indeed, well enjoy a situation in which several positions stand unrefuted, using each other as a challenge to further articulation. For not only can he live perfectly well with this situation (whereas the realist would have problems with it), he can also see it as a concrete realization of his empowering notion of truth.[24]

[22] In this connection, see chapter 3 of my Looser Ends.

[23] Not, that is, at the conceptual level, though as an empirical individual the realist may well be impressed by the idealist's criteria. (They are, after all, the criteria guiding his own work, though he has a hard time accounting for their crucial importance.) Analogous remarks apply to the idealist, who (at the empirical level) might well be impressed (contra what I say below) by the lack of impact of his proposals. One main aspect of this issue is discussed in the next note, but the issue also brings out how much realism and idealism are both present (and, as I argue in Kant's Copernican Revolution, necessarily present) in all of us.

This is as much of a story as I need to tell here. But a final remark is in order. Would Kant have accepted this reconstruction of his own position? Would he, for example, have admitted the possibility of different, all-inclusive natures that I defended and utilized above? I would clearly be interested in knowing the answers to these questions, but there are things that such answers could not prove to me. Specifically, they could not prove that my reconstruction is wrong, or, for that matter, right. They would immediately become part of what I have to interpret, and I would have to balance them against everything else there. No "fact of the matter" is going to be decisive for me, and in particular no fact about Kant's own psychological convictions. What I said is self-applicable: I can only judge it by the very criteria it expresses. That others do not share such criteria, or that they would use some of Kant's own statements to refute me, is an interesting but ultimately irrelevant consideration.

The moral I draw from this analysis is that the history of philosophy (indeed, I would argue, history period) can only be done Whiggishly. In Heidegger's words once more, "historiological disclosure temporalizes itself in terms of the future. The 'selection' of what is to become a possible object for historiology has already been met with in the factical existentiell choice of Dasein's historicality, in which historiology first of all arises, and in which alone it is."[25] Where this conception of history can well (in my own case does) see itself as being possible only as an articulation of the "disclosure" contained in Kant's texts.

[24] Some qualification is in order concerning the sense in which the idealist can "live perfectly well with this situation." He may, of course, be a relativist, and quietly develop his own point of view without being at all troubled by the existence of alternatives (except insofar as he can learn from them). But there is no necessity that things go this way. It is also possible that the idealist conceives of his position as in some sense a better one, and of his articulation of that position as attempting to substantiate this claim. Then he will share, in practice, the realist's agonistic attitude, but he will be better equipped to make theoretical sense of this practice (will "live perfectly well" at that level). For truth is for him a question of who can tell the best story at a given time, and if no clear winner is yet emerging among several competing stories, that only means that more work needs to be done on his own story. (Once more, this is exactly what the realist, too, most often does, but not what he can say.)

[25] Being and Time, 447.


Chapter Eight—
The Conceptual Independence of Kantian Appearances

The notion of an intentional object is a promising one in attempting to make sense of the textual morass surrounding Kant's use of words like "appearance," "phenomenon," and the like.[1] When facing the conflicting claims that appearances are "mere kinds of representation, which are never to be met with save in us,"[2] and on the other hand that the permanent in space "cannot . . . be something in me, since it is only through this permanent that my existence in time can itself be determined,"[3] it is a hopeful strategy to construe Kant as groping for an expression of the directional character of some of our experiences, as lacking the language for such expression, and as consequently getting confused, like so many before and after him, between the content and the object of an experience. Undeniably, this conjecture makes for a plausible reading of such otherwise frustrating passages as the following one:

That which lies in the successive apprehension is here viewed as representation, while the appearance which is given to me, notwithstanding that it is nothing but the sum of these representations, is viewed as their object. . . . [A]ppearance, in contradistinction to the representations of apprehension, can be represented as an object distinct from them only if it stands under a rule which distinguishes it from every other apprehension and necessitates some one particular mode of connection of the manifold. The object is that in the appearance which contains the condition of this necessary rule of apprehension.[4]

[1] For some such attempts, see my Kant's Copernican Revolution and Aquila's "Intentional Objects and Kantian Appearances."

[2] Kant, Critique of Pure Reason, 348.

[3] Ibid., 245.


Are appearances representations, sums of representations, objects of representations, viewed or represented as such objects, or are objects something in the appearances? The conjecture above provides an approach—regimentary, to be sure—that straightens out this confusing situation.

But there is a problem, most recently signaled by Gordon Brittan. In his own words,

[O]bjects (however tied to our representations) are "independent of us" in Kant's view in a two-fold sense. . . . [T]hey are independent of our perception of them . . . but they are also independent of our conception of them, i.e., independent of the manner in which we refer to them. That is to say, objective judgments are "about" the same object differently described. This sort of independence precedes any further question about the reality of such objects (the criteria of objectivity that they must satisfy, etc.). But then "intentional objects" cannot possibly play the right sort of role for Kant, because they are not independent of our conception of them; they are objects of intention only with respect to particular descriptions.[5]

A terminological issue must be addressed before I discuss this problem, since in Kant's Copernican Revolution I use the phrase(s) "conceptual (in)dependence" in a different sense from the one relevant to Brittan's argument. For me, appearances are conceptually dependent on experiences in the sense that the concept of an appearance (not what it applies to) is defined by an essential use of the concept of an experience. Brittan, on the other hand, seems to have in mind what I would call the empirical dependence of a specific intentional object on the specific experience of which it is the object.[6] An example will help clarify his point.

[4] Ibid., 220.

[5] Review of Kant's Copernican Revolution, 742.

[6] "Empirical" means here (in accordance with Kant's usage) "belonging to the field of experience (hence not to a conceptualization of this field)." So, in the case to be mentioned shortly, no intentional brown table would be experienced unless, say, a seeing of it were experienced. But note that empirical dependence in this sense can sometimes be conceptually based (say, based, as it is here, on what kind of thing an intentional object is ), and sometimes empirically based (as when I say that there would be no child without a mother). Note also (to anticipate themes on which I will focus at the end of this chapter) that my notion of conceptual dependence is defined in an essentially historical way, and Britian's notion occurs as a chapter in this history: x is (for me) conceptually dependent on y if y occurs somewhere in the narrative specifying "what" x is. Appearances are conceptually dependent on experiences because the narrative constituting the semantics of "appearance" originates with experiences—and they remain so even if that connection between experiences and appearances (which is precisely the connection Brittan is talkingabout) is eventually aufgehoben . A specifically Hegelian way to construe what is going on in this chapter, then, would be to say that here, at the beginning, the understanding is doing its usual work of separating and distinguishing (different senses of "conceptual dependence"), whereas by the end reason will have brought out the identity of those distinct moments (and justified the fact that a single expression has those different senses).


I am currently having the experience of seeing a brown table. Therefore, a brown table is the intentional object of my current experience. Suppose now that this brown table is a Kantian appearance. Then it must (in some sense) exist and be what it is (a table, say) whether or not I perceive it, and whether or not I ever thought of it (as a table or in any other way). Indeed, it must be such that it would still exist and be what it is in a world without intelligent beings, thoughts, or experiences of any sort. But we assume that the intentional object of an experience could not be if the experience had never been, and that its structure is determined by the structure of that experience. So the appearance brown table cannot be identical with the intentional object brown table. Q.E.D.

A large part of what makes this argument convincing is a failure to realize the scope of Kant's revolution. As I pointed out in my book, this is a conceptual revolution, one that is going to provide our most basic philosophical words with a new semantics. And the relevant thing here is that "to exist," "to be (such and such)," and "to be independent of" are not exempt from this overhauling operation—indeed, they are at the very center of it. That is why I added the parenthetical remark "in some sense" in the previous paragraph: as an indication that, contra what Brittan seems to believe, there is a lot of work still to be done here.

I have seen people in different but related contexts argue as follows. An object could not be a set of sense-data because then a sentence like

(1) The wall is white

would be meaningless: a set is not the sort of thing that can be white, or any other color—a set is an abstract object. Which is cute, but unfair, since if we give a different semantics to the nominal phrase "the wall," then we must extend the same treatment to the predicate "is white," and come up with reasonable conditions of application for this predicate under the new interpretation of what it is to be a wall, or indeed an object in general.

The situation is analogous here. Of course, by the end of the day Kant will want existent objects to be independent of anybody's conception of them. But we cannot expect this to happen while the traditional understanding of "existent" and "independent" remains untouched: the outcome would be as much of a category mistake as in the example above. What makes the present situation (and, possibly, the one in the example) somewhat confusing is that traditionally there was virtually no unpacking of the key words involved: they were very close to primitive terms. Objects exist, period; objects are independent of one another because they can exist apart, period. Such is the transcendental realist's conventional wisdom. And a term that is primitive in one framework (especially if it is your framework) might be naturally thought of as requiring no analysis anywhere. Whereas in Kant's framework terms like "existence" and "independence" are going to require extensive and detailed analysis.

I am not going to provide this analysis here; I did so elsewhere.[7] But one element of it deserves mention, since it explicitly contradicts Brittan's claims and further detracts from their persuasiveness. Because Kantian objects must "conform to knowledge," not (as was traditionally the case) the other way around, that something a is an existent object—or an object simpliciter—will be defined in terms of a being the intentional object of an experience that qualifies as cognitive. So I start out with a statement of the form

(2) I represent a

I rephrase it less controversially as

(3) I represent-a

to signal that reference to a is still only an internal feature of the experience I am having; I study this experience closely, and possibly decide that it has those characters (coherence, connectedness, determinacy) that make it a cognition.[8] I summarize this conclusion by reformulating (3) as

(4) I know-a

And then, since objects now conform to knowledge, I conclude that I do have a case here of genuine reference to an object, which I express by dehyphenating (4):

(5) I know a

[7] See my Kant book and my "Knowledge as a Relation and Knowledge as an Experience in the Critique of Pure Reason."

[8] No experience (or object) has such characteristics absolutely, but only relative to a context, as I argue in my book. But this qualification is irrelevant here.


Finally, after thus detaching a from the experience that first brought it into play, I will be able to refer to it in a variety of contexts, independently of the cognitive relation I have found I have with it.

Consider the most delicate and difficult step in this process: the one from (3) to (4). A substantial portion of this step will amount to establishing identity statements of the form

(6) a = b

for various b's. I will have to decide, for example, whether the brown table a is identical with the brown table I saw yesterday, or with the brown table John sees now, or with the table Mark cannot see (he is blind) but can feel. For certain kinds of table there will be no sensible criteria for answering questions like this. If, say, I dreamed of a brown table very different from anything I have ever seen, and you did the same, and the table you dreamed of was descriptively quite similar to the one I dreamed of, it would make little sense to ask whether we have dreamed of an identical table or of distinct ones. When, on the other hand, the questions do make sense, it is usually because we can project both a and b into a common spatiotemporal framework, and there resolve our worries. For example, we can say that the tables you and I see while in each other's presence are identical because they occupy the same spatial location, and that the table I saw yesterday is identical with the table I am seeing now because there is a continuous space-time trajectory of which they are both members.[9]

So identity statements do play the crucial role Brittan attributes to them, and they share this crucial character (as he suggests elsewhere in the same piece) with the "intuitions" of space and time. But it is a mistake to think that the truth of identity statements must be decided before (and hence, I take it, independently of) "any further question" about objects—specifically, before criteria of objectivity are applied to them. Deciding these statements is an essential part of what it is to decide on the objects' status: to a large extent, the criteria of objectivity amount to criteria of identity.

Brittan's is not just any mistake. It proves that he is still committed to a realist framework. For there objects are the conceptual foundation: nothing can be done before the objects are given. Thus, in traditional formal semantics, one needs a domain of objects to get started; one needs to know how many (distinct) objects there are, and then one can go ahead and define all sorts of useful notions, including truth and validity. Without objects one would be stuck. In the idealist picture, on the other hand, the identifying and counting of objects is part of what requires conceptual articulation, part of what will eventually make us say that it is objects that we are dealing with, that they exist and are independent of experiences.
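
To see the contrast in miniature, here is the Tarskian setup in skeletal form: a standard textbook rendition, not anything in Kant's text or in Brittan's review, offered only to make visible how the realist's order of explanation starts from a domain of objects.

M = ⟨D, I⟩ (D a nonempty domain of objects; I an interpretation of the nonlogical vocabulary)

"Fa" is true in M if and only if I(a) belongs to I(F)

"∃x Fx" is true in M if and only if at least one member of D belongs to I(F)

A sentence is valid if and only if it is true in every such M

None of these clauses can even be stated until D, the stock of distinct objects, is fixed; that is the precise sense in which, without objects, one would be stuck.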

[9] Note that it is not part of Kant's transcendental philosophy actually to decide such identities. He need only tell us that deciding them is what it would take to establish objectivity.


But this is not all there is to Brittan's point. Even after his argument is defused, there remains a feeling of puzzlement surrounding the whole discussion. For good reason: the qualification of conceptual independence (that is, independence of any specific characterization) that we must be able to attach to an intentional object after the long and difficult work I described is more than some new qualification or other. If, say, I consider an object and spend some time deciding whether the criteria apply that would make me call it living, and conclude that they do—hence that the object is living—this conclusion is not in conflict with anything I thought of the object before. The object was not, supposedly, considered nonliving before; I just did not know which it was. But an intentional object is concept-dependent to begin with; so when I decide that it is concept-independent after all, I contradict my earlier characterization of it.

The issue is a logical one; indeed, it strikes at the very heart of what logic is for us. Most often, logic is defined as a theory of argument and reasoning, but before there can be any such, there must be a theory of words, and since words are words (as opposed to marks, or sounds) because of their meaning, this must be a theory that tells us what sort of thing the meaning of a word is—a semantic theory. The traditional analytic (that is, Aristotelian) semantic theory proceeds (not surprisingly) by analysis: by dividing further and further. The meaning of a word is something like its dictionary definition, which proceeds by identifying a genus and then a species and a subspecies and so on. "Human," say, means "rational self-moving living substance." Crucially, none of the more specific traits denies any of the more general ones it is attached to. Each such trait enriches our description, does not throw it into disarray.

This logic, however, is not the only one on the market. Everybody knows that, though one usually does not mention it—not in Anglo-American philosophical circles, at least. So, to spell out what everybody knows but few tell, there is also dialectical logic: Hegel's. And in dialectical logic the traits composing the "definition" of a term do not peacefully proceed to focus our attention more and more on smaller and smaller portions of the semantic field. They are rather constantly at odds with one another; they fight, and do violence, and demand inventiveness and creativity of those who would accommodate their disputes.

Consider an example. Say that you naively oppose being and nothing; you think of them as excluding each other. Then you reflect on how exactly this opposition is to be cashed out, and realize that there is no way: pure being simply has the same conceptual content as nothing. So you are led to think that the two must come together, but how can you possibly make sense of a being that is also nothing, a being that is also nonbeing? One answer is: Dasein, a form of being that itself consists of opposition—the form of being that something has when it is only insofar as it excludes something else, as it is not that something else.

I am not asking you to buy any of this. I just want to point out how different (and harder, and more questionable) it is to connect the qualification "nonbeing" to "being" than it is, say, to connect the qualification "living" to it. Hegel, of course, thinks that it is the former kind of connection that rules our language, our experience, and our history, whereas peaceful analytic connections can rule at best in cemeteries—of people and of words. But, whether or not Hegel is right (and this is certainly not the place to decide that issue), what matters for us is that Kant's treatment of objectivity proceeds along the same general lines. Just as Hegel faces the challenge of making sense of a nonbeing being, Kant owes us a credible account of objects that are both concept-dependent and concept-independent.

Hegel's main source of metaphors is biology. His concepts are live structures, evolving much like organisms do—and, much like organisms, constantly revising and contradicting themselves (their earlier phases, as we might want to say less controversially). A boy evolves into something that is not a boy, but still is the same as the boy; a seed evolves into something that is not a seed, but still is (Hegel would say) the same as the seed; being evolves into something that is not being (it is nothing), but still is the same as being. Adding a temporal dimension to the picture helps, to the extent that we are used to such growth, to such cases of "identity within difference." And, of course, it does not help to the extent that we are only used to them—we do not understand them. Aristotle's metaphysics, with all its troubling questions about substantial change, is there to prove that.

But, however obscure the metaphor still leaves the field, it is natural to try it in Kant's case, too. I did it above when I stated what Kant wanted "by the end of the day," and how different that was from what he had at the beginning. What I did not point out then but has emerged now is that it was not just different. By the end of the day, Kant wanted something that was the opposite of what he had at the beginning, but still he wanted it to be the same thing. Which is not going to be an easy matter, as Aufhebung never is. It is not going to reduce to drawing a few dividing lines on a piece of land already conquered and made forever still; it will tax his imagination to come up with chapters for a plausible story, and unless the story is plausible, unless it makes sense, the presumption of identity will not be vindicated.

There is much talk in my book about Kant "rewriting" this and that—for example, rewriting causality (formerly construed as imposition) in the new language of rule-directedness. All this talk can be read in a straightforward way: within his new language (and conceptual framework) Kant provides a new construal of causality after junking the traditional one. A simple substitution, that is: something in place of something else. And we can get a lot of mileage from this straightforward reading; we can get a long way toward an understanding of the new picture Kant is drawing for us. At some point, however, we will have to face the puzzlement behind Brittan's objection, and the problem behind that puzzlement. We will have to face the fact that some of Kant's rewriting is an overwriting, which may make us reconceptualize all of his rewriting—make us "rewrite" it, that is.

The problem comes to a head with the conceptual independence of those intentional objects that are to qualify as appearances, since here the overwriting is obvious: appearances do not cease to be intentional objects, and hence concept-dependent, when the new character of conceptual independence is laid on them. So, to make it work, conceptual independence (and dependence) will have to undergo a transformation and resurface as a conjunction of coherence, connectedness, or what have you, but—and this is the crucial point—a transformation that can still be seen as of the same thing. Unless we can justify our continuing use of the same phrase here, unless we can persuade ourselves and others that it makes sense to call all of the above "conceptual independence," ours will be at best a case of confusing ambiguity, and at worst one of fooling ourselves.

The required justification will take the form of a narrative, which will typically begin with the traditional notion, point to some paradoxical consequences of it, and slowly turn our attention away from that whole notion, concentrating on some aspect of it which the narrative suggests is the essential one and which can be used as an Archimedean point for a revolution in our understanding of the same notion. For whatever it is worth, this narrative is provided in my book—in its attempt to show how traditional objectivity is constantly involved in the Kantian project, guides its articulation, and determines its categorial structure. But my emphasis there is different: I am more concerned there with the heuristics of sketching the new framework than with the issue of legitimizing the use of old words within it. So, at some point, I am perfectly happy to let the old construals fade away from consciousness—however much they helped in arriving at the new ones. Here, on the other hand, my interest lies precisely in this development. Here my point is that, without some such narrative as I tried to provide, Kant's operation would reduce to sticking an arbitrary label onto a foreign, opaque object. It is the narrative's task to avoid this, and the narrative will do a satisfactory job of it to the extent that it is convincing. There is nothing "analytic" about this process: no perpetual return of the same thing. Unless the narrative is constructed right, with the skill and flair of a good writer, it will not hold together: it won't be one story and one character (that is, one notion). It will disintegrate; it will stay manifold; it won't be redeemed by synthesis.

Once this conclusion is reached, it can be extended to the less superficially troubling instances of rewriting. To causality, say: if imposition is to be replaced by regularity as an interpretation of it, then imposition must still be in the picture somehow. There must be some convincing account of how what matters about imposition is best understood in terms of regularity, of how it is best to let the seed of imposition grow into the tree of regularity. Not just the rewriting of conceptual independence requires the tiresome work of Aufhebung—indeed, is that work. Every Kantian rewriting shares the same fate.

The Transcendental Dialectic takes about half of the Critique of Pure Reason. But, as I noted in my book, it has failed to engage the interest of Anglo-American commentators to a comparable extent. My procedure in the book reversed traditional tendencies by making the Analytic dependent upon the Dialectic—by seeing understanding as a projection of reason into the limited field where alone (reason itself has concluded) it can get work done, rather than seeing reason as a wild extrapolation from the understanding's sensible claims and modes of operation. I argue there that, though not faithful to the order of the text, this approach is more faithful to the text's logical structure. Be that as it may, here I am about to make another, related claim, supported in part by the discussion above.

The general Anglo-American disregard for the Dialectic has led to misunderstandings not only of Kant's oeuvre but also of the way it relates to subsequent idealist developments in German thought, and especially to Hegel's dialectic. Focusing on the limiting outcomes of Kant's inquiry, on the "bounds" he discovered for "sense" in its various senses, has made scholars pay scant attention to the logic of reason's many conflicts with itself, and to how this logic is a necessary preamble to Hegel's.

These are not just misunderstandings, of course: ideology plays a major role in them. For a long time, Hegel's name was anathema in "the profession," and still is for most members of it.[10] So, since Kant was respectable (sort of), the proper thing to do with him was to connect him more with the philosophers preceding him—better yet if English speakers, like Locke or Hume—than with the ones following him. And, if that requires us to draw attention away from a large portion of what the man said (and of what mattered to him a great deal), so be it: that stuff is bad anyway.

What I am saying, plain and simple, is this. It is not just useful to see the Transcendental Aesthetic and Analytic from the point of view of what comes later in Kant's text. It is also useful to see Kant's text as a whole from the point of view of what comes later in German philosophy. For then we can see a natural progression from separately facing individual cases of the same problem to eventually seeing this problem in full generality and addressing it at its proper level. The problem, of course, is how to do justice to the "historicity" of our concepts, to their having an element that is not properly temporal—though, given the temporal character of our experience, a temporal articulation of it is what best helps us visualize it—but certainly has a timelike structure: the structure of an unfolding, a growing, a development, a plot. With that general problem in mind, we can see Kant's various attempts at a rewriting of causality, or existence, or independence, as inevitably leading to a recognition (not by him, as it turned out) of what is common to all of them: of how all of them are cases of identity within difference.

I will not deny that mine is a biased account. Most important, it is biased in taking the Hegelian form of a reconstruction ex post facto: an a posteriori rationalization working backwards, with the privilege of hindsight, rather than a forward-looking, causal explanation. That is, it does not take the form, "This is what Kant had to say, on the basis of what, for example, Hume had said," but rather, "This is what Kant had to say, on the basis of what, for example, Hegel was going to say." But, once this is clear, it is also no criticism. To say that our biases are our starting point is trivial; what matters is where we get from there. Specifically, where we get in drawing a sympathetic and comprehensible picture of Kant's efforts. Kantian appearances make a lot of sense in the language of intentional objects, and then, when Brittan's puzzlement arises, the backward-looking approach recommended here helps us see what is important about the issue, and how it can be approached. Let us see what "the opposition" can do with all of this.

[10] But exceptions are beginning to appear. See, for example, Pippin's Hegel's Idealism and Wood's Hegel's Ethical Thought, both recent books.


Chapter Nine—
The Thickness of Words

"There are more typos than I can stand," said my friend pensively. "They bother me." And the sentence, somehow, got stuck in my mind. I found myself thinking what it means to bother one, what it means to stand something—eventually, to stand for something. I found myself traveling a long way.

Flies are typically bothersome creatures. You sit in the sun, after a good meal and a little more wine than is good for your liver, and your eyes imperceptibly, implacably start to close. Vague reminders of earlier commitments are becoming irrelevant, any little red flags are hard to see in the distance. The distance of your lack of concern, of your appeasement, of your forgetfulness. Of this scrap of death in midafternoon, this nirvana too easily acquired. It's three o'clock, and all is forever well. Except for the fly.

The fly rings in your ear, sits on your arm, rubbing its velvety legs. The fly tickles you. The fly exists. There is no getting rid of it. Other than killing it, that is. Other than coming out of catalepsy and tensing all your muscles and nerves and waiting for a promising opportunity and moving with feline swiftness to smash it on the chair, or the table, or whatever. And then going back to inactivity, inattention, unlife. Until the next fly shows up.

You might not yet see the connection with typos, so let me change the example a bit. It's still three o'clock, and you still look passive from the outside. It still looks like you've had too much food and drink for your own good, and are ready to lose whatever little acuity you have in your brighter moments. But it's a sham, a defense mechanism. You've decided to close windows and doors and withdraw to your inside room. There is a train of thought you intend to follow, something that promises rainbows and shock waves but won't deliver unless it's taken seriously, and carefully, lovingly unrolled. So you've created silence and peace, darkness and unattachment, you've bracketed everything except for this creature of mystery and fascination, and are now ready to listen to it, to do it justice, to take its hand and go to the end of the world, if needed, or to the beginning of the next one.

But there is a fly. An indecent, uncivil insect that won't get the message, won't let you lie there, with your antennae disconnected, your brain absent to any worldly affair. The fly will fly, and ring, and tickle, and your antennae will come back into play, and the promising land will pale beyond hope of recovery, of fulfillment.

Concentration is hard: it takes painful striving to separate one line from the whole maze and keep it in view, let alone walk it. It's largely an effort of denial. Everything else must be found irrelevant, must be felt irrelevant—better still, must be felt nonexistent. There are ways to achieve this, long-taught ways of minimizing intrusive factors, at least for a while, for as long as it might take, say, to get a clear shot at that elusive line. But those ways are not foolproof; for them to work, history must lend a hand, the deck must be stacked in your favor. Flies must be kept busy somewhere else. It's hard to deny the presence of a fly.

You might begin to see where typos come in. Or you might not, if your idea of reading consists of forcing your eyes open and pinching your cheeks and taking several cold showers to stay awake through the latest issue of the Philosophy Journal (please fill in your favorite title). Then a typo or two might be a welcome source of entertainment, a refreshing oasis of genuine, aggressive pleasure within the stony, sandy, heartless terrain you're so unfortunate to roam. But think again. Think of those hours and days when you first discovered books, books that mattered: The Count of Monte Cristo, say, or Twenty Thousand Leagues Under the Sea. Think of how you got lost in them, and wouldn't hear when they called you for dinner, and sometimes they had to pull your hair to make you come back to your senses. Your ordinary senses, those putting you in contact with the ordinary world—a world you were not missing. Think of those old, decrepit editions, of how much you suffered when the insults of time or the original poor workmanship forced you to confront broken words, faded or missing pages. Forced you to face the reality of the object-book, the physical character of its (faulty) composition, the concrete, spatiotemporal embodiment of that sweet dream of yours. It bothered you, didn't it? It annoyed you more than a fly ever will, but much the way a fly does. Denying your denial, unbracketing a world well-lost, imposing an unwanted presence. Existing, where nonexistence was merciful and gay.

There is a story about language, about how it works, that I always found preposterous. Worse yet, I found it violent and insensitive, a bit of shameful propaganda to cover up political abuse. But it's a popular story, at least among philosophers—those enlightened creatures so supportive and understanding of winners. It goes like this. Language has meaning, words refer to things, and that's what language and words are good for. "Table" means table, "chair" means chair, "Snow is white" is true if and only if snow is white. And it doesn't matter that "table" sounds, or reads, like "table": we could just as well have decided to use the word "rable" instead, and then "rable" would mean table. Indeed, in that other language born out of our decision, "rable" would mean rable.

I'm not saying things don't go like this. I'm saying it's not that easy. Language will not be so subservient by itself; it must be forced. Pressure must be applied to it, violent pressure if needed. When there were no clocks, or watches, or any other mechanical concoctions to count time, people found the most creative ways to get the job done. Some were truly sadistic. Slaves were forced to count lentils, or turn wheels, or climb stairs. Or they were chained motionless, and a torch was lit on their heads, and when the fire reached the slave he would scream. And his scream would stand for a time interval. Now what does it take for a scream to work like that? A scream is full of anguish, of pain, of fear; a scream makes our blood boil, our legs run; a scream makes us want to help. For a scream to be converted into a signpost, all other context must be annihilated; irresistible power must be applied to the anguish, and the pulse, and the legs, to silence them and make them stand still. So that only the time interval is left; only the time interval matters.

Or: I'm sitting in the dark in a tent structure, a hot summer night. Katarzyna Gdaniec has just finished dancing "Voyage." Her lean, muscular body glows with sweat; she's out of breath, panting. She's given us a passionate rendering of woman battling man, and emerging victorious. It's not been easy, for her character and for herself: it was a narrow escape, and we are all still in pain. We are still so glad to be alive. Now comes Luciana Savignano; she's "the Moon," music from Bach, choreography by Béjart, created especially for her. Béjart knew what he was doing. Her body was conquered long ago; now it offers no resistance, it shows no effort, it bends and folds easily, entirely under control. You forget that a body is there: arms and legs can get anywhere, at any angle, and hence they've ceased to be arms and legs, they're now segments of line, compositional elements, hues in a masterful fresco. They've been so cruelly crushed into shape that only the shape matters; their substance is gone, done with, thoroughly violated. Only the moon is visible.

It's the same thing with language. It takes beating words to a pulp by repeated, vulgar use; it takes chaining them irrevocably to the same tedious, predictable sequences, the same useless, wasted exchanges; it takes legislating their proper spelling and grammar with more determination and thoroughness than in any other field of social activity; it takes a lot of fast talking and reading and writing (the kind Nietzsche hated), before any respect and appreciation is lost for them, before their thickness is no longer noticed, before one can see through them and concentrate on something else. On what they're associated with, on the images and experiences they're ready to bring to mind once theirs has become a mindless presence, on what they now, finally, are in a position to "mean."

Notice that my values are completely orthogonal to this phenomenon. (Yes, I like Gdaniec better than Savignano, but I wouldn't call that a value.) Sure, I despise the obtuse language of the news or of most philosophy papers, but that's because this thin, transparent medium is associated with garbage, brings to mind equally thin and transparent nothings. When there are adventures to be shared, thoughts to be explored, then beaten words may turn out to be priceless gifts: you climb on them without even seeing the steps, and throw them out effortlessly, unconsciously, while those adventures or thoughts take the whole stage the whole time.

So it's not a matter of values; it's a matter of what's going on. It's a matter of being clearer as to how transparency is achieved. A case with no obvious violence might be the best illustration yet. Say a shrill, persistent noise occupies the perceptual field: say you work in a loud factory somewhere, or at the flea market. The first day on the job you don't know how anybody can stand that, and know that you won't. Older hands tell you you'll wise up: it only takes time. And guess what, they're right. The second day you find yourself hearing your neighbors; within a week you can pick up distant conversations. In a month the noise is gone: you can now hear through it. Loud doesn't mean perceptible, powerful doesn't mean existent. Make a stimulus as loud and powerful as you wish; as long as it doesn't kill you, you'll get used to it and no longer feel it, provided it's regular enough. Which makes me have second thoughts about flies. Maybe they bother me so much because there are so few flies in my life. I now remember seeing people—peasants on a farm—going about their business in rooms full of flies and being entirely oblivious to them, having no reaction to their presence. It was as hard for them to notice flies as it is for me not to.

That's the point with typos, then; that's why they're so irritating. They show language rebelling against our curfew, declaring independence and originality, and rights, and anarchistic freedom. Most often, we just disregard them, correct them automatically. It's the same thing the police do, in seedy parts of town: they do nothing, as if nothing happened. What the eye doesn't see. . . . But when it does see, it doesn't like it. The medium is getting out of control, is getting too much attention, is getting to be the message. It's asserting its existence, by disobeying our rules.

Now, however, something seems to be amiss. Seems to me, at least, given what I said elsewhere. Remember Kant rewriting the notion of causality, abandoning the imposition model and replacing it with a model of regularity, uniformity, rule-directedness? Well, if you think that something exists if and only if it has causal power, that amounts to rewriting existence as regularity. Not surprisingly, then, Kant also claims that nature, the world, is not a set of things but rather a system of laws. So for him (in my reading) something will be part of nature if it belongs to nature's regular structure, if its behavior fits the lawlike behavior of everything else. To put it otherwise, an intentional object may be as vivid as you wish, such that you can't entertain it without having your heart skip a few beats and your saliva run, but none of that will make it existent. Only conformity with a general, predictable scheme will. And here is where the problem is, because we've just concluded that regularity, spontaneous or contrived, is a main strategy to make something go away, to make it as good as nonexistent. It looks like one or the other claim will have to give: regularity can be a defining trait of existence or of nonexistence, but not both.

Unless perhaps our whole perspective is wrong—our whole point of view on Kant's operation. It's natural to think that he "rewrote causality (and, by implication, existence) as rule-directedness" because the earlier reading as imposition was inadequate, incorrect, and opened the door to unwanted paradoxes, whereas the new reading is exempt from some of these problems. But the matter may be much less sanitized than that; there may be more of a warlike character to it.

Suppose there are two players in a game and one of them plays a regular, exceptionless strategy. It takes only some figuring out of what the strategy is and this player will cease to be a factor, could just as well leave—for all the difference he makes. The other player has complete control over him, and whereas this doesn't necessarily mean that he will lose (the strategy could after all be a good one), it does mean that he's no longer needed. After the figuring out, he's been internalized by his opponent without residue; he's become a well-defined problem in his opponent's framework. A problem for which there may or may not be a solution, I insist, yet still one whose formulation promises no more surprises.

And now suppose that the regular, exceptionless player does make an exception, that he unpredictably, incomprehensibly changes the nature of the "problem" he represents. What if his opponent told him, "You can't do that. For then you wouldn't fit the system of regularities I've just established, and I would have to declare you nonexistent." You would find it funny, I'm sure, but there is a moral to this funny development. It's in a player's interest to fit the other into a scheme, to turn the other from a mysterious, potentially bothersome presence into a character of his own "play." When the fit explodes, two different notions of existence will confront each other: one "internal" and one "external," as we could say out of respect for Carnap. The player who was just betrayed will still want to have his little scheme work, of course, and might end up making his attempt at denial more and more explicit (and pathetic), but if enough external pressure is exerted on it then the scheme itself will have to go, and a less tame, more basic, more menacing "reality" will surface in full armor.

Kant is a philosopher of shaky, delicate equilibria, skeptical of final solutions, appreciative of disciplined effort. Knowledge requires for him a balancing of passive and active factors, of intuitions and concepts in his technical terminology. And, since in the Copernican framework objects are supposed to conform to knowledge, the being of objects requires the same kind of balance: there will be objects to the extent that we are able to grasp, order, and systematize material received from elsewhere. Once again, this could be seen as a disinterested description of a peculiar state of affairs. Or, instead, as a shrewd military tactic to survive in unfriendly territory, in the perpetual game of competition with the environment—human and otherwise. As much as you can, try to inscribe your opponent within your own strategy, to "conceptualize" it (him, her) so as to be able to anticipate its every move. But don't get lost in your own pretense, be on your toes: nothing is as dangerous as a strategy with no default mechanism, no room for revision. The one thing you know for sure is that it's a jungle out there.

Commenting upon the oddities of quantum mechanics, David Lewis once said that there's such a thing as genuine chance in the world. Totally random phenomena occur, unpredictable events, irrational ones. The sillies thought that this was, finally, a "refutation" of Kant: here you have something undeniably existing, they argued ("science" proves it), yet obeying no rules, fitting no intellectual schemes, falling under no concepts. But the sillies were wrong, as they always are. They were hoping to be rid of Father, to have eaten Him up, but He was poisonous, and hard to swallow. He was still going to be the winner.

What the sillies were missing is the struggle element. And, consequently, the dialectical process. To begin with, there is a mysterious sort of being. It is because it hurts, it bothers, it pesters and cries. It demands, one knows not what. It urges, one knows not where. It is sensed but not understood. Not yet, at least.

Then this being is attacked, handled, grasped, grabbed, held. Locked, imprisoned, manacled, gagged. And, when it's overcome, a label is pasted on it, a proud declaration of the power now acquired over it. An emblem of ruthless sarcasm. Those who were once bad are now good, we know, and the good of old have turned into evil. What was once existence is now the lack of it: evidence of dreamlike quality, of gross incoherence, of insanity. And vice versa. Something was done to the words to bind them to such new, inimical resonances; but, before that and to make it possible, something was done to the (other) things, to rule on them, to make them fall from towers and slide along inclined planes and burn and crash and freeze.

Matching existence with regularity is a form of colonization. It's not wrong, not because it's right, but because it's not the sort of thing that can be wrong or right. Kant wins not because genuine chance has no currency but because it's denied currency, because irregular, unpredictable, random existence is now a problem to be solved, an unknown quantity to be deciphered, a scandal to be silenced. And a "theory" allowing for this embarrassment is no theory, has no dignity as one, is at best a temporary stage toward complete explanation—toward pax Romana.

For two thousand years the order of business was set in stone. First you learn to reason, then you give a clear and accurate description of the world, handle the baffling questions any such description will generate (such as how something can change over time and still remain identical with itself), and, finally, turn your attention to the human microcosm. What makes for the good life, for a good person, for a good relationship? Further down the line, what makes for a good community? Politics comes at the end (just like death, did you notice?), including the politics of words: the ways in which discourse can be used to influence people, to make them do things, to bring their emotions to the surface.

Again, there's nothing wrong with any of that. Because, again, it's not the kind of thing that can be wrong. It's the kind of thing you live and die for, possibly without noticing. But there is a way of turning it around, of making logic, say, into a species of the genus rhetoric, into the exhausted rhetoric of words no longer functioning—or finally functioning, finally enslaved to functioning, to place-holding for nonwords. And there is a way of making politics the start of it all, the beginning of logic and metaphysics, the basis of any viewpoint, of any Begriffsschrift.

It's not like it's a new thing. Empedocles said it already, that strife is the mother of all academic fields. But it's a thing that needs repeating, over and over again, every generation or so. Or it will be easy to conquer it—to make Strife into yet another concept, not to see the strife that it takes to give strife its due. To keep it as a nonconcept, as the condition of all concepts, the temporary, risky, baleful condition of all systematic arrangements, of all (deadly, stinking) order. Nowadays, it's mostly feminist writers who do this dirty job for us all, who recognize and proclaim the war bulletin character of any metaphysical pronouncement, any categorization of the landscape. "Writers" like Gdaniec, I'd say—for why should pistols shoot only straight ahead?[1] Why should movement have a definite velocity, or direction?

It's politics that grounds metaphysics. Constellations take shape within the perpetual strife, sustained by blind, cruel power. They look immutable, eternal. They use this look as one of their best defensive weapons. Then, for no apparent reason, as everything is standing still, walls come down and the immutable mutates, the eternal dies. Cards are shuffled again, a new deal is dealt: new winners will emerge, and make their success look like destiny, like the alpha and omega of it all. Politics is no inessential application of our conceptual understanding of the world; it's a political statement, and a conservative one, to see it like that. Politics is the roots, the guiding metaphor, the driving force; metaphysics is the hourly report of who's trying to convert a temporary advantage into a historical victory, and then into an epoch of humankind.

[1] At the end of "Voyage," the man shoots the woman. But his pistol fires backwards and he falls to the ground.


Rewriting something as something else is a two-part operation. It requires a canceling and a new labeling—a true Aufhebung, in both of its contradictory senses. Rewriting "human" in such a way that it includes hoi barbaroi requires canceling old privileges and assigning new rights, redesigning our system of expectations, making the whole history and strength of a word available to a larger group of customers, making them the heirs of a foreign tradition, inventing the tradition anew. None of that will happen by itself: it's going to be a bloody, messy affair. None of that will stay long unless more blood is poured.

Rewriting existence as regularity requires throwing gods out of business—those capricious, brutish tyrants who could make you kill your children at a moment's notice, send you plagues and famines and droughts, shake earth and sea, all out of their stupid, arbitrary whims. Possibly saving one of them as the tired officer countersigning your conquest, rubberstamping your "rational" domination of what was once, but is no longer, a very strange place. It requires keeping things in line—and people, too, since their freedom has now suffered the same fate as God's: they're free insofar as they're predictable, coherent, boring repetitions of perfectly reasonable patterns. It requires reducing everything and everyone to a sign of something else, in a mad race without substance carefully analyzed by the new universal doctrine: semiotics, in any of its various incarnations.

It doesn't always work, of course; all the power in the world won't make it work all the time. There will be irritating exceptions: wistful phenomena, neurotic humans. Typos. They will have to be corrected, which sometimes will take generations, and sometimes will take faith—that the colonizers of an ever-receding future will be smarter than ourselves.

Where do I stand in this war? Let me answer with another question: Do I have a choice? Does it matter that I resent the emptiness of the "existence" traded down to me? That I long for a thicker form of being, one you can't see through, you can't conquer? One that just is, that arrogantly presents itself as "I am who I am," doesn't call upon something else, just as ungrounded, just as much of a tortoise upon a tortoise upon a tortoise . . . all the way down? Does it make any difference that I find myself fantasizing the most sudden outbreaks, the most insane responses?

No, it doesn't. I still belong to the same gang; I'm still on its side—fantasy and all. Who said that winning would be easy? And this is winning, and comes with a price. Comes with guilt, with the torment of a beautiful soul, with the nostalgia of an irretrievable past. Spelled out at a computer, in a house with electric light, and running water, and a thousand other documents of existence rewritten. Flies are kept outside, by screens; typos are taken care of by the appropriate program. Words are thick no more, and neither are things, and it's just as well.


Chapter Ten—
Really Trying

What a queer concept "to attempt," "to try," is: what can one not "try to do"!
—Wittgenstein, Zettel, 104e


So you are a realist—a conceptual realist, I mean. You think that to explain what anything is (a set, an experience, a society), you must start with one or more res, or objects. The concept of a set you will articulate by reference to the objects having, say, a given property; that of an experience by reference to an object of a special kind—that is, a subject. And so on and so forth. The concept of an object you will not articulate; after all, one must start drawing one's logical space somewhere, and objects (the concept of them) is where you start. It's not the only possible starting point, to be sure: your philosophical enemy, the idealist, starts instead with ideas, or representations, or experiences, and gets to objects (if at all) only at the end of a long, tortuous route. But uniqueness is not an issue here, and you shouldn't worry about it. It's enough to worry about consistency and completeness: whether your conceptual elaborations hold water, and whether they let you reach (that is, account for) all the concepts you need. Which they do, don't they?

Well, consider the concept of a name, a concept that applies to such varied morphemes as "Hillary Clinton," "Sherlock Holmes," "Vulcan" (the Greek god—by his Latin name), and "Vulcan" (the presumed planet once taken to determine the oddities of Mercury's orbit and eventually judged to be nonexistent and unnecessary). How do you explain what a name is?

You will of course begin by saying that a name is an object, not a concrete object like the quintessentially philosophical tables and chairs and not a highly abstract one like a number or an ideological constraint: something in between, a type with many tokens, a pattern with many realizations, in many different media. But saying that is hardly enough: now you need to attach a differentia to the genus, specify what makes a particular object a name. And here you run into problems.

What you would want to say is that "Hillary Clinton" is a name and "#*$" is not because of the relation that the object "Hillary Clinton" has to another object, that is, Hillary Clinton—a relation (call it reference) that "#*$" has to no object at all. But you must resist this temptation, for what about "Sherlock Holmes" then, or "Vulcan" (in both of its uses)? There is no investigating violinist at 221B Baker Street, and there never was, just as there never were Greek gods, or planets between Mercury and the Sun. Just as there never was an object #*$. So the failure to establish a relation of reference cannot be what makes something not a name, and that the relation is established cannot be the differentia we are looking for.

Do you want to expand your horizon, and bring fictional "objects" into it, and claim that "Sherlock Holmes" refers to some such creature? Maybe, but notice that this will work for the fiddling sleuth, and possibly for the unfortunate crippled progeny of Zeus and Hera, but not for the missing planet. Nobody ever thought of Vulcan as a fictional planet. There were those, at one point, who thought of it as a real planet, a planet period—just like Mercury and all the others—and were later considered wrong, for, scientists agreed, there just is no such thing.

You could insist on the same strategy, and claim that not just fictional, but in general nonexistent objects are needed, and there would be nothing wrong in principle with that claim, except that the foundational character of objects would come under pressure. In general, you think of your primitive notions as something you have definite intuitions about, and may consequently leave unexplained. But if now "there are" nonexistent objects, what exactly is the basis of your conceptual framework? Most of the objects that do not exist will never come into contact with anything else; they will just sit there and, as they say in California, work on their tan; so explaining everything in terms of objects, if objects can be like that, will amount to reducing everything to total opaqueness, utter incomprehensibility.[1] Not to mention the problems this approach would create for our own specific area of concern. For, if we allow nonexistent objects unrestricted currency, how can we tell that #*$ is not one of them, and hence that "#*$" is not a name after all? Which explains the drive to reduce objects to something else—sets of properties, for example[2]—and to deprive them of their primordial status.

[1] For more on this point, see chapter 2 of my Looser Ends.


Or you might change the subject: dismiss your current language as a mistake and daydream about a "perfect" counterpart to it, one in which every name does refer to an existent object and problems never arise—just as they never do in Disneyland. But, when you come back to earth, you will find those problems waiting for you. Nobody speaks your perfect language, everybody thinks of "Vulcan" as a name, and there is no Vulcan.

One last avenue is left, and you will take it—which, as it turns out, will make the whole game unravel. You will say that a name is an expression that purports to refer to an object.[3] "Hillary Clinton" purports to refer to Hillary Clinton (and, as a matter of fact, it does), "Vulcan" purports to refer to Vulcan (which, however, it cannot do); so they are both names. "#*$" does not purport to refer to anything, so it's no name.

Now that's a strange way to put it, one might say. Expressions don't "purport" (that is, I take it, "intend," "try") to do anything; it's rather people who purport (or intend, or try) to do things, possibly by using expressions.[4] So why couldn't you just say the latter? The answer, as I understand it, is that it wouldn't work. Often people purport to refer to nothing whatsoever when they use a name: they say things like "The Greeks believed that Homer wrote the Odyssey," or "It was once thought that Vulcan, a tenth planet, existed," or simply "Vulcan doesn't exist," while being perfectly convinced (rightly or wrongly, that's not the point) that there is no Homer or Vulcan to refer to—existent or nonexistent—and still recognizing "Homer" and "Vulcan" as names. So it must be something in the name, in the expression itself, that gives substance to the "purporting." And what could that be?

Let's focus on this strange sentence: "A name is an expression that purports to refer to an object." Clearly, the occurrence of "an object" here cannot be de re. For, if the name cannot do what it purports, then there is no object—to refer to, or to purport to refer to. So the way to read the sentence must be: "A name purports to refer-to-an-object." That is, what makes the difference between those objects that are names and those that are not is not that the former have a relation to another object and the latter don't, but rather that the former have a complex property that the latter lack. A bound variable ranging over objects does occur in the specification of what the complex property is, but trying to get to that variable would involve us in all the nightmares of quantifying in. A totally unrestricted range of values (including all objects, existent and nonexistent, possible and impossible) would probably let us reach that position from the outside, but it would also (as per the argument sketched earlier) turn every expression into a name; a less extended range would leave us with an irredeemably notional (not relational) context.[5] So it's hard to see how in practice this recipe can help us discriminate between names and nonnames.

[2] See Parsons's Nonexistent Objects, 18, where, however, the reduction is left hanging in the air: nonexistent objects are not identical with, but rather "correlated" with, sets of properties.

[3] Quine, Methods of Logic, 197.

[4] Though see note 6 below for a more appreciative view of Quine's subtle strategy here. Unfortunately, he is not always that subtle. On the same page referred to in the previous note, he continues by saying that a name (in his terminology, a singular term) "is powerless to guarantee that the alleged object be forthcoming; witness 'Cerberus.'" Of course, no sympathetic reader will want to put pressure on this language; they will all understand it to be "metaphorical." But there may be no way of cashing out the metaphor.


113

purports to refer to an object." Clearly, the occurrence of "an object" here cannot be de re . For, if the name cannot do what it purports, then there is no object—to refer to, or to purport to refer to. So the way to read the sentence must be: "A name purports to refer-to-an-object." That is, what makes the difference between those objects that are names and those that are not is not that the former have a relation to another object and the latter don't, but rather that the former have a complex property that the latter lack. A bound variable ranging over objects does occur in the specification of what the complex property is, but trying to get to that variable would involve us in all the nightmares of quantifying in. A totally unrestricted range of values (including all objects, existent and nonexistent, possible and impossible) would probably let us reach that position from the outside, but it would also (as per the argument sketched earlier) turn every expression into a name; a less extended range would leave us with an irredeemably notional (not relational) context.[5] So it's hard to see how in practice this recipe can help us discriminate between names and nonnames.
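
To fix ideas, the two readings can be set out schematically; the formalization, and the predicate names in it, are my own gloss, not anything in Quine's text. On the de re reading, the quantifier stands outside the purporting:

\[ \exists x\,\mathrm{PurportsToRefer}(N, x) \qquad \text{(de re: an object is there to be reached)} \]

On the reading we are pushed toward, "refer-to-an-object" is sealed inside a one-place predicate, with no variable left open:

\[ \mathrm{PurportsToReferToAnObject}(N) \qquad \text{(notional: a complex property, no object in sight)} \]

In the first formula the x would have to be reached from the outside, with all the attendant nightmares of quantifying in; in the second there is no x to reach, only a complex property, which is just the notional context again.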

There is, of course, an alternative.[6] You could say that some parts of speech are simply designated to fulfill certain grammatical roles—to be subjects of propositions, say—and that "purporting to refer to an object" is a colorful way of expressing that. But this alternative would force you to give up the primacy of objects for good, at least as far as accounting for names goes, and to embrace a variant of conceptual idealism: language, not objects, is where you would start articulating this portion of your logical space. And you don't want to do that. So you, in effect, do nothing—nothing substantive, that is. You simply (and this is the main point of my example) paper over your embarrassment by a very convenient ploy, a very resourceful word, or cluster of words. For it does sound

[5] For this terminology, see Quine's "Quantifiers and Propositional Attitudes."

[6] Spelled out in more detail in chapter 9 of Looser Ends . Note also that "purport" is an ideal word for playing on the ambiguity between the two construals (and hence for exploiting the plausibility of the second one while never espousing it). For "to purport" means both "to intend, to purpose" and "to have the often specious appearance of being, intending, or claiming (something implied or inferred)." That is, it conveys together (not by coincidence, of course—Hegel would have loved this) both the resonances of purposive, intelligent behavior and the suggestion of a phenomenon that simply tends to be read in a certain way, because of its own intrinsic properties. So the word allows one to get away with (in effect) utilizing structural considerations (which would be the idealist tack) while still giving the impression of relying only on "intended" objects . One must admire Quine's sensitivity to language in choosing such an exact word—if only to seal the problem the word dramatizes. Something similar (though not quite so forceful) is true of the word "claim." See note 10 below.


114

like trying, purporting, or intending to do something (in this case, refer to an object) is very closely related to doing it, maybe it's even doing it to an extent ; so if doing it is out of the question you might try the next best—attempting to do it. If you make your move quickly and confidently you might get no eyebrows, and no challenge to your sham realism, raised. On the other hand, if anybody were to stop and think about it, he might happen to convince himself that trying and purporting and attempting are very queer notions indeed.

Before we do some such stopping and thinking, let me throw a few more examples at you. What, then, will you say a statement is? Remember, you're a realist, so you want to start with objects. A nice way to go about it would be to characterize the notions of a state of affairs and of a relation of describing, and say that a statement is an expression that describes a state of affairs. But we know we can't do it that way, because there are false statements. Shall we say that a statement is an expression that attempts or purports to describe a state of affairs? Maybe that's what we will have to say eventually, but let's try some epicycles first.

Epicycle one: You say that there are two objects, T(rue) and F(alse), and a relation (of meaning?)[7] that expressions can have to them. The expressions that do are statements. But, whatever other merits this suggestion might have, here it gives no help. It's like explaining what a human being is by saying that it's either a man or a woman—and, of course, without having the faintest idea of what men and women are. You've now divided your task, but made it no more approachable.
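
Schematically (again, the notation and the Means predicate are my gloss, not the book's), the proposal amounts to

\[ \mathrm{Statement}(E) \;\equiv\; \mathrm{Means}(E, \mathrm{T}) \lor \mathrm{Means}(E, \mathrm{F}) \]

where T, F, and the Means relation are every bit as unexplained as Statement was. The definiens merely redistributes the mystery over three primitives instead of one.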

Epicycle two: You say that there are possible worlds and possible states of affairs, and statements describe the latter. "The sun rotates around the earth" is a statement because it describes not something that is the case (a state of affairs, period) but something that could be the case (something that is not a state of affairs, but could be one). That there are statements describing impossible states of affairs might give you pause, and you might contemplate admitting impossible worlds, but before you do that let me tell you why it would get you nowhere. What a world is is just as much of a problem as what a statement is, and the same basic conceptual options must be faced in addressing both problems. Is a world to be defined starting with language (the idealist option) or starting with objects (the realist one)?[8] And, if the latter, is it just ordinary,

[7] After a detour through "reference," this is how Max Black finally (and, in my view, correctly) decided to translate Frege's "Bedeutung" in Frege's Collected Papers .

[8] On this matter, too, see chapter 9 of my Looser Ends .


115

existent objects that we want to use or also their enigmatic unreal cohorts? If the former, we are stuck with the insoluble problem of identification through possible worlds—with deciding, say, what actual object Ajax-in-nonactual-world-w is identical to (an issue we need some way of addressing, since there are statements "about" Ajax). If the latter, we're back to the problems those enigmatic characters gave us before.

But that's not all. For what about statements with no truth-value—category mistakes like "The Pacific Ocean is thoughtful," or the "Colorless green ideas sleep furiously" of old?[9] They don't even satisfy the basic requirements for describing anything, here or in any other world. The Pacific Ocean is simply not the sort of thing that could be thoughtful or nonthoughtful, so wherever it exists it will make no sense either to attribute that property to it or to deny it. Shall we say that "The Pacific Ocean is thoughtful" is not a statement? We would certainly say that if we were in the business of evading the issue—of sketching an imaginary, artificial, "formal" situation in which the issue does not arise. But we decided to discount such cheap tricks. "The Pacific Ocean is thoughtful" is a statement: a meaningless, truth-valueless one to be sure, but a statement nonetheless. Just as "Have you stopped beating your wife?" is a question even when asked of a bachelor, or a devotee of nonviolence: the wrong question to ask of that person, to be sure, but a question nonetheless, and one that, if we were trying to understand what questions are, we couldn't just disregard on account of its "wrongness." So, if we are trying to understand what statements are, we can't just disregard "The Pacific Ocean is thoughtful" and the like. And, if these statements can't possibly describe anything, we will have to appreciate this fact and see what follows from it.

Or we might avoid any such appreciation, which is where the cop-out mentioned earlier becomes relevant again. A statement, we will say, "purports" (or "attempts," or "intends") to describe a state of affairs.[10] Even "The Pacific Ocean is thoughtful" tries to do that; too bad it can't, poor thing.

We're talking about monumental matters: worlds and truth and meaning and all that. You might think that that's the problem, that there is something intrinsically paradoxical about notions that are too large. So suppose we try a more modest notion—something indeed we've already

[9] Chomsky, Syntactic Structures , 15.

[10] An example among many: "Let us call a sentence that makes a definite factual claim a statement " (Skyrms, Choice and Chance , 2).


116

worried about in this book. Suppose you tell me what a typo is. And, before you start, let me tell you how the idealist would go about it, so you don't think I'm just giving you a hard time. The idealist would start with the notion of a text, and look for regular patterns in it, and find that some things look like exceptions to the patterns, and call some of these exceptions typos. Any such judgment, of course, would be inherently temporary and revisable, since there are indefinitely many overlapping, and even conflicting, patterns to be found in any text, so what looks like an exception might eventually turn out to prove the previous regularities exceptions, eventually be revealed as the basis of a whole new exciting reading of the text.[11] This particular avenue, however, is not open to you, since it would make the objectivity of any interpretation—and, specifically, of what counts as a typo and what doesn't, what is mistaken and what is not—conceptually dependent on factors originating with that interpretation (its coherence, persuasiveness, and even the excitement it might cause), not on its adequacy to any objectual standard.[12] So how else will you do it?

You will probably say that the typo is a typo because it doesn't match what the author "intended" ("purported," "tried," "attempted") to say. But it won't wash—not so baldly stated, at least. For, do you mean that the author literally had an intention to write one particular word rather than another—say, "talk" instead of "tall"—or maybe rather than no word at all—say, "talk" instead of "talt"? You might answer: not a conscious intention, of course, but still an intention, one that the author could become aware of if he wanted, or were appropriately questioned. We'll stop a second to point out the peculiarity of your agreeing with Freud all of a sudden—that shady character who, as we all know, based his wild theories and practices on no objective evidence whatsoever—and then we'll let your answer go, for the time being. Because there is something else to worry about.

I'm sure you've had your share of dealings with copy editors—those sacrificial offerings on the altar of "authorial intention" who are most

[11] The essentially temporary, revisable nature of any such judgment is argued for in my Kant's Copernican Revolution .

[12] This, of course, is precisely how claims of objectivity are assessed within ordinary interpretive practices. But then the popular trick of distinguishing the ontology or metaphysics of truth (or objectivity) from its epistemology or methodology is available to the realist. A number of redoubtable philosophical careers have staked their credibility on this trick, which lets one insulate one's wildest "metaphysical" dreams from the necessity of providing any responsible defense of them.


117

often liquidated with a blanket word of thanks in the preface, together with assorted companions and stepkids and friendly alligator pets. I'm sure they pointed out to you, more than once, that a certain word, perfectly spelled and grammatically inflected, was not, however, "what you meant." You didn't mean "implacable" here, since "implacable" is defined as "not capable of being appeased, significantly changed, or mitigated"; you actually meant "persistent," that is, "continuing or inclined to persist in a course." For "implacable" entails an agency attempting to placate, as well as unconcern or loathing for any such attempt, and you don't really mean that there is such an agency here, do you?

Well, now, suppose somebody asked you—before this illuminating discussion—whether you did mean to write "implacable." Maybe the person asking had never heard or used such a word, and wanted to make sure it was the right one. I believe your answer would have been, "Yes"; I believe you might even have considered giving the guy a little lecture concerning the unexpected resources of the language. Except that, after all, it was not the word you meant (intended, purported). So what exactly is going on here? Shall we get one step closer to Freud and recognize not just a preconscious intention but two conflicting intentions: a preconscious one to write "implacable" as well as one to write "persistent"—one that we would have to call unconscious now in the technical sense, since it's not something that you could have brought up to consciousness at will, but rather something that it took expert help and argument to make you aware of?

Come on, you might say at this point, we all know what it means to intend (purport, mean) something: our philosophy is not expected to forget (indeed, it is expected to presuppose) ordinary commonsensical understanding. And, if you do say that, I won't buy it. I will first give you a little lecture about Heidegger's "idle talk" and then point to the disingenuousness of this appeal to common sense. Commonsense philosophers, I will tell you, strain and abuse the "ordinary" understanding of words and phrases as much as anybody, and only resort to hiding behind common sense when there is something they cannot justify, or for whatever other reason have decided to leave alone. In this maneuver, incidentally, they look like any other philosophers: I don't know a single figure in the history of our discipline—be that Hegel or Hume—who hasn't at times made decisive references to the "man in the street."

So let's forget about common sense and bring the humble practice of writing back into focus. After all, that's what makes much of the substance


118

of our lives, isn't it? So consider this other small matter. You write a word here and then, a few lines later, you write the same word again, or a cognate word. If you were asked of each word, "Is this the word you meant?" you would answer, "Yes." We might even assume that in both cases, after considerable reflection, you had convinced yourself that that was not just a word that expressed what you meant, but the word that expressed it best. Now, however, your copy editor points out the "unintended repetition," and immediately you're bothered by it. You didn't want a repetition there (or anywhere): echoes you find annoying; you like to think of yourself as somewhat stylish. So, did you or did you not intend (purport, mean) to write those two words?

You might answer, "Of course I did, but I did not intend the repetition." And this will get you into a fine mess—the problem, that is, of how far one intends the consequences of what one admittedly intentionally did. Did the doctor intend to kill or maim his patient by proceeding as he did? Maybe, since his insurance had to pay. Did the guy who stabbed Monica Seles intend to do something evil to her, or just relieve Steffi Graf from harsh competition in the most efficient possible way? Who knows? The fact is that he's out free now. Does a flag burner intend to offend people's patriotic sensibility by making a figurative statement about the politics of his country? Although the issue has been resolved in the courts, we are still debating it. As far as you are concerned, anyway, your answer has had the brilliant effect of explaining a mystery by reference to something just as mysterious—or more.

In conclusion, the only credible picture of authorial intention you might honestly conjure, at the present stage of conceptual elaboration (or lack thereof), is that when you sit down to write something you have a general and sometimes quite vague idea of how your argument is supposed to go and, as for specific words, an equally general intention to do the best you can. So, if the copy editor convinces you that a given turn of phrase is "better" than the one you used, you will quickly recognize it as "what you meant" and incorporate this unacknowledged coauthor's work. A good example of how exploitation proceeds in the liberal, progressive world of intellectuals. But, more relevantly to our discussion now, a strong piece of evidence that you don't have a workable notion of what a typo is. You will say that that's not the word you meant (purported, tried) to write, of course—but what you say will not sustain a tiny bit of scrutiny.

I could go on, but by now you can see the logic of my examples. A number of concepts that present no special problem to the idealist behave


119

like anomalies in the realist universe of discourse. Undeniably, the reverse is also the case: think of how problematic the notion of an object is for an idealist. But idealists may have been more forthcoming in bringing up such complications; in my reading, for example, this is what Kant does in the Antinomy chapter of the first Critique .[13] What realists do with their anomalies, on the other hand, often amounts to sweeping them under the rug, and for this purpose the family of concepts centered around trying and purporting provides very expedient devices. But it's a troubled family: one that can't help itself, let alone rescue others. We have seen some of the specific troubles the family brings in its wake; it may be time to phrase the issue in more general terms.[14]

Suppose you try to draw a triangle, but do not succeed. The phone rings, your buzzer goes off, your pencil breaks, or whatever. The fact of the matter is that you end up with no triangle. Can you still make sense of your attempt? I'd say yes: you have drawn triangles before, know exactly what that activity amounts to, and can describe your current interrupted activity as an imperfect approximation to what in other circumstances was carried out to completion. But now suppose you try to draw a straight line; what does that amount to? You can't draw a straight line, of course. Nobody can. A straight line is infinite, so it's not the sort of thing that could be drawn. You might describe what you do as drawing a segment of a line, maybe drawing a very long one—the longest you have ever drawn. All this is fine: you know what it is to do these things, which means you also know what it is to try to do them. But what is it to try to draw a straight line ? Is it to draw a segment while having a straight line in mind, and focusing and concentrating on it—really trying , that is, trying real hard? But what do you have in mind when you have in mind a straight line? Not the real thing, for sure. Maybe a name, a symbol for it? So trying to draw a straight line is drawing a segment while at the same time clenching one's teeth and having the phrase "straight line" clearly before one's mind?

You will notice that we're traveling in familiar Wittgensteinian territory here, so I won't press the point any further. The point, simply put, is that it makes sense to talk about trying to do x when one can think of such an attempt as possibly (indeed, as occasionally) succeeding. But "trying" and the like are very plastic verbs: they easily and unnoticeably slip beyond the realm of possibility, so they can be profitably used to

[13] This point is argued in chapter 6 of my Kant's Copernican Revolution .

[14] A similar attack is made on a specific case of attempting the impossible in chapter 1 of my Logic and Other Nonsense .


120

cover up all sorts of conceptual inadequacies, pay off all sorts of intellectual mortgages, realize all sorts of hopeless philosophical projects. As one realizes wishes in a dream, by fiat, without wondering whether it adds up, without asking too many questions, putting too much pressure. For, after all, we do know what we mean, don't we?


121

Chapter Eleven—
Deadly Clear

It was one of those department meetings when you wish you had taken up mountain climbing, or crochet. A bunch of mediocre students to evaluate, to pass mediocre judgments on, on a mediocre afternoon. An agony deciding whether to extend their agony, to get them one step closer to joining a mediocre profession—where they probably belonged anyway.[1] One of these unfortunate young people, I remember, believed himself to be smart, and had written a "stimulating" and "controversial" piece of junk, but one that, alas, was not "carefully worked out." A "lively discussion" ensued as to what this guy's fate should be, where a lot of "fine points" about the paper, and its author, and his character and methodology, were carefully belabored: a good example of what he should have done—if only he could listen in. There were two schools of thought (there always are): one praising the guy's "originality," the other stigmatizing his lack of deep analysis. And it went on and on. Until this one colleague, notorious for having trouble publishing anything, found a way of clashing the two points of view, of pointing at their incompatibility. He announced, "If he had been clearer about it, he would have realized he had nothing to say." He announced it softly, with a sad look on his face, and from then on the meeting was lost on me. There were things

[1] Lest those who care more about gossip than about philosophy draw any conclusion here concerning my judgment of UCI students, let me hasten to add that not all meetings on them are like this. And, of course, I am not telling which one I am talking about.


122

I needed to work out then, things I needed to get clear about. To dissolve them forever, perhaps.

There is Kant, of course (there always is): "[M]any a book would have been much clearer if it had not made such an effort to be clear ."[2] But this statement brings out two kinds of clarity, and it's important to get clear (ha!) as to how both are relevant here. There is intuitive clarity (Kant also calls it "aesthetic"): the clarity of examples, the one being disputed. Examples might confuse us, make us miss the forest for the trees, bury us with colorful, useless detail, disturb our concentrated effort to penetrate high-powered conceptual structures. Better to do without them, without the delusive familiarity they intimate; to have our strenuous work face us mercilessly, our agenda clearly set (ha! ha!) by a few elongated tongue twisters. It will be hard to say "transcendental unity of apperception," let alone use it properly, but at least the hardship will show; there will be no surprises later, when we least need them.

And there is discursive clarity (also called "logical"), which I'm sure is what my colleague had in mind. I'm sure what he meant was something like, "The guy should define his basic terms, argue one point at a time, display all relations of logical dependence, and then he would realize that his attempted construction is an impossibility, that there is no such thing—in the philosophical sense: in the sense that there can't be." In the process of thus deceiving himself and others (not everybody all the time, fortunately), the student might well have been (mis)using a lot of intuitive clarity, trying to get friendly with his readers by throwing a lot of images at them, and thus avoiding the clarity that really matters, the ascetic exercise that would have revealed all his presumptuous emptiness.

Right. Except that the statement that intrigued me seemed more general, more sweeping than that. It may have been the tone with which it was uttered—that sad, dejected tone—but what it suggested to me is that the good clarity, the one missing from the student's paper, the one Kant will stick to, inevitably has destructive consequences, is inevitably deadly. Could there be something to this suggestion? Which, incidentally, resonates with other similarly disturbing suggestions. Once they asked Niels Bohr, "What is complementary to truth?" His answer was, "Clarity." And his biographer calls this "a response that tells more about him than many a lengthy essay."[3] Does it also, perhaps, tell about clarity, and about truth?

There was this other colleague, way back, a highly respected one—by

[2] Critique of Pure Reason , 13; italics in the original.

[3] Pais, Niels Bohr's Times , 511.


123

myself, among many others. I sat in his course, my first year teaching, and found him wonderfully clear, and learned a lot from him about doing things in a classroom. And I told him, on two separate occasions. Once his reaction was, "Ermanno, you were born clear." The other time it was more elaborate. "There is a disadvantage," he said, "to being too clear. You want the students to realize that there is still something above their heads, something they do not understand. You don't want the course to close the issue for them." Now these two remarks were made years apart; the first one is highly complimentary, the second one provides food for reflection. But what if I bring them together, make my reflection relevant to the compliment? Doesn't the latter begin to appear left-handed (not intentionally, of course)? Doesn't something dirty and dangerous surface, under the cordial appearance of infectious, inborn clarity?

The Enlightenment, of course. That's where it all comes from. Powerful light rays chasing darkness away from wet, musty corners, ruthlessly exposing the cobwebs that inhabit them; pure, clean water flushing out the dirt, leaving everything spotless, clearing the field for the impeccable, unstoppable progress of reason. But cobwebs are delicate, graceful structures, and what if it takes darkness and mold for them to be? Isn't the world better, richer that way?

To some extent, it's a story we heard from Hegel.[4] Critical reason eventually falls into a perverse cycle of destruction, into systematic slaughter, into Terror. Once unleashed, there will be no controlling it—until a very tragic end. Or, as others have added, until its pretense shows: until its call for liberation dialectically turns into a more systematic, sanitized, efficient servitude.[5] But Hegel's Prussian armies march too fast for me; his eagle's view I find overwhelming. I am fascinated by details, by what happens when indeed the light or the water meet the cobwebs, by how all of them feel. Maybe it's because I am fascinated by guilt—the cobwebs', or the light's—and guilt gets lost when the perspective is too vast. Guilt is something you feel in private, away from it all. "In the abstract conception of universal wrong, all concrete responsibility vanishes."[6]

How come nobody ever wants to work with me? I have been in this business almost twenty years now, and I have directed no thesis. It's not like I'm a bad teacher: I won awards and stuff and, of course, "I was born clear." It's not like I'm lazy: I work like a madman. And it's not like I

[4] See Phenomenology of Spirit , 359ff.

[5] See Horkheimer and Adorno, Dialectic of Enlightenment .

[6] Adorno, Minima Moralia , 25.


124

don't have fruitful, exciting intellectual exchanges with students: reading groups have been meeting at our house since I can remember, continuing well into the small hours of the morning, debating Deleuze or Searle or Judith Butler or whatever and drinking red wine. Which means that some of those students did think they would want to work with me, but eventually changed their minds. Sometimes they even left philosophy for good: two of the best are now an engineer and a poet. Could it be that precisely clarity is a problem? That, once given a clear picture of the situation, one will have a hard time figuring out what else is required, how to justify one's own contribution, one's own presence? You might not have to kill your children, or eat them: you might simply blind them, and it will have enough of an incapacitating effect.

"Only the accustomed context allows what is meant to come through clearly," Wittgenstein says.[7] Suppose we take this unsympathetically. So you come upon some unfamiliar area of your thoughts—unfamiliar, as it turns out, just because of the strange angle from which you happen to be looking, or the peculiar light that happens to be thrown on things. And you find the area suggestive. You don't quite know suggestive of what yet: it's one of those hunches that rapidly cross one's mind and most often, just as rapidly, vanish without leaving an address. You would want to sit in that peculiar light for a while, keep on looking from that strange angle, spend precious time spinning a few cobwebs of your own, pursuing their silky threads at leisure, wherever they might lead. But suddenly it's over: the earth's axis shifts by a minute fraction of a degree and you find yourself facing quite ordinary surroundings, old, worn-out faces. You have in fact never left home.

It takes a while to leave home. It takes patience and care. Respect for those delicate, graceful structures that will be blown away by the faintest breeze, dried up by the most timid sun, banished by the quickest glance. It takes looking away from them, holding your breath to give them a chance to breathe, to grow, to become. Obtrusive, loud familiarity will kill them in the bud, as it's killed many a budding passion: the vulgar recognition of the identical, of what has always been, of what we've always known, of the usual, obvious, tedious motives. You can do this kind of stifling, smothering job on others, invading their elusive privacy with some bottom-line, crass remark. And you can do it with yourself: mutilate your own hopes, drown your own omens. Which is one way to read the intimation I had of a connectedness between clarity and destruction—an intimation I decided to give a chance, to spin some time around.

[7] On Certainty , 31e.


125


The plan this reading sets is clear enough (not again!). To be creative, we need to practice obscurity: too much light is bad for that piling up of cells which is required before anything can face the light. So we'll shut windows and doors, and not dust the floor either, since cleanliness is an enemy also. We'll sit in the dirt, in the mud, in the swamp; adjust to the few leftover photons; keep our tongue still; and wait. And maybe we'll get lucky. Maybe that way we'll forget we're home—which is how you leave it, how you make yourself a stranger to it, how you find yourself in another place, really other, not just a few thousand miles away and still horribly familiar. And, if we do forget, we'll cherish that, we'll cultivate our hard-won distance, refuse to relate to all that is ancient and stupid, spin away with a vengeance, until the web is so thick you would have to cut through it with a knife, until you can no longer deny that it exists, that our hunch has happened.

Maybe. Though it goes against everything I ever believed in, hoped for, worked at. The ideal of a life without shame, without under-the-table arrangements, sideways looks, knowing looks. Though it seems to favor everything I always hated: obscurantism, complicity, and cobwebs. Who knows?—at the end of this road I might find that the Mafia is not so bad after all.

Or I might not. For what about this now. Suppose you got used to your limited harem of photons, to sitting around unkempt wares, to silent spinning of ethereal thread. Is this different from any other kind of getting used to? Is it not familiarity in the familiar sense—familiarity with darkness , and unkemptness, and spiderwork? Have you not just gotten absolutely clear about what it takes to live and practice and be efficient in that peculiar (indeed, by now no longer peculiar) environment?

The problem is that these things won't stay put, will turn into other things, indeed into their opposites—which once more reminds one of Hegel. It's hard to keep them straight. So let me try again. I guess what I'm saying is that there may be cases, many cases, where the sort of clarity Kant, and the Enlightenment, had in mind, the sort I often depicted myself going for—neat structures, explicit definitions and premises, careful arguments—is very unfamiliar, and hence if you throw it at people, or at yourself, this is just as bewildering a gesture as in some other cases the shutting of the light could be. I guess I'm saying that some hunches can come by being invaded by light, and can be pursued by pursuing that light, by trying to focus on it in spite of its blinding force, indeed by using


126

it to blind you, so that, unaware of the present, you can tell the future, the possible future, the future that suddenly looks possible to you. Because you're blind to the present.

It's from this daze that spring the ideals I worried about earlier: of justice, truth, a decent life. Of clear rules for a fair game. And sure all this can be destructive, but I don't think I sympathize with its victims too much—not at this stage, anyway. The victims belong to a well-established, luxuriant undergrowth; the shade down there is no longer making anybody uncomfortable. If anything, it's all these straight lines and sharp angles and clear-cut figures unexpectedly thrown at them that give them trouble—and that's OK, that's the sort of trouble we need, if we don't want to get stuck with the same luxuriant undergrowth forever. Even the forest gets boring after a while, they tell me. More important, even the forest gets complacent, and arrogant, and vicious.

But you must be careful. There's probably no amount of light you can't get used to, no lofty ideals that can't become insensitive, inanimate routine. When that happens, it's time to go back under the trees again, or into that swampy, dusky room, looking for new ways of giving oneself trouble. Or even for old ones, as long as you've gotten rusty enough at them. It's time to do so, at least, if you still have it in you. I was watching this movie last night—a great one, as it turns out. This Boy's Life , it was called. At some point this boy's mother, who's been skipping one town after another and now finds herself locked in a hellish marriage, says to him, "I don't have another get-up-and-go left in me. . . . I'm gonna make this marriage work." So, when the time comes you don't have another get-up-and-go left in you, everything will forever be perfectly, deadly clear.

But now it seems that clarity is fragmenting again for me, going two different ways. There is clarity as a matter of style: the sharp angles and explicit definitions and all that. And there is clarity as a behavioral matter: how clear you are, or are not, about how to read your surroundings, and move effectively, automatically in them, and do the expected thing, and receive the expected response. Just as there is an obscurity made of fuzziness, ambiguity, and doublespeak, and one made of embarrassment, unpreparedness, ineptitude. Where the two (meaning both the two kinds of clarity and the two kinds of obscurity) are independent of one another: you can be totally slick at political nonsense, and totally lost within the rarefied elegance of a mathematical proof.

Which creates various problems. For one thing, why are these two different qualities given a single name? Is it just, again, an "ambiguity"?


127

Or can I tell a story connecting the two meanings—one that makes it look sensible that they would belong to the same word? Maybe I can. For consider: a roundabout, wandering movement is precisely what you are forced into when you don't quite know your way. When you do, on the other hand, you take the shortest route, use the fewest words and moves, the right words and moves.

Except that right angles are not always the right ones, and Euclidean straight lines are not always the shortest distances, which is where the confusion starts. How many times did I speak in what I thought was a straight manner, and in some sense it was, but ended up stalling the process, taking it for a ride into oblivion, whereas the most convoluted, intricate, elusive jargon got the job done in no time at all! Each space has its own geodesics, its own notion of straightness, its own familiar, effective ways, its own sense of what it is to be clear. So this, now, could be part of what Kant meant—even the most important part: the stale, borrowed pun hid his usual witty cleverness, which hid his usual startling revelations. Shifting paradigms will also involve shifting senses of clarity, painstakingly acquiring the ability to make some new moves without pain, to travel some new paths with confidence, however oblique the paths might look from the old point of view. Many a book would have been much clearer for some if it had not made such an effort to be clear for others.

A lot of what's involved here, of course, will have the status of a program—with Kant, at least, it sure is that way. There will be times when two existing, fully realized "spaces" confront each other, but also times when the confrontation is between reality and the idea of such a space, or even between two such ideas. Then there will also be a confrontation between an existing, realized clarity and something that is not at all clear yet, but tries to sell itself as clear—or between two such hopeful prospects. And this confrontation will consist, to a large extent, of a war over who has the right to use the word "clear," as is always the case with momentous words like that (who, or what, has the right to be called "human," or "scientific," or "logic," or "art"?). And I guess I've been on a specific side in this war, pushing a certain way, sharing a certain agenda. Though, maybe, feeling the agenda more, feeling closer to it, when it was still only an agenda. Being more sympathetic, say, to Frege's project of making everything absolutely clear, with all its awkward complications and contradictions, all his fuzziness about being saturated or grasping thoughts or what in the world identity relates, all his "the concept horse is not a concept" stuff, than to the idiotic implementation of a


128

well-entrenched form of life that has followed from it. More to his uncertain notion of clarity, that is, than to its terrifyingly lucid fallout.

Clarity, then, might be very obscure (and it's probably at its best when it is). In other words, a plan to make us all obey rigorous, fully formulated, public laws might itself be formulated at a time when nobody really knows how it's supposed to work, or that it will work—when indeed nobody would even recognize its (not) having worked if it did(n't). Which brings me back to my concern with details, with passing moments, with those ephemeral experiences that will go unnoticed—worse, will get raped—when our vista becomes too powerful, our eschatology too self-assured. That the rationality of the Enlightenment might, in fact probably will, inevitably, inexorably bring about Orwellian or Huxleyan nightmares does nothing to infect the purity of its dream when it was first conceived. Moreover, it leaves its purity intact when it is still first conceived, for there are many places where we still need it to do its work, where there is still plenty of time before the tragedy turns into a farce, before the Washingtons and the Jeffersons turn into the Bushes and the Clintons. Or the Russells and the Carnaps turn into those others whose names are best consigned to silence.

And it also brings me back to guilt, and students. For, now, which clarity gives them more trouble, stunts them, makes them want to leave? The clarity of "rational," Enlightenment structures, or the one of comfortable, expert moves? Is it mathematical rigor or professional deftness that they are afraid of? The species or the genus? Better, and more honestly: the genus or what would like to present itself, what one would like to present, as a species of it?

Earlier I was afraid it would be the (alleged) species. You want to have leisure to work out your nonsense, I thought, to give it an opportunity to develop into something better than nonsense, and displaying its nonsensical character prematurely will deny it that option, burst the bubble, reveal it to be nothing but gas. So, I thought, I should not rush giving a limpid picture of the situation; I should let the nonsense play itself out a bit longer, until perhaps it reduces limpidity to nonsense, cogency to silliness. Diagramming and defining and schematizing may well scare shy jewels away. But now it looks like even that is too sanctimonious, too respectable: it looks like what's involved here may be an even more lurid sin.

The sin of being the one who does the playing—whatever the game might be. Of being the one who has the slick moves whether the subject is hard or soft, analytic or continental, history or theory, logic or nonsense.


129

Which makes you want to leave and find another place altogether, another lair, far from here. A place where you can practice what moves are still left, or you can come up with, without worrying constantly about preemptive strikes. If, that is, you're the sort of person who misses having moves of his own to make. There are, of course, all those other people who are happy to follow the leader along wide, well-kept highways. Those mediocre people I now realize I shouldn't be so fussy about.

Bartleby the scrivener will not speak, will not cooperate, will not comply.[8] To every such request, he will respond with his opaque, maddening formula, "I would prefer not to." He will never reveal his secret, never come out in the open; he will rather have death in prison than any of that. Rather go out as a heap, a bundle of dirty clothes thrown on the stones of a prison yard, than accept the compromising gesture of establishing contact.

But there will be plenty the good-natured Master in Chancery will do as a result—against his own intentions, his own better judgment. He will be bewildered, and incensed, and flabbergasted repeatedly, and think of all the reasonable things a reasonable man like him is supposed to do and, most often, do nothing of the sort; he will find himself behaving uncharacteristically, with uncharacteristic generosity. Or concern, or care. He will use all his wits to fill the emptiness at the core of the scrivener's being, all his resources to make him into an interlocutor, a collaborator, maybe even a friend—because the kinds of things the Master will do for Bartleby only a friend does.

I don't know that this is how God created the world from nothing. Whether he did it by receding, making himself into nothing, having nothing desire fullness, the fullness that formerly invaded it and had now painfully retreated. Making nothing's desire work something out, turn nothing into something. But sometimes I feel, I fear , that this is indeed the formula of creation: you create by making room, giving way, letting be. By your absence.

The genus clarity is skilled articulation. Skilled, practiced articulation takes time and effort: if you spend your time watching someone else's display, or someone else's effort, no time is left for your own. Nothing is learned by watching spectator sports, except perhaps how to be a spectator. So it's not so much darkness and mustiness that I should teach myself, if I want to be able to teach; it's rather reserve, and silence. Say

[8] Melville, "Bartleby."


130

your thing quietly, put it down in a corner, and let others decide what to do with it, or even whether to do anything with it at all. Even stereotyped behavior wouldn't be such a bad idea. Don't we learn a foreign language more effectively by imitating a stereotype native speaker? A stereotype German officer, say?

A thing is a stereotype—or, maybe, that's the stereotype of a thing. A thing is supposed to be determined once and for all: definite, settled. A subject, on the other hand, is supposed to be open and surprising and challenging. So am I saying that a truly inspirational teacher is one who accepts total reification, refuses to be provocative, and exciting, and enthusiastic? Sacrifices his subjectivity?

Yes, that is what I'm saying. And I think it must be said now, in this environment of sensorial overstimulation, of colorful display, of entertainment around the clock. An environment where fast, articulate speakers like myself adapt easily, and can do the most damage. I must sing the praise of awkwardness, of tentative, shameful moves. And then I must stop, before it gets too damn clear.


131

Bibliography

Adorno, Theodor. Minima Moralia: Reflections from Damaged Life . Translated by E. Jephcott. London: Verso, 1978.

Aquila, Richard. "Intentional Objects and Kantian Appearances." Philosophical Topics 12 (1981): 9–37.

Bencivenga, Ermanno. "Descartes, Dreaming, and Professor Wilson." Journal of the History of Philosophy 21 (1983): 75–85.

———. The Discipline of Subjectivity: An Essay on Montaigne . Princeton: Princeton University Press, 1990.

———. Kant's Copernican Revolution . New York: Oxford University Press, 1987.

———. "Knowledge as a Relation and Knowledge as an Experience in the Critique of Pure Reason." Canadian Journal of Philosophy 15 (1985): 593–615.

———. Logic and Other Nonsense: The Case of Anselm and His God . Princeton: Princeton University Press, 1993.

———. Looser Ends: The Practice of Philosophy . Minneapolis: University of Minnesota Press, 1989.

———. Philosophy in Play: Three Dialogues . Indianapolis: Hackett, 1994.

Bishop, Matt. "How to Use USENET Effectively." USENET , October 19, 1986.

Brittan, Gordon. Review of Kant's Copernican Revolution. Philosophy and Phenomenological Research 52 (1992): 740–742.

Caton, Hiram. The Origin of Subjectivity: An Essay on Descartes . New Haven: Yale University Press, 1973.

Chomsky, Noam. Syntactic Structures . The Hague: Mouton, 1957.

Davidson, Donald. "On the Very Idea of a Conceptual Scheme." Proceedings and Addresses of the American Philosophical Association 47 (1974): 5–20.

Descartes, René. Meditations on First Philosophy . In The Philosophical Writings of Descartes , translated by J. Cottingham, R. Stoothoff, and D. Murdoch, 2:3–62. Cambridge: Cambridge University Press, 1984.


132


Dummett, Michael. Frege: Philosophy of Language . 2nd edition. Cambridge: Harvard University Press, 1981.

Frege, Gottlob. Collected Papers on Mathematics, Logic, and Philosophy . Edited by B. McGuinness. Oxford: Basil Blackwell, 1984.

Freud, Sigmund. Beyond the Pleasure Principle . In The Standard Edition of the Complete Psychological Works of Sigmund Freud , edited by J. Strachey, 18:7–64. London: Hogarth Press, 1953–1974.

———. Totem and Taboo . In The Standard Edition of the Complete Psychological Works of Sigmund Freud , edited by J. Strachey, 13:xiii–161. London: Hogarth Press, 1953–1974.

Groening, Matt. Childhood Is Hell . New York: Random House, 1988.

Guyer, Paul. "Kant's Intentions in the Refutation of Idealism." Philosophical Review 92 (1983): 329–383.

Hegel, Georg. Phenomenology of Spirit . Translated by A. Miller. Oxford: Oxford University Press, 1977.

Heidegger, Martin. Being and Time . Translated by J. Macquarrie and E. Robinson. New York: Harper & Row, 1962.

Horkheimer, Max, and Theodor Adorno. Dialectic of Enlightenment . Translated by J. Cumming. New York: Continuum, 1991.

Kant, Immanuel. Critique of Practical Reason . Translated by L. Beck. Indianapolis: Bobbs-Merrill, 1956.

———. Critique of Pure Reason . Translated by N. Kemp Smith. New York: St. Martin's Press, 1965.

———. Groundwork of the Metaphysic of Morals . Edited and translated by H. Paton. New York: Harper & Row, 1964.

———. Prolegomena to Any Future Metaphysics . Edited by L. Beck. Indianapolis: Bobbs-Merrill, 1950.

———. Religion within the Limits of Reason Alone . Translated by T. Green and H. Hudson. La Salle: Open Court, 1960.

Kiesler, Sara. "Thinking Ahead: The Hidden Messages in Computer Networks." Harvard Business Review 64 (1986): 46–59.

Kiesler, Sara, and Lee Sproull. "Response Effects in the Electronic Survey." Public Opinion Quarterly 50 (1986): 402–413.

Kuhn, Thomas. The Structure of Scientific Revolutions . 2nd edition. Chicago: University of Chicago Press, 1970.

LaValley, Albert, ed. Invasion of the Body Snatchers . New Brunswick: Rutgers University Press, 1989.

Lyotard, Jean-François. The Postmodern Condition: A Report on Knowledge . Translated by G. Bennington and B. Massumi. Minneapolis: University of Minnesota Press, 1984.

Marasco, Robert. Child's Play . New York: Random House, 1970.

McGuire, Timothy, Sara Kiesler, and Jane Siegel. "Group and Computer-Mediated Discussion Effects in Risk Decision Making." Journal of Personality and Social Psychology 52 (1987): 917–930.


133

Melville, Herman. "Bartleby." In Billy Budd, Sailor and Other Stories , 95–130. New York: Bantam Books, 1981.

Nietzsche, Friedrich. The Gay Science . Translated by W. Kaufmann. New York: Random House, 1974.

Orwell, George. 1984 . New York: New American Library, 1981.

Pais, Abraham. Niels Bohr's Times, In Physics, Philosophy, and Polity . Oxford: Clarendon Press, 1991.

Parsons, Terence. Nonexistent Objects . New Haven: Yale University Press, 1980.

Pippin, Robert. Hegel's Idealism: The Satisfactions of Self-Consciousness . Cambridge: Cambridge University Press, 1989.

Quine, Willard. Methods of Logic . Revised edition. New York: Holt, Rinehart and Winston, 1961.

———. "Quantifiers and Propositional Attitudes." Journal of Philosophy 53 (1956): 177–187.

Rorty, Richard. Contingency, Irony, and Solidarity . Cambridge: Cambridge University Press, 1989.

———. Essays on Heidegger and Others . Cambridge: Cambridge University Press, 1991.

———. Objectivity, Relativism, and Truth . Cambridge: Cambridge University Press, 1991.

Skyrms, Brian. Choice and Chance: An Introduction to Inductive Logic . 3rd edition. Belmont: Wadsworth, 1986.

Sproull, Lee, and Sara Kiesler. "Reducing Social Context Cues: Electronic Mail in Organizational Communication." Management Science 32 (1986): 1492–1512.

Walker, Ralph. Kant . London: Routledge & Kegan Paul, 1978.

———. Review of Kant's Copernican Revolution. Philosophical Review 99 (1990): 439–442.

Wittgenstein, Ludwig. On Certainty . Translated by D. Paul and G. Anscombe. New York: Harper & Row, 1972.

———. Zettel . Translated by G. Anscombe. Berkeley: University of California Press, 1970.

Wood, Allen. Hegel's Ethical Thought . Cambridge: Cambridge University Press, 1990.


135

Index

A

action, 33 -46

Adorno, T., 123 n

Alcibiades, 61

analytic philosophy, 47 -48

Anselm, ix

anxiety, 27 -28, 55 -56

appearances, 89 -99

Aquila, R., 89 n

Aristotle, 57 , 94 , 95

ascetic priest, 56 -57

B

Béjart, M., 102

Bishop, M., 20 n

Black, M., 114 n

Bohr, N., 122

Brittan, G., 90 , 92 , 93 , 94 , 96 , 99

bulletin boards, 11 -32

Butler, J., 124

C

Carnap, R., 105

Caton, H., 86 n

Catullus, 62

causality, 34 -37, 96 , 97 , 104

Cervantes, M., 1

choice, 33 -46

Chomsky, N., 115 n

clarity, 121 -130

computer networks, 11 -32

conceptual independence, 89 -99

consciousness, 20 , 31

courage, 56 , 60 , 63

cruelty, 62 -66

D

Davidson, D., 58 , 82 n

Deleuze, G., 124

de Man, P., 55

Derrida, J., 48 , 75 , 87

Descartes, R., 67 -70

dreams, 67 -71

Dummett, M., 9 n

E

electronic mail, 11 -32

Empedocles, 107

Enlightenment, 123 , 125 , 128

ethnocentrism, 74 -76

existence, 91 -93, 103 -108

F

familiarity, 124 -130

Fichte, J., 53

fragmentation, 71 -74

freedom, 33 -46

Frege, G., 114 n, 127

Freud, S., 6 , 73 n, 116 , 117

G

Gdaniec, K., 102 -103, 107

Glouberman, M., 80 n

Groening, M., 9

Guyer, P., 84 n, 86 n

H

Habermas, J., 57

Hegel, G., ix , 25 , 27 , 30 , 48 , 60 , 61 , 62 , 90 n, 94 -95, 98 , 99 , 113 n, 117 , 123 , 125

Heidegger, M., ix , 48 , 56 , 86 , 88 , 117

Horkheimer, M., 123 n


136

Hume, D., 3 , 98 , 99 , 117

Husserl, E., 48

Huxley, A., 128

I

idealism, 78 -88, 110 -120

identity, 93

incommensurability, 79 -80, 82 -83

intention, 39 -40, 84 -85

intentional objects, 89 -99, 104

irony, 64 -65, 71 -74

K

Kant, I., ix , 1 , 33 -53, 57 , 62 , 68 -71, 72 , 76 , 78 -99, 104 -106, 119 , 122 , 125 , 127

Kemp Smith, N., 84 n

Kierkegaard, S., ix , 60

Kiesler, S., 11 n, 12 n, 13 n

knowledge, 92 -93

Kuhn, T., 79 -82

L

language, 102 -104

LaValley, A., 21 n

Leone, S., 2

Lewis, D., 106

liberals, 62 -65

Locke, J., 98

logic, 24 -25, 94 -95

Lyotard, J., 76

M

Marasco, R., 54

McGuire, T., 11 n

Melville, H., 129 n

mental, 12 -13

Mill, J., 58

modernism, 72 -77

Molière, 1

Montaigne, M., 12

morality, 33 -46

N

Nabokov, V., 61 , 62 , 63 , 66

names, 110 -113

nature, 80 -81

Nietzsche, F., ix , 5 n, 12 n, 48 , 57 , 103

O

Orwell, G., 54 , 128

overdetermination, 35

P

Pais, A., 122 n

Paracelsus, 48

Parsons, T., 112 n

peekaboo, 6 -9

Pippin, R., 98 n

Plato, 3 , 62 , 84

play, 4 -6, 17 , 27

pleasure, 4 , 6 -7

politics, 107

possibility, 40 -46

postmodernism, 71 -77

privacy, 13 , 15 , 16 , 19 n, 29 -30

Pyrrho, 75

Q

Quine, W., 25 , 112 n, 113 n

R

Rabelais, F., 58

rationality, 36 -46

realism, 14 , 78 -88, 92 , 93 -94, 110 -120

rewriting, 15n, 34 -35, 80 , 96 -97, 104 -109

Rorty, R., 54 -66, 71 , 74 , 75

S

Sartre, J., 3 , 55

Savignano, L., 102 -103

Scarry, E., 64

Schelling, F., 48

Searle, J., 124

self-indulgence, 2 -4

Shakespeare, W., 62

Sherman, W., 48

Siegel, J., 11 n

Skyrms, B., 115 n

Socrates, 87

Sorel, J., 61

Spinoza, B., 3

Sproull, L., 11 n, 12 n, 13 n

statements, 114 -115

subjectivity, 12 -13

surprise examination paradox, 69 , 71

T

teaching, 129 -130

transcendental arguments, 52 , 53

transcendental philosophy, 78 -88

trying, 110 -120

typos, 100 -102, 104 , 108 , 109 , 116 -118

U

Unger, R., 59 -60

utopia, 57 -58

W

Walker, R., 47 -53

whim, 2

Wittgenstein, L., 48 , 110 , 119 , 124

Wood, A., 98 n


138

Designer: U.C. Press Staff

Compositor: Wilsted & Taylor

Text: 10/13 Sabon

Display: Sabon

Printer: Thomson-Shore, Inc.

Binder: Thomson-Shore, Inc.

