Preferred Citation: Caws, Peter. Yorick's World: Science and the Knowing Subject. Berkeley: University of California Press, 1993.


Yorick's World

Science and the Knowing Subject

Peter Caws

Berkeley · Los Angeles · Oxford
© 1993 The Regents of the University of California

For Nancy and Elisabeth



Yorick appears in the title of this book because of his head—or more exactly, his skull. He stands, however, more for the materiality of humans than for their mortality. The point is that he had a world, once, and he had it by virtue of what was in his skull. Hamlet was no neurologist but he got the materiality right: "Why may not imagination trace the noble dust of Alexander, till he find it stopping a bunghole?" I have more to say about Yorick later on (and in chapter 26). For the moment he serves truth in advertising: the reader may know from the start that in my view if I have a world, and if I have science—which is a second-order aspect of that world—it is thanks to my individual embodiment as part of a material universe, a part that enjoys the status of subject in relation to its world as object.

Science is not in the material universe except by way of the embodiment of the knowing subject. Science is the subject's way of having the structure of its world—the theoretical part of that world—match what it takes to be the structure of the universe. ("Match" covers a multitude of possibilities; it is not necessarily an exact function.) The reality of the universe is hypothetical, but that obviously does not mean that the hypothesized universe is to be regarded as less than real. These elliptical remarks will, I hope, be illuminated by what follows, but my particular brand of materialism is developed in an earlier work, Structuralism: The Art of the Intelligible (1988), especially chapter 12, and the interested reader may pursue it further there.

This book assembles in one place most of the more or less finished products of that part of my professional activity over the last three decades which has been devoted to the philosophy of science, excluding however (with one exception) material already published in book form in The Philosophy of Science: A Systematic Account (1965) and Science and the Theory of Value (1967). As the dates of those works suggest, my main concentration on this field was early in my career; and as is clear from the title of the second, my attention soon wandered from mainstream philosophy of science to the relevance of scientific practice to other parts of philosophy and culture. I say "mainstream" because this is how part of the discipline has regarded itself, though the term is relative. As will become clear, it has not always seemed to me a stream usefully navigable for cargoes of the greatest philosophical import. This is because it has systematically failed to pay sufficiently serious attention to a precondition of its own possibility, namely (as suggested above), the dependence of science itself, and a fortiori of any reflective analysis of science, on the engagement of a knowing subject—and in every case an idiosyncratic one at that.

This question of the subject is one that I have pursued in other domains. But my original attachment to science and the philosophy of science, if temporarily bracketed, has remained—to borrow an expression of Husserl's—"as the bracketed in the bracket," emerging from time to time as occasions, problematic or professional, have demanded. There is a sense in which, even when engaged in so-called continental philosophy, or in the philosophical aspects of literature or psychoanalysis or politics, I have never abandoned the realist and empiricist stance bred into me by physics and the philosophy of science. But instead of declining to entertain possible objects of experience outside the scientific, or refusing them a place in the realist scheme of things—as many of my colleagues in those domains tend to do—I have taken it as a philosophical challenge to distinguish between different objects of experience, and to show how those that lie outside the purview of natural science have their own claim to reality.

In the end these lines of inquiry have converged. I do take it to be possible to draw a radical distinction between the natural sciences on the one hand and the social, or as I now prefer to say the human, sciences on the other. My way of doing this is to assign as objects to the human sciences (under a covering realist hypothesis) just those events and processes that have, among their causal antecedents, episodes of conscious human intentionality, and to assign as objects to the natural sciences events and processes that have no such episode among their causal antecedents. This has in the first instance nothing to do with the methodologies of the respective sciences. It is an ontological move: it has the effect of dividing the world of my attention into a natural part and a human part. The division is a human, not a natural, one—there is a sense in which we and all our works are a part of nature. But it is plausible and effective: a simple but illuminating exercise is to classify familiar objects in its terms (assigning as objects of inquiry, to give a quick example, the mechanisms of intoxication and the principles of the production of intoxicants to the natural sciences, but the desire for these substances, their distribution and consumption, and what is done or made under their influence, to the human sciences).

In principle it looks as though the human might be reducible to the natural. But the very idea of a natural-scientific explanation of human action involves a circularity, because the explanation of nature, even in its own terms, is already a human enterprise. If therefore we consider them in themselves, apart from any distinction in terms of their objects, the natural sciences and the human sciences are entirely the products of conscious human intentionality; the theories that constitute them are (as their name suggests) outlooks on the worlds of their practitioners, explanatory stances adopted for the purpose of bringing the complexity of experience into intelligible order. And the relations that hold between the sciences and their objects, natural or human, must themselves be animated and sustained by knowing subjects.

These subjects have the additional property of being free (a point I claim here without argument, though I have provided plenty of that elsewhere), and as such enjoy great latitude in the choice and formulation of problems. The idiosyncrasy of one such subject and of his choices is reflected practically in the heterogeneity of the work collected here. But the underlying theme—the primacy of the knowing subject—is recurrent, if sometimes only implicit. The tone and level are variable, from the popular to the scholarly, and I have made no attempt to impose uniformity in these respects. The previously published chapters are essentially unchanged except in one significant way, namely, that I have consciously sought out and corrected the sexist use of pronouns, which was once transparent to everyone but should now, given the feminist prise de conscience, be unacceptable to anyone.

In the case of material presented orally but not previously published I have allowed myself greater freedom to adapt, but even here the individual chapters (though not arranged chronologically) bear the marks of their contingent origins and have not been made to speak in one voice. A small but telling point: as a theoretician preoccupied with the embodiment of the subject, I have tended to stress from time to time the obvious but crucial importance of brains, and particularly of their complexity, one primitive measure of which is the number of neurons they contain. In the course of my professional career neurologists have continuously revised upward their estimates of this number, so that in different chapters the reader may find casual allusion to anything from five billion to a hundred billion neurons. But I have not gone back to change the earlier numbers; it is instructive, I think, to leave them where they lie, as testimony—if any be needed—to the always provisional character of scientific knowledge.

The arrangement of the material is roughly thematic, which helps clarify what problems are being dealt with but has the disadvantage that chapters of varying technical difficulty are lumped together. It may be helpful, therefore, to identify a few chapters, written in a more colloquial style and originally intended for a wider audience than some of the others, as routes of access for nonprofessionals. For readers whose main interests are historical and social a good starting point would be chapters 3 and 4. Those with interests in practice and technology might first try chapters 12, 13, and 16. Chapters 18 and 19 deal with issues in the theory of knowledge, and chapters 25 and 26 with metaphysical issues, in more or less self-contained and, I hope, approachable ways. But I do not mean that these chapters contain nothing of interest to the professional, nor that the others are out of reach to everyone else.

There is, no doubt, something arbitrary and whimsical in having put all this under the sign of Yorick—"alas, poor Yorick!" as Hamlet says—and it certainly isn't the mortuary aspect of his skull that I want to invoke, though Shakespeare won't allow that to become too depressing: his gravedigger is, after all, a clown. About Yorick when alive the play doesn't tell us a great deal: he was the king's jester; he used to carry the child Hamlet on his back; he once poured a flagon of Rhenish on the gravedigger's head; he died when Hamlet was about seven. But Hamlet says of him that he was "a fellow of infinite jest, of most excellent fancy," whose "flashes of merriment . . . were wont to set the table on a roar"—an agreeable chap, in short, and not at all a bad patron for a book, even a serious book. Not that I want to invoke the jest or the merriment either (though I would certainly align myself on the side of Democritus as against Heraclitus, the laughing philosopher against the weeping one), but there is something suggestive in the fancy. For "fancy" has its origins in "fantasy" (or "phantasy"), which puts it in the same linguistic family as "phenomenology"—it is a matter of appearances, which constitute the life-world of the knowing subject. Science is creative, it is imaginative, and as Edmund Husserl points out it is just one of the things that occupy the life-world;[1] if that world is in the end (and Shakespeare would certainly be of this opinion) a play of fancies, science would surely count among the most excellent.



The body of work collected here owes so much to so many people—from teachers to fellow-students to friends and professional colleagues, not omitting readers for journals, lecture audiences, and my own students over decades of university teaching—that even constructing an exhaustive list, let alone specifying what was due to the individuals named in it, would tax memory and self-knowledge beyond their present resources. I mention some names from the earlier stages of my intellectual development in the Introduction; later on, as numbers grow, specific influences become harder to isolate.

The usual acknowledgments are in order to editors and publishers who have allowed me to reprint what appeared in their books or journals, a listing of which will be found on page 381. Redoubled thanks are due in those cases where the contribution in question was solicited by them, rather than submitted by me, since that often induced me to attend to issues I might otherwise never have tackled. I think in this connection especially of Jon Moreno, whose long-standing invitation to write about quality and quantity inspired the excursion into the philosophy of mathematics that appears as chapter 19.

In a somewhat similar vein I should perhaps record my gratitude—not that I felt it at the time—to the authorities of Trinity College, Hartford, who after I had accepted it withdrew, on budgetary grounds, the position the philosophy department had offered me for my first year out of graduate school, thus ensuring that I should begin my career not by teaching philosophy in the East but by teaching science in the Middle West. The one year I spent lecturing undergraduates on basic science forced me to get up to speed in the biological and earth sciences, an invaluable complement to the physics in which I had spent my undergraduate years. To this day I remain grateful to Michigan State for rescuing me from unemployment, and to the University of Kansas for calling me back to philosophy the following year.

To colleagues and institutions who have entrusted me with lectureships and with offices that required the delivering of addresses I also owe debts of gratitude: Max Wilson for chapter 4, Russ Hanson for chapter 6, Grover Maxwell for chapter 9 (and posthumously for chapter 7), and George Bugliarello for chapter 13. In other cases the connection is less direct; I owe to Mel Kranzberg, for example, the invitation to be a national lecturer for Sigma Xi, which helped keep up my activity in the philosophy of science even though none of the lectures found their way into this book. Again, readers and commentators have been many; I am especially indebted to Marx Wartofsky and an anonymous reader for the University of California Press for reactions to the book as a whole, and most recently to Steve Fuller for a helpful critical reading of the final chapter.

Without the enterprise and encouragement of Ed Dimendberg at the Press, much of this work might never have appeared in book form. Lisa Chisholm's resourceful and nonintrusive copyediting made the last stages of production a pleasure instead of the ordeal they often can be. My secretary, Karen Greisman, cheerfully performed prodigies of retyping, and my graduate assistant, Leslie Baxter, helped immensely at every stage, assembling the constituent materials and bringing her sharp eye and mind to bear on countless details.

My wife, Dr. Nancy Breslin, and my daughter, Elisabeth Breslin Caws, to whom this book is dedicated, filled and continue to fill the life-world of this particular knowing subject with a happiness no less prized for its having become habitual. Elisabeth also tried to eat the manuscript; I think this was an expression of approval, though I have to admit that, if so, it is one she confers somewhat indiscriminately at this stage of her life on reading matter that happens to come her way.




From Physics to the Human Sciences—The Itinerary of an Attitude

Taken in itself, each of the chapters that follow makes a more or less circumscribed point in its own way. They were not originally conceived in relation to one another, but their publication together offers an opportunity to rethink them as a coherent body of work, or at least as one facet of such a body of work. The best way of doing this is to say something of the project, in the Sartrean sense, out of which they arose.

Scientific Roots

My engagement with the philosophy of science goes back to readings of Sir James Jeans, Sir Arthur Eddington, and Alfred North Whitehead while I was still in school. The Jeans and Eddington were my father's; he wanted to understand the mysterious universe because it glorified God—or rather, I suspect (he was a humble man), he just wanted to feel how mysterious it was, thus savoring at once God's greatness and his own insignificance. He was impressionable, and continually awed by the dimensions of the atom (the nucleus as a pea in St. Paul's Cathedral) or the distance of the galaxies.

The effect of his sharing all this was that it became familiar to me and not very mysterious at all. I took physics in school, being initiated (which is, after all, the old sense of mystery) at the hands of a crusty and acerbic teacher whose name was S. V. Shingler. Two memories of Mr. Shingler stand out: first, his daily tirades in class about the hopeless stupidity of his pupils, and second, a more personal rebuke. In working up some notes on fluid pressure—one of the very first assignments in the fourth form perhaps (I must have been about thirteen)—I ended with a flourish, writing the basic formula "p = f/a" in large letters in the middle of the notebook page and drawing a little box around it. It was a neat bit of work and I was proud of it. Mr. Shingler struck the formula through with his red pencil and made me redo the page. No physical expression, he said, was more or less important than any other; I would please make them all the same size. His tone as he administered this lesson was one of withering scorn mixed with genuine affection.

Thanks to the peculiarities of the British educational system I studied nothing formally except physics, mathematics, and a bit of chemistry between the ages of fifteen and twenty-one. This coincided—sometimes to the detriment of academic work—with a period of personal struggle against a set of beliefs into which I had been indoctrinated since infancy by my parents, who belonged to a small and fanatical sect known as the Exclusive Brethren. The Brethren were always metaphorically writing things in large letters in the middles of pages: they hung great framed Biblical texts everywhere, making insistent claims on belief or action, and conducted their lives in an atmosphere of exaggerated fear and piety.

Physics seemed obvious from the beginning; religion became more and more dubious. Questions about belief, what it was and under what conditions it was justified, arose on both sides. Some of the claims of cosmologists and quantum theorists were every bit as implausible as those of theologians. But scientists were tentative where preachers were dogmatic, and it helped to remember that things didn't become truer because they were written large, or—as I was to put it many years later, in a review of a fellow philosopher of science—that "hypotheticals do not turn into categoricals just because one shouts them at the top of one's lungs."[1] Nothing in science had the canonical and sacrosanct status of religious belief; everything was provisional. Local observations, suitably specified, and rule-governed derivations from stated givens—like the formula for fluid pressure—had what I would now call apodictic certainty (which, Kant to the contrary notwithstanding, is not the same thing as necessary truth), but beyond that every step had to be argued. Extrapolations and hypotheses were all right, but only as long as one remembered that that was what they were.

Science, therefore, never had for me the megalomaniacal pretensions so many people claim for or attribute to it. It was certainly not a substitute for religion—on the contrary, it was an antidote. The idea that science is just another kind of faith overlooks an essential difference between science and religion: as a scientist I might share with believers a kind of practical confidence in the stability of the everyday world, but I rejected not merely as unnecessary but also as unworthy any commitment to an explanatory account of the origin or meaning of that world made simply for the sake of having something to believe, or for that matter any unwarranted extrapolation of the scientific account itself. As I came to see it, Newton's recommendation in his third Rule of Reasoning in Philosophy that locally encountered qualities should "be esteemed [emphasis added] the universal qualities of all bodies whatsoever," subject always to the qualification in the fourth Rule ("till such time as other phaenomena occur"),[2] only made sense, while on the other hand Laplace's postulation of "an intelligence . . . able to embrace in a single formula the movements of the largest bodies in the universe and those of the lightest atom"[3] was just a bit of unwarranted melodrama.

At the same time science didn't seem, locally, to be more than a part of the story; it coexisted happily with the rest of life. Even if everything turned out to be explainable, that would not necessarily spoil its quality as experience. Eddington had been quite good on this point; I quote one of the relevant passages in chapter 22. So again, one of the things frequently held against science, one of the things that Whitehead himself had held against it—that it reduces reality to the mere hurrying of material, endlessly and meaninglessly, or words to that effect—struck me as based on a misunderstanding. To do Whitehead justice, what he was criticizing was the "scientific world-view" that emerged in the sixteenth and seventeenth centuries, but he seemed to think, as many people still think, that scientific work led more or less inevitably to this view, and that simply was not my experience.[4]

One other attitude to science that dates from this early period is that it has always seemed to me a great playground of ideas. I read science fiction more or less avidly, but even in everyday life there were all sorts of ways in which scientific knowledge could transform or deform the ordinary, thus rendering it more interesting. One juvenile example of this is from roughly the period of my apprenticeship with Mr. Shingler, though it belonged in the chemistry laboratory next door, which was presided over by Dr. Stubbs. The structural elegance of organic chemistry came just too late in the curriculum to convert me to the subject (chemistry up to that point had been rather a cookbook affair), but it fed a certain speculative bent. Hydrocarbons come in series of ascending complexity; for example, the series of acids goes from formic (H.COOH) to acetic (CH3.COOH), then to propionic (C2H5.COOH), and so on. The alcohol series however begins with methyl (CH3.OH) and continues with ethyl (C2H5.OH), and so on. It is obvious on comparison that there is a missing first member in the alcohol series, namely the analogue of formic acid, with its single hydrogen rather than a hydrocarbon group. In the case of the alcohols this would clearly be H.OH. But that is water—so a case could be made for regarding water as an alcohol.

This was surely not original with me, though it was my own at the time. Also the argument had a fatal flaw: as Dr. Stubbs patiently pointed out, you can't have an organic compound without carbon. It amused me anyway, but I must I think have been after provocation as well—for example, people would have to redefine temperance. With my family I acquired a reputation for frivolity. This was no laughing matter, but then they took almost everything with deadly seriousness, whereas I thought (and still do) that there were very few things in life, with the possible exceptions of love and justice, worth taking altogether seriously. Traces of this perverse rethinking of the familiar are to be found here and there in this book.

Systematic Philosophy of Science

To a first degree in physics I added, after a transatlantic flight from religious suffocation, a doctorate in philosophy, for which it was natural to write a dissertation in the philosophy of science. The task of this discipline I took to be the understanding of what science was doing conceptually, not historically or anecdotally, which explains a lack of sympathy for subsequent efforts to make it "a more accurate reflection of actual scientific practice," as some revisionist philosophers of science put it. The structure of science as I envisaged it at this time involved a lowest level of concepts that corresponded to recognizable complexes in the perceptual domain, a next higher level of constructs which were qualitatively similar to concepts but had undergone a process of refinement (definition, quantification, etc.), and a highest level of isolates that had no direct or obvious correspondences in experience but were invoked because of their theoretical power. The isolates were hypothetical and for the most part invented, though it seemed possible that some of them might be called into being by structural considerations, as a matter of inference or of Gestalt completion. This terminology, largely adapted from that of my sponsor Henry Margenau, was not destined for wide acceptance, though I still think it lends itself to an interesting variant treatment of the observational-theoretical dichotomy (about which I shall have more to say). I had already abandoned it—at least the part about the isolates—by the time of my attempt at a systematic account of the philosophy of science in 1965. But I did not abandon then or later the realist conclusion of the dissertation nor my reasons for reaching it; they are dealt with briefly in chapter 21 of the present book, which was originally written as a contribution to a Festschrift for Margenau.

My realism was what would now be called a structural realism, in that I did not necessarily expect the separateness and identity of "things" in the perceptual world to be faithfully mirrored in the real one, even though all their properties corresponded to something in the real, understanding by this term a universe independent of and ontologically prior to my knowledge of it. One could reasonably postulate an isomorphism, under some transformation, between the perceptual/conceptual and the real, but to ask what something is like when we aren't attending to it was to ask a silly question, since things are only "like" anything when we are attending to them. This did not mean a fall into idealism: attending to them didn't constitute the world in which the things were grounded, it only fixed how they would appear in my world. Again, my realism itself was hypothetical, and entertained by individuals, whose conceptual schemes were idiosyncratic and only partially isomorphic to one another. It made no sense to object that because something was hypothetical, it couldn't be real—that missed the whole point of making the hypothesis in the first place. That it was real was the hypothesis. I had not yet encountered phenomenology—one could get a doctorate in philosophy at Yale without ever hearing of it, an astonishing testimony to parochialism when one thinks of it, and a devastating indictment of places where it may still be true—and could therefore not see the hypothetical structure of the real as intentional. (It may be worth remarking that conceptual schemes as I construed them, meaning the conceptual furniture of individual thinkers, do not fall under Donald Davidson's later strictures in "The Very Idea of a Conceptual Scheme.")[5]

The Philosophy of Science: A Systematic Account,[6] written after a number of years of teaching in this area, set out to organize, for didactic purposes, the content of what was at that time still an emerging discipline among the major subdivisions of philosophy as taught in universities. I did not consider that it had itself to be scientific or to mimic the technicalities of science. For heuristic purposes I made use of some diagrams and simple formulae, especially when dealing with logic and probability theory, but my main concern was to convey a sense of conceptual structure—always remembering that the subjects who were to entertain it were embodied macroscopically in place and time (in what I would later call the "flat region") and would stay that way, no matter how the objects of their interest might be pushed in the direction of the small or the fast or the distant.

It was my first book and in it I made a deliberate attempt to be approachable. As some sharp-eyed reviewers pointed out, it was flawed by errors of scholarship, not excusable—as I am quite ready to admit—even on the grounds that I was painting in broad strokes on a large canvas. But as I look back I am struck by something that, now that it occurs to me, may be relevant to some of the material in the present book. The reviewers' complaints were not at all that I had got it wrong about science, nor indeed that any of my main claims were off target, but rather that I had misrepresented some details about the work of other philosophers of science—that I had attributed to Carnap a view he had once explicitly disavowed, that I had implicitly conflated the positions of Poincaré and Duhem on a point where they had in fact diverged. I think the trouble was that for me scholarship wasn't the main point, that I lacked the appetite for detail and the talent for perseverance that marked many of my colleagues. (Perhaps this plays out yet further the rejection of the kind of reverence for the Word I was surfeited with in youth.) At all events my attitude has always been that the fact that X said Y isn't really important, philosophically speaking, even if X is Plato or Kant; what matters is what reasons he or she gave for saying Y and whether they should compel our assent. Of course if X didn't say Y nothing excuses the misattribution, which is why my post facto contrition is genuine, and why I apologize in advance for such lapses as may have escaped my now more critical eye in what follows. But in cases like this history, not philosophy, is the offended party. In a similar way, when my students tell me what they think, I sometimes say—taking care to temper the point (perhaps I learned something from Mr. Shingler)—that it doesn't matter what they think; their opinions will become interesting to me only when they can tell me why they hold them.

More germane to the philosophy of science proper was the gentle reproach of a former student of mine, himself on the way to becoming a distinguished philosopher of science, who wrote to say how it worried him that while "Putnam, Feyerabend, Hanson, Kuhn et al. seem[ed] to have pretty effectively destroyed the tenability of the theory-observation dichotomy," I on the other hand seemed to cling to it in my book. But I thought they had done no such thing, and think now that I concede too much in chapter 1 of this book in calling the strong dichotomy untenable. In a subsequent paper (which, bucking such a trend, was never published and is now lost, or I should have included it here) I produced, as a test case, the Chinese observation of a "guest-star" in the year 1054. Thanks to astrophysics we now know, from the celestial house in which it appeared, that this was the supernova whose remains were recorded by Charles Messier in 1758 as M1 (the first item in his catalogue of nebulae) and were later named the Crab Nebula by the Earl of Rosse, who thought them very like a crab. Even if the observations of Messier and the Earl of Rosse were colored by astrophysical theory, which I doubt to have been the case in any developed sense, those of the Chinese certainly weren't—and yet, because they fit the retrodicted light curve of the supernova, they count as confirming evidence of the theory.

Of course if what is meant by the rejection of the observational-theoretical dichotomy is that all grasping of anything in perception involves judgment, or, in Coleridge's words, "the meanest of men has his theory, and to think at all is to theorize,"[7] that gets rid easily enough of theory-free observations. However, on the one hand it trivializes theory and on the other it makes room for the reemergence of the dichotomy at a higher level. For it will frequently be true that the background theory that is thought to contaminate the observations will also be a background theory for the theory that is invoked to explain them—but that the explanatory theory will be quite distinct from the background theory and will share no terms with it. So once again there will be a sharp distinction, against that background, between observation statements and theoretical statements.

Branching Out

A normal career in the philosophy of science would no doubt have involved plunging into the professional fray with these and other arguments, but even while engaged on the systematic project my interests were beginning to turn away from the defining problems of the field. Those problems, some of which are noticed occasionally in what follows, came to include paradigms, research programs, the realism-pragmatism debate, anthropic speculations, and eliminative materialism. As will become clear in the later parts of this book, a kind of reconvergence has taken place, especially in the domain of artificial intelligence (see for example chapter 24), now that the hardwired locus of the knowing subject is beginning to be taken more seriously.

A decisive event at the time of which I am speaking was a request from some bright and insistent students at the University of Kansas, who wanted to read existentialism with me. I was the youngest member of the department and the others had already refused. My job was to teach logic and the philosophy of science, but on the one hand I was curious about Kierkegaard, whom I had encountered in a backhanded way at Yale (where he had been introduced as a prelude to an exemplary dismissal), and on the other I liked the students. We read Kierkegaard, Jaspers, Heidegger, and Sartre; later I added Husserl on my own. It amazed me that this rich material was held in such low esteem in the trade. It made no internal difference to the technical problems of the philosophy of science but it put, as it were, a modal prefix in front of the whole enterprise, the absence of which constituted, as I saw and still see it, a culpable failure of self-knowledge. And it did make an external difference (see for example chapter 20 of the present book).

Also I had begun even earlier to have some curiosity about the possibility of extrapolating results in the philosophy of science to theories in other contexts, notably at first that of value, an inquiry that resulted in Science and the Theory of Value.[8] Attending to these and other eccentric speculations made all the more sense because of a growing feeling that much technical philosophy of science was in some quite deep way beside the point. It was full of what I thought spurious formalisms and aimed for what I suspected to be a spurious exactitude—spurious because the formalisms were often decorative and not used for any essential purpose (like proving theorems, as in mathematics) and because no adverse consequences followed (as they surely would have in the empirical sciences) from drawing conclusions with less than perfect exactness, or as Aristotle puts it in the Nicomachean Ethics, "roughly and in outline."[9] Aristotle goes on in the same passage to say that "it is the mark of an educated man to look for precision in each class of things just so far as the nature of the subject admits," and it seemed to me that philosophers of science who thought that what they did should be formal and exact were getting confused about their subject.

Philosophy is not a natural science, nor an exact science, and trying to make the philosophy of science imitate the hard sciences by the refinement of its technical formulae (as I once remarked at a conference, to the indignation of the advocate of "exact philosophy" on whose paper I was commenting) made about as much sense as my moving to Boston from New York, where I was then living, because I really wanted to live in London. The difference is stark and simple but often not grasped. The natural sciences look for their objects in the natural world, and what happens in that world selects, in the final analysis, what the science in question can plausibly say. The object of the philosophy of science is science, but science is not in the natural world. One aphoristic way of putting this is to say "the stars are indifferent to astronomy": they did whatever it is that they do long before astronomy was thought of, and news of what most of them are doing now may well arrive in these parts long after astronomy has been forgotten. Astronomy is something that human beings have made up—allowing themselves to be instructed by evidence from the stars, but deciding among themselves how to interpret that evidence and what conjectures to float in order to account for it.

I find myself hedging here, however, by taking care to say "natural science," "exact science," and so on. The philosophy of science is the philosophy of what, exactly? And how can I use "exactly" in this challenging way when I have just been making excuses for inexactitude? In the period of my professional formation "science" nearly always meant "physical science" and "exactitude" nearly always meant "formal (or quantitative) exactitude." There were of course the biological and the social sciences, but these, when they were mentioned at all, tended to be compared to the physical sciences as ideals; their special problems were probabilistic or statistical but would become straightforwardly causal if only we knew enough. It was possible to expound the philosophy of the social sciences without once mentioning the feedback effect of knowledge of a theory on the population whose behavior it set out to explain. As to exactitude, the origins of the term certainly suggested something demanding—in the special and rather sinister case of "exaction" often enough a quantitative demand: the uttermost farthing, the pound of flesh. But exactus is the past participle of exigo, and it seemed possible to be exigent philosophically, to require reflective thinking-through, without insisting on axiomatic formalization. And "science" itself had only relatively recently, and only in the English-speaking world, come to have the narrow connotations of the quantitative (a term not itself always clearly understood—see for example chapter 19 of this book). Even in the English-speaking world, at Cambridge, older uses were preserved in the designation of science as natural philosophy, and of philosophy as moral science.

The idea of a thorough and demanding theoretical account is, in this light, the idea of a science, even an exacting science. In Science and the Theory of Value "science" still has the old meaning and there is no suggestion that there might be such a thing as a moral science. But a twenty-year detour through Continental philosophy—which began (as a main focus of professional work, rather than as a side interest) with the structuralists and only later, as a detour within a detour, involved concentration on the single figure of Jean-Paul Sartre[10]—made me thoroughly comfortable with the European notion of the Geisteswissenschaften or the sciences humaines, inquiry into which made it clear that they were the lineal descendants of John Stuart Mill's version of the moral sciences.

The Human Sciences

I said just now that natural science looks for its objects in the natural world; in a similar way one might say that a human science would look for its objects in the human world. Now philosophy, and the philosophy of science, are "objects" among others in the human world; the "natural world" itself is, paradoxically enough, also an object in the human world. Nobody has ever dealt with this situation better than Husserl (in The Crisis of European Sciences and Transcendental Phenomenology). Husserl's key idea is that of the Lebenswelt, the "life-world," something that belongs in the first instance to the individual subject, although Husserl moves on (mistakenly, I think) to a collective form of it.[11]

This world, this intentional domain of temporality and spatial extension, which is not an abstraction from anything but is the totality of lived experience at every moment, includes the natural and the human parts spoken of in the Preface—but as remarked there this very distinction is a human construction. The "natural world" component of the life-world encompasses everything I encounter or that happens, within my experience or within the reach of my learning, that would have happened even if there had been no human intentions (or intentionalities). Deciding just which things fall under that description is easy to a first approximation but becomes harder, as is usually the case at conceptual boundaries, the more "human" the natural becomes: What about language? What about the incest prohibition? But these contested cases do not vitiate the basic distinction. The life-world includes thought, and the distinction between natural and human is particularly interesting here: thoughts that occur to people unwanted, especially those that occur when they are very much not wanted, have to be treated as natural pathologies.

I do not wish to develop these ideas at much greater length here, since they form the object of several chapters in part VI of the book, but a couple of supplementary points may be in order. First—to return to a controversial issue—what I may call my scientific world is itself a complex domain in the life-world, by no means coterminous with the natural world; it will include parts of the natural world that fall under scientific explanation, and parts of the thought world that are involved in the explanatory activity. This being the case, however, it can readily be divided into an observational part and a theoretical part, once again no doubt with ambiguities at the boundary that, once again, do not vitiate the distinction itself. Second, all this talk of "worlds" invites a distinction, hinted at above, between "world" and "universe." Universe would stand for the totality of what there is, including us but also including the vastly greater sphere of what underlies and surrounds and precedes and will follow us; world would stand, in effect, for the reach of the human—which the very term seems originally to have meant, a wer-ald or "age of man," "age" being understood as an epoch or a life. Note once again however that the idea of the universe will be an item in my world.

Philosophers of science all too readily hypostatize the entities of which they speak—the propositions, the problems, the laws, the theories, the research programs, the revolutions, the sciences themselves—as if there were a domain in which they existed independently, waiting to be thought about, a domain whose internal structure would perhaps embody some truth about them all, and provide a ground for the settling of disputes. Karl Popper even invented such a domain, which he called the Third World, or (in order to avoid confusion with geopolitics) World III.[12] In this he seemed to be echoing Gaston Bachelard's call for a "bibliomenon" to supplement noumena and phenomena,[13] though when I suggested this to him privately he rejected the idea indignantly, claiming originality for all his ideas. At all events World III seems to me a perfect candidate for Ockham's razor, since it is wholly unnecessary—everything it does can be accommodated in the life-worlds of individual subjects (always remembering that representations of other subjects, mediated by their embodiments, are included as elements in those life-worlds).

When a subject intends a problem or an argument, as I am doing now (and as I can assume the reader to be doing in his or her "now"), the problem and the argument, and what they are about, and their referents, and their histories, are all called into being, as it were, are invoked, are animated, by the subject in the moment of their being intended. There is no reaching out to some other domain: all that is happening has to be drawn, in the moment, from resources locally available: memory, including language, perception, conceptual apparatus, texts perhaps. It is as a thinking and knowing subject that I engage in scientific or philosophical pursuits, and such pursuits happen nowhere, as far as we have any means of knowing, except in life-worlds like ours. Nor of course do any other pursuits, in the sense of activities directed towards ends.

The human sciences deal with life-worlds and their products; they are themselves inscribed in such life-worlds, namely, those of their practitioners. The last chapter of this book is devoted to them. What I hope to have shown here is how a conception of science that I learned as a young physicist, among the "hard sciences," has evolved through a long practice of philosophical reflection into something more inclusive, to which the hard sciences are integral but which they do not begin to exhaust. The hard sciences take their data from experimentation and their structure from mathematics—but experimentation and mathematics are themselves only human strategies for finding intelligibility in, or lending it to, an otherwise unintelligible world, and as such take their place in turn among the objects of the human sciences.




Preface to Part I:

The thematic unity of this somewhat heterogeneous first part could be expressed roughly as: what science can do—and what it can't be expected to do. The first chapter is a gesture, in two senses. I was fortunate to find myself at Yale during Peter Hempel's last year there; in my first year he was at Harvard visiting and in my third he went to Princeton for good, but in that crucial second year (as things go in American graduate education) there he was, and I took both his courses in the philosophy of science. He was an exemplary teacher, from whom I learned more, perhaps, than from any other single person, and my putting his chapter first is an acknowledgment of that fact. But it also makes an implicit claim about the book as a whole. Hempel was and is a philosopher of science's philosopher of science, and I would like what I have to say to be regarded as belonging to the conversation that he has animated over his long career.

The first chapter defends Hempel's view of the central task of science as explanation and of the philosophy of science as the analysis of the structure of explanation. The second chapter, however, places some limitations on how that structure is to be instantiated. In the late fifties I had become interested in the general systems theory of von Bertalanffy, which seemed to promise a systematic extension of the network of explanation from physics to biology without compromising the specificity of the latter—and to do so under the rubric of cybernetics and information theory, something of automatic interest to an ex-physicist because of its affinity with thermodynamics, the most philosophically intriguing branch of physics until the arrival of relativity and quantum theory. In 1966 I found myself in the presidency of the Society for General Systems Research and under the necessity of addressing the annual meeting. Among some of my colleagues in the Society I had detected a rampant tendency to suppose, somewhat after the manner of Hegel, that ontology could be read off from logic—that if one could build hierarchically layered theoretical systems the world must contain, somewhere, their real counterparts. The argument of the chapter serves as a gentle rebuke to these pansystematists.

Chapter 3 is a change of pace and has an earlier origin, but it fits in because it demonstrates in a dramatic context some limits of theoretical explanation. The context was of particular interest to me because Philip Henry Gosse had been a member of the sect to which my parents belonged and in which I grew up. He provides a splendid test case of the scientist who wants to believe an account that is at odds with the best current hypotheses in his or her field: it turns out to be possible, because of the fallacy of affirming the consequent, to reject any set of hypotheses and replace them with a magical account, and nothing in the philosophy of science can stand definitively in the way. (The fallacy of affirming the consequent occurs when someone tries to infer the truth of the antecedent, p, of a conditional "if p then q" from the truth of the consequent, q.) The hypotheses of a theory have no status—except a hypothetical one. What needs to be added, however, is that the magical alternative has, similarly, only a magical status, and the fact that scientists are modest enough not to jump to a plausible conclusion is no excuse for other people to jump to implausible ones. Gosse no doubt believed he had good reasons for his religious belief, but it is not clear that he had examined them responsibly—though none of us is in a position to render a final judgment on that point.
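The contrast between the valid and the fallacious inference can be set out schematically (a standard logic-textbook rendering, supplied here for illustration; it is not drawn from the text):

```latex
% Modus ponens (valid):   from "if p then q" and p, infer q.
% Affirming the consequent (invalid): from "if p then q" and q,
% one may NOT infer p.
\[
\frac{p \rightarrow q \qquad p}{q}
\qquad\qquad\text{versus}\qquad\qquad
\frac{p \rightarrow q \qquad q}{p}\ \ \text{(invalid)}
\]
```

In Gosse's case, let p stand for the magical account and q for the observed state of the geological record: even if the account entails the record, the record's obtaining lends the account no deductive support, which is why no theory can be definitively shielded from such replacement.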

The fourth chapter comes from very much later, and takes up the same issue in a more didactic way. The distinction between event and process that I draw in contrasting creationism to evolution overlooks the possibility that creation itself might be an ongoing process (indeed one scientific theory, celebrated in its time, maintained that it is—I mean Fred Hoyle's cosmological theory of continuous creation). But my point in the chapter is to examine the religious position that Gosse and others have held, and in that position creation is an event by my definition.


Aspects of Hempel's Philosophy of Science

[Note: For the purpose of reading this chapter the reader is asked to make an effort of temporal translocation—to adopt, in imagination, the standpoint of the late sixties rather than that of the nineties. The year is 1967: the philosophy of science is by now an established academic discipline, whose current excitement centers on new concepts like paradigms and research programs. I have been asked by Richard Bernstein, editor of the Review of Metaphysics, to assess the work of one of the pioneers, a teacher we had in common a decade ago, whose collected essays have just been published.]


The generation which separates Hempel's latest major publication (Philosophy of Natural Science, 1966)[1] from his first (Der Typusbegriff im Lichte der Neuen Logik, 1936, written jointly with Paul Oppenheim)[2] has seen the philosophy of science come into its own as one of the chief subdivisions of philosophy, with a recognizable and coherent set of problems yielding (or, as in the case of induction, refusing to yield) to a recognizable and coherent set of strategies for solution. Not, of course, that in 1936 the philosophy of science was a new discipline—far from it: if anybody deserves credit for getting the field started it is probably Democritus. Nor that the publication of Der Typusbegriff marked a new era in the development of the subject, the recent literature of which included, after all, The Logic of Modern Physics,[3] Der Logische Aufbau der Welt,[4] and Logik der Forschung.[5] The point which these facts illustrate is simply that Hempel's professional career spans a period of intense activity (a good deal of it stimulated by the three books just mentioned) during which the philosophical discipline to which he has made his greatest contribution arrived at an evident maturity and autonomy. The aim of this essay is to examine his contribution to that activity, and to deal with some recent arguments to the effect that the process of development has carried the philosophy of science away from science itself, on which in some sense or other it clearly depends for its intellectual relevance and honesty.

If a newcomer to philosophy were to ask what single concept characteristically preoccupies philosophers of science (as the concept of being, for example, preoccupies metaphysicians) the appropriate answer could only be explanation. If we look for a leading motif in the work of Hempel, we get the same answer. Now it is a remarkable fact that the book which, at the beginning of Hempel's career, summed up the pedagogical content of the philosophy of science—I mean of course Cohen and Nagel's An Introduction to Logic and Scientific Method (1934)[6]—contains no reference to explanation in the table of contents, and has no entry for it in the index. Whether or not the concept of the philosophy of science as the analysis of scientific explanations is an adequate one (which need not be insisted on for the purpose at hand), there can be no doubt that the central importance of such analyses at the present time is due in no small degree to Hempel's own work. He has now provided us, in Philosophy of Natural Science (referred to here as PONS), with his own pedagogical introduction to the subject, which is, as might have been expected, a lucid distillation of the major themes to which he has recurred again and again in other writings.

Beginning with a concrete illustration of scientific inquiry—the classic investigations of Semmelweis into the causes of childbed fever—PONS leads the student through a discussion of the testing of hypotheses to a set of criteria for confirmation and acceptability. There follows a standard account of deductive-nomological explanation (that it can be called "standard" is due entirely to the fact that there is a standard, namely the one set earlier by Hempel himself),[7] an analysis of the difference between this and explanation by statistical laws, and finally three chapters on theories, concept formation, and theoretical reduction respectively. I give this outline not only in order to recommend the book for instructional purposes, which it serves admirably and with rare authority, but also because PONS presents with unambiguous clarity a number of characteristic theses which seem frequently to be misunderstood by Hempel's critics. These theses have also, as we shall see, been presented clearly enough elsewhere, but the setting in PONS is simple and didactic and brings them into relief. For the purposes of the present discussion two of them are worth stating, one having to do with the nature of explanation and the other with the language of scientific theory.

Explanation, for Hempel, is a logical relation between sentences. The premises together constitute the explanans, the conclusion is the explanandum. Strictly speaking, of course, we should say explanans sentences and explanandum sentence (p. 50), but the familiar shortened forms ought not to lead to difficulty. What the explanandum sentence refers to is the explanandum phenomenon. The point to be drawn attention to here is that what is to be explained is in the first instance a particular occurrence, not a class of occurrences or a law governing such a class, although by extension the explanation of laws can at once be subsumed under the same pattern. Hempel indeed says (p. 1) that the empirical sciences "seek to explore, to describe, to explain, and to predict the occurrences in the world we live in" (emphasis added). A clear understanding of this point would have averted a number of difficulties springing from the belief that the explanandum is typically a theory. Feyerabend, to give only one example, is able to dismiss the empiricist theory of explanation (whose chief exponents have been Hempel and Nagel) as "an elaboration of some simple and very plausible ideas first proposed by Popper,"[8] which however concern the deductive relations between different theories; he then goes on to impute to the empiricist theory all sorts of repressive influences on the progress of science which could not possibly be exerted by the analysis of explanation put forward by Hempel. I am not concerned at this juncture to defend the adequacy of that analysis, but in order to comment on its adequacy one must at least be clear about what it says. (I shall have more to say later about the analysis itself and about Feyerabend's criticism of it.)
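The deductive-nomological pattern is conventionally displayed as a schema (the familiar textbook rendering deriving from the 1948 Hempel-Oppenheim paper, not a quotation from PONS):

```latex
% The explanans sentences (general laws L_1..L_k together with
% statements of antecedent conditions C_1..C_r) logically entail
% the explanandum sentence E, which describes the particular
% occurrence to be explained.
\[
\frac{\;L_1, L_2, \ldots, L_k \qquad C_1, C_2, \ldots, C_r\;}{E}
\]
```

The extension mentioned in the text consists in letting E itself be a law, derived from more comprehensive laws in the explanans; the logical form of the schema is unchanged.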

The second theme I wish to touch on in this preliminary review of PONS is that of the distinction between the language in which a theory is couched and the language which describes what the theory sets out to explain. Since a good part of what follows will deal with this distinction I will save the polemics for later, and limit my remarks here to an exposition of Hempel's point of view. As a matter of fact the account in PONS represents a rather muted stand as compared with some earlier treatments of the same topic; I do not think that Hempel has changed his mind, but he seems to have found a less vulnerable way of saying what was in it. The progress of science consists, among other things, in an enrichment of the vocabulary by means of which scientists describe the world as they understand it. Phenomena described in familiar terms (e.g., alternate rings of brightness and darkness between a lens and a glass plate) are explained by the postulation of unfamiliar properties (fits of easy transmission and easy reflection, to use a classical but now abandoned formula).[9] The phenomena to be explained are by definition observable, and they are described in observation terms ("rings," "darkness," "lens," etc.); the explanation involves unobservable, i.e., purely theoretical, entities or processes, and these are indicated by theoretical terms ("fits"). The connection between the observable and the theoretical is provided by so-called bridge laws or rules of correspondence. This is the standard version of the observational-theoretical distinction, and it has recently been under heavy fire from the antiformalist right. In PONS Hempel makes the point as follows:

While the internal principles of a theory are couched in its characteristic theoretical terms ('nucleus,' 'orbital electron,' 'energy level,' 'electron jump'), the test implications must be formulated in terms (such as 'hydrogen vapor,' 'emission spectrum,' 'wavelength associated with a spectral line') which are "antecedently understood," as we might say, terms which have been introduced prior to the theory and can be used independently of it. Let us refer to them as antecedently available or pretheoretical terms. (p. 75)

In this way the observational-theoretical distinction is explicitly relativized to the emergence of a particular theory. Such a relativization has been implicit in most earlier formulations, at least since the program of founding science on an immutable observation language was given up, if indeed anybody ever really adhered to such a program.


PONS is only the latest in a long series of publications which (to speak only of those in English) have been appearing steadily since the early 1940s. And it is of course on these publications that Hempel's philosophical reputation rests. With the exception of the monograph Fundamentals of Concept Formation in Empirical Science (1952)[10] they have all been papers in learned journals or other collections of articles by various authors. The major ones have become landmarks in the literature of the philosophy of science: "The Function of General Laws in History" (1942), "Studies in the Logic of Confirmation" (1945), "Studies in the Logic of Explanation" (1948), "Problems and Changes in the Empiricist Criterion of Meaning" (1950), "A Logical Appraisal of Operationism" (1954), "The Theoretician's Dilemma" (1958), "Inductive Inconsistencies" (1960). All seven of these papers, and a few more, have now been reprinted in the collection Aspects of Scientific Explanation (1965),[11] in which is also printed for the first time a long essay which gives its title to the book. It is this book, rather than PONS, which occasions the present review, and I now turn to it.

The reprinted papers are grouped into four categories, dealing respectively with problems of confirmation, problems of cognitive significance, problems of the structure and function of theories, and problems of explanation, the last occupying more than half of the book. Together they constitute a documentary resource of the first importance, bringing together in one place the focal arguments of most of the major post-war developments in the philosophy of science. The articles are reprinted "with some changes" (I have not undertaken the task of locating them), so that history may have been tampered with in minor respects. It would have been of great interest to be told more about the circumstances under which they came to be written, especially since some of them have appeared in several places on different occasions and in slightly different versions. Hempel refers in the preface to "the Appendix on their origins," but at least in my copy of the book no such appendix is to be found.

In spite of the diversity of their origins the papers as a whole display a remarkable consistency. They are characterized by intellectual rigor, painstaking attention to detail, a kind of imperturbability which is Hempel's trademark. The style is matter-of-fact; the impression of Hempel which emerges from a sustained reading of the book is that of a craftsman of ideas, building with a due sense of responsibility a structure which will be expected to bear a certain intellectual weight. There is not much to dazzle or excite, but there is virtually no hesitation either. In the title essay Hempel has taken the opportunity to deal with some of the criticisms which have been provoked by his work, notably those of Scriven, and he makes it abundantly clear that the main elements of the structure are not to be shaken easily.

Because of this extreme solidity in its contents, Aspects does not make easy reading, at least not if it is read all at once. The papers appeared, as has been indicated, at intervals of a few years, and since each was self-contained when it was written, and yet represented a facet of a single coherent philosophical position (there have been changes, of course, but I think no major or radical ones), there are frequent recapitulations of contiguous points—the difference between deductive-nomological and inductive-statistical explanation must be spelled out half a dozen times. After a while the dictum of Anaxagoras comes forcibly to mind: in everything there is a portion of everything. And Hempel's concern to be lucid and clear (which makes him, for those of us fortunate enough to have been his students, one of the best teachers of philosophy imaginable) leads him to the use of down-to-earth examples which may give the impression that the whole analysis is simplistic. The case of little Johnny and the measles (p. 177), as an illustration of statistical explanation, is followed by the case of little Tommy and the measles (p. 237), so that one is grateful for the variety provided (p. 301) by little Henry and the mumps. (In PONS it is little Jim and the measles again, p. 58.) It would nevertheless, I think, be a great mistake to argue from simplicity in examples to simplicity in understanding, although the temptation to do so is certainly excusable. Toulmin, in his review of Aspects,[12] says,

Hempel's formal schematizations smooth out these differences [between modes of scientific discourse] without comment or apology. He introduces variety into his examples simply by switching between "all ravens are black" on page 12, "all storks are red-legged" on page 105 and "all robins' eggs are greenish-blue" on page 266. These are the kinds of sentences he calls "laws of nature"!

Quite apart from the fact that Hempel can and does on occasion use more sophisticated examples, this point is not as trivial as it appears. The question—and with it I approach the main part of this essay—is, what is Hempel's philosophy of science out to achieve, and would this end be better served by constant sensitivity to the "varied and historically developing activities of workaday science" (to quote Toulmin again) than by the patient attempt to construct a simple but adequate logical model of scientific theory? It can be said at once that any account of theory which is not applicable to red-legged storks and children with measles is a fortiori not applicable to more complex cases, a remark I offer as a caution to the antiformalists. The advantage of a sturdy but simple model is that, as long as it does not contradict the facts of scientific practice, it may perhaps be adapted to those facts and yet retain its essential simplicity. The essential simplicity of the Hempelian model is no argument for its replacement; the aim of the philosophy of science, like the aim of science itself, is after all the simplest formulation of the truth. Feyerabend, among others, appears to think science so complicated that a simple account of it could not possibly be true; he says, for example, of "Nagel's theory of reduction and the theory of explanation associated with Hempel and Oppenheim":[13]

It is to be admitted that these two "orthodox" accounts fairly adequately represent the relation between sentences of the 'All-ravens-are-black' type, which abound in the more pedestrian parts of the scientific enterprise. But if the attempt is made to extend these accounts [to certain comprehensive theories, e.g., Maxwell's electrodynamics and the theory of relativity] . . . then complete failure is the result.

If this means a complete failure to account for the way in which Maxwell's theory explains electrodynamic phenomena, then it is false; if it means a complete failure to account for Maxwell's theory, then it exhibits the confusion about the nature of the explanandum referred to above in connection with PONS, with which Hempel himself has dealt in Aspects (p. 347, fn. 17).

The solution offered by Feyerabend—which is to entertain many theories, not just the best available one[14]—does not entail the rejection of the Hempelian model, for each of the alternatives might turn out to satisfy Hempel's criteria. Hempel has never, as far as I know, maintained that there could not be two or more theories of equal probability (or "systematic power," Aspects, pp. 278 ff.), although the history of science does not offer many examples of this. But then Feyerabend's complaint is at bottom not so much against Hempel's position as against a misuse of orthodoxy in the philosophy of science, in particular because of the orthodox choice of the Copenhagen interpretation of quantum theory in preference to the determinist interpretation of Bohm and Vigier, with which Feyerabend has associated himself. On this point one is bound to sympathize with him; but in fact the dominance of the Copenhagen view is not the fruit of a conspiracy on the part of orthodox philosophers of science—it appears at the moment to be a free choice on the part of scientists, most of whom are very little influenced by the philosophy of science. It would be too bad to seek to overthrow a philosophically valuable system on the grounds that it inhibits a particular development, when in fact it has nothing to say about that development and indeed never intended to have anything to say about it.

To do Feyerabend justice, there is one point in the original Hempel-Oppenheim account of explanation which does lead to difficulty, and which might be construed as a subscription to the "principle of consistency" which is held to be responsible for the reactionary influence of the empiricist theory. This is the "empirical condition of adequacy" for an explanation, given as:

(R4) The sentences constituting the explanans must be true.
(Aspects, p. 248)

The difficulty is that if this requirement is adhered to successfully no future observation, and therefore no future true theory, could contradict the explanans. Once established, a theory is established forever, so that further progress, at least of a revolutionary sort, is ruled out a priori. An analogous difficulty occurs, however, with the familiar definition of empirical law as true generalization, and the empiricist theory has had no particular difficulty in circumventing it. Everybody knows by now that universal propositions cannot be conclusively verified, but that does not mean that every time we enunciate one we have to attach to it some modal prefix expressing the tentative nature of our confidence in it. A lawlike statement may be accepted as true within some theoretical framework, but we may still be prepared to admit that future observations will require revision of the framework and abandonment of the statement in question. Hempel's own view of this matter at the time of his early work on explanation required that laws be true, tout court, so that a generalization which was probably true was probably a law, rather than being a genuine law with high probability. He has not, as far as I can tell, modified this stand—in Aspects he speaks of "presumptive laws" (p. 488) in such a way as to suggest that such laws are the only ones we can have, as indeed is the case if laws, in order to be properly so called, must be known to be true. And presumably the best explanans we can have is also a presumptive explanans, which would leave the way open for its rejection in the light of new knowledge.


While one is bound to resist the objections dealt with above, insofar as they seek to dismiss Hempel's account on the grounds that it is too simple in its logical structure, or too rigid in its insistence on truth in the explanans, still it must be admitted that they raise legitimate questions with which any proponent of a position like Hempel's must be prepared to deal. The best strategy for dealing with them seems to me to consist in setting out the objectives of the position, as they can be reconstructed from Hempel's writings and from the recent history of the philosophy of science. If it is clearly understood what a philosopher is trying to do there is then less excuse for criticism to the effect that he or she is not doing something else.

The problematic of Hempel's philosophy comes directly from logical positivism, with which he was associated as a member of the Society for Empirical Philosophy in Berlin in the 1930s. In spite of differences in the interpretation of its philosophical mission, the various adherents of the positivist movement at that time reacted to a common stimulus, namely the dramatic success of the physical sciences in handling some kinds of knowledge of the world, and the attendant problem as to the status of that knowledge, especially in the light of twentieth-century developments such as relativity and quantum theory. To anybody trained in physics, as Hempel himself was, one of the most striking things about it is its heavy reliance on formal methods, mostly of course mathematical ones. Physics is an empirical science, but its best results are obtained by switching over as rapidly as possible to a formal mode of procedure, returning to the empirical only at the last minute in order to confront observation with a prediction, for example. Not that empirical relevance is really given up at any stage of the proceedings; the point is that the empirical objectives served by physics, and the empirical control to which its conclusions are subjected, provide a framework within which physicists are free to employ whatever methods they like, and the methods which lend the greatest clarity and economy to their work turn out to be formal ones. This looks like a lesson for the pursuit of knowledge in general, and it was taken as such by the logical positivists. Everybody now agrees that they went too far in trying to exclude as nonsensical statements whose empirical warrant was less clear than that of statements in the physical sciences, but that does not mean that their insistence on the paradigmatic virtues of the physical sciences was mistaken. (As a matter of fact Hempel never, as far as I know, contributed much to the antimetaphysical polemic, whose chief spokesman was always Carnap.)

Most of Hempel's work, however, was done after logical positivism, as a movement, had ceased to exist. In some respects he continued to derive inspiration from its survivors, notably Carnap; a good deal of his work on confirmation, for example, takes the form of an elucidation or elaboration or correction of something of Carnap's. But for the most part he has followed an independent line, never abandoning the old problems set by positivism, but arriving gradually at firm and often original conclusions about them. The originality is not sweeping—that could hardly be expected in a disciplined inquiry of restricted scope. But by comparison with some sweepingly original hypotheses which have recently commanded attention in the field, Hempel's unpretentious but solid achievements have a reassuring authenticity. And his contribution has not been without its own drama, the best example of which is his paradox of confirmation, a startling result whose publication in 1945 introduced a thorn into the side of confirmation theorists (including Hempel himself) which has still not lost its power to irritate. The paradox is, of course, that by the substitution of the contrapositive form "All non-black objects are non-ravens," "any red pencil, any green leaf, any yellow cow, etc., becomes confirming evidence for the hypothesis that all ravens are black" (Aspects, p. 15). It is this paradox with which, for many people, Hempel's name is principally associated, although in itself it is a minor by-product of the philosophical program to which he has devoted himself.
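The logical point behind the paradox is compact enough to sketch in a few lines of code. The following toy (my own illustration, not anything in Hempel's or Caws's text) represents a "world" as a set of objects with kinds and colors, and checks that the hypothesis and its contrapositive are satisfied by exactly the same worlds, so that whatever counts as an instance of one counts as evidence for the other:

```python
import itertools

def all_ravens_black(world):
    """H1: every raven in the world is black."""
    return all(color == "black" for kind, color in world if kind == "raven")

def all_nonblack_nonravens(world):
    """H2 (the contrapositive): every non-black object is a non-raven."""
    return all(kind != "raven" for kind, color in world if color != "black")

# The two hypotheses agree on every world built from these sample objects,
# since each is false exactly when the world contains a non-black raven.
objects = [("raven", "black"), ("raven", "white"),
           ("pencil", "red"), ("leaf", "green")]
for n in range(len(objects) + 1):
    for world in itertools.combinations(objects, n):
        assert all_ravens_black(world) == all_nonblack_nonravens(world)

# A lone red pencil is a direct positive instance of H2 (a non-black
# object that is a non-raven), so on the instance view of confirmation
# it confirms H2, and therefore the logically equivalent H1.
print(all_ravens_black([("pencil", "red")]))  # True: vacuously satisfied
```

The exhaustive check over worlds is of course only a finite surrogate for the logical equivalence, but it shows why the yellow cow's evidential bearing on the blackness of ravens cannot simply be legislated away.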

That program has been devoted to clarifying our understanding of the way in which science constitutes knowledge of its objects. Hempel is not uninterested in the development and the practical usefulness of science, but these have not been at the focus of his attention. Science has developed, and it is useful; the interesting philosophical questions concern its structure and its validation. The same questions might have arisen, and the same answers been given them, if the development had followed different lines and if the usefulness had been much less obvious, although as pointed out above the contingent stimulus for the movement of inquiry to which Hempel belongs was, among other things, the impact of a particular development. At the beginning of the title essay of Aspects Hempel rephrases his often reiterated concerns as follows: "What is the nature of the explanations empirical science can provide? What understanding of empirical phenomena do they convey?" (p. 333). The rest of the enterprise follows from these questions. What is wanted of an explanation? What kind of question is it an answer to? Hempel distinguishes between "explanation-seeking" and "reason-seeking" questions, the function of the former being to render empirical statements intelligible, that of the latter to render them credible (p. 488). He is mainly preoccupied with intelligibility.

A cursory inspection of what scientists actually say shows that, at least for the nonprofessional (a category that, for any particular science, includes most scientists, since any one of them only professes his or her own speciality), the intelligibility of science is not to be found on its surface. In order to render it intelligible a program of "formal reconstruction of the language of empirical science" (p. 131) is embarked upon. By this something different from the formalization found useful in the practice of science is intended—in that respect the language is quite formal enough already. The program of formal reconstruction seeks to identify categories of scientific statement—those which describe particular empirical facts, those which express constant relations between such facts, those whose postulation helps to account for such relations, etc.—and to fit them into a logically coherent scheme which is to be the formal paradigm of a science. Statements in the various categories will have their own characteristics and their own special links to statements in other categories. Each developed science will be seen to have some statements whose form qualifies them for membership in each category; these actual statements will jointly exemplify the logical relations recovered by the formal reconstruction, and their role in the science in question will thus be clarified. I say "recovered" because as science is practiced the relations between its statements are often concealed by formulations developed historically for purposes of efficiency rather than intelligibility. The formally reconstructed science may be no more efficient than the unreconstructed one; in fact it will probably be less so. But, as Hempel puts it in another connection, "the purpose of those who suggest this conception is not, of course, to facilitate the work of the scientist but rather to clarify the import of his formulations" (p. 221).

The simplest reconstructed science would have two categories of statement, namely, reports of observations and expressions of lawlike relations between entities. The nonlogical terms occurring in these statements would be either observational (pretheoretical, in the language of PONS) or theoretical. And there would be two logical relations: a deductive one going from lawlike statements to observation reports and carrying the burden of explanation, and an inductive one going in the other direction and carrying the burden of confirmation. This is the basic model, and while it may be equipped with optional extras for special purposes, it recurs essentially in all logical reconstructions of scientific theory. The reports of observation are clearly of critical importance, since they constitute the empirical basis of the science and the starting-point for its confirmation. The logical positivists were at first rather naive about the status of observation statements; they believed it possible to capture in language the data of an unprejudiced awareness, and thus to place science on a completely veridical foundation. It is now clear that no perception is uncolored by theoretical understanding, and that no pure language of description would be available even if it were. There is thus no possibility of attaching the logical structure directly to the world, as it were; and in his analysis of confirmation Hempel has allowed for this by surrounding the purely logical activities for which he takes responsibility with a penumbral area in which he admits that pragmatic judgments are necessary. Pragmatic considerations enter the picture before logical concerns take over, at the point where the theory confronts experience; and they remain after logic has done its work, at the point where the theory has to be accepted or rejected.
But again that does not mean that the quality of the logical analysis which goes on in between has to be modified by the pragmatic concerns which precede and follow it.
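The skeleton of this two-category model can be caricatured in a few lines of code. The sketch below is my own toy illustration, not any formalism of Hempel's: the representations of laws, conditions, and reports, and the predicates echoing the red-legged-stork example, are all invented for the purpose. It shows the deductive direction (explanation) and the inductive one (confirmation) running between the same two categories of statement:

```python
def explains(law, condition, report):
    """Deductive direction (explanation): the explanans, a law plus a
    particular condition, entails the explanandum (an observation report).
    law = (kind, prop) stands for "all <kind>s are <prop>";
    condition = (individual, kind); report = (individual, prop)."""
    kind, prop = law
    individual, observed_kind = condition
    return observed_kind == kind and report == (individual, prop)

def confirms(reports, law):
    """Inductive direction (confirmation): an individual reported both to
    be of the law's kind and to have the law's property is a positive
    instance of the law."""
    kind, prop = law
    return any((i, kind) in reports and (i, prop) in reports
               for i, _ in reports)

law = ("stork", "red-legged")              # "all storks are red-legged"
reports = {("s1", "stork"), ("s1", "red-legged")}

print(explains(law, ("s1", "stork"), ("s1", "red-legged")))  # True
print(confirms(reports, law))                                # True
```

The asymmetry of the two functions is the point: the deductive relation is strict entailment from law and condition to report, while the inductive relation merely collects positive instances, which is why the latter, not the former, inherits all the troubles of confirmation theory.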

Another respect in which the positivists set out with revolutionary zeal, only to discover as time went on that their goal was unattainable, concerned the cognitive significance of the terms and statements of science. They wished at first to define all theoretical terms by means of observation terms, a requirement which soon had to be modified, in the light of Carnap's work on dispositions, to accommodate reductions as well. Reduction sentences define theoretical terms only partially, but they preserve a functional separation of such terms from observation ones. The trouble is that the use of theoretical terms often depends intimately on their association with observation ones, since "as a rule, the presence of a hypothetical entity H . . . will have observable symptoms only if certain observational conditions, O1, are satisfied" (p. 208). Hempel therefore abandons the functional separation of observation and theoretical terms, and takes the unit of significance to be a system of statements in which both types occur. Such a system he calls an "interpretative system." Its significance still depends on the prior significance of the terms of a roughly observational vocabulary, but Hempel no longer insists that these terms should refer to perceptual contents directly—he will even accept disposition terms into the basic vocabulary if they are "well understood in the sense that they are used with a high degree of agreement by competent observers" (p. 209). Although the interpretative system contains observation terms and theoretical terms together, however, it is not suggested that the distinction between them should be given up; this point will be dealt with at greater length in the next section.

It is not one of the functions of this essay to exhibit the program of formal reconstruction in detail for any particular science, which in any but the most trivial case would be a long and technically intricate task. That would be best done, in any case, by someone professionally concerned with the science in question, or perhaps by collaboration between such a person and a professional philosopher of science. Such collaboration is all too infrequent. The point I wish to reiterate at the end of this sketchy presentation of the formalist program is that the intention of the program is not to help science to be done, it is to help it to be understood; it is not itself a scientific program but a metascientific one. Elements of the metascientific structure do not have to resemble the elements of the scientific structure to which they correspond, any more than people have to resemble their addresses or social security numbers, so that some of the complexities of the daily business of science are strictly irrelevant to the question of the adequacy or inadequacy of the formal model. In one of the earlier papers Hempel points this out explicitly: "for the sake of theoretical comprehensiveness and systematization," he says, "the abstract model will have to contain certain idealized elements which cannot possibly be determined in detail by a study of how scientists actually work" (p. 44). This does not mean that such a study ought to be neglected by the philosophy of science, only that it belongs to a part of the field which formalists, in their formalist moments, do not happen to be cultivating.



The two respects in which Hempel's account has excited the liveliest controversy have already been alluded to. One is external to the system, and challenges its relevance to the activity of which it purports to be at least a partial philosophical analysis. The other is internal to it, and challenges a distinction on which part of the analysis rests. The latter is the more serious, and I shall deal with it first.

The observational-theoretical distinction as it existed among the early positivists was, as pointed out above, untenable. The challenge to Hempel and his colleagues, in essence, is: Why preserve the distinction at all? Powerful arguments have been marshalled to show that whenever it is insisted upon in concrete cases it can be rendered virtually insignificant (the best presentation of this view is probably that of Achinstein).[15] The position I shall advocate here is that the distinction is important and useful, that recent attacks on it have overlooked one of its principal uses, and that it ought not to be abandoned.

Distinctions of whatever kind are intended to distinguish between classes which for some purpose or other are conveniently kept distinct. It may be, however, that between something that obviously belongs to one class and something that obviously belongs to another there occurs a series of intermediate gradations, and that with respect to something falling roughly between the two a decision about its classification may be so difficult that the distinction cannot be usefully applied to it. What I wish to emphasize is that this failure does not affect the usefulness of the distinction as it applies to the original case. An analogy may make the point clearer. Chelsea and Greenwich Village are contiguous areas of New York City. The Chelsea Hotel is clearly in Chelsea; the Café Figaro is equally clearly in Greenwich Village. If I walk from one to the other I may reach a point at which I am hard put to it to say whether I am in Chelsea or the Village, but that does not mean that there is no longer any point in having two names for the two areas. (The weakness of the analogy is that in the geographical case the question can be resolved by drawing a line on a map, while in the philosophical case the problem is not neatly two-dimensional.) The fact that in a given case a term cannot be called unambiguously observational or theoretical, or that in one case it shows up as observational and in another as theoretical, does not invalidate the distinction; it only shows that it has its limitations.

Of course if the limitations were so severe that none of the purposes for which the distinction was wanted could be served, there would be every reason to discard it. The original purpose for which the positivists wanted it was to show which terms were significant directly, as elements of an observation language, and which were significant indirectly, according to a criterion which showed their dependence on or relevance to elements of the observation language. This purpose is thwarted because there is some confusion about the meaning of "direct" and because even when that seems clear, terms keep shifting about, appearing directly significant in some contexts and indirectly significant in others. But there are other purposes to be served than the establishment of meaning criteria. The present interest of the distinction is one not generally recognized, although Hempel himself adumbrates it in the passage from PONS quoted at the end of the first section of this essay. It has to do with the way in which terms shift about, and especially with their careers before they become infected with theory. In science as a whole there are not many terms left of which this can be said, although in the formation of scientists it is a stage which is often repeated. And every now and then the advancement of science itself turns up a use for a term, either from its own vocabulary or from that of ordinary language, as an observational precursor of some theory which is in the process of establishing itself and which has not as yet infected anything.

It is in the historical development of science that the observational-theoretical distinction shows up most clearly. The classes obviously overlap; but at the leading edge of the historical process the observational is always slightly ahead. No theory was ever constructed in order to account for observations which had not yet been made. At this point the commonplace that every scientific observation is made with the establishment or refutation of some theory in mind will be raised as an objection, but the inconsistency between that and the claim just advanced is only apparent. Now, at the advanced stage in which science finds itself, it is of course unthinkable to spend experimental time and money on any other pursuit than the confirmation or refutation of theories, except as it is spent (and this accounts for most of it) on following out in detail the consequences of a theory which is already taken to be confirmed. This was not always the case. A lot of observation was in hand before the first really testable theory was formulated. (Hempel, in Aspects [p. 139], refers to Northrop's "natural history" stage of inquiry, and speaks of "the shift toward theoretical systematization" as something that takes place in time.) And it is still the case that in pursuit of evidence for already-formulated theories the observer sometimes comes upon an observation which does not obviously connect with them at all, and which has to be written down as anomalous or at least unexpected. It is most commonly in such episodes that new theories have their origin, and with respect to the new theory such observations are at first merely observational. Once the new theory is formulated further observations fall into the old pattern, that is to say the language in which they are reported is colored by the new theory. But in the period between the detection of the anomaly and the emergence of the theory which accounts for it the observational-theoretical distinction is fully operative. And since scientific education in a nontheoretical world is a kind of ontogenetic recapitulation of the phylogenetic advancement of science, the distinction remains familiar even in periods when fundamental discoveries are infrequent.

The observational-theoretical distinction, then, not only plays a part in clarifying, for the individual, the relation between theory and the observation which supports it; it also plays a part in clarifying the historical situation attendant on the emergence of new theories. But it is precisely on this second point that the other principal criticism of the formalist program hinges. Formal reconstruction, it is said, falsifies the character of science, which in its real development does not follow a tidy dialectic of observation and theory but is to be understood only through intricate historical and sociological analysis. Nobody would deny that historical and sociological analyses are of great importance in their own right, but there is a danger of confusion in setting them over against philosophical analyses of the reconstructionist variety. One of the recent historical theses which has drawn most attention to itself is that of Kuhn,[16] according to which the development of science proceeds from paradigm to paradigm by a series of crises and revolutions, each paradigm controlling a period of normal science. This seems to me a highly plausible view, and Kuhn makes an impressive case for it. He suggests, however, that he came to it partly out of dissatisfaction with contemporary philosophy of science, which represented science as being something which, in his own practical experience of it, it clearly was not. The earlier discussion in this essay should make it clear enough how such a misunderstanding of the intentions of the philosophy of science might arise. (It may be added parenthetically that if every misunderstanding led to a contribution of such originality, misunderstanding might well become one of the goals of inquiry.)
There is, as far as I can see, no incompatibility whatever between Kuhn's view of science and Hempel's, since each new paradigm (in Kuhn's sense of the term) might turn out to satisfy Hempel's criteria for a developed science, just as it was suggested that all Feyerabend's alternatives might. Kuhn, in fact, has had some difficulty in characterizing the notion of "paradigm" even to his own satisfaction, which leaves open the possibility that when it shows its true colors the paradigm may turn out to look like an exemplification of Hempel's formally reconstructed science. The great advantage of formalisms is that they adapt themselves readily to new content; the progress of science might therefore be reinterpreted in terms of the distance from the paradigm (in Hempel's sense) at which the science of a given epoch finds itself, crises occurring whenever the distance becomes too great, and revolutions restoring the acceptable form with new observational and theoretical content.

I do not wish to saddle Hempel with the views put forward in this section of the essay; they owe their inspiration to him, but he might wish to disown them. He does not, at least in Aspects, take the historical significance of the formal model as far as I have done, and indeed he recognizes that the model might be used as an agency of conservatism rather than of revolution. My suggestion here has been, not that the revolution derives from the model, but that the model may be used as a touchstone for the conditions under which the revolution might come about. In practice, of course, anomalous observations may not give rise to a new theory at all; they may simply be thrown out because they are at variance with the old one. In the little-known essay "Science and Human Values" (Aspects, pp. 81–96) Hempel suggests that, in the case of a "previously well-substantiated theory," this is entirely proper, although he adds that it "requires considerable caution; otherwise, a theory, once accepted, could be used to reject all adverse evidence that might subsequently be found—a dogmatic procedure entirely irreconcilable with the objectives and the spirit of scientific inquiry" (p. 95). In the light of this and other passages I do not think that the charge leveled against the formalist position by Feyerabend, Kuhn, and others, to the effect that it discounts and even stifles scientific progress, can be made to hold water, although it must be admitted that this is a side of the picture to which Hempel himself has paid very little attention. It has been my purpose to show that this was not culpable neglect; he happened to be paying attention to another, and at least equally important, aspect of the philosophy of science, quite enough in itself to absorb the energies of a single philosopher.


A good deal of the reaction to Hempel's work—and I have tried to give a sampling of that reaction in the foregoing pages—can be summed up by asking whether it was really necessary. If science can be understood in all its concreteness and complexity by attention to the nuances of its language and the convolutions of its history, why bother to provide a simple schema whose intelligibility is bound to be inversely proportional to its truth? On a metascientific level this problem is strangely reminiscent of a problem which occurs at the scientific level, and which Hempel himself has called "the theoretician's dilemma." If a theory is an accurate account of the world of observation, then it is a good theory; but if that world is (as by definition it must be) directly accessible to observation, then the theory is superfluous. If it is not an accurate account of the world of observation, then it is a bad theory, and ought to be discarded. Either it is or it is not an accurate account, and either way, it seems, we have no real use for it.

The puzzle involved here has something in common with another puzzle familiar to students of logic. Deductive logic never yields more truth in its conclusions than was supplied in its premises. The premises for any axiomatic development are the axioms themselves; what then is the point of proving any theorems, if everything they say is already present in the axioms? Curiously enough, Hempel himself makes an analogy between his concept of explanation and metamathematical proof theory (p. 412); the context is a defense of the covering-law model (a name which has come to be attached to the formal reconstruction of science as Hempel has practiced it) against charges by Scriven that it is not applicable to all sorts of everyday situations in which we ordinarily and naively use the term "explanation." The analogy seems to me appropriate on two counts. Formal methods in logic and mathematics are principally of interest to logicians and mathematicians; for God they are useless, because God sees the conclusions in the premises; the average man or woman in the street, who does not understand the premises and does not need the conclusions, finds them useless also. Similarly for scientific theories and their formal reconstruction. God may be presumed to understand the world and science too; the average person takes little interest in either. For those, however, whose intellectual powers are less than God's but whose intellectual curiosity is more active than the average, the middle region between omniscience and ordinary language has its attractions. It is with no intention of disrespect to philosophers who cultivate a sensitivity to the ordinary language of science that I suggest a reexamination of the virtues of formal reconstruction, childishly simple as the enterprise may appear.

Simplicity, after all, is not necessarily achieved simply. "The central theme of this essay," says Hempel, referring to the title essay of Aspects (p. 488), "has been, briefly, that all scientific explanation involves, explicitly or by implication, a subsumption of its subject matter under general regularities; that it seeks to provide a systematic understanding of empirical phenomena by showing that they fit into a nomic nexus." Some people are tempted to say, well, if that is all there is to it, we knew it all along. But the point is we did not know it all along; it seems familiar now only because the work of the last thirty years has made it so. That this view of explanation is now an obvious point of departure for further work in the philosophy of science is in large measure due to Hempel. There is a great deal more to be done, and much of it inevitably will consist in amplifying, correcting, and contradicting pronouncements of Hempel's. But the self-imposed limitations on the scope of his achievement, and the extraordinarily close way in which it has been argued, suggest that it will be an element to be reckoned with for a long time to come.


Science and System:
On the Unity and Diversity of Scientific Theory

Theories are ways of looking at things. A theoros in ancient Greece was "a spectator, an observer, one who travels to see men and things; an ambassador sent by the state to consult an oracle, or to observe the games." But that etymology makes it clear that theories cannot be just casual ways of looking; there is something ceremonial, almost official, about any view of the world which qualifies as a theory. Without pressing the point about oracles (or games) we can recognize in the present organization of science a reluctance to dignify with the title "theory" any mere working hypothesis. It was some such reluctance, perhaps, that led the founders of the Society for General Systems Theory to change its name, soon after its establishment, to the Society for General Systems Research. (This modesty did not last—the Society changed its name again, a few decades later, to the International Society for the Systems Sciences.)

It is nevertheless about general systems theory that I wish to speak, in spite of this tacit admission that it remains as yet largely programmatic. Systems theory is presumably a suitably qualified way of looking at systems. Sustema, again, means simply "that which is put together, a composite whole," so that there is at first nothing particularly illuminating about its etymology. But "system," also, has acquired connotations. In this case, unfortunately, there are two sets of connotations which pull in opposite directions. On the one hand, system has for many people—especially philosophers but often scientists too—represented the highest form of knowledge, a perfect vision of the organization of the world. On the other, system has often been an excuse for
the drawing of premature conclusions and even for the suppression of evidence. The struggle which accompanied the overthrow of the Aristotelian-Thomistic system in the Renaissance has never quite been forgotten by modern science. Yet that system, considered in relation to its proper subject-matter, had been the source of enlightenment and even progress. As Whitehead puts it, "In its prime each system is a triumphant success; in its decay it is an obstructive nuisance."[1] It has been the misfortune of systems, especially philosophical ones, to be remembered for the nuisance rather than the success: the success gratifies the contemporaries of the system, but the nuisance may live on for centuries.

Before going further, a serious ambiguity which has already crept into the discussion must be dealt with. Systems theory is a way of looking at systems, but theories themselves are also systems. They are composite wholes whose parts are propositions, related to one another in complex and dynamic ways. The chief difference between a theory and a physical system is that the parts of a theory are conceptual, and therefore in principle more flexible than the parts of a physical system. Theories are simpler to construct and, in principle, simpler to discard, and this versatility is the secret of their usefulness and their importance. A theory (again in principle) can do anything a physical system can do, more quickly, more efficiently, and with less fear of the consequences. The chief function of theories, therefore, is to anticipate the behavior of physical systems. If in theory the device blows up, in practice it had better not be built that way.

Each theoretical system confronts the physical system of which it is the theory, and this confrontation is not a bad image of the human activity we call science. As a paradigm we may take the classical investigation of Galileo. Here the physical system consists of a ball, an inclined plane, a timing device, and a gravitational field. The input to the system is the release of the ball from the top of the plane, with the timing device in some prearranged state; the outputs are a series of intermediate positions of the ball and a corresponding series of states of the timing device, ending with the ball at the bottom. The theoretical system confronting it consists of an algebraic function, i.e., of a set of relations between numbers. The input to this system is a set of initial conditions, and the output is a set of solutions for the ball at various stages of its descent. Physical systems of one sort and another have, of course, always been with us, and elementary theoretical systems had been developed by earlier thinkers than Galileo. What is novel in this development is that the two systems have the same form—they are isomorphic with one another and therefore behave in the same way
within the limits of the isomorphism. We now take this condition for granted, but Galileo thought it worth making explicit:

It seems desirable to find and explain a definition [of naturally accelerated motion] best fitting natural phenomena. For anyone may invent an arbitrary type of motion and discuss its properties . . . but we have decided to consider the phenomena of bodies falling with an acceleration such as actually occurs in nature and to make this definition of accelerated motion exhibit the essential features of observed accelerated motions. And this, at last, after repeated efforts we trust we have succeeded in doing.[2]

The "repeated efforts" suggest that getting the isomorphism is not a particularly easy task, a point to which I shall have occasion to refer again later on.
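A reader who thinks in programs may find the confrontation of the two systems easier to see in a sketch. Everything below is invented for illustration—the "observations," the effective acceleration, and the tolerance are not Galileo's data—but the shape is his: an algebraic function on one side, a recorded physical system on the other, and a judgment of match within limits.

```python
# A sketch of the two systems Galileo brought into isomorphism.
# All numbers here are invented for illustration.

def theoretical_position(a, t):
    """Theoretical system: the algebraic relation s = (1/2) * a * t**2."""
    return 0.5 * a * t ** 2

# Physical system: (time, position) pairs as a timing device might record
# them for a ball on an inclined plane, with a little measurement noise.
observations = [(1.0, 0.51), (2.0, 1.98), (3.0, 4.53), (4.0, 8.05)]

a = 1.0          # assumed effective acceleration along the plane
tolerance = 0.1  # the "limits of the isomorphism"

def isomorphic(observations, a, tolerance):
    """The match is judged pair by pair, from outside both systems."""
    return all(abs(theoretical_position(a, t) - s) <= tolerance
               for t, s in observations)

print(isomorphic(observations, a, tolerance))  # → True
```

The "repeated efforts" show up here as the tuning of `a` and `tolerance`: with a wrongly chosen acceleration the same comparison fails, and the two systems no longer behave in the same way.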

Now, isomorphism is not something that belongs to the theoretical system and the physical system separately; it appears only when they are taken together, and this has to be done from a vantage-point outside them both. It is not the function of theory to reflect on its adequacy to the world; its function is to be adequate to the world. Questions about the adequacy of scientific theories, like questions about their logical structure, their origins, their usefulness, etc., are metascientific questions; and a number of metascientific disciplines—the philosophy of science, the history of science, the sociology of science—have grown up to deal with them. These disciplines, however, are not simply descriptive; they are theoretical too, or, better, metatheoretical—ways of looking at scientific theory, or ways of looking at ways of looking at things. And this means that they also incorporate systems, metascientific systems, which confront the scientific systems (now including theories and physical systems and the interactions between them) of which they are the respective metatheories. It begins to look as if there are systems everywhere, and as if everything could be regarded as a system. That this is in fact so is one of the allegations most frequently made by the critics of systems theory. For of course if it were so, calling something a system would give no useful information about it; the term by itself would cease to make any distinction between one state of affairs and another.

Fortunately, everything is not a system, and the term is by no means an empty one. At the very least, to say of a number of elements, whether physical or theoretical, that they constitute a system is to deny that they amount merely to a pile of objects or a list of words. Since there are piles of objects and lists of words, that is an important distinction. And since a pile of objects, even when the objects are carefully stacked (and even if they are fastened together) remains a pile of objects
(albeit a sophisticated one), and a list of words, even if arranged according to the rules of grammar and syntax, remains a (similarly sophisticated) list of words, the ideas of dynamic interrelation and/or of coherent functioning are also implicit in the concept of system. Dynamic interrelation entails changes of state; coherent functioning entails inputs and outputs. It is easy to see how these properties may be reflected in physical systems, theoretical systems, and metatheoretical systems, and yet take radically different forms in the three cases. The confusions that often arise are largely due to carelessness in distinguishing levels—to mixing up objects and the descriptions of objects, theories and the conditions of theoretical adequacy.
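The distinction between a pile and a system can be made concrete in a toy sketch; the class and the operation chosen here are invented for illustration, not a definition. A pile is mere co-presence; even the most minimal system couples its elements through a shared state, so that inputs change that state and yield outputs.

```python
# A pile of objects: the elements are merely co-present.
pile = [3, 1, 4, 1, 5]   # stacking or fastening them changes nothing essential

# A minimal system built from the same elements.
class Accumulator:
    """Coherent functioning: inputs change the state and produce outputs."""
    def __init__(self, elements):
        self.state = sum(elements)   # the parts now interact via a shared state

    def step(self, input_value):
        self.state += input_value    # dynamic interrelation: a change of state
        return self.state            # output

acc = Accumulator(pile)
out = acc.step(10)
print(out)  # → 24: the history of inputs is written into the state
```

The list can be permuted without loss; the `Accumulator` cannot be decomposed back into its inputs once it has run, which is one crude way of saying that a system is more than its parts listed.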

This kind of confusion is especially likely to occur in discussion between people whose usual preoccupations are on different levels. For the question can be asked (on a metatheoretical level) whether the various theoretical systems, which have been devised to cope with the great variety of physical systems to be found in the world, themselves constitute a higher-order system (a general system, perhaps)—whether the sciences taken together are like a list of words or like a theory, whether there can be not merely a philosophy of science but also a science of science. From the beginning the attraction of general systems theory was that it seemed to offer a new basis for the unification of science. My question then is, how are the sciences one, and how are they many? Can there be a general theoretical system, or only a general metatheory of particular theoretical systems, i.e., a general metatheoretical system? If there is a general theoretical system, what are its elements and the principles of its articulation?

It must be admitted from the start that in the development of science every successful step has so far consisted in the establishment of a particular theoretical system applicable to a particular physical system, as in the Galilean case discussed earlier. Every problem presents itself in a particular connection, at a particular juncture of space and time; every law is bounded in its reference and the conditions of its application; every explanation is relevant to some restricted (even if infinite) set of possible observations. The systems in question may have been more or less ramified, but even in its most ramified form no single system has yet extended so far as to cover even one of the conventional fields of science (although some have crossed the boundaries of these conventional fields). The conventional divisions among the sciences have been established on the grounds of similarity among observations and explanatory concepts which suggested the possibility of unification into a single deductively organized system, ideally in axiomatic form; but the relations between the parts of physics, for example, are much looser than this ideal of system would indicate. Physics, in fact, remains
in many ways more like a list of words (each "word" representing, it is true, a substantial bit of theory) than like a unified theoretical system; and physics has done better than any of the other sciences. I wish to stress this point because many people seem to think that the unity of science has to contend only with divisions between the sciences, overlooking the equally serious divisions within them.

We might nevertheless admit that unity is a realizable goal, at least in principle, for the particular sciences. The question is whether this is also true of science in general, or, if it is not true, whether nonetheless an effort towards it may not yield benefits to the scientific community. (It is not a condition of human ambition that its realization should be possible.) The answer to the question depends in part on what is understood by the unity of science, and on what good it is thought unification might do. On these points there has been widespread disagreement. If we are not to limit our collective attention to particular kinds of system, whether in engineering or biology, mathematics or philosophy, then an attempt to resolve such disagreements will be of some service. I therefore wish to discuss a number of different interpretations that have been placed on the program of the unification of science, and to give my opinion, for what it is worth, of their relevance to the interests of general systems theorists.

The three most familiar conceptions of the unity of science are, in the order in which I shall deal with them,

1. unity as reduction to a common basis

2. unity as synthesis into a total system

3. unity as the construction of an encyclopedia.


If we could devise a single descriptive language in which all the terms of all the sciences could be defined or to which they could be reduced, and if we could discover a single set of laws from which all the laws of all the sciences could be derived or to which they could be reduced, then we would have a single science. This version of the program of unification occurs in different forms, depending on the vocabulary of the basic language. The logical positivists wished to found science on a sense datum language or, when that failed, on a physical thing language; the basic predicates, in other words, were to be descriptive of macroscopic states of the world, such as measuring-rods, meters, clocks, etc. More recent writers have chosen instead predicates descriptive of the most elementary physical units to be found in the world, namely the predicates of elementary particle theory. It is in the latter case that we speak of the "reduction of all sciences to physics." In the former, all sciences (including physics) are reduced to a common observation basis, although since this observation basis is physicalistic and the name for the view that this kind of reduction is possible is physicalism,[3] the two are easily confused. Once the language is settled, the reduction of laws follows the same pattern in both cases. The conditions of reduction are clear: the regularities described by the terms and explained by the laws which are reduced must be describable by the terms and explainable by the laws to which they are reduced, the former terms and laws thereby being eliminated from the description and explanation. It may also be the case that the science which is reduced normally deals with objects whose parts are normally dealt with by the science to which it is reduced, although this is not essential; in this connection the term "microreduction" has been suggested.

Neither of these versions of reductionism can be said to have succeeded, but it would be rash to say that they never could succeed. Oppenheim and Putnam, on the basis of the assumption of microreduction down to particle physics, are prepared to speak of the unity of science as a "working hypothesis."[4] Since all such programs of reduction involve one great inconvenience, namely, that the laws of the reduced science look very complicated when they are expressed in the language of the fundamental science (imagine trying to express the laws of economics in the language of quantum mechanics), Oppenheim and Putnam list six reductive levels, as follows:

6 Social groups

5 (Multicellular) living things

4 Cells

3 Molecules

2 Atoms

1 Elementary particles

and content themselves with reduction between adjacent levels. Reduction is obviously transitive, but it would be silly to try to demonstrate that in particular cases. This is unity in principle rather than in practice, and it is highly plausible, as so many things are in principle. Reduction to a physical thing language is even more plausible for a very obvious reason, namely, that all the data on which we base our conclusions, in sciences as diverse as cosmology and microbiology, have to be rendered in macroscopic form (spectral lines, chromatograms) before we
can take them in; our biological community forces a kind of epistemological unity on our knowledge. The thesis of physicalism is stronger than this, of course, since it will not accept just any macroscopic observation as part of the basis (e.g., "it's alive," "she's angry"), but Carnap at least considered it to be established in its essentials: while he says that "there is at present no unity of laws," he continues,

On the other hand, there is a unity of language in science, viz., a common reduction basis for the terms of all branches of science, this basis consisting of a very narrow and homogeneous class of terms of the physical thing-language.[5]

The possibility of reduction is philosophically interesting, but the trouble with it as a working basis for the unity of science is that nobody really wants to do it. There are cases, of course, in which reduction really works and everybody is grateful for it, as when the theory of certain diseases, which had formerly been studied only at the gross level of symptoms, was reduced to a theory about microorganisms and cells. Yet there is a limit to the complexity that one discipline can handle, and physicists do not at the moment want to burden themselves with organic chemistry, let alone political science. The world may be a single very complex physical system, but that does not mean that a single very complex theoretical system is the best way of representing it.
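The Oppenheim–Putnam picture of reduction between adjacent levels, and its transitivity, can be sketched abstractly. Everything in the sketch is a toy: the levels are merely labels (following their list), and the "reductions" are invented translation functions, composed to show that adjacent-level reductions chain into a reduction across many levels—and that the result grows more cumbersome at every step.

```python
# Toy sketch: reduction as a translation between adjacent levels.
# The levels follow Oppenheim and Putnam; the translations are invented.

levels = ["elementary particles", "atoms", "molecules",
          "cells", "multicellular living things", "social groups"]

# One invented "reduction" per adjacent pair: a statement at level n+1
# is re-expressed as a (longer) statement at level n.
def make_reduction(lower, upper):
    def reduce_statement(statement):
        return f"[{upper}->{lower}] {statement}"
    return reduce_statement

adjacent = [make_reduction(levels[i], levels[i + 1])
            for i in range(len(levels) - 1)]

def reduce_to_bottom(statement):
    """Transitivity in practice: compose the adjacent reductions."""
    for step in reversed(adjacent):
        statement = step(statement)
    return statement

s = reduce_to_bottom("the crowd disperses")
print(s)  # five translation tags now precede the original statement
```

The point of the toy is only the one made in the text: each composition is trivial in principle, but the fully reduced statement is longer and less usable than the one it replaces, which is why this is unity in principle rather than in practice.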


The reduction of one science to another makes no assumptions about a similarity of formal structure on the two levels in question. It is only necessary that the terms and laws on the higher level should be eliminable in favor of terms and laws on the lower level. I wish now to move on to a stronger conception of the unity of science in which such a formal similarity appears either instead of or in addition to the thesis of reduction. There is said to exist a model or pattern of scientific theory, of which each particular theory is an instantiation, so that higher level sciences recapitulate, although with more complex elements, the structure of lower level sciences. This claim is clearly independent of the reductionist claim, and many of its proponents reject the latter on the grounds that genuine novelty emerges at the different levels, although the basic pattern is reproduced on each.

Although reduction is not a necessary part of this synthetic view, the sciences are nearly always arranged in a hierarchy very similar to
the reductionist one given above; a part-whole relationship of some sort is held to obtain between levels, although the wholes may not be explainable without residue in terms of their parts. The sciences thus form a totality, the unity of which is provided by the archetypal structure that reappears at each stage, like a similar arrangement of rooms on the different floors of a tall house. One of the most thorough workings out of this idea is to be found in the system of Synoptic Naturalism developed by George Perrigo Conger. The "Argument" of this work puts the view under discussion so clearly that it is worth quoting.

The universe, studied under the limitations which beset human thinking, presents itself as a vast system of systems which are strikingly similar in the general principles of their structures and processes. Among the major systems, or realms, are those commonly referred to as Matter, Life, Mind. Each of these realms develops through a series of levels, and studied in empirical detail, from level to level and from realm to realm, the structures and processes of matter, or the physical world, are seen to resemble those of life, or the organisms, and both the physical world and the organisms resemble a nervous system functioning as a mind. The resemblances of structures and processes throughout the levels and realms indicate that the universe is not merely a series of evolutions, but also of "epitomizations" and "cumulative coordinations." Further study indicates that prior to the physical world there are three other major systems, which, because of the detailed resemblances of their structures and processes to those just considered, are also identifiable as realms. These are the realms of Logic, Number, Space-Time or Chronogeometry. With these included, the universe is said to develop in successive epitomizations and coordinations, from structures and processes which are logical to structures and processes which are personal and societal. A study of man's adjustments of his structures and processes to those of the surrounding universe which he epitomizes and coordinates provides some applications to problems of ethics and opens some new naturalistic resources for philosophies of religion.[6]

The remarkable and in a way sad thing about this work is that no publisher would publish it (it was printed by the University of Minnesota Library) and hardly anybody has read it. The more basic reasons for this neglect are to be dealt with later; for the moment, however, I must say that, in spite of the enormous amount of painstaking work put into this project by Conger, and into similar projects by many other similarly dedicated workers, the evidence for such a periodic hypothesis seems to me extremely slim. It is true—and this is one of the phenomena which has been of greatest importance in the development of general systems theory—that similarities of structure are found between parts of scientific theory widely separated from one another in
the system of levels. These cases of isomorphism between theories, each isomorphic with a very different physical system, have led some people to the expectation that if only we look hard enough we must find them everywhere. ["The human understanding," says Francis Bacon, "is of its own nature prone to suppose the existence of more order and regularity in the world than it finds. And though there be many things in nature which are singular and unmatched, yet it devises for them parallels and conjugates and relatives which do not exist."[7] ] That there should be cases of such isomorphism between theories, which makes it possible to use one theory as a model for another and suggests lines of investigation which might otherwise be overlooked, seems to me a natural consequence of the fact that the limited number of degrees of freedom in the physical world restricts the number of possibilities of structure; that the number of possibilities is still further reduced if to the structure in question is added a function (such as growth, homeostasis, replication, etc.), since this has the effect of eliminating all but a few of the degrees of freedom available to the structure in its nonfunctional state; and that the natural limitations of our intellect keep down the number of types of theoretical systems we are capable of constructing. But it is one thing not to be surprised by exceptions, another to mistake them for the rule.

There is, of course, a sense in which all possible sciences do conform to a rather narrowly defined set of rules, namely those of logic and its various associated disciplines (set theory, mathematics in general). In a finite world containing a finite number of kinds of things reacting with one another in a finite number of ways, there is a finite number of possibilities, and all of these are anticipated in principle by some branch or other of mathematics. The number of branches of mathematics which have been worked out to any degree of complexity is severely limited, although for the reasons given above it is not entirely surprising that already some branches should turn out to be applicable to more than one set of empirical conditions. To be logical, however, only means not to be inconsistent; it does not mean to conform to any particular pattern in detail, and in general we find the empirical truths of particular sciences filling out the skeleton of logic and mathematics in quite different ways and at quite different points.


The thesis of reduction assumes that there is a basic science in terms of which the truths of all the others can be expressed; the synthetic view outlined above assumes that there is a superscience in whose
image all the particular sciences are made. Neither view can be entirely mistaken. Partial reductions between adjacent levels have been carried out with great success; and isomorphisms do exist between elements of theories on different levels, so that their logical form might plausibly be thought of as an element of a super theory. But both are clearly programmatic rather than demonstrable, and each encounters serious philosophical difficulties: reduction faces the problem of emergence, the hierarchical synthesis faces the danger of Platonism. (Each of the isomorphisms between theories must be tested empirically on both sides before it can be accepted, and by the time this is done its predictive—as apart from its suggestive—power has already been rendered superfluous.)

A weaker position, which reflects elements of both these strong ones, can be found in the works of people like Spencer in the nineteenth century and Otto Neurath in the twentieth. Here the unity of science is a social rather than a logical matter; science, as a cooperative enterprise, involves investigations of different parts of the world by different people, whose findings can then be assembled in a classification without hierarchy, a system of cooperation without precedence. No one will deny that chemistry deals with aggregates whose parts are dealt with by physics, but this does not mean that physics has to be done first, nor that chemistry may not throw some light on the problems of physics. "The division of labor in science," says Herbert Spencer,

has been not only a specialization of functions, but a continual helping of each division by all the others, and of all by each. Every particular class of inquirers has, as it were, secreted its own particular order of truths from the general mass of material which observation accumulates; and all other classes of inquirers have made use of these truths as fast as they were elaborated, with the effect of enabling them the better to elaborate each its own order of truths. From our present point of view, then, it becomes obvious that the conception of a serial arrangement of the sciences is a vicious one. It is not simply that the schemes we have examined [the schemes of Oken, Hegel, and Comte] are untenable; but it is that the sciences cannot be rightly placed in any linear order whatever. . . . Any grouping of the sciences in a succession gives a radically erroneous idea of their genesis and their dependencies. There is no one "rational" order among a host of possible systems. There is no "true filiation" of the sciences. The whole hypothesis is fundamentally false.[8]

The critical insight in this passage is contained, I think, in Spencer's reference to the secretion of particular orders of truths from "the general mass of material which observation accumulates." Observation can begin anywhere within reach of our senses (or within reach of our
instruments), but of all the regularities we actually take note of, some on this level and some on that, only a few will serve as the starting point for interesting scientific developments. Neurath, for his part, stresses the unrealistic character of other programs for the unification of science, most of them (like that of Leibniz, for example) having been based on questionable a priori assumptions. "The historical tendency of the unity of science movement," he says,

is toward a unified science, departmentalized into special sciences, and not toward a speculative juxtaposition of an autonomous philosophy and a group of scientific disciplines. If one rejects the idea of such a super science and also the idea of a pseudorationalistic anticipation of the system of science, what is the maximum of scientific coordination which remains possible? The answer given by the unity of science movement is: an encyclopedia of unified science. . . . One cannot compare the historically given with "the real science." . . . An encyclopedia and not a system is the genuine model of science as a whole. An encyclopedic integration of scientific statements, with all the discrepancies and difficulties which appear, is the maximum of integration which we can achieve. . . . It is against the principle of encyclopedism to imagine that one "could" eliminate all such difficulties. To believe this is to entertain a variation of Laplace's famous demon. . . . Such is the idea of the system in contrast to the idea of an encyclopedia; the anticipated completeness of the system as opposed to the stressed incompleteness of an encyclopedia.[9]

Now, this talk of encyclopedias will seem like pretty pale stuff to diehard unitarians, and it may appear that by now I have forgotten all about general systems theory. But my underlying question is, what is the relevance of general systems theory to the unity of science (and vice versa)? So far I have simply tried to show what some people have understood by the unity of science. I want now to ask a supplementary question, namely, what is the point of having a unified science? After dealing with that, I shall return to general systems theory more specifically.

Most efforts at the unification of science, I think, have been undertaken from either of two motives, one perfectly sound, the other involving serious dangers. Let me deal with the dangerous one first. It is a common complaint that human life and human knowledge are regrettably unsystematic and fragmented, and there appears to be a very powerful psychological desire to get it all tied together into some coherent whole. Science seemed for a long time the ideal agent of such a total integration. A passage attributed somewhere to Hugo Munsterberg sums up this feeling admirably: "Our time longs for a new synthesis—it
waits for science to satisfy our higher needs for a view of the world that shall give unity to our scattered experience." On an elementary level this desire for wholeness shows up in Gestalt phenomena of "closure," in which the mind moves from an almost complete representation to a complete one, or even in more advanced stages from a mere indication to a complete representation. This turns out to be very useful in perception, although even there it has its risks. In a search for the unity of science it is, I believe, pernicious if taken by itself; if, that is, all that is desired is a kind of mental closure, a tidying-up of the scientific conception of the world. The presentation of separate though apparently related elements is no argument for the independent existence of a whole of which they are the parts, although that is what the need for closure seems to drive many people to when it comes to the contemplation of scientific theories. (It must be understood again that I am not saying that the world is not a unified system; I am saying that we have no grounds as yet for claiming that theory is.) According to R. G. Collingwood, scientists simply cannot help trying to unify science, although he thinks that their attempts are doomed to failure:

Science always believes that it has just discovered the ultimate truth and that all past ages have been sunk in a fog of ignorance and superstition. It has no sense of its solidarity with and debt to its own past and other forms of consciousness. And further, it is as impossible to classify sciences and reduce them to a single ordered cosmos of thought as it is to do the same with works of art.

The attempt has been made over and over again to reduce all the sciences to such an ordered whole. It seems obvious that there must be a table or hierarchy of sciences in which each has its proper place and so there would, if science were the rational activity it believes itself to be. If there really were a Platonic world of pure concepts, in which every concept was dovetailed into the rest, each having a science to expound its nature, then there would be a corresponding world of sciences. But, as Plato himself saw to his dismay, there are concepts of mud and filth and anything one likes to name, and these can never fit into a place in the world of absolute being. The concepts of science are abstract and therefore arbitrary, and because anyone may make any abstraction he likes, there cannot possibly be a system or world of science.[10]

Collingwood repeats the point taken above from Herbert Spencer, and it is on this point, I think, that the principal criticism of inclusive attempts at unification hinges.

Every theoretical system, as was remarked above, applies to some particular physical system or set of systems. These physical systems in turn are chosen from a large number of physical systems encountered every day by the observer. Some extrascientific impulse leads to the
choice of one of these for study rather than another, and the present diversity of science is due to the fortunate circumstance that different people have taken an interest in different bits of the world. No theory can get started without some such selection—not merely a selection of a suitable object of inquiry, but also a rigorous selection of the kinds of observation made (e.g., by the insistence on standard conditions, repeatability, a sufficient number of instances, etc.). Science does not explain the world, it explains only very highly selected aspects of certain small parts of it; and for every theory it is just as important that it should leave the rest of the world out of account as that it should take into account the particular aspects of the particular part with which it is immediately concerned.

Theories have often been compared to maps, which exhibit a similar selectivity. Railway maps, for example, show only railways, road maps only roads, relief maps only heights above sea level, etc. Each map is useful primarily for the specialized information it conveys. Each map contains also, it is true, hints about the country in question not directly related to its primary function. From curves in the track, and the positions of termini, bridges, and tunnels, it is possible to get a good deal of topographical information out of a railway map, although it is not the business of railways to describe the territory they traverse. They have, however, to conform to it more or less narrowly according to the available resources of civil engineering, and that necessity builds information about it into them. Similarly, airline maps give a good deal of information about population density, although here there is a danger that a refueling stop in the middle of an ocean may look like a large city.

Yet, even if all the specialized maps are superimposed, there remain indefinitely many truths about the country which can be found on none of them. Some of these truths, if interesting enough, may justify the creation of a new map, for example, a map showing the frequency of fatal accidents at grade crossings. But a map which showed everything would have to be as complicated as the country itself. One of the conditions for the usefulness of a theoretical system is that it should in certain critical ways be simpler than the physical system which it represents. The ideal of unified science as the explanation of everything would be like the map which showed everything (including incidentally a much reduced version of itself [on which would appear a much reduced version of itself], and so on); and this would miss the whole point of its being a map.

It is due, I think, to a supposition that the function of theory is to explain the world that a conviction of unity in the world (a very plausible bit of metaphysics) has seemed to many scientists to call for a parallel unity in theory; but just as the point of a map is that it should be in some ways different from the country it represents, so the point of a theoretical system is that it should be different from the physical system it represents. And just as maps record only a very few of the features of the landscape, even when they are all taken together, so theoretical systems, even when they are all taken together, represent only a very few of the physical systems to be found in the world.

It may be argued that one of the weak points of the map analogy is the fact that while there are only very arbitrary relationships between roads, railways, population density, etc., in the sense that roads are not parts of railways nor railways parts of roads, the traditional divisions of the sciences do have a logical relationship, based on the part-whole relationship referred to earlier. I do not wish to deny this, although it seems to me just another example of borrowing the unity of science from the unity of the world. The physical world, as nearly as we can tell, is (in the words of Herbert Simon) a "nearly-decomposable hierarchic system,"[11] characterized by a series of levels each linked by a part-whole relation, but so arranged that the forces which bind the parts together are always stronger than the forces which bind the wholes, of which they are parts, together as parts of some greater whole. This arrangement clearly involves no presumption that the forces on one level will be in any way like the forces on another—indeed quite the contrary; but it does suggest a kind of natural classification of phenomena and the usual corresponding subdivision of the sciences. The lack of any presumptive similarity of forces on different levels also means the lack of presumptive similarity of the sciences dealing with those levels, which can be more or less complicated quite independently of one another. Nor does this arrangement assume that there can be only one science on any level, or that there can be no sciences which straddle levels. 
If we remember the simple definition of science as the confrontation of some theoretical system with some physical system to which it is adequate, we can see that any physical system (e.g., a solar system, a subway system, a hospital, a herd of elephants, an elephant, a symbiotic plant system, an ant-hill, a luminescent bacterium, a fatty acid molecule, a uranium atom) invites the construction of a science, and any regularity in its behavior will serve as a ground for generalization. All that is required is that the nature of the elements and of their interrelationships should be clear, and that the setting of the system in relation to other systems which affect it should be taken account of.

The latter point is important. The traditional concept of scientific investigation—and the case of Galileo conforms to this—involved the examination of some phenomenon in isolation. It is of the essence of the systems approach to realize that we not only can but must study some phenomena in vivo, as it were, while all sorts of boundary effects (both inputs and outputs) are actually in operation. The sciences thus generated may turn out to cluster, and they may turn out to have surprising isomorphisms, but these are to be discovered only after the fact. (The traditional sciences grew by such clustering; Newton, when he united Galileo's mechanics with Kepler's astronomy, was acting like a good general systems theorist.) Also the structure of the theoretical system in question has nothing whatever to do with the place of the physical system with which it deals in the order of nature (we do not have to write the equations of cosmology in very large letters and the equations of nuclear physics in very small ones), and the relations between the sciences have nothing to do with the physical relations between the things with which they deal. It is true that physical objects are parts of biological objects, but it does not follow from this that descriptions of physical objects are parts of descriptions of biological objects, or that laws governing the behavior of physical objects are parts of laws governing the behavior of biological objects. There is a relation between physics and biology, but it goes through the world, not through some superior science uniting physics and biology.

The trouble with exercises in panscientific unification is that there is very little real use for them, and therefore very little real need except of a psychological sort. Every now and then a really useful bit of general systematization has been stimulated by some more or less prosaic problem—the arrangement of books in libraries, for example, or the efficient organization of the educational process. Even here, however, the tendency is to become Procrustean whenever some science proves intractable with respect to the rest of the system, or to introduce ad hoc complications which spoil the virtues, such as they are, of this kind of integration. As a general rule, the fantastic quantities of energy expended on one Utopian speculation after another suggest a motivation quite different from that of the research scientist who tackles one difficult piece of research at a time. And this motivation—the insistence on completeness in the speculative system for its own sake, apart from an empirical demonstration of it, simply because incompleteness is felt as unsatisfactory—seems to me to constitute a danger of which general systems theorists need to be especially aware.

To come back, in conclusion, to the main point of this excursion: recall that, as noted earlier, there are two senses in which the unity of science can be understood from the motivational point of view, one dangerous and the other beneficial. Let me now turn to the beneficial one.

The most useful conception of the unity of science seems to me to lie somewhere in the middle of the triangle defined by the reductive, synthetic, and encyclopedic conceptions outlined above. Where reduction can be done usefully, it should be done; where isomorphisms can be found, they should be found; and where disciplinary barriers to communication can be broken down, they should be broken down. What I have been chiefly criticizing here is an a priori approach to this problem, the assumption that there must be isomorphisms, the assumption that every science must fit into some rational order of the sciences. What I should wish to substitute for this is an empirical approach—not the claim that isomorphisms are necessary, but the recognition that they are possible, and the resolve to search for them wherever they occur. If a direct bridge is thus built between physics and biology, or between crystal growth and population movement, it is not because there had to be a bridge but because there happens to be one which somebody had the sense to exploit. It might indeed turn out after a sufficient period of time that there emerged a more systematic unity of science in one of the senses discussed previously, although I doubt it. In any case, if that happened, we would have arrived at our unity honestly and not by a rationalistic and Utopian shortcut. Kenneth Boulding has put this point elegantly in an article in the very first issue of General Systems (1956):

General systems theory is a name which has come into use to describe a level of theoretical model building which lies somewhere between the highly generalized constructions of pure mathematics and the specific theories of the specialized disciplines. The objectives of general systems theory can be set out with varying degrees of ambition and confidence. At a low level of ambition but with a high degree of confidence it aims to point out similarities in the theoretical constructions of different disciplines, where these exist, and to develop theoretical models having application to at least two different fields of study. At a higher level of ambition, but with perhaps a lower degree of confidence it hopes to develop something like a "spectrum" of theories—a system of systems which may perform the function of a "gestalt" in the theoretical construction. Such "gestalts" in special fields have been of great value in directing research towards the gaps which they reveal.[12]

The chief point here again is that this spectrum of theories is something to be developed, not assumed—to be worked out from cases, not from principles. The difference in approach reflects the difference in interpretation of the term "general." For the cosmic systematizer, nothing is general unless it subsumes everything; for the humbler systems theorist, anything is general, in a way, if it subsumes more than one thing.

The function of general systems theory, then, is like the function of any other theory, namely to construct a theoretical system adequate to its own subject matter—in this case the generalities (in the weaker of the two senses just mentioned) that are found as a matter of fact, not of principle, to unite different scientific theories. General systems theory is in fact a metatheory, to use the language of the earlier part of this paper, although it is in the curious and fortunate situation of being convertible into a straightforward theory for certain physical systems, mostly falling in the interstices of the conventional disciplines. The isomorphism with the structure of theories (and hence with some elements of the structure of the world) which general systems theory seeks to establish is no easier to arrive at than Galileo's results were, but great progress is being made with certain systems of a middle level of complexity. The need for such a theory, and for a society such as this one, is greater now than it has ever been. Spencer, at another place in the work quoted before, says that the sciences affect each other not only directly but indirectly; "where there is no dependence, there is yet analogy—equality of relation; and the discovery of relations subsisting among one set of phenomena constantly suggests a search for the same relations among another set."[13] We might say now that the discovery of the former relations would constantly suggest a search for the latter if only anybody realized that they had been discovered. The chances are, unfortunately, that nobody will realize that they have been discovered, or that they constitute a useful isomorphism with another field, unless he or she is precisely on the lookout for new relations and their possible isomorphisms. What a commitment to general systems stands for, if nothing else, is a common resolve not to lose a certain interdisciplinary interest, an openness to developments in other fields and to their possible relevance for our own.
At the rate of proliferation which science has now achieved, this posture is an increasingly difficult one to maintain, and we need all the mutual reinforcement we can get.

It would nevertheless be a mistake to regret the present state of activity in science, even though it leads to professional meetings as gigantic as the one we are now attending [AAAS annual meeting, 1966]. The constant development of science must, it is true, be a source of considerable irritation to the cosmic systematizer, who must often be tempted to feel that if only everything would stop for a little while the system could be finished. It might indeed be true that if we could halt progress we could get the definitive system; but I am afraid that the corollary of that proposition is also true, namely, that if we got the definitive system we should find we had halted progress. It is easy for us to forget it, but the world is always more complex than our best theory about it. And we should not forget, either, the extremely parochial nature of our own scientific enterprise, and how its conclusions are largely determined by the fact that we happen to be a particular size, and to have particular sensory capacities, and to be located at a certain epoch and on a certain planet. By this I do not mean to belittle the achievements of science; quite the contrary. That we have any theoretical systems adequate to any physical ones seems to me one of the great triumphs of the human intellect. I mean merely to emphasize that we have stumbled into this world under a set of fairly tight constraints that make it highly implausible that we should have arrived at any very great measure of truth about the whole. Some of what we know we know pretty well, perhaps as well as it can be known; but we do not know much.

As the little we know proliferates—and in relative terms it is doing so very rapidly—science continually struggles to bring various bits of it into comparatively simple and therefore tractable form. Every simplification is a distortion, but simplification is a condition of the usefulness of scientific theory, and without it we could not keep any sort of grip on the complexity of the world. The advancement of science is in fact a continuous dialectical interplay between complexity and simplicity: complexity in the world as we continually probe more deeply into it, simplicity in our theories as we achieve new formulations and modest unifications. This dynamic process was described very beautifully by Poincaré in an address to the International Congress of Physics in Paris in 1900:

In the history of the development of physics, two contrary tendencies may be distinguished. On the one hand, new connections are constantly being discovered between objects which seemed destined to perpetual separation; scattered facts cease to be irrelevant to one another, and tend to order themselves into an impressive synthesis. Science moves towards unity and simplicity. On the other hand, observation uncovers new phenomena every day; they have to work for their place [in science], and sometimes it is necessary, in order to find a place for them, to destroy a corner of the edifice. In well-known phenomena, demonstrably uniform at a gross level of observation, we perceive gradually variations of detail; what we had thought simple becomes complex again, and science seems to move towards variety and complication.[14]

The problem is to keep a balance—not to get lost in the complications, but not to fly off to a spurious unity. The original stimulus for the founding of the Society for General Systems Research was a realization that complications will engulf us if we do not take defensive action, and the form that the defensive action took in this case was a search for systematic unities. I might sum up my remarks by insisting on this plural. There are many ways, not a single way, in which science is unified, and at the present state of our knowledge there are inescapable ways in which it is diverse. The working conception of science remains the confrontation of a particular theoretical system with a particular physical one, and any systematization which does not rest in the end upon a recognition of this fact is in danger of losing any empirical relevance whatever. On what level of abstraction one chooses to work, and with what generality of empirical reference, are matters to be determined by the problem in hand and the inclination of the worker. The dialogue between the theoretical and the practical, between the general and the particular, between the complex and the simple, between unity and diversity, requires the different contributions provided by such different perspectives. I take it that it is one of the functions of general systems theory to monitor this dialogue and keep it alive.


Gosse's Omphalos Theory and the Eccentricity of Belief

Eccentricity has always abounded when and where strength of character has abounded; and the amount of eccentricity in a society has generally been proportional to the amount of genius, mental vigour, and moral courage it contained.
John Stuart Mill, On Liberty


Eccentricity took many forms in Victorian England, but in keeping with the atmosphere of the times there were two especially noticeable varieties. There were religious eccentrics, like John Nelson Darby, a passionate nonconformist who solved the ancient problem as to the nature of the sin against the Holy Ghost by identifying it with the taking of Holy Orders; and there were scientific eccentrics, like Andrew Crosse, who in the course of electrical experiments at his country estate created a new species of mite (Acarus crossii) and brought down on himself a torrent of totally undeserved abuse on the grounds that he was trying to be God. From time to time these tendencies were combined in a single individual, with invariably interesting results. Religion and science have never really been comfortable in each other's presence, and the antics to which people are driven who try to make them so have not ceased yet. Contemporary attempts, however, seem anemic in comparison with the fierce controversies of the nineteenth century. What now is done weakly, even pathetically, was then a matter for "genius, mental vigour, and moral courage"; and while the result might have been to make people look ridiculous, it did not make them look puerile. The subject of this essay seems often comic, sometimes tragic, but always a man of strong character and firm will.

Philip Henry Gosse is best known, if at all, as the overbearing Father in Edmund Gosse's autobiographical sketch Father and Son, although the sympathies of the reader of that book are likely to lie, as they were intended to lie, with the son. The story is the familiar one: a sickly child, brought up under the stern and repressive eye of a Victorian father, eventually throws off the burden and sets out to live his own life. He was, of course, quite right to do so, and I do not wish to suggest otherwise. My purpose is to draw attention to what Edmund Gosse himself calls "the unique and noble figure of the father"[1]—a distinguished naturalist, author of one of the most brilliant failures in the history of scientific theories, and in his own right a more colorful figure than the son as whose father he himself suspected he would one day be known. He was born in 1810, the son of an itinerant miniature painter, and died in 1888 a Fellow of the Royal Society and the author of more than thirty books and of innumerable scientific papers. It is perhaps best to begin with an account of his scientific development.

At first glance there is nothing eccentric in the professional life of Philip Gosse. Brought up in a small seaport town where the principal form of recreation was exploring the shore or the surrounding country, and spending a great part of his early life in comparatively remote and wild places—first Newfoundland, then Canada, and finally Alabama—it was not surprising that his innate powers of keen observation should have led him into a career as a naturalist. In Newfoundland, where he was employed as a clerk in a whaling office at Carbonear, he bought Kanmacher's edition of Adams's Essays on the Microscope, an act which he regarded, in his characteristically self-critical way, as a formal dedication to a life of science. By the time he left Newfoundland for an abortive attempt at farming in Ontario he had already begun an extensive collection of insects which occupied the foreground of his attention; his last memento of Newfoundland was a rare cockroach, and the sole comment in his diary when he first reached Canada was the following: "July 15.—As I this day arrived in Quebec, I procured some lettuce for my caterpillars, which they ate greedily."[2] This single-mindedness in matters of biology remained with him for the rest of his life; the birth of his only child appears in the diary with the entry: "E. delivered of a son. Received green swallow from Jamaica."[3] Of course such things might be interpreted, not unjustly, as indicating a certain stolidity of character, and there is plenty of other evidence to show that Gosse, as a young man, took things very seriously indeed, himself most seriously of all.

The Canadian venture proving a failure, Gosse traveled to Philadelphia (observing en route the rudeness of the natives of Vermont) and there met a number of the leading American naturalists of the period, including members of the remarkable Peale family.[4] From Philadelphia he proceeded, mainly by ship, to Mobile, and thence to King's Landing and Dallas, Alabama, where for nine months he was a schoolmaster. The natives of Alabama were also rude, and they were still extremely anti-English (it was barely sixty years since the Revolution); and although Philip Gosse enjoyed many things about his stay in the South, including the "woffles" which were served for breakfast, the frequent violence, especially towards the Negroes, and the almost tangible moral strain of slavery made him glad to leave and return to England after twelve years in the Americas.[5]

It was not easy to find suitable work in England, and for the first year after his return Gosse lived in something close to penury. He spent some time, however, in working the notes of his Canadian period into a manuscript entitled The Canadian Naturalist , a series of imaginary conversations, somewhat stiff in tone, between a father and son, on the flora and fauna of the region in which he had stayed. At first he met with no success in finding a publisher, but finally, when he was at "the extremity of dejection and disgust," he was sent for by Mr. John Van Voorst of Paternoster Row. Edmund Gosse describes the interview:

The publisher began slowly: "I like your book; I shall be pleased to publish it; I will give you one hundred guineas for it." One hundred guineas! It was Peru and half the Indies! The reaction was so violent that the demure and ministerial looking youth, closely buttoned up in his worn broadcloth, broke down utterly into hysterical sob upon sob, while Mr. Van Voorst, murmuring, "My dear young man! My dear young man!" hastened out to fetch wine and minister to wants which it was beyond the power of pride to conceal any longer.[6]

This was the beginning of a long association between author and publisher. The Canadian Naturalist showed what he could do in a literary direction, and as time went on he learned to do it brilliantly. He could be erudite and familiar at the same time, interspersing careful zoological and botanical observations with amusing anecdotes, providing his own illustrations in line or watercolor, and turning out, over the next thirty-five years, a dozen or more enormously successful books of popular natural history. He acquired a large and faithful public, which enthusiastically bought his books and took them to the seaside, despoiling in the process (much to his chagrin) the shore which was his favorite collecting-ground. Gosse's relation to his readers is perfectly foreshadowed in the relation between the father and the son in The Canadian Naturalist. The father, in the opening chapter of that book, proposes a series of excursions into the neighbouring countryside: "Charles.—Few things would give me greater pleasure. I have often felt the want of a companion in my walks, who, by his superior judgement, information, and experience, might remove my doubts, gratify my curiosity, and direct my attention to those subjects which are instructive as well as amusing; for I anticipate both instruction and amusement from our inquiries, and enter into your proposal with delight."[7] The genteel sections of the Victorian middle classes were equally delighted, and were instructed and amused in the thousands not only by Gosse's books but also by his invention of the aquarium, which brought the seashore into drawing-rooms all over the country.

Scientific work of a more serious nature was not, however, neglected. Gosse crossed the Atlantic once more for a two-year study of the birds of Jamaica, which produced one of the important early works on the ornithology of the West Indies. His inflexible uprightness of character is illustrated by an incident in connection with the publication of a supplement to that work, the Illustrations of the Birds of Jamaica, a rare and exceedingly beautiful set of colored plates each bearing the inscription "P.H.G. del. et lith." These were published by subscription, and in the course of printing it became apparent that the cost of production would exceed the total amount subscribed; but rather than change the price of the work once announced, Gosse absorbed the extra cost out of his own pocket, actually publishing the set at a loss. Subsequent studies, especially of small and microscopic forms of marine life, led to his election to the Royal Society in 1856. Darwin corresponded with him, asking for information in connection with his own painstaking work on variation, and he was honored by being taken into the confidence of the biological revolutionaries of the 1850s:

It was the notion of Lyell . . . that before the doctrine of natural selection was given to a world which would be sure to lift up at it a howl of execration, a certain body-guard of sound and experienced naturalists, expert in the description of species, should be privately made aware of its tenour. Among those who were thus initiated, or approached with a view towards possible illumination, was my Father. He was spoken to by Hooker, and later on by Darwin, after meetings of the Royal Society in the summer of 1857.[8]

Gradually his interest became concentrated in a few highly specialized areas, particularly the Rotifera, and he wrote one classic of nineteenth-century zoology, the Actinologia Britannica, which remained the standard reference work for many years. He was an indefatigable observer, and cannot really be said to have retired at all: at the age of seventy-five he was still busily occupied, publishing in 1885 a monograph on The Prehensile Armature of the Papilionidae.

Gosse's great merit as a scientist lay in a capacity, rarely encountered, for precision and minuteness in observation, which called for extraordinary resources of patience and eyesight, neither of which seems ever to have failed him in connection with his scientific work. In The Birds of Jamaica he enunciates a principle to which he always adhered and which is of supreme importance in the descriptive branches of science:

Perhaps a word of apology may be thought needful for the minuteness with which the author has sometimes recorded dates, and other apparently trivial circumstances, in his observations. It is because of his conviction, that an observer is hardly competent to determine what circumstance is trivial, and what is important: many a recorded fact in science has lost half its value from the omission of some attendant circumstance, which the observer either did not notice or thought irrelevant. It is better to err on the side of minuteness than of vagueness.[9]

When, at rare intervals, he allowed himself to wander from this close attention to the facts, the results were, from a scientific point of view, less happy. His speculations, largely on the question of the creation and extinction of species (although he also put forward the theory that some frequently reported sea serpents were really prehistoric monsters), were generally naïve, while his taste, left to its own devices, ran in the direction of the Gothic novel. The subtitles of that most romantic work, The Romance of Natural History, show the scientist in an entirely different light. Chapter 10, entitled "The Terrible" (other chapters are called "The Vast," "The Wild," "The Unknown"), deals with the following surprising collection of incidents: "Horrible Death of Thackwray—Hottentot's Adventure with a Rhinoceros—Similar Adventure of Mr. Oswell—Terrific Peril of Captain Methuen—Nearly Fatal Combat with a Kangaroo—Horrid Voracity of Sharks—Coolness of an Indian Officer—Ugliness of Vipers—Shocking Adventure in Guiana—Another in Venezuela—Fatal Encounter with Bees in India." The last of these episodes has, for this study, a special interest. It concerns two English gentlemen, Messrs. Armstrong and Boddington; the victim, inevitably, was "alas! Mr. Boddington," who, "unable any longer to resist the countless hordes of his infuriated winged foes, threw himself into the depths of the water, never to rise again." Gosse is not actually sure that the assailants were bees, and covers his admission of ignorance with this remarkable statement: "Whatever the true nature of the insect, it affords an apt illustration of such passages of Holy Scripture as the following:—'The Lord shall hiss for . . . the bee that is in the land of Assyria,' (Isa. vii. 18.) 'The Lord thy God will send the hornet among them, until they that are left, and hide themselves from thee, be destroyed.' (Deut. vii. 20.)"[10]

Overlooking for the moment the claim to aptness (from whom was Mr. Boddington hiding? and why Assyria?), here is a strange insertion into the work of a Fellow of the Royal Society. But by this time, after twenty years, anybody familiar with Gosse's writings would have taken it in stride. Wherever one looks one finds passing confessions of faith, references to the Bible, exhortations to the young, and while these might at first be taken for customary piety, the weight of the evidence, and the recondite nature of some of the allusions (such as those in the case of Mr. Boddington), soon suggest a different hypothesis. It is impossible to do justice to the life and work of Philip Gosse without paying close attention to this other side of his character.


When Philip Gosse returned to England from America in 1839, urgently in need of employment, he was offered a post in a provincial museum. He was hardly in a position to be particular about conditions of work, and the offer was really an act of charity on the part of an interested friend, but he turned it down.

I should fear [he wrote] that I should be thrown into situations in which I might find it difficult to keep that purity of intention which I value more than life; and likewise, that my opportunities of being useful to my fellowmen, especially to their souls, would be much curtailed. I view this transient state as a dressing-room to a theatre; a brief, almost momentary visit, during which preparation is to be made for the real business and end of existence. Eternity is our theatre: time our dressing-room. So that I must make every arrangement with a view to its bearing on this one point.[11]

Apparently he was entertaining, at this time, the idea of entering the ministry of one of the evangelical sects. But he could hardly be said to have been brought up in a religious atmosphere. For the origin of this pious tendency it is necessary to go back to Newfoundland, and to the time, almost exactly, of his purchase of Adams on the microscope—a time at which he "became, suddenly and consciously, a naturalist and a Christian."[12] The stimulus for his conversion, if it can be called that, was an illness of his sister Elizabeth, far away in England, to whom he was closely attached. "My prominent thought in this crisis was legal. I wanted the Almighty to be my friend; to go to Him in my need. I knew He required me to be holy. He had said, 'My son, give Me thy heart.' I closed with Him, not hypocritically, but sincerely; intending henceforth to live a new, a holy life; to please and serve God."[13] It was as if he had signed a contract with God; and it did not occur to him to doubt, since he knew himself to be strong enough in character to keep his part of the bargain, that God would in turn do what was expected of Him.

This contract of faith he interpreted as requiring the acceptance,
word for word, of the literal and symbolic truth of the Bible. The double sense is important. While the plain meaning of the text was to be zealously defended, there was more to be discovered beneath the surface. Gosse applied himself to the investigation of this hidden truth with an energy matched only by that which he devoted to his researches in natural history. At first these studies were carried on in comparative isolation, but after his return to England two circumstances mitigated this spiritual loneliness. He found, in the suburb of London where he was for a short time a schoolmaster, a group of Christians, followers of J. N. Darby, called by the outside world "Plymouth Brethren" but by themselves simply "the Brethren," or, modestly, "the Saints." Darby, as was remarked earlier, disapproved of the ministry, so that Gosse was no longer tempted in that direction; but he found among these people a kind of intellectual interest in salvation and prophecy perfectly in sympathy with his own convictions. He was, throughout his life, evangelical, but never in the passionate sense usually attached to the word. His concern for the souls of men sprang less from sympathy than from duty, and the duty was not necessarily pleasant—it was part of the agreement with God, a service demanded in exchange for the right to enter into the mysteries of the interpretation of Scripture. Independently of this connection he met, and later married, Emily Bowes, the daughter of a Bostonian couple, her principal attraction being an equally fervid, equally rigid, and equally eccentric form of Christianity with his own. 
Together they read the prophets and commentaries on the prophets, treading eagerly, in the words of Edmund Gosse, "the curious path which they had hewn for themselves through this jungle of symbols."[14] The death of his first wife after only nine years of marriage left him, if anything, more isolated than before (the Saints proving too tame and unimaginative for his fierce symbolic tastes), and drove his already rather stern and humorless character into a melancholia from which he never completely recovered.

It was inevitable that such exclusive and fanatic attention to the details of biblical exegesis should before long produce a distorting effect on Gosse's attitude to the contemporary world and, eventually, to science itself. The commentators were, if anything, more prophetic than the prophets, and led the inquisitive couple "to recognise in wild Oriental visions direct statements regarding Napoleon III and Pope Pius IX and the King of Piedmont, historic figures which they conceived as foreshadowed, in language which admitted of plain interpretation, under the names of denizens of Babylon and companions of the Wild Beast."[15] The Church of Rome in particular figured largely in the deciphering of the Book of Revelation, and it was denounced and hated with a special passion. "We welcomed any social disorder in any part
of Italy, as likely to be annoying to the Papacy. If there was a customhouse officer stabbed in a fracas at Sassari, we gave loud thanks that liberty and light were breaking in upon Sardinia."[16] The effects of all this were felt in the most unlikely quarters. There was, for instance, a man who used to pass down the street where the Gosses lived selling onions, with a cry of

Here's your rope
To hang the Pope
And a penn'orth of cheese to choke him.

The cheese [writes Edmund Gosse] appeared to be legendary; he sold only onions. My Father did not eat onions, but he encouraged this terrible fellow, with his wild eyes and long strips of hair, because of his "godly attitude towards the Papacy."[17]

Such peculiarities might have been merely amusing, had they confined themselves to international affairs. But scriptural theory found other applications closer to home, and Philip Gosse developed, out of a naturally strong moral sense and a tendency to introspection, a morbid sensitivity of conscience and a practice of hypercritical self-vigilance which he did not hesitate to extend to his family (principally Edmund) and to the congregation of which, after the death of his wife and his removal to Devonshire, he became informally the pastor. This side of his character is so well known from Father and Son that there is no need to dwell on it here. The introduction of religious conviction into daily life produced, however, another effect of more direct interest, namely a relation between the scientist and his field of study perhaps unique in the history of science among workers of comparable distinction.

Nature was the work of God, and as such was to be taken seriously. It must, as the work of God, be perfect. Accordingly, for Gosse, the suggestion that anything in Nature might have been better arranged, or the slightest hint of levity in connection with it, was almost comparable to blasphemy, and he was ready to meet either with indignation on God's behalf. In The Ocean, for example, he scornfully rejects a tentative version of the theory of development: "Goldsmith flippantly asserts, that the Shrimp and the Prawn 'seem to be the first attempts which Nature made when she meditated the formation of the Lobster.' Such expressions as these, however, are no less unphilosophical than they are derogatory to God's honour; these animals being in an equal degree perfect in their kind, equally formed by consummate wisdom, incapable of improvement."[18] But there was a danger in thus zealously
guarding God's rights in Nature—the danger that he might, as time went on, come to take a certain proprietary attitude towards it himself; and to this temptation he soon succumbed. He felt fully justified in doing so, and would have been surprised and indignant, as religious people tend to be, if anybody had pointed out to him that to presume on God's favor was a form of spiritual pride. But there is no doubt that Philip Gosse was both proud and presumptuous, and in the Devonshire Coast there is a remarkable juxtaposition of passages which form such a clear basis for this indictment that I shall, at the risk of tedium, quote them extensively. He is discussing the aesthetic qualities of natural objects:

But there is another point of view from which a Christian . . . looks at the excellent and the beautiful in Nature. He has a personal interest in it all; it is a part of his own inheritance. As a child roams over his father's estate, and is ever finding some quiet nook, or clear pool, or foaming waterfall, some lofty avenue, some bank of sweet flowers, some picturesque or fruitful tree, some noble and widespread prospect,—how is the pleasure heightened by the thought ever recurring,—All this will be mine by and by! . . . So with the Christian. . . .

And thus I have a right to examine, with as great minuteness as I can bring to the pleasant task, consistently with other claims, what are called the works of nature. I have the very best right possible, the right that flows from the fact of their being all mine,—mine not indeed in possession, but in sure reversion. And if anyone despise the research as mean and little, I reply that I am scanning the plan of my inheritance. And when I find any tiny object rooted to the rock, or swimming in the sea, in which I trace with more than common measure the grace and delicacy of the Master Hand, I may not only give Him praise for his skill and wisdom, but thanks also, for that He hath taken the pains to contrive, to fashion, to adorn this, for me.

And then there follows immediately this statement:


I have the pleasure of announcing a new animal of much elegance, which I believe to be of a hitherto unrecognised form. I shall describe it under the appellation of Johnstonella Catharina . . . .

The elegant form, the crystal clearness, and the sprightly, graceful movements of this little swimmer in the deep sea, render it a not altogether unfit vehicle for the commemoration of an honoured name in marine zoology. . . . I venture respectfully to appropriate to this marine animal, the surname and Christian name of Mrs. Catharine Johnston, as a personal tribute of gratitude for the great aid which I have derived from her engravings in the study of zoophytology.[19]


Of course it is, in a sense, unfair to put the matter in this way, and to suggest a patronizing flourish in this innocent piece of nomenclature; but there is some justice in it. Ever since that day when, in Newfoundland, he had come to terms with God, Philip Gosse had, consciously or not, felt himself in a position of privilege. Nothing illustrates this attitude more clearly than the nature of his prayers.

Edmund Gosse has vividly described how his father, with clenched fists and cracking fingers, knelt nightly and wrestled with God, his supplications occasionally turning into outright demands. From other sources we can gather what the objects of those demands were. There were three things during his life that Philip Gosse wanted very badly indeed, and to which he expressly devoted a great deal of his spiritual energy in prayer; and in the end, to all appearances, God failed to live up to his commitments, for none of the three requests was granted. The first, and most persistent, was inspired by his reading, as a young man, Habershon's Dissertation on the Prophetic Scriptures, in which the Second Coming of Christ was vividly anticipated; in his own words: "I immediately began a practice, which I have pursued uninterruptedly for forty-six years, of constantly praying that I may be one of the favoured saints who shall never taste of death, but be alive and remain until the coming of the Lord, to be 'clothed upon with my house which is from heaven.'"[20] This is not an infrequent prayer among evangelical Christians, who in general, however, seem content to die without a feeling of having been cheated. Not so Philip Gosse. Even in life his confidence was such that he lived in momentary expectation of this apotheosis, and would be chagrined when it did not occur: "He would calculate, by reference to prophecies in the Old and New Testament, the exact date of this event; the date would pass, without the expected Advent, and he would be more than disappointed,—he would be incensed. Then he would understand that he must have made some slight error in calculation, and the pleasures of anticipation would recommence."[21] But at death it was not a question of miscalculation. His second wife, Eliza Gosse (née Brightwen), wrote in a short memoir that "this hope of being caught up before death continued to the last, and its non-fulfilment was an acute disappointment to him.
It undoubtedly was connected with the deep dejection of his latest hours on earth."[22]

The second prayer concerned his son, Edmund, and was of especial importance to him as incorporating the last wish of his first wife. Philip and Emily Gosse had, from the beginning, dedicated their child, like Samuel, to the service of the Lord; and Emily, dying of cancer in 1857, reiterated that dedication in the most solemn and saintly manner possible, so that God himself, it seemed, must be bound to accept it and ensure its consummation. For many years all was well, and when Edmund was publicly baptized and admitted to the communion of the Brethren at the age of twelve Philip Gosse felt the sacred responsibility to be almost discharged. But in truth Edmund had hardly known what he was doing, or that any other life than that among the Brethren was conceivable, and when he went to London as a young man to work in the British Museum he discovered that his tastes and talents lay in other directions. Gradually severing his links with the Evangelical Movement, he entered upon a career as a man of letters. Philip Gosse wrote angrily to his son and prayed angrily to his Maker, but in vain.

There remains one episode out of the three in Philip Gosse's life of prayer. It was of shorter duration, but its implications were of vastly greater scope, and its historical interest is such that it will be dealt with in a section by itself.


Protestant Christianity, as Martineau somewhere remarks, is built upon the authority of the Bible, as Catholicism is built upon that of the Church. The vulnerability of the first position, as compared with the flexibility of the second, is obvious, for the Church can discreetly change its mind, while the Bible, as a historical document, is by definition incapable of adapting to novelty. Catholicism survived the nineteenth century much better, in its own sphere of influence, than Protestantism did, for this very reason; for in that century more than in any other the intellectual sympathies of the world were alienated from the Bible by the exposure of many apparently straightforward statements of fact in it as ignorant legends. The blow was not, of course, mortal. Ignorant people continued to believe the legends, and the intellectuals began to treat them as mythical adumbrations of profound truths. But those few really serious thinkers to whom the Bible had been genuinely and directly authoritative experienced a most disturbing conflict of loyalties. Philip Gosse is a perfect example of the type.

The greatest problem before 1858, when Darwin and Wallace brought out into the open the question of the origin of species, was geological. According to Archbishop Ussher's reading of Genesis there could not, in 1857 (the year in which Gosse published his own work on the subject), be anything in the world more than 5,861 years old; according to rapidly accumulating stratigraphical and paleontological evidence, there was scarcely anything of interest in the world whose history was not much longer than that by hundreds of thousands, even millions, of years. The stratigraphy might be accommodated, at a stretch, by introducing that famous gap of aeons between the first and second
verses of Genesis 1, but this did not help the paleontology, especially that of species closely related to living ones, even identical with them. The "days" of creation might be extended to cover geological ages, but there were difficulties there about the order of appearance of fossils in the stratigraphical record, and besides, to the purists, this seemed already to be taking hardly permissible liberties with the manifest declarations of the Holy Spirit. These were grave perplexities for those "to whom," in Gosse's own words,

the veracity of God is as dear as life. They cannot bear to see it impugned; they know that it cannot be overthrown; they are assured that He who gave the Word, and He who made the worlds, is One Jehovah, who cannot be inconsistent with Himself. But they cannot shut their eyes to the startling fact, that the records which seem legibly written on His created works do flatly contradict the statements which seem to be plainly expressed in His word.

Here is a dilemma. A most painful one to the reverent mind! And many reverent minds have laboured long and hard to escape from it.[23]

Most of them gave up the struggle, either closing their eyes to the evidence, or abandoning the literal interpretation of the Bible, or in many cases just learning to live with the dilemma as something too great for the limited intelligence of man. This last was at least a humble, if not a comfortable, position. But none of this would do for Philip Gosse; he would be content with nothing less than a complete solution of the riddle. The incredible thing is that he succeeded in finding one so perfect that it was, and remains, proof against all refutation. And although he called the book in which he presented it to the world "an attempt to untie the geological knot," his method has all the audacity of Alexander at Gordium.

It was this book, Omphalos,[24] whose acceptance by the world of science formed the object of Gosse's third petition to God. His own attitude towards it is made explicit in the preface:

I would not be considered an opponent of geologists; but rather as a co-searcher with them after that which they value as highly as I do, TRUTH. The path which I have pursued has led me to a conclusion at variance with theirs. I have a right to expect that it be weighed; let it not be imputed to vanity if I hope that it may be accepted.

But what I much more ardently desire is, that the thousands of thinking persons, who are scarcely satisfied with the extant reconciliations of Scriptural statements and Geological deductions,—who are silenced but not convinced,—may find, in the principle set forth in this volume, a stable resting-place. I have written it in the constant prayer that the God of Truth will deign so to use it; and if He do, to Him be all the glory![25]


That God would deign to use it, given the irresistible force of the argument, seemed beyond all doubt.

Never was a book cast upon the waters [writes Edmund Gosse] with greater anticipation of success than was this curious, this obstinate, this fanatical volume. My Father lived in a fever of suspense, waiting for the tremendous issue. . . . My Father, and my Father alone, possessed the secret of the enigma; he alone held the key which could smoothly open the lock of geological mystery. He offered it, with a glowing gesture, to atheists and Christians alike. This was to be the universal panacea; this the system of intellectual therapeutics which could not but heal all the maladies of the age. But, alas! atheists and Christians alike looked at it and laughed, and threw it away.[26]

In this the Christians, at least, were ill-advised; but at all events the reception of the book meant that here too Gosse's prayers had failed to find a response. Had he known at the time, as he did not, of the two other great disappointments that were in store for him, it might well have broken his spirit; as it was, coming soon after the death of his wife, the failure of Omphalos had a sufficiently disturbing effect. But it is time to examine the theory itself. Gillispie says that it was "far from original," and Gosse himself admits that he got the germ of the idea, partly from an anonymous tract, and partly from Granville Penn's The Mineral and Mosaic Geologies of 1822. Nevertheless its working out in Omphalos and the detail with which its application is followed through bear Gosse's individual mark.

The book is an account of an imaginary court inquiry, with witnesses. One curious thing about it is that, except at the very end, there is no appeal to the Bible; and as for Archbishop Ussher, he is not once mentioned. The whole tone of the book, in fact, is modern, and with one or two critical exceptions there is nothing in it which could not have been accepted by the most hardened atheistic geologist of the time. The case for the geological ages is presented fully, even sympathetically, as the testimony of "The Witness for the Macro-Chronology"; strata, fossils of plants and animals, erosion—all the available evidence is brought out. There are two examples chosen for special attention: the pterodactyl (illustrated by an unintentionally humorous woodcut of a bat with bulging eyes and gaping fangs) and the Jurassic tree Lepidodendron. But when all the data have been marshalled, Gosse puts his finger skilfully on the Achilles heel of the whole argument: "There is nothing here but circumstantial evidence; there is no direct testimony. . . . You will say, 'It is the same thing; we have seen the skeleton of the one, and the crushed trunk of the other, and therefore we are as sure of their past existence as if we had been there at the
time.' No, it is not the same thing; it is not quite the same thing; NOT QUITE. . . . It is only by a process of reasoning that you infer they lived at all."[27] Of course he is quite right; the inference of causes from effects commits a logical fallacy. Sciences which deal with the past, or with the unobservable of any kind, constantly commit it—they have no alternative. This fact is tacitly admitted, and then quite properly forgotten, as far as the daily work of the scientist is concerned. But when somebody like Gosse gleefully draws attention to it there is absolutely nothing that can be brought forward in its defense—the only recourse is a challenge to the critic to produce an alternative, and equally plausible, explanation of the effects as they appear. Such a challenge Gosse was quite prepared to meet.

His own theory invokes two postulates, the creation of matter and the persistence of species. "I assume that at some period or other in past eternity there existed nothing but the Eternal God, and that He called the universe into being out of nothing. I demand also, in opposition to the development hypothesis, the perpetuity of specific characters, from the moment when the respective creatures were called into being, till they cease to be."[28] As a matter of fact the second postulate is superfluous—Gosse's theory, while it certainly removes the necessity for a theory of development (or of variation and natural selection), is not incompatible with such a theory. And as for the first, although he refuses to discuss it, nobody was in a position to maintain that there was any better account available of the origin of the universe, assuming that it had an origin. At least the Christians could accept the point without difficulty. Now creation is generally taken to be a beginning of history, and thereby also of natural history—the first verse of Genesis makes the idea explicit. It certainly is a beginning in some sense, but Gosse's reflections led him to see that it could not be so in the way in which, for example, birth is. Birth is the beginning of a phase, but it depends on an earlier phase, namely prenatal development, whereas creation must be an absolute beginning de novo , depending upon no antecedents whatever except the will of the Creator. Suppose a creator setting about the creation of some natural object, a fern, a butterfly, a cow; at what stage of its existence should he choose to call it into being? We might unthinkingly choose the mature form; but is there any reason why this should be preferred to an immature or embryonic form? Is any stage fundamentally more suitable than any other as a starting-point of natural history? 
Gosse concluded not—indeed that there is no such thing as a natural beginning of this necessarily ultimate sort, the course of nature being, in fact, circular. "It is evident that there is no one point in the history of any single creature, which is a legitimate beginning of existence. . . . The cow is as inevitable a sequence of the embryo, as the embryo is of the cow."[29] Such a beginning must, therefore, be supernatural. "Creation, the sovereign fiat of Almighty Power, gives us the commencing point, which we in vain seek in nature. But what is creation? It is the sudden bursting into a circle."[30] And just as the life-cycle of the individual is closed upon itself, so the cycle of species, of life itself, of the planet and the solar and stellar systems, may in principle be ever repeating, from eternity to eternity, only to be commenced or terminated by an irruption from without.

Gosse's stroke of genius thus lay in separating the question of creation from the question of history altogether. The older view has its classical expression in Donne: "That then this Beginning was, is matter of faith, and so, infallible. When it was, is matter of reason, and therefore various and perplex't."[31] Gosse brought it all into the province of faith by suggesting the possibility that natural objects might be created with a history, or at least with the appearance of one. And this suggestion, once made, ceased to be a suggestion and became an indispensable necessity: a natural object could not be a natural object without an apparent history. A tree would not be a tree without rings, which indicate its age, and even a newly created tree must have rings. A man would not be a man without a navel, Sir Thomas Browne to the contrary notwithstanding.

The whole organisation of the creature thus newly called into existence, looks back to the course of an endless circle in the past. Its whole structure displays a series of developments, which as distinctly witness to former conditions as do those which are presented in the cow, the butterfly, and the fern, of the present day. But what former conditions? The conditions thus witnessed unto, as being necessarily implied in the present organisation, were non-existent; the history was a perfect blank till the moment of creation. The past conditions or stages of existence in question, can indeed be as triumphantly inferred by legitimate deduction from the present, as can those of our cow or butterfly; they rest on the very same evidences; they are identically the same in every respect, except in this one, that they were unreal. They exist only in their results; they are effects which never had causes.

Perhaps it may help to clear my argument if I divide the past developments of organic life, which are necessarily, or at least legitimately, inferrible from present phenomena, into two categories, separated by the violent act of creation. Those unreal developments whose apparent results are seen in the organism at the moment of its creation, I will call prochronic, because time was not an element in them; while those which have subsisted since creation, and have had actual existence, I will distinguish as diachronic, as occurring during time.

Now, again I repeat, there is no imaginable difference to sense between the prochronic and diachronic development.[32]


Natural history thus appears as an unbroken progression, from some unimaginable beginning in the mind of God to the state of the world at present; somewhere in between an extrinsic act of creation occurred, and as prochronic events ceased, diachronic ones—identical in every essential point—began. When did this take place? Is there any way of deducing it from the evidence? Obviously not: "The commencement, as a fact, I must learn from testimony; I have no means whatever of inferring it from phenomena."[33] Fortunately the testimony is available. God need not have told us when the Creation occurred, but as a matter of fact he has done so, in Genesis, and it would be ungrateful—not to say foolish or even impious—in men of science to overlook the fact. So far they have "not allowed for the Law of Prochronism in Creation,"[34] but without it all calculation is useless; "the amount of error thus produced we have no means of knowing; much less of eliminating it."[35] Accordingly every scrap of evidence for the Macro-Chronology contains a fatal flaw; and, as Gosse triumphantly concludes: "The field is left clear and undisputed for the one Witness on the opposite side, whose testimony is as follows:—


But what, after all, did this victory amount to? To begin with, it showed that there had never really been a struggle: "I do not know that a single conclusion, now accepted, would need to be given up, except that of actual chronology. And even in respect of this, it would be rather a modification than a relinquishment of what is at present held; we might still speak of the inconceivably long duration of the processes in question, provided we understand ideal instead of actual time;—that the duration was projected in the mind of God, and not really existent."[37] Reduced to this, the conclusion is merely metaphysical, that is to say empirically empty; to assert that the world was created is rather like asserting that overnight everything in it has doubled in size, including rulers and retinae—nobody can tell the difference. One might as well retort that really everything has halved in size, or that everything has been uncreated, the former existence being real and the present ideal, for all that any experiment can possibly indicate to the contrary. Put in another way, Gosse's claim comes to the same thing as maintaining that, before creation, Berkeley's philosophical position was the correct one, while after it Locke's was. Unfortunately most people persisted in seeing more in it than that, continuing to believe that there was a genuine difference of opinion between the geologists and the Holy Ghost, that it was impossible to agree with both but that it mattered which one agreed with. Gosse was surely right—it did not matter, at least not in the way that most people supposed, since (apart from the
extrascientific point of faith) one could agree with both; but few could follow his intellectual maneuvers, perfectly rational though they were.

And then any victory, even the most conclusive, becomes hollow when nobody takes the slightest notice of it, or when the few who do misinterpret it completely. Having instructed the printers to prepare an unusually large edition of his book against what he was certain would be a universal demand, Gosse found himself in possession of most of it, while the few copies that went out produced a critical reaction of a totally unexpected sort. The theory of Omphalos, after suitable distortion—not only by the malicious—became monstrous, asserting nothing less than that God had placed fossils in the rocks for the express purpose of deceiving scientists into thinking that the earth was older than it really was. Perhaps the cruelest blows were struck by that perpetually well-meaning, infallibly clumsy Victorian, Charles Kingsley.

We have reason to be grateful for Kingsley's blunt insensitivity, which produced, like the irritating specks of sand in oysters, responses of great beauty in diverse quarters—the two most famous cases are, of course, Newman's Apologia pro Vita sua and Huxley's celebrated letter on the death of his son. There is no record of a similar reaction on Gosse's part, but the stimulus was certainly no less painful. The theory itself, it is true, was perfectly acceptable to Kingsley: "Your distinction between diachronism and prochronism [he wrote to Gosse], instead of being nonsense, as it is in the eyes of the Locke-beridden Nominalist public, is to me, as a Platonist and realist, an indubitable and venerable truth."[38] But Gosse's use of the theory to justify the geologists in the form, if not the substance, of their conclusions, while at the same time preserving the literal truth of Scripture, was too much for him. "Your book tends to prove this—that if we accept the fact of absolute creation, God becomes a Deus quidam deceptor . . . . You make God tell a lie. It is not my reason, but my conscience which revolts here."[39] Such obtuseness was bad enough—for Gosse's whole point had been to show that God had not lied at all, that indeed he had been scrupulously honest (as Gosse himself would have been in similar circumstances), correcting in one mode of communication, namely Biblical revelation, a possible misconception which might arise in the interpretation of a message in another mode, namely geological evidence—but there was worse to come. Kingsley, self-confident as ever, went on:

I cannot give up the painful and slow conclusion of five and twenty years' study of geology, and believe that God has written on the rocks one enormous and superfluous lie for all mankind.

To this painful dilemma you have brought me, and will, I fear, bring hundreds. It will not make me throw away my Bible. I trust and hope. I know in whom I have believed, and can trust Him to bring my faith safe through this puzzle, as He has through others; but for the young I do fear. I would not for a thousand pounds put your book into my children's hands. . . . Your demand on implicit faith is just as great as that required for transubstantiation, and, believe me, many of your arguments, especially in the opening chapter, are strangely like those of the old Jesuits, and those one used to hear from John Henry Newman fifteen years ago, when he, copying the Jesuits, was trying to undermine the grounds of all rational belief and human science, in order that, having made his victims (among whom were some of my dearest friends) believe nothing, he might get them by a "Nemesis of faith" to believe anything, and rush blindfold into superstition. Poor wretch, he was caught in his own snare.[40]

Bitter words for a supporter of the onion man! And especially bitter the remark about children, for whose mental and moral improvement Gosse, in his popular writings, had been so solicitous. But then Kingsley and Gosse were fundamentally at cross purposes in this matter. Kingsley's aversion for Rome was intellectual, Gosse's emotional; Gosse's interest in religion and science was intellectual, Kingsley's sentimental. The comparison of Gosse and Newman, ghastly and inconceivable as it would have seemed to them both, was not in fact entirely unjust, for Newman, in the Apologia, says: "From the age of fifteen, dogma has been the fundamental principle of my religion: I know no other religion; I cannot enter into the idea of any other sort of religion; religion, as a mere sentiment, is to me a dream and a mockery"[41]—in which substituting "the infallibility of the Scriptures" for "dogma" renders Gosse's belief exactly. Both Newman and Gosse had seen that the defense of truth on the highest level leads sometimes to an appearance of deception on a lower, and both had been reprimanded for it by Kingsley, to whom truth was a simple, straightforward, rather typically English sort of thing.

Newman, however, was the better off; for the Church provides an environment friendly to such subtleties, let infidels protest as they may; but what is a lonely Protestant to do, when God refuses to look after his own interests, and allows his shortsighted and enthusiastic servants to spoil the work of those who are more perceptive and austere? Nothing could shake Gosse's faith in the Bible, but its author, engaged as he was in guiding the Kingsleys of the world safely through their puzzles, might perhaps be guilty of negligence. In his reaction to the failure of Omphalos, Gosse almost suspected as much. "I think there was added to his chagrin with all his fellow mortals a first tincture of that heresy which was to attack him later on. It was now that, I fancy, he began, in his depression, to be angry with God."[42] But this was not the petulant anger of a disappointed scholar. It is exactly here that Gosse's enormous intellectual strength shows to its best advantage—the strength, in fact, not only of his intellect but also of his will. He knew he was right, even if God did not. And he was not broken; four years later he is at it again, in a second series of The Romance of Natural History, incorporating more and more of the contemporary advances of science into his own scheme, never yielding an inch in his fidelity to the inspired word. Kingsley had also accused him of the apostasy of evolution: "I don't see how yours [i.e., Gosse's prochronism] differs from the transmutation of species theory, which your argument, if filled out fairly, would, I think, be."[43] Indeed there was a superficial similarity, but Gosse was careful to make the distinction for those who cared to look for it. Species may, without violating the sanctity of Scripture, succeed one another; they may not evolve from one another.

We know that the rate of mortality among individuals of a species, speaking generally, is equalled by the rate of birth, and we may suppose this balance of life to be paralleled when the unit is a species, and not an individual. If the Word of God contained anything either in statement or principle contrary to such a supposition, I would not entertain it for a moment, but I do not know that it does. I do not know that it is anywhere implied that God created no more after the six days' work was done. His Sabbath-rest having been broken by the incoming of sin, we know from John v. 17, that He continued to work without interruption; and we may fairly conclude that progressive creation was included as a part of that unceasing work.[44]

Gosse's devotion and ingenuity in the service of science and religion were unlimited; and in the end even the total indifference of both parties was not enough to stop his heroic rearguard action in defense of their divinely appointed unity.


Edmund Gosse's charge against his father is that of inhumanity. "He regarded man rather as a blot upon the face of nature, than as its highest and most dignified development. . . . Among the five thousand illustrations which he painted, I do not think there is one to be found in which an attempt is made to depict the human form. Man was the animal he studied less than any other, understood most imperfectly, and, on the whole, was least interested in."[45] There is, in fact, at least one illustration containing human figures, but it only serves to reinforce the charge: the preface to The Ocean is accompanied by a woodcut of "The Whale Fishery," showing two men being tossed out of a boat into the jaws of a gigantic cetacean. As to the other assertions, Edmund may have been right—certainly his own experience led to no other conclusion. And yet it is perhaps too easy a judgment. One of the tragedies of an overintellectual faith is that it may conceal, effectively and permanently, more natural feelings. Abraham, with his sons in his bosom, is a model of paternal affection, but it is a grim reflection that, had there been no ram in the thicket, nothing would have prevented him from murdering Isaac. Kierkegaard makes of Abraham a hero of faith, and the heroes of faith are generally those for whom, in the end, everything works out right, either in martyrdom or in earthly felicity. For Gosse, in a sense, nothing worked out right, yet his life, although it ended in dejection, did not end in defeat. As in Mr. Van Voorst's office, years before, his self-possession could be overcome only in extremis. He was, to use another favorite term of Kierkegaard's—a term of the highest approbation—an individual; and if his behavior as an individual was eccentric (as it undoubtedly was) that very fact made it, in spite of his frequently expressed wish to give all the credit to God, a tribute to the human strength of his own character.


Creationism and Evolution

The asymmetry of my title is intentional. It might have been Creation and Evolution, or Creationism and Evolutionism, or even Creation and Evolutionism, but none of these expresses the contrast I want to emphasize now. To make this point clear, some preliminary work is needed.

Let me first distinguish between events and processes. An event is something that comes to be (the term means this), more or less briefly is, and then has been. What counts as "briefly" depends on the sort of event it is and its distance from us. A lecture is an event lasting an hour or so; an election is an event that lasts about a day but extends itself into the campaign and the counting and the victory speeches and concessions. A war might count as an event in historical perspective, but to the people in it it would seem to be made up of many events. And so on. Now creation, according to most religious accounts, is an event, although in Genesis it lasts six days and incorporates subevents: the coming into being of the sun and moon, for example. It is taken to have happened at a definite point in time. For the purposes of this paper I shall assume this to be an accepted view.

A process, on the other hand, is something that goes on over a more or less prolonged period of time; indeed, it can go on indefinitely. Finite processes, having ends in view and resulting in products or consequences, may look from a distance like events, so the boundary is not sharp. For its week, creation might be thought of as having been a process, and in ordinary speech we speak of the process of creation, of a work of art for instance, although particular acts of creation on God's part are not usually supposed to have the internal sequence of stages that this implies ("He spake and it was done"). But processes that go on indefinitely cannot be confused with events, although they may from time to time produce events as consequences. Tidal drag is a continuous process, due to the moon's orbiting the earth, but a particular high tide in a particular harbor is an event. Evolution, as it is understood by scientists, is this kind of process: as long as the conditions for it hold, it goes on. An event in the evolutionary process might be the extinction of a species, say, the death of the last dinosaur. I shall take this also to be agreed upon.

I turn now to the suffix "-ism." This is an old device for producing nouns out of verbs, adjectives, proper names, etc. The Greeks introduced it in the form "-ismos" for just that purpose. The paradigm case is that of baptism: "baptein" meant to dip or dye, so "baptizein" meant to treat by dipping and "baptismos" came to mean the doing of this in a ritual way, a baptism. We have the same sequence in English, through "-ize" to "-ism." What is referred to by the term thus formed may be something relatively abstract, but we may come to use it as if it had a familiar, even concrete, meaning. And in the special case in which an ideological force is attached to it the noun may come to stand for something apparently portentous, altogether out of proportion to the related term from which it sprang. So although "isms" may be innocent enough, it is always a good idea to look at them closely, to see whether they are being used as a front for conceptual inflation. As must be obvious, I am not too fond of these isms; one of the problems with them is that they become catchwords and as such can conceal a lot of ignorance. Some familiar examples are Marxism and communism. There was a man called Marx, a brilliant although somewhat unreliable philosopher who wrote many serious works in addition to his (and Engels's) occasional and deliberately inflammatory Manifesto, and it is certainly a good thing to read what he wrote, difficult as most of it is, in order to understand and criticize it. But Marxism as an inflated conceptual or ideological package may be bought, and often has been, without any very careful study of Marx. Similarly there are common goods, communities, etc., and to reflect on the values of community and the ways in which it might be implemented is certainly worthwhile. But communism became a battle cry that suppressed reflection in favor of a jealous and inflexible doctrine.

Finally, by way of preliminaries, a remark on the notion of belief, especially on the distinction between believing in and believing that. The latter is the more recent, since in its origins believing was less a matter of accepting propositions as true than of relating to persons as trustworthy. To believe in something is to find it satisfying, comforting, reassuring; "believed" comes from the same etymological source as "beloved." But as we all know, being satisfied, comforted, or reassured need have very little to do with the truth of the matter, unless indeed truth comes to be defined in terms of faith rather than reason. The later use of "believe," to believe that such-and-such is the case, is a commitment to truth independent of our subjective approval of it, and as W. K. Clifford used to remark, it carries a far higher burden of moral responsibility: to say "I believe that . . . " was, he said, the most serious thing a human being could do.

Now of course there could be such a thing as "evolutionism," and people might go around saying that they believed in it. But such people would not deserve our respect. For apart from the personal cases already alluded to (believing as trusting, especially in people we love or at least have admiration for) "believing in" seems to me an inappropriate stance for an educated person. We want to know what is the case, and to believe what comes closest to that. It is I think wrong to believe what does not have the warrant of the best test of knowledge available at the time, unless there is some pressing reason to do so—some need, for example, for a commitment to action. We do not in my view have a pressing reason to come to a conclusion about the origin of species or life or even of the universe; we are naturally curious about such things but can get along quite well from day to day without the last word on the subject. We can afford to be patient. So I would not recommend that anyone be an evolutionist or believe in evolution.

But of course the same goes for creation and creationism, and here I engage the main issue in this paper. We have an event—creation—that may have happened, and a process—evolution—that may be going on. It would be possible to believe in the event, which would make one a creationist, or to believe in the process, which would make one an evolutionist, but these moves are not recommended. But we might have reasons to believe that evolution is going on, or that creation occurred, and this would not involve us in isms at all. Is there a way of justifying one or the other (or both) of these beliefs, in such a way that it could be shown to satisfy the best presently available criteria for scientific knowledge?

Evolution is an innocent enough word: strictly speaking it means "turning out" or "unrolling" (as revolution means turning or rolling around). Books used to be unrolled when they were made in scrolls, and Latin evolutio meant that. Evolution as a process is things turning out one way or another, the unrolling or unfolding of events implicit in earlier events, as opposed to the breaking in of something totally new. The doctrine that has become known as "the theory of evolution" is a special case of this general concept, and is perhaps better called the theory of natural selection. The basic mechanism of this special version of the general unfolding of events has four main components, which I will call proliferation, variation, competition, and elimination. (What is not eliminated is assumed to survive; according to some popular views this is supposed to show that it was "fit" to do so, but that is an essentially empty attribution; it is enough for our purposes that it just does survive.)

Now evolution in this sense obviously goes on all the time in almost all domains of human activity. As a trivial example take the business of getting dressed in the morning. You have bought a number of different items of clothing, let us suppose; that satisfies the requirement of proliferation. They are not all alike; that satisfies the requirement of variation. You try one and another (assuming some concern with appearance) and ask yourself whether, given the kind of day it is, your mood, the people you expect to see, and so on, one outfit looks better than another; that satisfies the requirement of competition. And finally you return to the closet everything you tried, except what you have chosen to wear; that satisfies the requirement of elimination. So you have evolved from undressed to dressed, and the cool person who rolls out into the world is the outcome of that evolution.
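The four requirements lend themselves to a minimal computational sketch. The loop below is a toy illustration under invented assumptions (each individual is just a number standing for some heritable trait; the offspring count, noise level, and carrying capacity are arbitrary), not a model of any real population:

```python
import random

def evolve(population, generations, carrying_capacity):
    """Run the four-step cycle: proliferation, variation,
    competition, elimination.  Whatever is not eliminated
    simply survives into the next round."""
    for _ in range(generations):
        # proliferation: far more offspring than can survive
        offspring = [parent for parent in population for _ in range(3)]
        # variation: each offspring differs slightly from its parent
        offspring = [trait + random.gauss(0, 0.1) for trait in offspring]
        # competition: individuals are ranked against one another
        # for a limited number of places
        offspring.sort(reverse=True)
        # elimination: only the first `carrying_capacity` remain
        population = offspring[:carrying_capacity]
    return population
```

Run for enough generations and the surviving traits drift toward whatever the ranking favors, with no planning anywhere in the loop.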

Examples could of course be multiplied indefinitely. But the question of interest here is whether a process of this sort is now going on, or has in the past gone on, in the case of living things. Charles Darwin (and at just about the same time Alfred Russel Wallace) concluded that it probably had gone on. The requirement of proliferation is satisfied easily enough, because plants and animals are prolific by nature, and far more are born than can survive. (Those that are born are already survivors of an earlier elimination, and the same is obviously true of humans: only one ovum in hundreds, and one spermatozoon in billions, actually grows up into an embryo.) The requirement of variation is satisfied by the differences between individuals, some being stronger than others, some taller, some more intelligent, and so on. And the requirement of elimination is met easily enough by the fact that plants and animals die without reproducing if rather stringent conditions of survival are not met. What stumped both Darwin and Wallace temporarily was the requirement of competition; both of them found the clue to that in Thomas Malthus's Essay on the Principle of Population , which pointed out that a normal population, left alone, would increase geometrically unless limited by starvation, and that there would inevitably be a struggle for scarce resources, in which only the winners would survive.

One of the things that intrigued Darwin was the breeding of domestic animals, in which it is obviously possible to speed up "evolution" by selective interference, so that in a few hundred years the line of dogs has been made to produce Chihuahuas on the one hand and Great Danes on the other. But of course they are all dogs, and the big question is whether new species come into being. Old species die out, we know that. In fact, we are losing them at the rate of thousands a year because of the destruction of the environment that comes with industry and the expansion of human populations. But could variation go so far as to produce a species that did not exist before? Darwin thought that this was the only hypothesis that could account for distinct species of finches on neighboring islands in the Galapagos archipelago, and he extended it to account for the origin of species in general. This is the weak point in any attempt to make a conclusive case for evolution, because evolutionary time is very long and new species cannot be expected to come into being every day or even every century. And until Darwin called attention to the possibility, a hundred years or so ago, nobody was looking. There is one well-attested case in the literature, the flowering plant Primula kewensis, and by now there may be many others for all I know. Scientists in any case do not use the same criteria for defining species as they did in Darwin's time, and certainly enough is now known of genetics to make the principle unproblematic.

Evolution is therefore easy enough to understand abstractly, and there is no reason to think that the processes of proliferation, variation, competition, and elimination are not now going on and have not always gone on. The hostility of the environment and the superfertility of most species (including humans) make that a very natural conclusion. But it is hard to credit concretely, which is why creationism remains so attractive. How can something as complex as the human eye, for example, have come into being by a series of variations and eliminations without some intelligent planning? This is a serious challenge and needs to be considered. I do not think it is possible to deal with it conclusively, but I would like to offer a counterargument that seems to me at least as difficult to refute.

The so-called argument from design assumes intelligent planning, and a great many people attribute to the Creator the really superior, the practically infinite intelligence that would be needed to produce the marvels that we find on all sides in the natural world. But consider where we get the idea of intelligence: the only cases of it we know, in full-fledged form, occur among human beings with functioning brains, and there is plenty of evidence that the intelligence really is linked to the brain, to its combinatorial powers, its resources of memory, its capacity for linguistic processing, and the like. Now the brain is the single most complex entity in the known world, and its functioning in humans requires that it be embodied, compact, integral, and capable of learning. It must have come into being along with the rest of the world, and the argument would require that it too be the result of intelligent planning. But the fact just admitted, that intelligence depends on the existence of such a complex entity, hardly encourages us to conclude that the emergence of the complex entity in question depended on intelligence. On what complex entity did that intelligence in turn depend? Was it also embodied, compact, and integral? And how did it learn?

Now of course we are free to answer these questions as we like, unless we care about evidence. We could say, for example, that there was a Supreme Being, supremely intelligent, the workings of whose mind are quite beyond us, who brought us and everything else into the world by an act of creation, intelligence (such as it is) and all. We could deny that the intelligence of such a Being needed embodiment in its turn. And of course I would have no way of disproving any of this, nor, if it gave comfort to anyone, would I wish to do so (unless that person persecuted others for not believing it). But if we do care about evidence or about consistency we should at least ask: Does this have anything to do with intelligence as we understand it? And what is the evidence for either hypothesis, that of a creator or that of creation?

We have seen that the evidence for the hypothesis of evolution is that processes like it are common, that the conditions for it exist, that some instances of minor speciation are known; we might add that cosmology suggests an age for the universe that would have given ample time for the kind of development that evolution represents, considering that microevents in organisms happen in fractions of a second, while life has been around for billions of years. It may be worth dwelling on this for a moment because of another difficulty that sometimes occurs to people. If we were to compute the probability that any particular outcome should occur as a result of evolution, even over vast aeons of time, we might find it vanishingly small, and this might be taken as an argument against its having happened in this way. But it would not be a good or a conclusive argument, because events with very low probability do in fact happen. Consider what is sometimes called the lottery paradox: if five million tickets are sold, then before the drawing the chance that any particular individual will win is one in five million, which certainly seems vanishingly small; but after the drawing a particular individual has nevertheless won. We are already winners in a tremendous lottery, among the spermatozoa I referred to earlier, with odds on the order of a billion to one, and yet here we all are. And so is the animal kingdom, and in it the human race.
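The lottery arithmetic can be written out in a few lines; the five-million figure is the one from the text, and the drawing itself is simulated only for illustration:

```python
import random

tickets = 5_000_000

# Before the drawing: the chance that any one named ticket wins
# is vanishingly small.
p_named_ticket = 1 / tickets   # 0.0000002

# After the drawing: some ticket has certainly won, and the actual
# outcome was itself an event whose prior probability was 1/5,000,000.
winner = random.randrange(tickets)
```

An event of probability one in five million happens at every drawing; a low prior probability is therefore no argument against an outcome's having occurred.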

Remember that creation is an event but evolution a process. The question of evidence arises in a different form in the two cases. A process can be caught in the act, as it were: if we see things happening (for example, multiple litters, the elimination of weaklings in a struggle for nourishment) that form an essential part of the process, and if we can easily see how the other parts might happen too, then the hypothesis that the process occurs becomes at least plausible. But with events things are quite different: the event, as I remarked earlier, comes to be, is, and then has been; we refer to a completed event in the past tense. How can we know about past events at all, then? Only by inferring them from evidence that is available now—consequences, traces, remains, records. The sorts of past event that we have come to know about in this way include large meteorite impacts, ice ages, volcanic eruptions, and also (especially since the advent of written records) wars, revolutions, inventions, and so on. Some of the evidence for evolution is in the form of past events, namely, the lives of specimens of now extinct animal and plant species, which have left traces in the form of fossils. The question is, is there any evidence of this sort for creation?

Well, there does seem to be evidence, in the form of background radiation, spectral shifts in stars, etc., for a relatively sudden origin of the physical universe as we know it, an origin which has come to be called the Big Bang. Interpreting that evidence means accepting a lot of high-level physical theory, but there is a consensus among cosmologists, with only occasional challenges (though some of them have been radical, from continuous creation to large-scale electrodynamics), that something of the sort must have happened. This however does not help much when it comes to confronting creation with evolution, since the Big Bang preceded the formation of the sun, let alone the earth, by billions of years, and unless there continued to be interventions (which are not suggested by that evidence) we would still need a long evolutionary process to get to where we are now.

The event called Creation, in most theological accounts, would have to be a good deal more recent than that. Archbishop Ussher thought, on the basis of Old Testament chronologies, that it must have taken place in about 4004 B.C. As far as I know there is no evidence whatever for this. Most of what remains from earlier periods of the earth's history is older than that by hundreds of thousands of years. In fact, there isn't much evidence for anything in the biblical account of the origins of things. There were certainly floods, and pious archaeologists are still looking for Noah's Ark on Mount Ararat; for that matter they may well find it, since a lot of the Old Testament is no doubt historical, and a widespread (though still relatively localized) flood might well have seemed like the end of the known world and provoked schemes of rescue. But that won't help with Creation either.

Of course all this may be beside the point. The evidence, you may say, lies precisely in the biblical account itself. Here it is worth making an important distinction, well known in the law, between evidence and testimony. Evidence means something we can actually see, like Exhibit A in a trial; testimony means what somebody says. This introduces a different kind of problem, having to do with the reliability of the witness. If the witness was in a position to see what happened, and tells the truth, then testimony is almost as good as evidence for the occurrence of a past event, like a bank holdup or an automobile accident (though the most honest people can honestly misperceive and misremember). If testimony is not reliable then in the absence of evidence we can only suspend judgment. Now, no textual account can be evidence; it can at best be testimony about evidence, like a deposition in a court case, and that reduces to just testimony. Of course we are obliged most of the time to rely on testimony of this sort (of journalists, explorers, scientific researchers), trusting them to have seen and interpreted the evidence correctly, on pain of being reduced to total skepticism. There is nothing wrong with skepticism, which only means looking out, being wary—but being wary all the time tends to inhibit freedom of action.

The biblical account of creation is not testimony about evidence; it is just testimony. This can be all right too in certain circumstances, namely, those in which we are reporting our own actions: we don't say what doing them would have looked like to an observer; we just report that we did them. If God says, "I created the world" (and that is what the biblical account amounts to, if we accept the idea of the inspiration of Scripture) then all we have to decide is (a) whether it is really God speaking, and (b) whether his report is trustworthy. Given that (b) would generally be considered a blasphemous question, the remaining problem is (a). Actually, addressing that problem is not my main concern here. If the answer is that it is really God speaking, and if we believe in God, then we believe in creation. Note that this is primarily a "belief in" and that the corresponding "belief that" follows from it and from nothing else. What we will then have is "creationism" all right, but it will be a branch of theology. It will certainly not be a "creation science," a term that has been introduced to give theology a kind of respectability it does not need, and which has seriously clouded the issue.

Rather, the question now is, can science be based on testimony, or only on evidence (or on testimony about evidence from known, reliable witnesses)? And I answer: no conclusion drawn from testimony alone can be a scientific conclusion, although it can be an answer to a question that science is powerless to answer. Here is a simple example that will be useful in what follows: Consider a cyclic process, such as a model train running round a track. A scientist comes into the room when the train is already running; the child who put it on the track is standing by. The challenge to science is to determine, from the available evidence, at what point on the track the child started the train. We will assume that the track has been used at random over a long period, that even if records of surges in house current are available, the small surge caused by starting the train (locating which would enable the scientist to extrapolate backwards from the present position of the train once its speed has been measured, assuming the child not to have been messing about with the controls in the meantime) has been masked by other people's starting TV sets, dishwashers, razors, etc., and that the floor is spotless, so that the child has left no marks of fumbling in the dust. In the end the scientist is totally baffled; the only thing to do is to ask the child, who points and says, "There!" If the child is telling the truth, that is the answer to the question, and it is the only answer.
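The scientist's predicament can be made precise. On a closed loop, every candidate starting point is consistent with the observed position, because for each there is an elapsed time that carries the train exactly to where it is now seen. The track length, speed, and positions below are invented for the sketch:

```python
TRACK_LENGTH = 240.0   # hypothetical loop length, in centimeters
SPEED = 12.0           # measured speed, centimeters per second

def position(start, elapsed):
    """Where the train is after `elapsed` seconds from `start`."""
    return (start + SPEED * elapsed) % TRACK_LENGTH

def elapsed_time_from(start, observed):
    """Smallest nonnegative elapsed time taking `start` to `observed`."""
    return ((observed - start) % TRACK_LENGTH) / SPEED

observed = 100.0
# Three quite different starting points all explain the observation
# equally well, each paired with a suitable elapsed time:
candidates = [(s, elapsed_time_from(s, observed)) for s in (0.0, 57.5, 180.0)]
```

The evidence (current position and measured speed) fixes a whole family of start-and-time pairs, never a unique starting point; only the child's testimony singles one out.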

What would a scientific account of creation be like? That is a really interesting question, and it was posed in the nineteenth century by one of the few real scientists who have also been creationists, Philip Henry Gosse. What distinguishes Gosse is that he did not think that in being a creationist he was at the same time being a scientist; on the contrary, he was being a fundamentalist, and readily acknowledged that it was only his belief in God, and in the Bible as God's word, that made him believe in Creation at all. But as a scientist he did see that if creation had taken place, scientists would have been in a position to make some very interesting observations. What makes the observations interesting is that they would have been just the observations that would have been made if creation had not taken place. Let me explain. Obviously no scientist can be present before the creation, but let us suppose that we've been created as full-fledged scientists and can get to work immediately after it. We wake up and rub our eyes; the world is full of trees and animals, and, being scientists and anxious to advance our careers, we start making observations. We notice that there are some young animals, some old ones, some pregnant ones; some great trees, some saplings. We cut down one of the trees: what is it made of? Marshmallow? No, it is made of wood, arranged in rings: seventy, let us say, a good mature tree. So the tree is seventy years old. But it can't be—it came into existence, along with us, only minutes ago.

Gosse's point was that it is impossible, from evidence, to infer the fact of creation; we have to rely on testimony. And that comes down to a question of the credibility of the witness. The only cases in which the credibility of witnesses will be the last word are those in which evidence is unavailable. Since for Gosse creation is such a case he rightly (from his point of view) turns his attention to the Bible, which he takes precisely to be the required account of a witness, and a witness of impeccable credibility, since it is none other than God Himself. This is a switch from science to belief. But the belief system to which he switches is entirely independent of the systematic scientific treatment of the same object.

It would not be impossible to take Gosse's position today—indeed I am surprised that more fundamentalists do not do so. But where does this leave us in relation to our inquiry? The upshot of the argument is this: there is a lot of evidence that the process called evolution is going on, or at least that all the parts of it are going on that we can be expected to have observed in the blink of time (speaking in evolutionary terms) since anyone thought of looking for them. So evolution is a plausible hypothesis, as well confirmed as many hypotheses we have come to rely on, and we do not need to decorate it with an "ism." Whereas creation is a hypothesis with no scientific consequences, and if we want to believe in it we shall have to do so on other grounds. But that, we can be happy to say (though we ought also to be a little awed by it), is a matter on which we enjoy complete freedom of choice.




Preface to Part II:
Hume's Problem

The fallacy of affirming the consequent, as we saw in the case of Philip Henry Gosse, provides anyone at any time with an excuse for rejecting out of hand the entire theoretical edifice of science. In practice people usually only do this when, like Gosse, they have some overriding agenda of another order, for the good reason that the antecedent is very likely to be true after all, the fallacy of inferring it to the contrary notwithstanding. The fallacy hinges on the one-way character of deductive inference, and stands at the threshold of a set of central epistemological problems having to do with extension of knowledge beyond the immediate empirical base. The philosopher who more than anyone else is responsible for having identified and posed these problems is David Hume.

This part of the book deals with various aspects of what I call "Hume's problem," though it might more accurately be entitled "one of Hume's problems and a problem of Peirce's." (One of Hume's other problems comes in part III.) It has always seemed to me that Hume was right about induction (and about causality too, but that is for later). However, that does not invalidate the whole scientific enterprise; rather it shifts the emphasis from questions of pure logic to questions of theoretical strategy. One strategy is to invert the direction of argument, pointing out that the falsehood of the antecedent can be inferred by denying the consequent. But falsification, in spite of its celebrity at Popper's hands, never seemed to me to be of much help, for the rather simple reason that what people really want, after all, is true theories.

Chapter 5 was originally given in Spanish to the philosophical faculty at the University of Costa Rica, as a dry run for its presentation at the first International Congress of Logic, Methodology, and Philosophy of Science at Stanford in 1960. At the congress a philosopher in the audience appeared to be giving animated signs of approval; at the end of my talk he rushed up, and I expected congratulations or at the very least some intelligent question. What he said, and had evidently been waiting to say all that time, was "I've read Hume on cause but I never expected to hear Caws on Hume!" He then fled, with peals of laughter. (This chapter represents the one place in the book where I have allowed myself to include some passages formerly reprinted in The Philosophy of Science: A Systematic Account [Princeton, N.J.: Van Nostrand, 1965], 258–265.)

"The Structure of Discovery" was dedicated, when I delivered it to section L of the AAAS, to the memory of Norwood Russell Hanson, a friend who had been killed in the crash of his Grumman Bearcat a few months before. Russ Hanson had made central contributions to the elucidation of the problem of scientific discovery; my thought in this paper was that a still unexplored angle of the question concerned the nature of the supposedly effective procedures that deductive logicians had at their disposal, which were lacking to their inductive brethren. It seemed to me that the effective procedures weren't effective unless animated by some logician, that the inferences didn't themselves "follow" unless someone was around to follow along with them. This led to a kind of naturalism about logic and hence about discovery, which seemed less mysterious when it was realized that something like it was happening all the time. In this chapter there are some adumbrations of the structuralism on which I was beginning to work at the time of writing it.

Chapter 7 was written, somewhat later, in memory of another friend, Grover Maxwell; it pursues the naturalistic theme of the previous chapter to the point of suggesting that the emergence of knowledge is as simple as any other evolutionary process—and quicker, given that the model is Lamarckian rather than Darwinian. The recipe "say anything you like; repeat only what is true," while requiring an evolutionary time scale, with a bit of the luck of the lottery thrown in, will eventually produce a respectable body of knowledge—has produced it, since that is clearly the way in which human knowledge has grown.


The Paradox of Induction and the Inductive Wager

Problems of self-reference in philosophy have been at the bottom of many of the now classical paradoxes, and this paper will attempt to show that paradoxical conclusions follow from an analysis of the self-reference of induction. The conditions necessary for the generation of paradoxes always include a negation; for example, in Grelling's paradox of heterological terms, it is only when a term is not descriptive of itself that difficulties arise. Similarly, in this case the paradox rests on the assumption that the principle of induction has not been successfully proven. It has often been remarked that induction cannot be relied upon for a proof of itself; but if other proofs had been successful, the continued reliability of inductive inferences would only serve to confirm the principle more fully. If other proofs are not successful, the continued reliability of inductive inferences is, in a sense, an embarrassment. I shall assume that the latter is the case, and shall try first to formulate the source of embarrassment and second to show that, although some authors have gone to extraordinary lengths to avoid a confession of defeat with respect to induction, capitulation is not as dishonorable as it might seem.

Before beginning, however, it is necessary to state which problem of induction is in mind, since one problem has proliferated into many by a process, sometimes referred to as "transformation," in which a closely related but soluble problem is substituted for the original insoluble one. The problems that have been solved include the development of a logical theory of probability, the use of the statistical syllogism, and so on; the original problem, and the one with which I shall be concerned, is that of inferences as to future events drawn from past observations. As used here the term "future events" covers also the future discovery of information about past or distant or concealed events. The difficulty is expressed in the following passage from Hume, whose "statement of the case against induction," as Keynes says, "has never been improved upon."[1]

These two propositions are far from being the same, I have found that such an object has always been attended with such an effect, and I foresee, that other objects, which are, in appearance, similar, will be attended with similar effects. I shall allow, if you please, that the one proposition may justly be inferred from the other; I know, in fact, that it always is inferred. But if you insist that the inference is made by a chain of reasoning, I desire you to produce that reasoning.[2]

And again:

Let the course of things be allowed hitherto ever so regular; that alone, without some new argument or inference, proves not that, for the future, it will continue so. In vain do you pretend to have learned the nature of bodies from your past experience. Their secret nature, and consequently all their effects and influence, may change, without any change in their sensible qualities. This happens sometimes, and with regard to some objects: Why may it not happen always, and with regard to all objects? What logic, what process of argument secures you against this supposition?[3]

To this challenge Hume found no answer, and this fact is responsible for his reputation as the worst pessimist in the history of induction. Nevertheless, he himself had the greatest confidence in the principle. There is an interesting passage in the Enquiry where he actually does apply an inductive procedure to the problem of induction: "This negative argument," he says,

must certainly, in process of time, become altogether convincing, if many penetrating and able philosophers shall turn their enquiries this way and no one be ever able to discover any connecting proposition or intermediate step, which supports the understanding in this conclusion.[4]

But there are obviously two sides to this question, and a little later he appears to have changed to the other:

I must confess that a man is guilty of unpardonable arrogance who concludes, because an argument has escaped his own investigation, that therefore it does not really exist. I must also confess that, though all the learned, for several ages, should have employed themselves in fruitless search on any subject, it may still, perhaps, be rash to conclude positively that the subject must, therefore, pass all human comprehension.[5]

This is exactly the situation in which we find ourselves. "Many penetrating and able philosophers," at least, if not "all the learned," have attempted to discover logical grounds on which a demonstration of the certainty of inductive inferences could rest; many more, since the abandonment of the search for certainty, have tried to do the same for its probability. No proposal for a solution, so far put forward, can claim to have been successful. That the outlines of an inductive logic, based on the theory of probability, now exist, cannot be denied; but this does no more to solve Hume's problem than the fact that there exists a Euclidean geometry helps to make the universe Euclidean. It has puzzled many thinkers that such a gulf should exist between deductive inferences, which everybody agrees to be binding in all circumstances where the premises are true, and inductive inferences, which appear so uncertain; the attempt has therefore been made to locate both kinds of inference on a continuum, so that inductive inferences would be just like deductive ones, only less so. But one circumstance renders all such attempts abortive. If the conclusion of a deductive argument is false, this at the same time renders the premise false, and this may be known immediately; if, for example, it is asserted that all S is P and hence that this S is P, the observation that this S is not P makes it false, according to the usual meaning of "all," to assert that all S is P. No such relation of necessity is available for induction, and if the inference is rephrased to make one appear, it becomes a deductive inference. It seems to me that Hume was right in locating the crux of the argument in necessary connection, and that he was also right in the assumption that the only relation which might provide such a necessary connection would be a causal one. The latter point will be referred to again.
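The asymmetry can be displayed schematically (the notation is standard logical shorthand, not the text's own):

```latex
% Deduction: a false conclusion rebounds on the premises (modus tollens).
\forall x\,(Sx \rightarrow Px),\; Sa \;\vdash\; Pa;
\qquad \neg Pa \;\vdash\; \neg\forall x\,(Sx \rightarrow Px).

% Induction: however many cases have been observed, the next is not bound.
Pa_1,\, Pa_2,\, \ldots,\, Pa_n \;\not\vdash\; Pa_{n+1}.
```

The second line is the one no rephrasing can repair: any premise strong enough to yield the next case deductively would simply restate the principle of induction.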

Let it be conceded, then, at least for the purposes of argument, that all attempts to solve the problem of induction have so far been unsuccessful.

Let the ith attempt at solution be called Ai; then (overlooking the considerable difficulties involved in identifying the A's) we might exhibit a series

A1, A2, . . ., An

to which the methods of induction could be applied. If we let U stand for the predicate "unsuccessful," the state of affairs may be described by the sentence, "All A's so far observed are U." This can clearly serve as the premise of an inductive inference, the conclusion of which will be "All A's are U," or "Probably all A's are U," or "At least 99% (or some other figure, depending on the theory adhered to) of A's are U." The making of such an inference depends, of course, on the reliability of the principle of induction. The assumption of the reliability of the principle leads, therefore, to the conclusion that it is probably indemonstrable. Conversely, if somebody were at last to produce a convincing argument for its validity, that would in effect justify us in saying that this very result was impossible, or at least highly improbable, since it provided a counterexample to a generalization of a type whose soundness had just been demonstrated.
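The self-referential turn can be put in the same schematic shorthand (again a mere abbreviation of the prose, not the author's notation):

```latex
% Premise: every attempted justification examined so far has failed.
U(A_1) \wedge U(A_2) \wedge \cdots \wedge U(A_n)
% Conclusion, drawn by the very principle whose justification is sought:
\therefore\; \text{(probably)}\;\; U(A_i) \text{ for all } i.
```

The principle is thus applied to the record of its own attempted proofs, and counsels pessimism about the next attempt.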

The longer we go on using the principle of induction, then, the less likely we are to find a justification for it. This is what I have called the "paradox of induction." It is not a rigidly formalized paradox—the introduction of probability prevents that—but whatever variety of inductive theory is employed, conclusions which are, at least to a degree, paradoxical result. For example, if one uses Sir Roy Harrod's ingenious formulation,[6] the hopes for success of a new solution can be dampened by pointing out that one is always likely not to be on the verge of a great philosophical discovery.

One objection springs to mind immediately. Nobody considers it paradoxical, for instance, that after years of research a solution should be found to some scientific problem, although it had eluded previous generations; why then should it be so for a philosophical problem? The answer to this is, of course, that the scientific problem yields to new evidence, but that in the philosophical case no new evidence is available. In principle, the fact that different animals require different groups of proteins is just as mysterious as the fact, which intrigued Hume so, that bread is nourishing for men but not for lions and tigers. The causal relation, objectively speaking, is as ineffable as it ever was. It is not inconceivable, I suppose, that new evidence might be forthcoming, and this would be the only way in which a theory of induction could escape the paradox; but it is difficult to imagine what might constitute new evidence in this sense. Williams remarks that

the solution of the problem of induction must be at bottom as banal and monolithic as the process of induction itself. Philosophers and logicians have walked around and over our principle for centuries,[7]

and it is to be supposed that they have seen most of what there is to be seen.

What really convinced Hume of the hopelessness of the situation with regard to induction was the inaccessibility of future data. In his discussion of causality he suggests that there are three elements in a prediction—an observed event, a predicted event, and a causal mechanism, corresponding on the logical side to a premise, a conclusion, and an inductive principle respectively—two of which are needed for the determination of the third, as two sides of a triangle are needed for the determination of the third. If the observed and predicted events are both available they may be taken as defining the causal relation, and this is the way in which the word "cause" is generally used. In the logical case, if premise and conclusion are both known, some probability relation may be established between them, and this may serve as the paradigm of an inductive inference. But where the predicted event has not yet been observed, where the conclusion is not known, the situation is like that of trying to guess where the rest of a triangle lies, if one is given one side. Without further information the task is impossible, and the only way to get further information is to wait. In the absence of any other principle we use, of course, the relation defined by previous sequences of observations; but that the new case will conform to the pattern cannot be known until it has already done so.

Science constructs theories which are designed to fit as closely as possible evidence that is already in, and relies on them, as it is bound to do, when it is necessary to speculate as to future states of affairs. The theories present a more or less finished appearance, and as conceptual structures may be explored again and again without revealing any flaws; they may even be rigorously axiomatized and exhibited as logical systems. This in itself does not compel the world to behave as they say it will, and if it behaves differently changes are made in the theories. If physicists had resolved to cling tenaciously to apparently reasonable principles—principles of symmetry or of conservation, for instance—to the extent of demonstrating their logical necessity, there would have been more difficulty than in fact there has been in adjusting to recent developments. The same considerations apply to the principle of induction, which is simply the most general and inclusive theory we possess. I am not suggesting that a disproof of the inductive principle is likely—if it is not verifiable it would not seem to be falsifiable either. Verifiability and falsifiability, as methodological tools, are not as different as they are sometimes thought to be; whenever a crucial test arises, the principle of double negation turns the one into the other. But the principle of induction needs logical foundations as little as the conservation principles needed them; and if they are not needed it hardly seems worth a great deal of effort to supply them.

It is claimed, however, that logical foundations are needed—that their absence is a "scandal" which is likely to have dire consequences for civilization.[8] This kind of language betrays a concern which is more than philosophical. We do in fact rely on the principle; it has in fact worked, up to this point; we are shocked at our inability to justify our actions logically. We are in the position of people who, as Pascal says, have been acting on an uncertainty without knowing why. Rem viderunt, causam non viderunt:[9] "they have seen how things are, the causes they have not seen."

Pascal was the first to use a mathematical theory of probability as a justification for action on uncertainties, although in a rather unlikely context:

If we must not act save on a certainty, we ought not to act on religion, for it is not certain. But how many things we do on an uncertainty, sea voyages, battles! . . . Now when we work for tomorrow, and do so on an uncertainty, we act reasonably; for we ought to work for an uncertainty according to the doctrine of chance. . . . St. Augustine has seen that we work for an uncertainty, on sea, in battle, etc. But he has not seen the doctrine of chance which proves that we should do so.[10]

It is quite possible to agree with him that acting on chances is acting "reasonably" in the broad sense of the term (which does not mean "logically") without following to the conclusion of his argument, which is the existence of God—for this passage is taken, of course, from the section of the Pensées entitled "The Necessity of the Wager." Similarly, it is quite possible to agree with writers on induction who say, as Williams does,

It remains none the less reasonable to wager our lives and fortunes where our chances are best,[11]

without following to the conclusion, reached by some in a manner strikingly similar to Pascal's, that a wager can justify induction as a metaphysical principle. This, however, seems to be the logical outcome of some recent proposals.
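Pascal's "doctrine of chance" is, in modern dress, the rule of expected value; with hypothetical stakes attached it reads (a reconstruction, not Pascal's own formula):

```latex
% Act on the uncertain option when the expectation favors it:
p \cdot V_{\text{gain}} \;-\; (1 - p)\,V_{\text{loss}} \;>\; 0.
% Pascal takes the gain to be infinite, so any p > 0 decides the wager.
```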

The two writers in whose treatments of induction the parallel with Pascal is most striking are Reichenbach and J. O. Wisdom. Both authors agree that the problem does not admit of a straightforward solution, either affirmative or negative, just as Pascal admitted that neither of the two propositions, "God is, or He is not," could be defended according to reason. And just as Pascal presented two alternative modes of action—to believe or not to believe—so in the case of induction there is a choice—to trust inductively-confirmed statements, or not to trust them. Nature may or may not be such as to vindicate our trust—in Reichenbach's language the world may or may not be "predictable,"[12] in Wisdom's the universe may be "favorable" or "unfavorable."[13] We are, in effect, invited to wager on the former possibility, since the odds are heaviest on that side. Although neither of these authors believes himself to have solved the problem exactly as Hume set it—both, in fact, agree with Hume's main criticisms—nevertheless each claims to have removed the problematic elements from it. Reichenbach speaks straightforwardly of "the justification of induction which Hume thought impossible,"[14] while Wisdom solves the problem only after "transformation."[15] In both cases their conclusions penetrate beneath immediate strategic necessity to a more fundamental level.

There is a distinction to be made here between recommendations as to strategy—the maximizing of the chances, assuming a regular universe in which we know less than we would like, as practiced in the theory of games—and conclusions as to principle. Many authors stand behind the theory of probability as the best tool we have for guiding our practical decisions, and in this case the wager remains unchallenged—it is what we actually use. But this is not the point at issue. As far as practical affairs were concerned, as was pointed out earlier, Hume too knew and used the principle of induction, and would no doubt have been happy to learn and use also modern methods of probability. He was aware that this fact might be held against him—a kind of ad hominem argument based on such a discrepancy between belief and practice appears in nearly all treatments of the subject—and in the following important passage from the Enquiry took precautions accordingly:

My practice, you say, refutes my doubts. But you mistake the purport of my question. As an agent, I am quite satisfied in the point; but as a philosopher, who has some share of curiosity, I will not say scepticism, I want to learn the foundation of this inference.[16]

The foundation has been taken to lie in a metaphysical principle—the Principle of the Uniformity of Nature, or the Principle of Sufficient Reason, or the like. Such principles can be used to justify anything; happily, this kind of metaphysics is increasingly in disrepute. The principle needed is metaphysical, however, in Collingwood's sense,[17] in that it is an absolute presupposition of scientific activity. It appears to me unfortunate to suppose that a wager can be properly used to justify such a principle. If we ask ourselves what is the status of a concept which is made the subject of an intellectual wager—what, for instance, the existence of God meant to Pascal—we have to answer that it is that of something to which there is passionate attachment. Pascal already believed in God; the wager was a rationalization of his belief for the benefit of his worldly friends. Similarly, when Reichenbach says,


It is better to try even in uncertainty than not to try and be certain of getting nothing,[18]

or Wisdom,

We must not, however, slur over . . . the possibility that the universe is favourable,[19]

one is not impressed with a conviction of genuine uncertainty, of genuine doubt as to the nature of things; these devices are merely the best that can be done to provide visible support for a belief which is already stronger than any such devices could possibly make it.

Today we are not, most of us, moved by Pascal's argument. If the arguments of Reichenbach and Wisdom appear more compelling, that is because of our historical perspective. The conflict between religion and the world is more or less quiescent; science, together with the philosophy of science, occupies an area of active concern. Induction has an importance to us now that the existence of God has not; we are therefore more sympathetic to proposals for providing it with a logical foundation. But the truth or falsity of the principle of induction is not affected by our efforts, any more than the truth or falsity of the existence of God is. Electing one side or the other, as a result of logical calculation, is in any case futile. A regular world, viewed sub specie aeternitatis, is a fantastic improbability; an irregular world, viewed from our temporal standpoint, is equally a fantastic improbability. "It is incomprehensible," says Pascal, "that God should exist, and it is incomprehensible that He should not exist."[20] In our niche of space and time it seems foolish not to trust the principle of induction; in Pascal's, it seemed foolish to question God's existence. I do not doubt that in his circumstances he was right, and I do not doubt that in our circumstances we are right, but that gives us no reason for claiming the philosophical immutability of the principles to which we subscribe. Wagers are appropriate to limited objectives, not to ultimate metaphysical commitments. Neither Reichenbach nor Wisdom, perhaps, intends to give the impression that an ultimate metaphysical commitment is in mind, but by bringing in the notion of the world in which series converge to limits coincident with "best posits,"[21] the universe in which regularly unfalsified hypotheses remain unfalsified,[22] they have moved into metaphysical territory, where gambling is out of place.
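Reichenbach's "best posit" is generated by his so-called straight rule: posit that the limiting relative frequency of an event is the relative frequency observed so far. A minimal sketch (the simulated "predictable" world, the figures, and the function name are illustrative inventions, not Reichenbach's):

```python
import random

def straight_rule_posit(outcomes):
    """Posit the observed relative frequency as the limiting frequency
    (Reichenbach's straight rule)."""
    return sum(outcomes) / len(outcomes)

# A 'predictable' world: outcomes governed by a stable frequency.
random.seed(0)
true_frequency = 0.7
outcomes = [1 if random.random() < true_frequency else 0
            for _ in range(10_000)]

posit = straight_rule_posit(outcomes)
print(round(posit, 2))
```

If the world is "unpredictable," the posits converge on nothing, and no rival method fares better; that disjunction is the whole content of the wager.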

The principle of induction is left, therefore, unverified, unfalsified, and apparently empty and useless. Some critics might be tempted to say that this end could have been reached much more quickly by the employment of a meaning criterion, or something of that sort, which would have shown from the beginning that the principle could say nothing. But that would have been appealing to one more unnecessary assumption. I have preferred to show the impossibility of its logical proof in another way, by locating it among the paradoxes, and to show that some attempts at such a proof, in fact, appeal to something quite apart from reason. This is far from saying, of course, that the principle is uninteresting or unimportant. While it need not always do so, the discovery of a paradox may indicate a profound truth. It was this, perhaps, that Unamuno had in mind when he defined a paradox as "a proposition which is at least as evident as the syllogism, only not as boring."


The Structure of Discovery

It has been widely held that, while logical analysis is appropriate to the justification of claims to scientific knowledge, such knowledge being expressed in hypotheses having empirical consequences, it is not appropriate to an inquiry into the way in which such claims originate. Questions about origins are said to belong to the "context of discovery" rather than to the "context of justification," and to require a different kind of logic. The devising of hypotheses is ascribed to genius, intuition, imagination, chance, or any number of other extralogical processes; it comes to be regarded as a paradigm case of science in its authentic natural state, inaccessible to logical reconstruction by philosophers who do not really know what it is like to be a scientist.

One of the tactics most often used by proponents of the mystique of genius, who are always bandying about terms like creativity, insight, ripeness, and so on, is the recounting of tales about moments of enlightenment in the lives of the great scientists. Everybody has heard of Kekulé's dream about the snakes biting one another's tails, and of Poincaré's long bout with the Fuchsian functions on his geological bus trip through Normandy. Such stories no doubt give an accurate account of what "really happened"; they are suitably sensitive to the "actual development" of scientific theories. But to draw attention to them at all in connection with an analysis of the process of discovery seems to me a radical mistake. The mistake involved shows up clearly in a passage from Popper's The Logic of Scientific Discovery, where he says, "The initial stage, the act of conceiving or inventing a theory, seems to me neither to call for logical analysis nor to be susceptible of it. The question how it happens that a new idea occurs to a man—whether it is a musical theme, a dramatic conflict, or a scientific theory—may be of great interest to empirical psychology; but it is irrelevant to the logical analysis of scientific knowledge."[1]

This article is dedicated to the memory of Norwood Russell Hanson, vice-president of AAAS section L in 1961–1962 and for many years secretary of the section.

Popper thus dismisses the possibility of a logical analysis of the conception or invention of a theory because he thinks of these things in terms of "how it happens." But in the case of deductive argument nobody would think of asking how it happens; it would be the structure of the process, not its particular embodiment in a particular individual, that would be seen by everybody to be the crucial issue. In fact, in demonstrative argument just as in the process of discovery, there would be nothing strange in its not happening at all—the actual movement from the premises to a conclusion is just as intuitive, creative, and so on as the actual having of a new idea, and very stupid or very stubborn people, like the tortoise in Lewis Carroll's fable, may quite well decline, or be unable, to make it—but the fact that it failed to happen would not alter in any way the logical structure of the relationship between premises and conclusion. Even if one wished to maintain that, in the case of discovery, there are not any identifiable premises (or even any premises at all—a strategy I have explored elsewhere[2] ) one could still choose to regard the process as in principle intelligible rather than unintelligible; what is disturbing about the passage from Popper is that he seems to opt for the latter. In fact he says explicitly, "My view may be expressed by saying that every discovery contains 'an irrational element,' or a 'creative intuition,' in Bergson's sense."[3]

My point is that if this is to be said of the process of discovery it may just as well be said of the process of strict logical deduction, so we might add to the canon exciting tales about that activity too. I hope I may be forgiven an autobiographical example to try out this parallel. I remember very clearly the moment when, as a schoolboy, I first understood the principle of linear simultaneous equations. The circumstances are engraved in my memory just as clearly as Poincaré's foot on the step of the bus became engraved in his; it was in the yard of my school, and I remember the red brick wall, the bicycle racks, and so on, in proper Proustian fashion. I saw, in a flash of intuition, why two equations were needed for two unknowns, and how the substitution from one equation into the other proceeded. Now, as I need hardly say, there was no question of originality here; I had had all the information for a number of weeks, during which my mathematics teacher had been trying to pound the principle into my head. As far as that goes, it wasn't that I couldn't do simultaneous equations—I could follow all the rules and get the right answer; it was just that I hadn't seen the underlying rationality of the process. When I finally saw it I got the "Eureka feeling," of which Koestler speaks,[4] just as surely as if I had invented simultaneous equations myself, but I didn't suppose that that had anything to do with the logic of the situation.
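The schoolboy insight can be written out in a line. For an illustrative pair of equations (an example supplied here, not one from the text), substitution runs:

```latex
\begin{aligned}
x + y &= 5, \qquad x - y = 1;\\
x &= 5 - y \;\Rightarrow\; (5 - y) - y = 1 \;\Rightarrow\; y = 2,\; x = 3.
\end{aligned}
```

Two equations are needed because each by itself constrains the pair (x, y) only to a line; the substitution locates the intersection of the two constraints.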

The trouble with "Eureka!" is that the temptation to shout it is a very poor index of success in the enterprise at hand. Such a feeling can only be a by-product of the process—a not unimportant one, perhaps, from some evolutionary point of view, but certainly a dispensable one. A discovery would still be a discovery if it were made in cold blood without any such affective concomitant, and if it turned out to be mistaken it would still be mistaken even though the heavens had opened upon the lucky discoverer at the moment of enlightenment. It is perhaps conceivable that somebody might become addicted to the Eureka feeling and, in order to have it as often as possible, try very hard to make many discoveries, some of which might be valid. But scientists have to learn to be wary of emotional commitments to their hypotheses. Darwin says, "I have steadily endeavored to keep my mind free so as to give up any hypothesis, however much beloved (and I cannot resist forming one on every subject) as soon as facts are seen to be opposed to it. Indeed, I have had no choice but to act in this manner, for with the exception of the Coral Reefs, I cannot remember a single first-formed hypothesis which had not after a time to be given up or greatly modified." And he continues, "This has naturally led me to distrust greatly deductive reasoning in the mixed sciences."[5]

Another distinction frequently drawn between the logic of justification and the logic of discovery is that in the former case rules can be given. This is only apparently true; on the one hand, although in principle all deductions can be carried out by a rule-following technique, in practice good logicians and mathematicians are constantly making wild leaps only later justified by rules, if at all, while on the other hand certain workers—notably Polya[6]—have made significant steps in the direction of formulating rules for "plausible inference." Frege was among the first to try to carry out logical deductions strictly according to rule, and he found it extraordinarily difficult, as he testifies in the preface to the Begriffsschrift.[7] If there were no rules of plausible inference, nobody could learn techniques of research, nor could the agencies responsible for funding it have any confidence whatever that the tasks undertaken by researchers would bear fruit. Yet people do learn, and suitably financed campaigns of research (like the Manhattan Project) do regularly produce results. The task is then to find out what is going on, not to dismiss it all as ineffable or mysterious.

Scientists, as Norwood Russell Hanson points out, "do not start from hypotheses; they start from data."[8] The question, then, is what happens between the data and the hypotheses, taken in that order—not whether a deductive rule can be written to get from the former to the latter, but whether some intelligible structure can be discerned in the transition. I take "intelligible" in this context to be equivalent to "logical"—a procedure which certainly has etymological sanction, even if it means abandoning the narrower sense of "logical," which requires the specification of rules. In fact it need not mean this, if we remember that the use of "logic" in the expression "inductive logic" is a perfectly orthodox one, and that it sanctions a use of "rule" in the expression "inductive rule" which differs considerably in its connotations from the corresponding use in the deductive case. We have come to think of deductive rules as effective procedures, leading with certainty to the right result. In the inductive case, however, we have to get accustomed to rules which lead, with finite probability, to the wrong result. When people say "there could be no rule for making discoveries," they generally have the first sense of the term in mind: there could be no way of being sure of making discoveries. But there might still be sets of rules which, if faithfully followed, would increase the chances of making them. These, as inductive logicians have begun to realize, may include rules of acceptance as well as rules of inference. The manner of their implementation (their relation to rules of practice) needs further study, but it is not my purpose to pursue the question further here.

A Model for Discovery

How do hypotheses arise? The answer I wish to suggest is that, strictly speaking, they arise naturally; hypotheses are to be accounted for in the same manner as the events they seek to explain—indeed the hypothesis that this is so has arisen in this way. The evidence for this hypothesis is of course far from conclusive; while I think it preferable to any alternative which calls upon nonnatural occurrences, it would admittedly be difficult to show that no such occurrences were involved in the process (just as it would be difficult to show this for deductive arguments). But if a model can be constructed within which the emergence of hypotheses follows obviously from other properties of the model, the nonnatural element will be shown to be dispensable, just as it might be shown to be dispensable in deductive arguments by remarking that anybody can follow the rules.

Such a model can, I think, be put together from a number of disparate sources. It shows that, given certain facts about human beings and human cultures, there is nothing odd about the emergence of science or about the rate of its development, or about the fact that some of those who have contributed to this development have been geniuses.


The model, it is true, gives the main part of its account in collective rather than in individual terms—but that has now become commonplace, since the analysis of individual discoveries has shown that, in practically every case, the individual acted as the catalyst for a complex process in which many other individuals played a role. This need not be taken to mean that no credit is due the individual for having advanced a particular science in a particular way at a particular time, but it does mean that (probably) no individual has been indispensable to the advance of science in general. "Very simple-minded people think that if Newton had died prematurely we would still be at our wits' end to account for the fall of apples," says Medawar.[9] We must be able to find a way of reconciling our admiration for Newton with the avoidance of this mistake.

I make no apology for beginning my exposition of this theory of discovery with Bacon, whose method has, I believe, been misunderstood in important respects. The feature of the method which has always struck me most forcibly occurs in book 2 of the Novum Organum,[10] where, after the construction of the inductive Tables, Bacon says (aphorism xx),

And yet since truth will sooner come from error than from confusion I think it expedient that the understanding should have permission, after the three Tables of First Presentation (such as I have exhibited) have been made and weighed, to make an essay of the Interpretation of Nature in the affirmative way; on the strength both of the instances given in the Tables, and of any others it may meet with elsewhere. Which kind of essay I call the Indulgence of the Understanding or the Commencement of Interpretation or the First Vintage.

This is strikingly similar to Darwin's remark in the introduction to The Origin of Species, where he says, "It occurred to me, in 1837, that something might perhaps be made out on this question by patiently accumulating and reflecting on all sorts of facts which could possibly have any bearing on it. After five years' work I allowed myself to speculate on the subject."[11] He remarks elsewhere[12] that he worked on "true Baconian principles," a claim which is denied by a number of commentators who have not read Bacon as closely as Darwin himself evidently did. There is a hint of the same kind of thing in Frege's concern not to jump to conclusions in the course of his logical work.

The truth to which I think these and other citations point is that the practical problem is often one not so much of finding hypotheses as of holding them in check. Bacon's use of a word like "indulgence," and Darwin's of the phrase "I allowed myself," suggest that, once the evidence is in, there is simply no need of a rule for getting the hypothesis—it has long since formed and is only waiting to be recognized. (Remember Darwin's comment: "I cannot resist forming one on every subject.") But two questions immediately present themselves: By what mechanism of thought did the hypothesis come into being? And, if it is a natural process, why isn't everybody a genius? (It was Bacon's failure to recognize that everybody is not a genius which constituted the chief weakness in his program for making the methods of science available to the population at large.)

As for everybody's not being a genius, the answer may be that everybody above a certain level of natural intelligence in principle is, until inhibiting factors supervene—which almost always happens. It may be worth making a more general point here about a habit of thought into which philosophers of science sometimes fall—a habit due largely, I suspect, to the influence of Hume's analysis of causality. We think of events as in general being made to happen (and ask what antecedent events produced them), rather than as just happening (in which case the relevant question would be what antecedent events, by failing to happen, failed to prevent them). It is noticeable, however, that, when scientists perform delicate experiments, they expend their energy not on making sure that the desired outcome occurs but on trying to make sure that some undesirable outcome does not occur; they take experimental precautions against Nature, rather than giving experimental encouragement to Nature. Similarly, when engaged in logical argument we don't really need a rule to tell us how to proceed; what we chiefly need is a kind of single-minded concentration that keeps out irrelevant thoughts, and a facility for spotting wrong moves. The motive power of the enterprise doesn't come from the rules—they just keep it on the rails. Rules, it is true, can play a leading rather than a guiding part when the motive power is comparatively unintelligent, as in computers, but the critical thing seems to be to let the machinery run. This view is fully in keeping with the fact, frequently remarked upon, that the process of discovery may be unconscious: the scientist wakes up the next morning—or, in stubborn cases like Poincaré's, a week or so later—with the required solution. Whether or not all the steps are conscious is irrelevant to the question of whether or not they are logical.

If we are to admit biographical evidence, the point about inhibiting factors (and, on the other side of the coin, stimulating ones) may be illustrated by the fact that many geniuses have been characterized by a strong resistance to authority (that is, resistance to having their conclusions drawn for them) and, at the same time, by an openness to random suggestion amounting almost to credulity. Ernest Jones[13] observes this with respect to Freud, and Darwin[14] observes it with respect to himself. Ordinary social experience, and especially education, work, of course, in precisely the opposite sense, imposing, even in the most well-meaning of democracies, an extraordinarily authoritarian view of the world and, at the same time, encouraging the belief that people should be selective about what they take in, and skeptical about all evidence from nonauthoritarian sources. These tendencies alone would be enough to account for the inhibition of discoveries in all but a handful of the population at any given time.

The hypothesis emerges naturally only when all the evidence is in—the conclusion follows only from a complete or almost complete set of premises. I add "almost complete" because there is a powerful Gestalt phenomenon to be observed here: closure is sometimes procured by the addition of a premise which is the obviously missing one, the only one which fits in with the rest of the pattern. Often, however, not even this much is required. All the premises for the hypothesis of the origin of species through natural selection were present both for Darwin and for Wallace, and, once they had them all (including the indispensable contribution from Malthus), they both got the point at once. Now there is of course no effective way of ever being sure that one has all the premises. But in this respect, also, the logic of discovery is in precisely the same boat as deductive logic: the rules there do not yield the premises either, they only yield the conclusion once the premises have been provided.

What are the premises which lead to a scientific discovery? Where do they come from? At this point, in the literature, the search for a logic of discovery frequently gets thrown off the scent by the insertion of a great deal of irrelevant talk about motivation, perplexity, or crisis; it is thought necessary to point out that discoveries do not happen if there is not some problem with the science we already have. This kind of thing is not only confusing but downright misleading. It suggests, again, a spurious difference between deductive logic and the logic of discovery. In fact, of course, nobody would carry out deductions either if there were not some reason to do so—and if that reason often amounts to nothing more than a passion for mathematics, having no direct relevance to the solution of any practical problem, a similar passion for investigation into nature has accounted for a great deal of inductive progress too.

The premises in question are of two principal kinds: on the one hand there are theories and observations made and confirmed by previous workers, and, on the other, observations not adequately covered by earlier theories, made by or communicated to the discoverer. The discovery consists, of course, in the provision of an adequate theory to cover these new observations. Premises of the former kind are part of the inheritance of the scientist, though finding them may involve a search of the literature. Those of the latter kind may come from plain observation or from experiment; they may come into the possession of the scientist quite by accident, in a disguised form, and so on. It is at this stage—in the provision of the premises, rather than in the structure of the argument—that the notorious uncertainty of the process of discovery arises, that serendipity plays a part, and so on.

By far the most important contribution, however, is made by what I have spoken of as the scientist's "inheritance," although it might be better to use the genetic term rather than the legal one and speak instead of "heredity." Newton's celebrated remark about "standing on the shoulders of giants"[15] reminds us that the development of science is a stepwise process; nobody starts from scratch, and nobody gets very far ahead of the rest. At any point in history there is a range of possible discovery; the trailing edge of the range is defined by everything known at the time (I overlook here the fact that people are constantly "discovering" what is already known, which blurs this edge somewhat), and the leading edge is a function of what is already known, together with variables representing available instrumentation, the capacity of human brains, and so on. But, within the range, all movement is not forward—quite the contrary. While the mind moves with a kind of subjective conviction and (as it persuades itself) unerringly to its inductive conclusion, that conclusion is not always the discovery it is thought to be. There may be several reasons for this: the "discovery," if it fits the facts, may have been made before; if it does not fit them, that may be because there are still, without the scientist's knowing it, some missing premises (some fact not known, some previously established theory not taken into account), or it may be just because someone has made a mistake. In order to get a clear picture of scientific discovery the account has to be broadened somewhat to take into consideration the population of scientific workers at the time, together with the nature of the development of science. The best analogy for this development is again a genetic one: Just as mutations arise naturally but are not all beneficial, so hypotheses emerge naturally but are not all correct. 
If progress is to occur, therefore, we require a superfluity of hypotheses and also a mechanism of selection. At any given epoch in the development of science—to deal with the first requirement first—hypotheses are in fact emerging at a much higher rate than one might suspect from reading subsequent historical accounts. We all know about Darwin and Wallace, for example; but how many of the hundreds of other well-meaning naturalists of the middle nineteenth century, all tackling the problem of the persistence or mutability of species, are now remembered?

It may be useful in this connection to draw attention to a well-known


phenomenon which is more relevant to the development of science than most of us perceive it to be—namely, the phenomenon of the crackpot. We are accustomed to thinking of the advancement of science in terms of the half dozen great names in a given field; on reflection we may see that these half dozen are supplemented by a thousand or so working in more obscure laboratories. But we should also remember that there are myriads of people speculating, generally in a half-informed way, about the same topics from myriads of private vantage points; the occasional wild manifestos we all receive, showing how misguided Darwin and Einstein were, represent a mere fraction of their output. In every epoch something like this has gone on, and the unrecorded history of unsuccessful speculation would swamp completely the history of science as we know it if it could ever be added to the literature. Unsuccessful hypotheses are weeded out, of course, by their failure to square with the facts, or if they can be made to do that, by their failure to be predictive. But in this connection certain social factors tend to interfere with the evolutionary pattern, just as they do in the biological case. Just as the children of rich families may, under a less than equitable social system, be comparatively better protected against the hostility of the environment than the children of poor ones, so some theories produced under powerful sponsorship may have a longer run than they deserve.

Despite the fact that parallels present themselves so readily, there are a couple of puzzling things about the development of science that make this evolutionary analogy suspect. First of all, there is the fantastic rate of its growth in the last three or four centuries, quite unlike the leisurely pace at which biological adaptation usually proceeds. Second, there is the remarkable fact, documented in the work of Robert Merton and others,[16] that virtually all valid discoveries (let alone incorrect hypotheses) have been made by more than one worker, sometimes by many, while some great scientists appear to have made far more than their fair share of such discoveries. Clearly a random-mutation, Mendelian evolutionary model will not do.

The Evolution of Science

At this point it would be convenient to introduce some statistical analysis (already hinted at by the reference to Merton's work on multiple discoveries) to show how a given frequency of theoretical interest in a population, presumed to yield a rather smaller frequency of correct conjectures—these to be selected by the hostility of the experimental environment towards false theories—would account for the development of science. Unfortunately the necessary statistical apparatus has not been worked out, since statisticians have concentrated their attention on Mendelian genetics, whereas the form of genetic theory required for this purpose is clearly Lamarckian. The accumulated empirical and theoretical knowledge passed on from one generation of scientists to another counts as an acquired characteristic, the fruit of direct adaptation rather than of mutation. To make matters worse, the pattern of reproduction is quite evidently not sexual. I can offer one or two further genetic analogies—for example, it is easy to find parts of theory behaving like dominant characteristics, in that they exclude or subsume alternative views, and others behaving like recessive ones, in that they are passed on with the rest of the inherited material but do not become important until they are conjoined with some other factor—but I have not been able to work out the details of the appropriate model.
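Although the statistical apparatus is, as the text says, not worked out, the bare selection mechanism (many conjectures, few correct, the hostile experimental environment weeding out the rest) can at least be caricatured in a toy simulation. Every number and name below is invented purely for illustration, not a claim about any actual population of scientists:

```python
import random

random.seed(0)  # fixed seed so the toy run is repeatable

def surviving_discoveries(population, conjectures_each, p_correct, p_survives_test):
    """Count conjectures that are correct AND survive experimental testing.

    Incorrect conjectures are always weeded out here (they fail to square
    with the facts); even correct ones may be lost before being recognized.
    """
    survivors = 0
    for _ in range(population * conjectures_each):
        if random.random() < p_correct and random.random() < p_survives_test:
            survivors += 1
    return survivors

# 1,000 naturalists, 20 conjectures each, 1% correct, 90% of those recognized:
# the expected yield is about 180 "discoveries" out of 20,000 conjectures.
print(surviving_discoveries(1000, 20, 0.01, 0.9))
```

A superfluity of hypotheses with a small survival rate still yields steady progress, which is the general evolutionary point; what the sketch cannot capture is the Lamarckian inheritance the text goes on to require.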

Still I think the general evolutionary point holds. Discoveries represent a kind of adaptation which is almost bound to occur in a number of individuals if they are subjected to roughly similar environmental pressures, the environment in this case being an intellectual one. Medawar, in an exchange with Arthur Koestler about the latter's book, The Act of Creation, remarks,

Scientists on the same road may be expected to arrive at the same destination, often not far apart. Romantics like Koestler don't like to admit this, because it seems to them to derogate from the authority of genius. Thus of Newton and Leibniz, equal first with the differential calculus, Koestler says "the greatness of this accomplishment is hardly diminished by the fact that two among millions, instead of one among millions, had the exceptional genius to do it." But millions weren't trying for the calculus. If they had been, hundreds would have got it.[17]

That is as close to backing on the statistical point as I am likely to come for the moment. It is notoriously difficult to confirm counterfactuals of this sort, but there does seem to be a practical sense in what Medawar says, borne out by the tendency of various agencies to bombard scientists with research grants in an expectation of results at least comparable to that of geneticists bombarding Drosophila with gamma rays.

I have now sketched the main outlines of a possible model for scientific discovery. But there are two important components still missing—namely, some explanation, on the one hand, of the tendency of the human mind to produce hypotheses at all and, on the other, of the tendency of some great minds to produce many correct ones. Given that hypotheses are in fact produced, in a sufficiently prodigal fashion to provide the grounds for natural selection and consequently for the origin of new theories, how are we to account for the phenomenon? It is not enlightening in this connection to talk about genius. To talk about imagination is a little better, although, as Peirce remarks in an essay on Kepler, "'Imagination' is an ocean-broad term, almost meaningless, so many and so diverse are its species."[18] I have already made reference to stresses from the intellectual environment, suggesting a theory of "necessity as the mother of invention," but that certainly cannot be carried through for a large—perhaps the greater—proportion of scientific discoveries.

Let me deal first with the special point about the disproportionate number of discoveries made by great scientists, and then go on to the more general, and concluding, point about the basic mechanism. Obviously no account which ignored "the distinctive role of scientific genius," as Merton calls it, can be considered satisfactory; but the term genius, meaning originally the spirit assigned at birth to guide a child's destiny, can now be admitted, if at all, only to describe people who have already proved themselves in the business of making discoveries, not to describe some potentiality they had before they started. There are clearly genetic determinants involved, having to do with brain capacity and other characteristics normally distributed in the population, with respect to which the genius will be found to lie under the right-hand shoulder of the bell-shaped curve, but none of them, nor any combination, can be equated with scientific genius, since a lot of similarly endowed people will be found living normal lives as stockbrokers, lawyers, and so on.

Once again, what makes people geniuses has nothing whatever to do with the logic they employ; and the point I wish to stress is that the discoverer needs no special logical endowment, no bag of creative tricks, in order to rise to the little eminence which, in the long historical view, he or she occupies for such a short time. I say "little eminence" not to minimize the respect we owe to genius—from close up, after all, we can properly refer to Einstein as a "towering genius"—but to reinforce the point made earlier about the comparatively narrow range within which at any time scientific discoveries can be made. The formation of a scientific genius, in fact, is comparable to the formation of an Olympic runner, or a tennis or chess champion. The chess analogy is a useful one; chess is, after all, a strictly deductive game, and all it takes to win every time is the ability to do a few billion calculations in the head within the period legally allowed for a move. Imagine a chess game in which there are some concealed pieces, moved by a third player, which influence the possible moves of the pieces on the board, and imagine that, instead of sixteen pieces to a side, there are several million, some governed by rules of play not yet known to the players. In such a game a player who, after a long apprenticeship with the experts, made three or four good moves during a lifetime career would have gained a place in history.

The kind of inference great scientists employ in their creative moments is comparable to the kind of inference the master at chess employs; it involves an ability to keep a lot of variables in mind at once, to be sensitive to feedback from tentative calculations (or experiments), to assess strategies for the deployment of time and resources, to perceive the relevance of one fact to another, or of a hypothesis to facts. The difference between their logic and ours is one of degree, not of kind; we employ precisely the same methods, but more clumsily and on more homely tasks. I wish to conclude by considering some crucial properties of the common logical mechanism with which we are all equipped, which explain, I think, the natural tendency for hypotheses to emerge, and in this connection to call on two diverse kinds of evidence, one from psychology and one from anthropology.

Psychology and Structuralism

On the psychological side, Berlyne has recently drawn attention to a form of behavior among higher animals which he calls "exploration." Under this heading, he says, may be grouped activities describable as "curiosity" and "play," or, in a human setting, as "recreation," "entertainment," "art," or even "science." This kind of activity is not indulged in because of its utilitarian value, although it sometimes has useful by-products. "An animal looking and sniffing around may stumble upon a clue to the whereabouts of food. A scientist's discovery may contribute to public amenity and his own enrichment or fame. Much of the time, however, organisms do nothing in particular about the stimulus patterns that they pursue with such avidity. They appear to seek them 'for their own sake.'"[19] Berlyne offers two lines of explanation for this exploratory activity. One of them is the conventional one of response to necessity, leading to "specific" exploration. The second, and more interesting, at least from the point of view of the problem of discovery, deals with what Berlyne calls "diversive" exploration.

It seems that the central nervous system of a higher animal is designed to cope with environments that produce a certain rate of influx of stimulation, information, and challenge to its capacities. It will naturally not perform at its best in an environment that overstresses or overloads it, but we also have evidence that prolonged subjection to an inordinately monotonous or unstimulating environment is detrimental to a variety of psychological functions. We can understand why organisms may seek out stimulation that taxes the nervous system to the right extent, when naturally occurring stimuli are either too easy or too difficult to assimilate.

It looks, therefore, as if a certain kind of nondirected exploratory behavior is to be expected, both when the exterior world is too exciting (the intellectual withdraws into the ivory tower) and when it is not exciting enough (the explorer sets off to conquer new territories).

Now science is manifestly not the only possible kind of human exploration, even on the intellectual level, and this I think has to be recognized if scientific discovery is to be put in its proper context. The notion that true hypotheses emerge from the welter of speculation by a process of natural selection (the condition of survival being agreement with empirical evidence) can be extended by analogy to the emergence of science itself from a welter of natural mental activity. The final component of my model owes its inspiration to the work of the structuralists, notably Claude Lévi-Strauss, although it is an extension rather than a simple invocation of their views.

Lévi-Strauss observes, from the anthropologist's point of view, a phenomenon exactly analogous to that observed by Berlyne from the psychologist's. Primitive people, along with their totems and their myths, turn out to have an extraordinarily rich lore of a kind that can only be called scientific, since it represents a body of hypotheses about the natural world linked in some primitively acceptable way to a body of observations. This "science of the concrete," as Lévi-Strauss calls it, is not, in his words, "of much practical effect." But then "its main purpose is not a practical one. It meets intellectual requirements rather than or instead of satisfying needs. The real question is not whether the touch of a woodpecker's beak does in fact cure toothache. It is rather whether there is a point of view from which a woodpecker's beak and a man's tooth can be seen as 'going together' . . . and whether some initial order can be introduced into the universe by means of these groupings."[20]

This line of work is one which I think is at the moment of great interest and promise. What emerges from it is a view of mind as a structuring agent, which puts together a world of thought comparable in its complexity to the world of experience, thus satisfying the optimum conditions of mental activity described by Berlyne. The chief agency of structure is, of course, language. Of the various constructions made possible by language, science counts as only one, and initially enjoys no special advantage over myth. But sometimes what it says turns out to be true (the herb really does cure the disease), and although it is a long step from the truth of a report of practice to a genuinely theoretical truth, this realization is the starting point of the process of scientific development. A story told for no other initial purpose than to keep mind in a kind of dynamic balance with the world, to assert it over against the world, turns out to hold the clue to control of the world. Other people continue to tell stories for other purposes, and the accumulation of specialized linguistic habits, specialized techniques, and so on, may soon persuade scientists that they are no longer like the others but engaged on a different quest with its own creative character. It is true that scientists, on the whole, care more than other people do that the stories they tell should be true; but then truth itself is a comparative latecomer on the linguistic scene, and it is certainly a mistake to suppose that language was invented for the purpose of telling it.

Scientific theories are no longer created ex nihilo; the stories scientists tell are not free inventions. If the creative process starts from a very large set of premises already demonstrated to be true, its conclusion has a greater chance of being true than it would have if the process had started, like the conjecture of the primitive, from a random assortment of propositions indifferently true and false. When the conclusion is shown to be true by comparison with the evidence, we call the invention a discovery. ("Formulas are invented," as Bunge puts it, "but laws are discovered."[21]) The major point I have wished to make can be summed up in this way: In the creative process, as in the process of demonstration, science has no special logic but shares the structure of human thought in general, and thought proceeds, in creation as in demonstration, according to perfectly intelligible principles. Formal logic, whose history as a rigorous system started with Frege and ended with Gödel, represents a refinement and specialization of the principles of everyday argument; the logic of scientific discovery, whose rigorous formulation is yet to be achieved (not that it holds out the hope of completeness once entertained by deductive logic), will similarly prove to be a refinement and specialization of the logic of everyday invention. The important thing to realize is that invention is, in its strictest sense, as familiar a process as argument, no more and no less mysterious. Once we get this into our heads, scientific creativity will have been won back from the mystery-mongers.


Induction and the Kindness of Nature

In his essay entitled "Induction and Empiricism,"[1] Grover Maxwell proposes a radical solution to what he calls "this notorious problem, the problem of induction, or the problem of nondeductive inference, or the problem of confirmation or corroboration—term it however we may."[2] The character of the solution is presented in the opening sentences of the paper: "The theory of confirmation sketched herein is subjectivist in a manner that will be explained. According to it, however, the degree of confirmation of a hypothesis is an objectively existing relative frequency (or a propensity, if one prefers). The resolution of this apparent paradox is simple, but its implications are, I believe, profound."[3] The simple resolution of which Maxwell speaks consists of having the reference classes over which relative frequency is to be specified be themselves classes of hypotheses which are "like" a given hypothesis, or classes of occasions when events "like" certain other events occur. The delimitation of these classes involves inescapably subjective factors having to do with the judgment of likeness but also with the recognition of the pattern hypothesis in the first place and the assignment to it of a prior probability. But once these things are in place the relative frequency is a matter of counting and is not subjective at all.

The strategy of extending relative frequency determinations to events in the domain of the philosophy of science (involving hypotheses and other thought objects), rather than limiting them as might normally be expected to the domains of the particular sciences themselves (involving physical objects and their properties), as a means of tackling questions about induction has always had a strong appeal for me. It once occurred to me, for example, to argue for what I called the "paradox of induction" along the following lines: let A be an attempt to justify the principle of induction and let U be the predicate "unsuccessful"; then all A's so far observed have been U; therefore (by the principle of induction) the next and all subsequent A's will probably be U; therefore the principle of induction will probably remain unjustified.[4] Maxwell's argument, however, is not only more profound but also I suspect more serious than this. What he is dealing with is not the metametaproblem of attempts to justify the principle of induction but the metaproblem of finding hypotheses to whose confirmation to apply it.

This is not just the old logic of discovery argument, although Maxwell does make a gesture in the direction of the ineffability of retroduction by citing Einstein's "free, creative leap of the mind."[5] How we come by the hypotheses in the reference class is not the point at issue; it is rather a question of what sorts of hypothesis they are when we have got them. The function of a hypothesis is to form the starting point for inferences (not all of which need be deductive, so that Maxwell prefers the expression "hypothetico-inferential" to the more familiar "hypothetico-deductive"). A scientifically useful hypothesis will be one that has testable consequences and is therefore falsifiable. But this is only a necessary and not a sufficient condition, since indefinitely many falsifiable but irrelevant hypotheses can be generated at will. It is also required that the testable consequences be such that if they are not in fact false they have some sort of epistemic utility, throwing light on the behavior and interrelations of things in the world. The good hypothesis is one that is falsifiable but has not in spite of our best efforts actually been falsified. And it must have been worth our best efforts.

Now given the restlessness and fecundity of the human mind when the inventive mood is upon it, hypotheses would seem to proliferate continually and endlessly, forming an indefinitely large class (and potentially an infinitely large one) of which, with respect to a given explanandum, only one member can be true, assuming some mutual exclusivity in formulation. Do we have any chance of finding that one? Let me quote Maxwell at length, because this is the central point in his argument:

Let us not make the mistake of (tacitly) applying a principle of indifference and assigning equal probabilities to each of these bewildering possibilities and, thus, concluding that we have zero probability of hitting upon the theory that is true. . . . For all we know, we may hit upon the true theory (or one "close to the truth") after a few falsifications (or even after none) in a sizable portion of our attempts to expand our knowledge. Put in this way, this unexceptionable statement might seem austere enough to avoid offending even the most rabid anti-justificationist. But reflection soon forces upon us the central, most crucial, and no doubt for many, the most awful fact of the entire epistemological enterprise: if we are to have an appreciable amount of nontrivial knowledge (or, even, of true beliefs), we MUST hit upon fairly often the true theory or hypothesis (or one reasonably close to the truth) after a relatively tiny number of trials. . . . Time is too short for us to sift through (using falsifications to eliminate the chaff) more than an insignificantly small number of the infinite number of possibilities always sanctioned by the data we happen to have. . . . This statement of this "awful" fact (surely it is, rather, a wonderful one) is equivalent to saying that, if significant knowledge is possible, the relative frequency of successes among human attempts at knowledge accretion must be very high indeed, given the possibilities of failure.[6]

One thing to be noticed here before confusion sets in is that we are already talking about two classes of hypotheses, not just one: the class of alternative hypotheses in a given case, only one of which can be true, which is not a reference class for relative frequency purposes, and the class of hypotheses like the pattern hypothesis, many of which may be true (because each applies to a different case), which is such a reference class. It is the high frequency of true hypotheses in this latter class that constitutes the awful or wonderful fact referred to in the citation above.
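Maxwell's resolution can be sketched in miniature with a toy simulation (my illustration only: the parameters PRIOR and FALSE_PASS are invented numbers, not Maxwell's). Within the reference class of hypotheses "like" the pattern hypothesis, the relative frequency of truth among hypotheses that survive a severe test is simply a matter of counting, and it agrees with what Bayes's theorem computes from the assumed prior.

```python
import random

random.seed(0)

# Invented parameters, for illustration only:
PRIOR = 0.2        # assumed prior probability that a hypothesis in the
                   # reference class is true
FALSE_PASS = 0.05  # severity of the test: a false hypothesis survives
                   # it only 5% of the time (true ones always survive)

trials = 100_000
passed = passed_true = 0
for _ in range(trials):
    is_true = random.random() < PRIOR
    survives = is_true or random.random() < FALSE_PASS
    if survives:
        passed += 1
        passed_true += is_true

# The degree of confirmation as an objectively existing relative
# frequency: just counting, nothing subjective.
frequency = passed_true / passed

# Bayes's theorem gives the same quantity analytically:
# P(true | passes) = P(passes | true) P(true) / P(passes)
posterior = PRIOR / (PRIOR + (1 - PRIOR) * FALSE_PASS)
print(f"counted: {frequency:.3f}   Bayes: {posterior:.3f}")
```

The subjectivity enters, as Maxwell says, only in delimiting the class and in estimating the prior; once those are fixed, the frequency is objective.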

Maxwell's claim is that because the class of hypotheses like the pattern hypothesis is a reference class for relative frequency purposes we can apply Bayes's theorem in order to estimate the likelihood that a hypothesis in the class will be true if it passes a test of a given degree of severity. But in order to apply Bayes's theorem we must be able to estimate the prior probability that it will be true if it just belongs to the class. How do we do this? Here Maxwell invokes a "contingent assumption" (the theory of confirmation developed in the paper is called a "contingent theory") which is stated and restated in various forms. Of the estimation of prior probabilities he says,

Although such estimation is virtually never done explicitly in everyday and in scientific contexts, it surely occurs tacitly fairly often. When we suddenly feel quite sure that the theory or hypothesis that has just occurred to us must be the correct explanation of some hitherto puzzling set of circumstances, whether we be detectives faced with strange, apparently unrelated clues or scientists trying to account for whatever data there are at hand, we are tacitly estimating a fairly high value for the prior probability of our proposed hypothesis or theory. We may (or may not) signal this by telling ourselves or others that this hypothesis "just sounds right" or that we "just feel it in our bones that it must be the right one."[7]


This is certainly subjectivity with a vengeance—but we still don't know how we do it.

The answer to that is that we just can, which is why the assumption is contingent.

I contend further that we have the innate ability, or, perhaps better, the innate capacity to develop, given appropriate environmental history, the ability to make estimates of prior probabilities that, in a significant proportion of trials, are not hopelessly at variance with the true values. This too is, of course, contingent, especially if one holds, as I do, that abilities and capacities are always due to intrinsic, structural characteristics of the individual.[8]

However it is not just the contingent properties of the individual that are in play; given that the hypotheses in question are about nature, there has to be a corollary assumption about its properties—two such assumptions, in fact.

The first is really the assumption about the individual in a new guise. Speaking of "the distribution requirements imposed by our definition of 'probability'," Maxwell says,

Nature is, of course, by no means necessarily bound to be so obliging as to fulfill such requirements. But one prime result to which we have returned again and again is that, unless nature is kind in some vital respects, knowledge is impossible. The vital respect that I have stressed and in which I have assumed such kindness to obtain has been the existence in us (a part of nature) of constitutional abilities to cope with the epistemic predicament.[9]

The second assumption about nature is just that there is enough simplicity in it to ensure that the "distribution requirements" do in fact hold, since otherwise, while our constitutional ability to cope might get us knowledge by a somewhat more roundabout method—via second-order knowledge of what the operative distribution function actually is, if not a simple one, for example—this would complicate things enough to make the "awful or wonderful fact" implausible again. So Maxwell rounds out his postulates by admitting

that, in addition to our contingent assumptions about our constitutional capacities, we also, in effect, are assuming that the comparatively limited amount of evidence that we are able to accumulate is not hopelessly unrepresentative.[10]

Now I happen to think that Maxwell's intuitions in all this (and I think they must be called intuitions) are sound, and that the account he gives is at least close to a correct account. My question is, how does it stand in relation to the traditional body of argument about the problem of induction? What does it contribute to that body of argument? To take the postulates in reverse order: to assume that "the limited amount of evidence that we are able to accumulate is not hopelessly unrepresentative" is surely Mill's Uniformity of Nature argument in a slightly different form. The difference between Mill and Maxwell is that Mill happily argued in a circle: from repetitions in nature by induction to regularities; from repeated regularities again by induction to regularities in regularities, which constitute nature's uniformity; and thence to the warrantability of the principle of induction on the basis of which this ascent was made in the first place. Maxwell on the other hand just assumes the regularities as one of the warrants of the principle. And indeed if one wishes to avoid circularity there seems to be no alternative to some such assumption.

The other major postulate, of nature's kindness in arranging that we (being part of nature) should have a capacity for hitting on correct hypotheses, seems to be a reformulation of an insight of Peirce's, if not of the aboriginal psychologizing of induction by Hume himself. Peirce, in "A Neglected Argument for the Reality of God," describes the relation between induction and retroduction in the following somewhat melodramatic terms:

Over the chasm that yawns between the ultimate goal of science and such ideas of Man's environment as, coming over him during his primeval wanderings in the forest, while yet his very notion of error was of the vaguest, he managed to communicate to some fellow, we are building a cantilever bridge of induction, held together by scientific struts and ties. Yet every plank of its advance is first laid by Retroduction alone, that is to say, by the spontaneous conjectures of instinctive reason.[11]

What instinctive reason is able to do, according to Peirce, is to recognize nature's own simplicity and thus to choose, among hypotheses, the ones more likely to be correct, exactly in accord with Maxwell's requirement. However the notion of simplicity itself is susceptible of different interpretations.

Modern science has been builded after the model of Galileo, who founded it, on il lume naturale. That truly inspired prophet had said that, of two hypotheses, the simpler is to be preferred; but I was formerly one of those who, in our dull self-conceit fancying ourselves more sly than he, twisted the maxim to mean the logically simpler, the one that adds the least to what has been observed. . . . It was not until long experience forced me to realize that subsequent discoveries were every time showing I had been wrong, while those who understood the maxim as Galileo had done, early unlocked the secret, that the scales fell from my eyes and my mind awoke to the broad and flaming daylight that it is the simpler Hypothesis, in the sense of the more facile and natural, the one that instinct suggests, that must be preferred; for the reason that, unless man have a natural bent in accordance with nature's, he has no chance of understanding nature at all.[12]

This "natural bent in accordance with nature's" requires a closer analysis. For again, in that form, it is merely a contingent assumption, and adds nothing to what Hume had already insisted upon in even more forthright language.

Everyone knows that Hume despaired of finding any argument that could carry the mind from past experiences to future ones, but that he had every confidence in the inference of the latter from the former just the same: "as an agent," he says, "I am quite satisfied in the point."[13] What compels the inference—though nothing can justify it—is custom or habit.

Having found, in many instances, that any two kinds of objects, flame and heat, snow and cold, have always been conjoined together: if flame or snow be presented anew to the senses, the mind is carried by custom to expect heat or cold, and to believe that such a quality does exist and will discover itself upon a nearer approach. This belief is the necessary result of placing the mind in such circumstances. It is an operation of the soul, when we are so situated, as unavoidable as to feel the passion of love, when we receive benefits; or hatred, when we meet with injuries. All these operations are a species of natural instincts, which no reasoning or process of the thought and understanding is able either to produce or to prevent.[14]

What is not always remembered is that Hume explains the working of these operations by a hypothesis just like Peirce's or Maxwell's, and furthermore gives an almost evolutionary explanation of it. There is a difference, it is true, namely that Hume's purpose is more limited than theirs, in that he is dealing only with inductive generalization and not with the generation or selection of hypotheses. But the language he uses is just what he might have used to make that point too, had that been his intention:

Here, then, is a kind of pre-established harmony between the course of nature and the succession of our ideas; and though the powers and forces by which the former is governed be wholly unknown to us, yet our thoughts and conceptions have still, we find, gone on in the same train with the other works of nature. . . .


As this operation of the mind, by which we infer like effects from like causes, and vice versa, is so essential to the subsistence of all human creatures, it is not probable that it could be trusted to the fallacious deductions of our reason, which is slow in its operations, appears not, in any degree, during the first years of infancy, and, at best, is in every age and period of human life extremely liable to error and mistake. It is more conformable to the ordinary wisdom of nature to secure so necessary an act of the mind by some instinct or mechanical tendency which may be infallible in its operations, may discover itself at the first appearance of life and thought, and may be independent of all the labored deductions of the understanding.[15]

So far, so good; Maxwell is in distinguished company, and seems to have a good chance of being right. But two questions remain: Why make these points again? And is there nothing further to be said than that our instincts have been programmed by a kindly nature—can no additional light be thrown on these powers of ours from some other vantage point? The first of these questions I will answer as Maxwell might have done; the second will take me into some conjectures of my own.

Maxwell's intentions in stressing the necessity for contingent postulates in any theory of confirmation were in fact, on the one hand, to draw attention afresh not merely to Hume's problem but to the seriousness of that problem, and on the other to insist on the inadequacy of empiricism as a philosophical doctrine. With respect to the first of these points, many writers, he maintains, pay lip service to Hume and "then proceed to forget, ignore or repress" his insights even as they propose solutions to his problem. But such repression is understandable because strongly motivated: "it is the very life of empiricism that is at stake."[16] And this leads to the second point.

Empiricism asserts that knowledge claims can be justified in terms of observations plus logic, the observations being contingent and the logic necessary; Maxwell insists that contingent elements other than immediate observation enter unavoidably into every such justification. The alternative is to turn every scientific problem into a pseudoproblem, precisely because the lack of a demonstrative rationale for induction prevents the establishment of any general proposition on the basis of evidence and noncontingent logical principles alone. But he turns this constraint to good account, because there may be more or less nonobservational contingency in the justification of particular claims, and roughly speaking he concludes that the less of it, the more scientific the claim, and the more, the more philosophical.[17] Given that he is committed to a continuity between these two classes, this seems a reasonable way to organize the spectrum.


So much for preliminaries—for it is only now that I approach the real problem of this paper, namely: What sense can we attach to Hume's "pre-established harmony between the course of nature and the succession of our ideas," to Peirce's "natural bent in accordance with nature's" on the part of man, and to Maxwell's "existence in us (a part of nature) of constitutional abilities to cope with the epistemic predicament"? Two hints are contained in texts cited earlier, one of which has already been flagged in the remark that Hume provides an "almost evolutionary explanation" of the reliability of custom in inductive inference. The other lies in Maxwell's remark that "abilities and capacities are always due to intrinsic, structural characteristics of the individual." The explanation of this contingent fact about human knowers is in other words to be sought along structural-evolutionary lines.

Now in my own contribution to the symposium in which Maxwell's paper appeared[18] I took a slightly different line from his on the relation between the two components of the empiricist's arsenal. I said, in effect—and I will not repeat the arguments here—that all the logical principles were contingent, reflecting as I took them to do what might be called global features of the universe in which we find ourselves. And I claimed that those principles, even the deductive ones, manifested themselves as abilities of ours, particularly as a basic ability I called "apposition." I suggested that this ability was not automatically employed in a deductively rigorous way, since this requires in addition special care in the following of rules.

The chief talent that logic requires is an ability to stick to these rules; the looseness and redundancy, the ellipses and shortcuts of ordinary language give way to a more or less rigorous formalism. . . . This talent is rarer than might be supposed, which accounts for the fact that logic . . . is too simple for many people who look for subtlety in its elements. It rapidly gets complex, of course, but its complexities always break down into simple elements as the fuzzy complexities of everyday life do not. People are always putting more or less complicated objects and expressions in apposition with each other and one another; this activity is governed by no principles other than immediate utility or intelligibility and the conventions of ordinary language and behavior, and consequently the coherence and even relevance of any element of the resulting structure with or to any other element are not guaranteed, indeed it is normal for this structure to be incoherent and fragmented.[19]

It follows from this that the ordinary employment of reason is not deductive in any logically rigorous sense; even deductive abilities have to be learned. Recall now that Maxwell, in introducing the inductive or retroductive ability required by his theory of confirmation, spoke somewhat carefully of "the innate ability, or, perhaps better, the innate capacity to develop, given appropriate environmental history, the ability . . . ," etc. The point we were both trying to make, in our different ways, was I think that the business of acquiring and manipulating true propositions (i.e., knowledge, assuming appropriate ancillary beliefs) involves an adaptation of mind to the world, a selection of learned argumentative strategies under evolutionary pressure.

It would of course not be possible for these strategies to be selected if the apparatus for carrying them out were not part of the furniture of the neonate mind (or brain). The apparatus in question is certainly, on the side of the brain, neural, and we know that it has, among other features, those required for the modeling of arguments in deductive logic: binary states, feedback loops, etc. Part of the argument of my paper was implicitly that the apparatus itself has been selected in evolutionary development just because these features are necessary if we are to make our way in a world where things cannot be in two places at once, are either there or not, etc. Had the world been different, different mental structures would have evolved. (That we can't imagine how it might be different in these respects—that all our possible worlds belong to the same logical universe as this one—is, I argued, to be expected, given that we have the mental structures we have.)

What is required for the modeling of arguments in inductive logic? In the deductive case the apparatus is, as it were, indefinitely reusable; the circuits (saving certain consequences of unidirectionality of branching, which place the usual restrictions on operations like conversion) can be followed in either direction, and when the exercise is completed the whole thing can be erased and readied for a new set of data. But in the inductive case it would seem that in the first instance at least something irreversible would be required, something to impose a direction (corresponding to the direction of time in the flow of events in the world) and to preserve for future reference the "bent" of things and processes. And of course that happens too; we call it learning, and it consists of the laying down of traces, of "facilitations," as Freud called them, which ensure that the next time a given configuration is encountered the necessary neural connections are made more easily. That this should be the case is again to be explained, in evolutionary terms, by the fact that the world does in fact repeat itself.
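The idea of a "facilitation" can be caricatured in a few lines (my illustration only: the names and the inverse-effort rule are invented, and nothing here pretends to be a neural model). Repeated encounters with a given configuration deepen its trace, so that the corresponding connection is made more easily each subsequent time.

```python
from collections import defaultdict

traces = defaultdict(int)  # configuration -> accumulated trace strength

def encounter(configuration):
    """Deepen the trace for a configuration and return the 'effort' of
    making the connection, which falls as the trace deepens."""
    traces[configuration] += 1
    return 1.0 / traces[configuration]

# The world repeats itself; the mind's effort falls accordingly.
efforts = [encounter("flame-then-heat") for _ in range(4)]
print(efforts)  # strictly decreasing: 1.0, 0.5, 0.33..., 0.25
```

Note that the traces only accumulate: the process is irreversible, which is just the directedness that the erasable deductive circuitry lacks.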

That would be enough for inductive generalization of the kind that Hume included, though not under that name, among the instincts or mechanical tendencies of the mind. But the final and apparently most difficult question remains: what in addition is required to account for the ability to choose correct hypotheses? (Note that this is not a question of generating hypotheses, which seems to me in the first instance at least less of a problem.[20]) Here I am inclined to take a somewhat different approach, and to question the premises that lead Maxwell to stand in awe and wonder of the epistemological fact that we do hit on them as often as we do. What I want to suggest is that after all it does not happen that often, and that the proportionality he invokes between the "infinite number of possibilities always sanctioned by the data we happen to have" on the one hand and the "relatively tiny number of trials" we have time for on the other lends an unwarranted air of melodrama to the problem.

The situation is not, after all, that we generate infinitely many hypotheses and then start going through them seriatim for the correct one; it is rather that, in the exploratory way that is characteristic of the animal we are (and which I have described elsewhere[21]), we try out each hypothesis as soon as it is thought of, and usually discard it, for the simple reason that it is usually wrong. If necessity presses, or our attention lasts for some other reason, and if inventiveness does not flag, we then think of another hypothesis and try that. We do not even try irrelevant hypotheses; that might be said to be because of a "bent in accordance with nature's," although a "natural" one only in a casual sense, in that it is borrowed from nature at the time, as it were, just from an apprehension of what the situation is that requires explanation. The question is, how much time is there for the sifting process with respect to which Maxwell alleges that time is too short?

I answer: the time of human evolutionary history. It has taken as long as it has for science to emerge precisely because correct conjectures at the hypothetical level are so few and far between. Inductive generalizations are necessary for survival; scientific hypotheses are not, or at least not until society becomes irreversibly dependent on science, which happened only recently. Human cultures must have endured for millennia—and some still endure—without anything like recognizably scientific hypotheses, let alone true ones. But it is a feature of scientific hypotheses, once formulated and recorded, that they endure and are communicable. When Maxwell says, "we must hit upon fairly often the true theory or hypothesis," there is an ambiguity in the "we"; if it is meant as it were editorially, so that each of us applies it in his or her own case, then indeed the fact is miraculous, but if it is meant collectively, over the whole history of inquiry, then the imperative is overstated. We might say instead: we need hardly ever hit upon the true theory or hypothesis, as long as, every time we do, we remember it and transmit it to posterity. If enough of us keep trying enough of the time—and I need not rehearse here the known facts about the demography of scientific research—then after a sufficient while we will have a respectable accumulation of known truths. And that is what we have.


This view, it is true, does not address directly one of the major points at issue, namely how we know that the hypothesis in question, rather than some competitor, is true or close to the truth. But notice that Maxwell effectively skirts this too, because he assumes that we do have nontrivial knowledge, and he assumes that we can "sift through (using falsifications to eliminate the chaff)" the possibilities available to us, or could if we had the time. The competitors in other words get eliminated in the usual way by further observations, if they are going to be eliminated at all, and that is what takes the time. So there is nothing special to be said on this point; the only problem that could arise would be the generation of alternative hypotheses by logical devices, and in that (usually tendentious) case we might accept as one true hypothesis a whole equivalence-class of such artificially generated hypotheses.

I conclude, then, that inductive generalization does represent a "natural bent in accordance with nature's," and further (although the arguments for this are not in this paper) that hypothesis generating is a "natural bent," although not necessarily "in accordance with nature's," since the overwhelming majority of hypotheses generated are false. But hypothesis choosing is a learned and often tedious process. In some simple cases there are just not that many alternatives—either the butler did it, or he didn't—so it is not surprising that we can sometimes manage to hit on the right answer by ourselves. But in the scientific case it has always been, as it still is, a matter of long-range and ramified cooperation among very many workers, many of whose results, painstakingly accumulated, still form the basis of new developments even though they themselves have long since vanished from sight. That, once the right hypothesis is presented, it may at once be seen to be such, and with just the kind of conviction Maxwell describes, is not so much a matter of intuitive recognition as of rapid calculation, rather like the assessment of moves in chess.

In all this we still count on the world to vindicate our trust that the future will in relevant ways continue to be like the past—that is, we rely on the kindness of nature. If in this respect we have made no advance upon Hume, that is no doubt, as Grover Maxwell would certainly have agreed, because Hume was right. And I would add, right not only in what he claimed but in what he had the wisdom not to claim.




Preface to Part III:
Logic and Causality

As the realist sees it, an underlying assumption in all scientific explanation is that there exists some order of things having reliable patterns of occurrence. Theory will be the matching of this, under some construction of the term "matching," in the domain of thought. The strongest construction would be identity, which would yield the Hegelian position according to which logic and ontology are equivalent. The very idea of such certainty now seems arrogant. Still there is a close connection between the logic we use and the kind of world it is possible for us to envisage. Since Kant the question of the a priori structure of the world, the availability of necessary truth, has been a problematic issue, and this part of the book takes up a number of aspects of this problem.

One paper that might well have been included, which deals explicitly with Kant and the possibility (which he himself vehemently rejected) of a hypothetical metaphysics, was to have been given at an Interamerican Congress in Buenos Aires in 1959, but an abortive military coup interrupted the congress. The paper was eventually published in Spanish in 1972, but the English version seems to have been lost. It seemed to me that the only way to have a metaphysics at all was to admit its hypothetical status; even if the defining topic of metaphysics is, as I have sometimes argued, "the way the world must be in order to be as it is" (the parallel formula for science being "the way the world may well be, and therefore be as it is"), still the way to put the question is: what would a world be like, in which such-and-such had to be the case?

Chapter 8 raises the speculative question as to what might follow if the usual construction placed on probabilities of 0 and 1—namely that the events to which the probabilities are attached are respectively impossible and necessary—were modified so as to reflect this metaphysical modesty. In particular, do we need to suppose that something actually prevents an event with zero probability from happening? In the title of the chapter I speak of the "possibility of the improbable," but I obviously mean this to follow the probability all the way down to zero—to insist that probabilities of 1 and 0 are still probabilities, that they need not be supposed to turn into something else.
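The point about zero probability can be given a quick concrete gloss (a sketch only: floating-point draws are a finite stand-in for the continuum, so strictly speaking each value here has a minute positive probability). On the continuum, drawing any particular real number from [0, 1) has probability 0, yet some number is always drawn; nothing "prevents" the event that actually occurs.

```python
import random

random.seed(1)

# On the continuum, the probability of obtaining exactly this value
# was 0, and yet here it is, actualized.
x = random.random()

# A million further draws never reproduce it: that is what (near-)zero
# probability looks like in practice. But nothing prevented x itself
# from happening.
repeats = sum(random.random() == x for _ in range(1_000_000))
print(x, repeats)
```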

This chapter introduces a run of the most speculative arguments in the book, with respect to which I can only say that some of the playfulness to which I refer in the Introduction was certainly at work, although I meant, and mean, the conjectures to sink in and to challenge some of the boundary assumptions of the field. Chapter 9 poses just such a challenge to the nonempirical character of logic; it argues that even the possible worlds so popular with some philosophers at the time are all versions of this world, and that radically other worlds are not conceivable by us precisely because our conceptions are tied to this one.

Chapters 10 and 11 challenge some basic and rarely examined beliefs about causality. The quantum theory of chapter 10 is so called because I imagine causal moves to be discrete and discontinuous, and try to follow out what this imaginary situation would lead to if allowed to extend from local states and functions (designated with lowercase s and f) to universal ones (in uppercase letters). The latter of course runs into absurdities, or so one would have to suppose, when relativistic time effects come into play. But then (to anticipate chapter 20, another speculative essay that might well have been grouped with these) relativistic effects are out of the subject's reach, while causality—if we are to follow Kant—is something that organizes just what is within reach.

Chapter 11 makes what I take to be one of the most radical, and at the same time one of the most serious, suggestions in the book: it questions the underlying and largely unacknowledged "drive" in causality. Perhaps this is not outcome-specific, as is usually if implicitly supposed—perhaps it is nothing more than the passage of time itself, under the principle of plenitude that says, in effect, that if anything can happen it will. I have since come to see that although I treat Heidegger in this essay with some flippancy, which I still think he often richly deserves, the concepts of "letting-appear" and of "letting-learn" in his later work link up very suggestively with the concept of action as "letting-happen" to which this negative view of causality lends itself.


Three Logics, or the Possibility of the Improbable

To do philosophy by threes, as Peirce realized, is to invite immediate criticism on the grounds of oversimplification and distortion. In the Collected Papers there is an unfinished essay called "Triadomany," the author's response to the anticipated suspicion that he attaches "a superstitious or fanciful importance to the number three, and forces divisions to a Procrustean bed of trichotomy,"[1] in which he defends himself against the charge by counting up examples; in twenty-nine cases of division there are, he says, "eleven dichotomies, five trichotomies, and thirteen divisions into more than three parts." But it was not a random sample, since he admits to having excluded from consideration some "subjects in which trichotomies abound," where in fact they cannot be helped. Unfortunately the manuscript breaks off before disclosing which those subjects are.

But in spite of this statistical disclaimer Peirce did so characteristically work in threes that the danger of doing so now is that of plagiarism. I must admit, therefore, at the outset that what follows does owe a great deal to him; my three logics will look very much like abduction, deduction, and induction, and the later discussion will be colored by the categories of firstness, secondness, and thirdness. But it would be a mistake to give the impression that it is all merely exegesis or criticism of Peirce—in fact this aspect is secondary and quite accidental. The point of departure of the paper lies in a puzzle which arises in the customary interpretation of a logical formalism, and its main concern is with the empirical relevance of formal systems.

By "logic" I shall understand the discipline which studies the formal structure of argument, where "formal" indicates that the structure in question can be abstracted from the content of the arguments which exhibit it. An "argument" I take to be an arrival at a conclusion, usually, although not always, by the following of some method after departure from a premise or premises. (This slightly unorthodox definition is required in order to get the third kind of logic in without subterfuge.) Now in this study the structure may be taken as a pure formalism, without regard to any possible application, symbolic constants and variables being related as empirically meaningful expressions might be related if there happened to be any whose relation in this way made any sense, although it would make no difference if there were not. The only restrictions on this kind of activity are those imposed by the demand for consistency, and it is clear that indefinitely many formal systems having no apparent relations to anything actual could be constructed, given time and ingenuity. But the principles on which such systems are in fact constructed, or closely related principles, are almost invariably to be found in familiar relationships binding elements of common experience. No matter how hard mathematicians may try to create pure mathematics as Hardy understood it,[2] science repeatedly catches up with them and shows that some empirically discovered generalization calls for just such functions—even Hardy himself could not escape the fate of having a principle of genetics named after him. The position taken here is that in some sense all the abstract formalities of any logic we are capable of constructing are grounded in concrete actualities. This in turn makes it possible to throw light on the classification of logics by an examination of the kinds of familiar relation in which they are grounded.

The most frequent relation exhibited in logic is the relation of entailment, the strict following of a conclusion from given premises where those premises would be inconsistent with the denial of that conclusion. Its counterpart in the empirical sphere is strict causal determination, the lawlike relation of antecedents to consequences which is often said to constitute the chief object of scientific enquiry. I shall call the logic grounded in such lawlike relations the logic of inevitable events, since if it is to apply there must always be, given the actual state of affairs corresponding to a set of premises, an inescapable necessity in the subsequent occurrence of the event corresponding to the conclusion. At any rate the interest of this kind of logic resides in examples of this sort. As is quite obvious, however, we almost never encounter lawlike relations which work with such exactitude as to rule out all possible alternative results, so that the justification, and especially the teaching, of this logic draws heavily on the nontemporal relationships of wholes to their parts, objects to their properties, and classes to their members, which are much less interesting. If a centaur has four legs, it is inevitable that it should have at least one, but few people would have cared very much about the logic of such arguments if it had not seemed possible in principle to extend it to the inevitable following of eclipses from the courses of the planets, or death from loss of blood. Such involutional arguments—i.e., arguments which unravel in their conclusions something already involved in the premises—are held by some, the Kneales for example,[3] to exhaust the subject matter of logic; and if they do then the application of logic to science requires the belief that the future is only an unravelling of what was implicit all along in the past. This of course is the position whose classical expression is to be found in Laplace.[4]

Little more need be said about this logic—it rests on elements of experience which are either rare or uninteresting. (My insistence on the empirical connection is not, of course, to be construed as meaning that empirical matters have anything whatever to do with the validity of logical arguments, in any of the three cases—they have to do only with the distinction between kinds of situation.) By far the greater part of our experience consists of events linked to one another, as far as we can tell, in much less definite ways. Regularities permit of exceptions, and a casual view does not encounter many regularities, except of a gross sort, unless it happens to take in a good proportion of manufactured devices in working order. It may perhaps be ignorance and natural ineptitude merely, as Laplace thought, which prevent us from seeing directly the underlying mechanism of the world, but that is far from being established. An active debate is still proceeding on the question as it comes up in fundamental areas of physics.

The relation between premises and conclusion in the logic suitable to this kind of experience is one of probability; and the second logic will therefore be called the logic of more or less likely events. Here the premises cannot be said with confidence to be inconsistent with any conclusion whatever, and although some conclusions are less probable than others every conclusion, with the exception of totally irrelevant ones, has some probability. These various probabilities can be made the subject of a calculus, i.e., a logic of the previous kind which serves as a metalogic for the kind now under discussion. In the logic of more or less likely events we have relations like "if a, then probably b, less probably c, etc." In the metalogic we have instead "p(a ⊃ b) = x," "p(a ⊃ c) = y," etc., more usually written "p(b/a) = x," "p(c/a) = y." It is in the formalization of this metalogic, and the interpretation of that formalization, that the puzzle arises from which this whole discussion springs.
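The conditional notation can be grounded in a toy frequency model—a sketch of my own, not the author's, in which the events a and b are hypothetical predicates over throws of two dice and p(b/a) is computed as a relative frequency:

```python
from fractions import Fraction

# Sample space: all ordered throws of two fair dice.
space = [(i, j) for i in range(1, 7) for j in range(1, 7)]

def p(event, given=None):
    """Exact probability of `event`, optionally conditional on `given`."""
    pool = [w for w in space if given is None or given(w)]
    hits = [w for w in pool if event(w)]
    return Fraction(len(hits), len(pool))

a = lambda w: sum(w) >= 10   # conditioning event a: the throw sums to ten or more
b = lambda w: w[0] == w[1]   # event b: the throw is a doublet

print(p(b))           # p(b) = 1/6
print(p(b, given=a))  # p(b/a) = 1/3: two doublets among the six throws summing to 10 or more
```

Nothing here depends on dice; any finite sample space would serve, which is all the "logic of more or less likely events" requires of its empirical ground.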

The first element to be introduced in expositions of the calculus of probability is the quantity p(a/b), the probability of a, given b, which ranges over the real numbers from 0 to 1:

0 ≤ p(a/b) ≤ 1
The standard next step is to show under what circumstances p reaches these limits:

(1) if b ⊃ a, then p(a/b) = 1
(2) if b ⊃ ~a, then p(a/b) = 0
There follow conjunctive and disjunctive axioms, etc., but we are not concerned with those here. The puzzle lies in the usual interpretation of the values 0 and 1 for p. In order for the calculus to work the implications (1) and (2) must of course hold, and their interpretation seems straightforward enough: if b ⊃ a, then if b happens a must happen, i.e., it is necessary; if b ⊃ ~a, then if b happens a cannot happen, i.e., it is impossible. And this interpretation preserves the formal symmetry of the whole thing, since substitution in (1) gives very simply

(3) if b ⊃ ~a, then p(~a/b) = 1
from which, with (2), it seems at first glance to follow that

(4) if p(a/b) = 0, then b ⊃ ~a
Of course this does not follow formally, since the conditionals (1) and (2) go only in one direction; it would have followed if they had been biconditionals, though, and probably nobody would have objected if I had put them in that way. What I want to do now, however, is to explore the consequences of denying the converse conditional, at least in the second case:

(5) not: if p(a/b) = 0, then b ⊃ ~a
In other words I want to take seriously the possibility that an event which genuinely had a probability of zero might nevertheless happen, and inquire what kind of logic would be suitable to such events. This third kind of logic might be called, with one eye on subjective interpretations of probability, the logic of incredible events. The three logics might then be exhibited in a table:



p = 1          inevitable events
0 < p < 1      more or less likely events
p = 0          incredible events

At first sight this schema looks rather silly, and that impression may continue. But I think there is a reason why it looks silly, and that when that is taken account of, the way is opened for some rather interesting possibilities. The reason is of course exactly an indoctrination with the metaphysical notion of causality which we have all undergone in one way or another, and which makes us incredulous or scornful if anyone argues for the possibility of uncaused events. As I have indicated above I think that this doctrine established itself when deductive logic, which had been abstracted from a general and approximate regularity in the world of experience, was forced back on experience as requiring a detailed and exact regularity, in the hope that the world would thereby be rendered fully intelligible. The intelligibility that was being sought was, however, of a special kind, whose philosophical hegemony is comparatively recent, namely the intelligibility which goes with scientific explanation in Hempel and Oppenheim's sense.[5] That is certainly an ideal worth striving for; but a resolve to continue to seek explanations of this strongly deductive variety is very different from the assumption that they exist, into which it often passes. The latter attitude closes off possibilities which the former leaves open, among them the possibility that totally improbable events might occur.

There must surely, nevertheless, be genuinely impossible events, such as the round square and the simultaneous whiteness and non-whiteness of snow? It was this kind of reflection that led Peirce to his distinction between the limits of probability on the one hand, and impossibility and certainty on the other. He introduced the notion of moral certainty:

By "morally certain," I mean that the probability of that event is 1. Of course, there is a difference between probability 1 and absolute certainty. In like manner, "bare possibility" should mean the possibility of that whose probability is zero. It is barely possible that a well made pair of dice should turn up doublets every time they were thrown: it is a conceivable chance, though morally certain not to happen. But that a pair of dice will not turn up sevens is absolutely certain; it is not possible.[6]

But the moral certainty of not getting an unbroken sequence of doublets does not correspond to a probability of zero, at least not for a finite series of throws. Such an event has a small but definite probability, and a rational gambler would be prepared to bet on it if the odds were good enough—something like a penny to a billion dollars for a series of fifty throws. Peirce's moral certainty is therefore arrived at by jumping to a conclusion, whereas what I envisage is the possibility of an event on which no rational gambler would bet at whatever odds—an event which would shake his faith in the order of nature, if he held it in the metaphysical form sketched above. But a doublet of sevens is a different matter again even from such an event. A doublet of sevens, like the round square and white nonwhite snow, is admittedly impossible, but it is not an impossible event. It is not an event at all.
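The order of magnitude here is easy to check—a sketch of mine, not from the text: the chance of doublets on every one of fifty throws is (1/6)^50, which is tiny but strictly positive, so the strictly fair odds against the run are 6^50 to 1.

```python
from fractions import Fraction

# Chance of a doublet on a single throw of a fair pair of dice:
# six doublets among thirty-six equally likely ordered outcomes.
p_doublet = Fraction(6, 36)

# Chance of doublets on every one of fifty consecutive throws.
p_run = p_doublet ** 50

print(float(p_run))          # roughly 1.2e-39: small but definite, not zero
print(1 / p_run == 6 ** 50)  # fair odds against the run: 6^50 to 1
```

The point survives the arithmetic: however long the odds, the run remains Peirce's "bare possibility," not an impossibility.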

A certain asymmetry begins to be apparent between the meanings of the extremes of probability. A probability of 1 does indeed mean certainty or necessity; if p(a/b) = 1 then if b happens a will surely follow, and if p(~a/b) = 1 then it will just as surely not follow—something else will. In the latter case a is impossible, conditionally on b's happening; a does not specify an event, since there cannot be an event which is merely the denial of another event, but it leaves room for an event c or d, etc., which is other than a. If a is specified by a false statement, either an empirically false one about dice turning up sevens or an analytically false one about white and nonwhite snow, then denial of this statement will be true and materially implied by any specification of b, thus satisfying (3). On the other hand a probability of zero does not mean impossibility. It means improbability in the strict sense—such an event is not to be expected; but that does not mean that it will not happen.

The natural reaction at this point is to resort to the notion of causality as a methodological principle, and say that if such an event happened we would refuse to believe that it was uncaused, but instead would look for causal determinations which when found would show that its probability had been something other than zero all along. And this is a perfectly proper reaction, without which science would never have risen out of superstition. In a world where events are constantly causing other events in one way or another it would be foolish to abandon the search for causal explanations. But causal explanations tend to form chains reaching back towards the infinite past, which soon become embarrassingly complex; like the heads of the Hydra, each item disposed of produces a crop of others, until one wishes to cut the whole thing off, but doesn't know how. The usual unsatisfactory resolution leaves the matter poised between a supernatural beginning, which is unacceptable, and no beginning at all, which is almost equally so, but usually settled for to avoid the alternative. The only thing that prevents the acceptance of a plain, straightforward beginning, at least until further inquiry discloses actual antecedents (not merely unspecified ones dragged in to remove discomfort at the thought of floating, temporally, on nothing), is again the metaphysical doctrine.

The idea that causal lines might begin and end elsewhere than at creation and doomsday is not new; once again Peirce entertained it.[7] I wish now to revert, however, to the logic of the matter, which after all is the subject of this paper, and take up again the parallel between the formal and empirical. It has already been remarked that premises and conclusions on the formal side correspond to antecedents and consequences on the empirical side. It has also been pointed out that the tight logical connection between premises and conclusions in deductive logic is matched by a tight causal connection between antecedents and consequences in the lawlike behavior of objects, and that the more or less flexible logical connection between premises and conclusions in inductive logic (which I take to cover not merely inferences from particular statements to general ones, but also nondemonstrative inferences of other kinds) is matched by a more or less flexible causal connection between antecedents and consequences in the statistical behavior of other objects. But it is not yet clear what the connection between premises and conclusions in the new logic is like. It will reflect, of course, the connection between antecedents and consequences in the events with which the new logic is to deal. This connection, however, turns out to be nonexistent. The events covered are, in fact, consequences without antecedents—consequences of nothing. And the new logic is therefore seen to be a logic in which arguments have no premises, but consist of bare conclusions.

The reader will by now feel, no doubt, that this is a lunacy which has gone far enough. What after all is the use of a logic in which no way of arriving at conclusions can be specified, in which in fact there is no criterion for knowing if one has even got a conclusion? To this I reply that it is of no use at all. But logic does not necessarily have to prove its utility as a tool for calculation before it can form a basis for discussion. The point is that such a logic offers to bring into the sphere of rational consideration (although not necessarily of explanation) certain kinds of event which other logics simply have to leave aside. Logic is the quintessence of rationality, and too narrow a conception of it makes a mystery of some things which need not be mysterious at all. I believe that there are three areas at least of contemporary interest in which such mystification goes on and in which therefore this new approach might be of value.

The first of these areas may be roughly characterized as the "logic of discovery." This of course was the point at which Peirce's abduction or retroduction was called into play; it made the initial jump to hypotheses, while deduction merely drew out their consequences and induction judged whether such consequences as were actually observed sufficed to render them plausible. Retroduction is the logic of novelty, and in a florid passage from the late article "A Neglected Argument for the Reality of God," Peirce defines it in a very suggestive way:

Over the chasm that yawns between the ultimate goal of science and such ideas of man's environment as, coming over him during his primeval wanderings in the forest, while yet his very notion of error was of the vaguest, he managed to communicate to some fellow, we are building a cantilever bridge of induction, held together by scientific struts and ties. Yet every plank of its advance is first laid by retroduction alone, that is to say, by the spontaneous conjectures of instinctive reason.[8]

The key term here is "spontaneous." Aristotle used to speak of "pure spontaneous chance," and considered it a reasonable category of description; it was only in the late seventeenth century that Redi challenged seriously the concept of spontaneous generation, which was not generally abandoned until the middle of the nineteenth. I do not suggest that we should revert to such naïve beliefs (although the history of the theory of light since Newton shows that naïve ideas sometimes reappear in a more sophisticated form); the examples are mentioned merely to indicate that there is nothing inherently unsatisfactory in the notion of spontaneity—it becomes unsatisfactory only in the light of certain metaphysical presuppositions already referred to. Discovery is rendered most intelligible, I think, when hypotheses are not considered to arise from any particular concatenation of specific ideas in the mind of their inventor, but when they are regarded as conclusions without premises, which present themselves to the genius (and to society at large) in sufficient numbers to form a population among the members of which natural selection can take place. But that is the subject of another paper. It need only be added that to look for antecedents to acts of invention without crossing category lines (i.e., without resorting to neurology, etc.) is sooner or later bound to turn up a candidate for the new logic.

The second problem on which these considerations might throw some light is that of quantum jumps. The conflict between Copenhagen and the micro-microphysicists is well known; it is again a conflict between a dogmatism and an infinite regress. The problem is on the one hand concealed by statistics, on the other postponed until a new level is reached. But again I do not see why, when confronted (for instance) with a single radioactive atom abstracted from any mass to which half-life calculations might be applied, one should not say that its disintegration is an event without any causal antecedent, and yet say this without lamentations over the demise of causality. The event falls under the category, but its coefficient is zero. Of course here, in a sense, the instability of the atom is the cause of its emitting a radioactive product. But why did it happen then? A causal relation between antecedents and consequences makes sense only in a temporal framework, and it may be that uncaused events are necessary to define the framework.
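The question "why did it happen then?" can be made vivid with a toy simulation of my own construction, not the author's: give a single atom a fixed per-step chance of decaying and its waiting time follows a geometric law, which is memoryless—atoms that have already survived a hundred steps face exactly the same distribution of remaining waiting time as fresh ones, so nothing in the atom's history singles out the moment of the jump.

```python
import random

rng = random.Random(42)
P_DECAY = 0.01   # hypothetical per-step decay chance for a single atom

def waiting_time():
    """Steps until the atom decays; geometrically distributed."""
    t = 0
    while True:
        t += 1
        if rng.random() < P_DECAY:
            return t

samples = [waiting_time() for _ in range(100_000)]
fresh_mean = sum(samples) / len(samples)

# Condition on having already survived 100 steps: the remaining wait
# is distributed exactly as a fresh atom's (memorylessness).
survivors = [t - 100 for t in samples if t > 100]
remaining_mean = sum(survivors) / len(survivors)

print(round(fresh_mean), round(remaining_mean))   # both near 1 / P_DECAY = 100
```

The statistics of the ensemble are lawlike; the timing of any single decay is, within the model, a conclusion without premises.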

This possibility comes up again in the third area of interest, namely, cosmology. This has been tacitly mentioned already in a discussion of chains of causal explanation; the upshot of that discussion, if pushed a little further, would be that there is no explanation of the universe as a whole, which then becomes (if infinite regress is to be rejected) one great uncaused cosmic event, defining a spatio-temporal framework within which other events can follow patterns of cause and effect. There are two views, however, as to the mechanism of this cosmic event: one view holds that it was, as it were, uncaused all at once at a point to be determined by backward extrapolation of processes of expansion; the other holds that it is continuously being uncaused bit by bit in interstellar space, so that it remains in a steady state. Both sound pretty mysterious to people who are used to demanding antecedent causes. Again I do not wish to suggest that antecedent causes ought not to be looked for—they ought. But I think it regrettable that the idea of continuous creation should have seemed preposterous to so many people. Perhaps it seemed all too plausible to its proponents—Hoyle at least has given the impression at times that his espousal of the theory rests on grounds just as emotional as those on which religious arguments for creation rest.[9] But there is nothing improper in a hydrogen atom's coming into being ex nihilo, or for that matter in an elephant's doing so. The latter certainly is not an event we expect—it would be an incredible event; in a world full of antecedents the causal space, as it were, is largely taken up with their legitimate consequences, so that there is not much opportunity for such disturbances. But the view that this space is so tightly packed as to leave no room for new beginnings is exactly what this paper is intended to deny.

There is an even more fundamental level on which this whole question can be taken up, quite apart from all these questions of causality, which may perhaps have served to confuse the issue. On this new level the triad of logics is associated with the basic temporal triad of present, past, and future. It is a rewarding intellectual exercise (which has something in common with phenomenological reduction) to confront experience as presented in the moment, without memory and without anticipation. Like all exercises this is hard when first tried, but if properly done the world may take on those aspects of "freshness, life, and freedom" which for Peirce are a mark of Firstness.[10] Such a slice of present existence, abstracted from past and future, is an incredible event; what it conveys to us is neither the conclusion of a deductive argument, nor the premise of an inductive one—rather a conclusion without premises, or a premise without a conclusion. It is this view of things which prompts a question like Heidegger's: why should there be anything at all? Why not much rather nothing?[11] In such a light the queer logic of novelty and creation appears as the aboriginal variety from which the others are derivative. Once we have a world, we can begin to reflect on the regularity of our experience of it as this accumulates in memory; here we meet the dyadic relation of cause and effect, the "predominant character of what has been done," or Secondness,[12] for which the logic of inevitable events is appropriate. (In retrospect everything is inevitable, which is why arguments for determinism are so seductive.) And we can entertain purposes and hopes, based on the extrapolation of that regularity into a problematic future for some meaningful end, a process which has the character of Thirdness, with all the opportunity for misunderstanding and uncertainty of outcome that implies.[13] Here the best that can be done lies within the range of a logic of more or less likely events. Present experience is the primordial given which cannot be explained, but it is not irrational—it forms indeed the basis for all rationality.

The final argument has its point of departure in this confrontation with the present, but goes off in a new direction. It is in the irreducible firstness of every moment that the roots of freedom and creativity lie, and the strongest case for uncaused events, having no probability whatever and lacking antecedents completely, is to be found in our instinctive conviction that we ourselves participate in such events whenever we act freely. Of course we could argue ourselves out of that conviction and come to believe that we never do act freely, but that would be a pity. Sartre is right, I think, when he claims that free action arises out of a center of consciousness which is empty, a Nothingness which can define itself only by means of what it denies in the world of brute existence.[14] This is only to say that whatever kind of being consciousness has is of a different order from the being of things, in which it can participate only negatively. Sartre does not accept other orders, and his consciousness is then mere Nothingness; other philosophers may resort to devices like Kant's realm of freedom or the vertical dimension of the Christian existentialists.

This is obviously not the time at which to bring up all the old disputes between freedom and determinism. The conviction referred to above may of course be illusory, although even if it were I do not see how we could help continuing to behave as if it were not. But if we could rid ourselves of the idea that causality applies to everything and restrict it to successive elements of a more or less complex causal chain, having a beginning or beginnings and an end or ends, there would be no difficulty about the matter. (The end of the chain is not required for the present argument, but it is an obvious corollary.) We are conditioned to feel that a world made up of such chains would be in some way unpredictable and chaotic. It would certainly be unpredictable, but just in the ways in which this world is. Prediction involves catching a causal chain and putting it in an insulating sheath, so that other causal chains cannot get at it without being deliberately allowed to, in which case their effects can be taken account of. But such insulation is not always possible, and this limits our ability to see into the future. The admission of uncaused events makes that a limitation in principle and not merely a consequence of inadequate present knowledge, but it does not change the practical situation. And the world is no more chaotic than before.

The ideal of intelligibility held out by formal logic of the old kind was a misleading ideal, attainable only in the restricted context of isolated systems. The fact is that, except in astronomy, science has never succeeded in finding any causal chain more than a few years long without loose ends, and even in astronomy the clean simplicity of the system is more apparent than real. The old logic has imposed on us a view of the world whose analogy is a tangle of infinite wires (which science undertakes to disentangle), each continuous, from infinity to infinity, with occasional branchings which represent probability. A truer analogy is to be found, I think, in a similar structure made of natural fibers, each individual strand of which is of finite length. Such strands, although they are longitudinally disconnected, are laterally bound; and this suggests further work on a neglected topic, namely the nature of the quasi-causal relation between contemporary events.

To conclude: We may decide, of course, to reject this whole analysis as fanciful, and return to a comforting belief in the tight, systematic order of nature, undisturbed by the protests of people like Kierkegaard who claim that system so understood cannot be lived and that we are in danger of forgetting what it means to be an existing individual. If on the other hand the hypothetical possibility of uncaused events is allowed there may still be some protest at the conception of logic applied to them here. Formal systems have lives of their own, so in one sense there can be no objection to the introduction of a new one in which the probability of the conclusion is always zero, although its potentialities for development are admittedly limited. But if the empirical ancestry of such systems is taken seriously then we can deal with uncaused events either by saying that they are illogical, i.e., that no system applies to them, or by making some attempt to find a system which does apply. I have chosen the latter alternative.


Mach's Principle and the Laws of Logic

In this paper I wish to raise a philosophical question about logic, namely, the question whether its laws can consistently be thought of as analogous to those of the empirical sciences, i.e., as subject in some sense or other to test and confirmation, or whether, as is more often maintained, they must be thought of as analytic and a priori if not as conventional. In order to float the question, some general idea of what kind of activity logic is must be presupposed. The problem of logic I take to be as follows: Given the truth (or probability) of sentences {P}, what can we say (with what degree of confidence, etc.) about the truth (or probability) of sentences {Q}? The method of logic I take to consist in performing operations on the sentences {P} or on supplementary sentences introduced for the purpose and in performing further operations on the sentences so generated, and so on until the sentences {Q} or some obviously related sentences are generated. According to the rules employed in these operations we may then say that the sentences {Q} are true or have a certain degree of probability in relation to the sentences {P}.
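The "method of logic" described here—operating on the sentences {P} until members of {Q} are generated—can be sketched as a minimal forward-chaining procedure. This is an illustration of my own, with modus ponens as the only operation and sentences reduced to atomic labels:

```python
def entails(premises, rules, query):
    """Close `premises` under modus ponens using `rules` (pairs of
    (antecedent, consequent)) and report whether `query` is generated."""
    known = set(premises)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            if antecedent in known and consequent not in known:
                known.add(consequent)
                changed = True
    return query in known

P = {"b"}                       # the given sentences {P}
R = [("b", "a"), ("a", "c")]    # conditionals available as rules: b ⊃ a, a ⊃ c
print(entails(P, R, "c"))       # True: c is generated from {P} by repeated operations
```

The regress the text goes on to describe then concerns our confidence not in `query` but in the procedure `entails` itself, and in whatever scheme certifies that procedure, and so on.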

We thus arrive at a degree of confidence in the sentences {Q}. But what of our confidence in the whole procedure by which this degree of confidence is arrived at? Well, we can construct a second-order scheme for that and talk about the sentences {P} and the rules by which we operated on them. We thus arrive at a degree of confidence in the procedure. But what of our confidence in this second-order scheme? And so on.

It is tacitly agreed by almost everyone except Quine that this regressive problem presents itself in two distinct cases. The first covers deductive inference and gives us absolute confidence in the conclusion on the object level as well as in the rules at all subsequent levels. The second covers everything else; we can't even think of pursuing the regression more than one or two levels, and even there we have to cut off debate by shifting attention from truth, probability, etc., to acceptability, epistemic utility, and the like. Whenever bits of the problem under its second aspect can be so arranged as to yield to the techniques proper to its first, this is instantly done; we thus have deductive theories of probability, utility, acceptance, and so on—a veritable deductivizing of all tractable parts of inductive logic. It seems reassuring to be able to say with deductive certainty that the conclusion follows inductively, even if we can't assert the conclusion itself with anything more than inductive probability. This activity takes place mainly on the first metalevel, and represents a kind of sidestepping of second- and higher-order inductive issues. My admiration for the people who do it is great and sincere, but I have no contribution to make along these lines. Instead I wish to confront one of the issues they sidestep and to suggest in a slightly different way from Quine's that the separation of cases will not stand up under scrutiny.

I start with Hume and in particular with the distinction between matters of fact and relations of ideas. We tend on the whole to give Hume too much credit as a contemporary and to make too little allowance for his belonging to the eighteenth century. Given the tenor of immediately preceding discussions, especially Berkeley's, Hume may certainly be excused for believing in "ideas," but that does not excuse us for following him in this aberration, whether or not we call things by the same name. The main contention of this paper is that matters of fact are enough—are, indeed, all we have—and that the complex matters of fact which we call "relations of ideas" (or which we call "logic," deductive or inductive) are reflections of the inclusive matter of fact which we call "the world" and are as contingent (or as necessary) as it is.

This contention can be looked upon as a philosophical analogue of the generalized form of Mach's principle. What Einstein called Mach's principle was of course the restricted claim that the inertia of a given body is determined by the distribution of matter in the physical universe. The generalized claim is that the laws of nature are so determined, and the philosophical analogue is that the laws of logic are determined, not to be sure by the distribution of matter, but by some feature or features of the world as a whole, so that they would be different if it were different. This means among other things at least a reinterpretation of what we can mean by the expression "possible world," since as presently understood the limitations on possible worlds are precisely logical limitations, whereas if the world determines the logic (rather than the other way around) it would seem difficult to rule out a priori any world whatever, by any other argument, at least, than follows from the fact that this world already exists. Given logical rules (and the distinction between rules and laws is an important one in this context) we can of course explore the set of worlds in which they hold and the relative possibilities within this set; for that matter, we can devise systems with any rules we like, the only limitations being the scope of our imagination. (But that limitation is far from trivial.)

This, however, is running ahead somewhat. My proposal is really a double one, of which the analogical form of Mach's principle is only one element, the other element being the reconstruction of logic and all other relations of ideas as matters of fact. And even if both these elements were upheld, there would still remain the question of what light, if any, they threw on inductive logic and confirmation theory. The last question is the easiest. As I have already said, I claim to make no contribution to the technical discipline, but it may be that success in the philosophical enterprise would lead to a different view of what the technical discipline is all about and what can reasonably be expected of it. In particular it might help along the demystification of deductive logic, which as the unattainable paradigm has been responsible for driving so many inductive logicians to despair.

"Relations of ideas" are represented by Hume (for example, in the passage about consigning books to the flames) as the kinds of thing that form the object of reasoning about quantity or number. Matters of fact, on the other hand, are the kinds of thing that enter into causal relations. We can know some relations of ideas certainly, because (to use post-Humean language) they are analytic, definitional, and so on, and hence, as later came to be thought, empty of factual content. We can never know any matters of fact certainly, because in order to do so we would need certainty in causal relations, and these run into the notorious difficulty that future exceptions are always possible. (I include here of course future knowledge as well as future events.) I do not wish to rehearse all this, but to comment on the insertion of temporal considerations into the inductive case when they are absent from the deductive case. Suppose we were to ask: How do you know the deductive relations will hold in the future any more than the inductive ones?

After a bit of spluttering, and assuming that the question is taken seriously (as I mean it to be), the reply to this is likely to be a challenge: How could the deductive relations be different, since they are rules of procedure and not laws of nature? What would it be like for them to be different? What conceivable test could be proposed that would reveal exceptions to them? To which I readily answer that I have no idea, I can't imagine an exception, it doesn't even make sense to me—but nor do I conclude, from these limitations on my powers of fantasy, that the thing is impossible. I can readily think of exceptions in the case of induction; I can't conceive of them in the case of deduction—but there must surely be more to it than that, or if there isn't, then the debate is shifting to my territory. For imagining, conceiving, and the like aren't logical categories at all, but rather psychological and thus empirical ones.

But for counterinstances to arise in the deductive case would surely involve a self-contradiction. Here the deductive logician is likely to start writing things down, showing that the very rules of the language L entail the analyticity of this proposition or that. Now, however, I can afford to be generous. I have no objection whatever to analyticity within languages—for incorporating the law of contradiction as a rule in such languages; indeed, given the world we live in, it seems a sound move. But I can quite easily imagine a language in which the law of contradiction was not a rule, in which (let us say) truth-values were assigned at random; it might be spoken by some latter-day Epimenides, who would explain that Cretans aren't really liars, it's just that they don't care one way or the other, and if they contradict themselves then they contradict themselves and that's all there is to it. We care about truth—and that is a fact, not a principle of logic. And it is our caring about truth, not just about the rules of the language, that makes us choose the rules we do.
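The imagined language of the latter-day Epimenides can be given a toy model: a valuation that assigns truth-values at random, so that nothing makes a contradiction come out false. All names and numbers in the sketch are my illustration:

```python
import random

# A classical valuation respects the law of contradiction: "p and not-p" is
# false under every assignment. The "Epimenides" valuation below assigns
# truth-values at random and so ignores the rule entirely. Illustrative only.

def classical_contradiction(v, p):
    return v[p] and not v[p]          # False for every assignment v

def careless_contradiction(rng):
    return rng.random() < 0.5         # each sentence gets a coin-flip value

assert classical_contradiction({"p": True}, "p") is False
assert classical_contradiction({"p": False}, "p") is False

rng = random.Random(0)
values = {careless_contradiction(rng) for _ in range(100)}
print(values)  # both truth-values occur: contradiction is no longer a rule
```

Nothing in the random language is inconsistent in its own terms; it simply fails to care about truth, which is the point of the paragraph above.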

If during the argument the logician has written something down, that is particularly gratifying. For inscriptions have a factual presence, and as long as we hang on to them we can keep ideas (and propositions and a lot of other metaphysical baggage) at arm's length. The curious dependence of logicians on factual things like inscriptions (not to mention brains) ought to tip us off to a discrepancy between theory and practice. In theory logic reaches for immutable truth (most logicians are at heart Platonists); in practice the logician writes something down on paper or a blackboard, thinks for a bit, imagines, conceives, and so forth, and then writes something else down. But, you say, these are mere tokens—tokens stand for types, and the types refer to all those logical categories, truths, relations, and the like. That may be, but all I see are the inscriptions, the furrowed brow, the puzzled look. If I could give an adequate account of logic in terms of them, would we need the rest of the apparatus?

The kinds of matter of fact required for this reconstruction are two: first, obvious things like inscriptions and utterances, which can be located in the world easily enough alongside chairs and tables; second, a rather specialized kind of animal behavior (in this case the behavior of logicians) which can be located alongside eating and sitting, and so on. I am prepared in the latter case to talk dispositionally about "abilities" provided it is understood that the possession of the ability in question is a factual matter to be judged in terms of behavior. What abilities do logicians need in order to ply their trade? They are not I think especially exotic abilities. The fundamental operation of logic is one that every functioning human being is capable of performing, indeed one that we all do perform all the time; I call it, borrowing from the classical English grammarians, "apposition," and it consists of taking two things—any two things—and putting (and holding) them together. I say "taking" and "putting" metaphorically; knowing a person's name is a case of apposition, so is knowing the French for an English word, so is using a metaphor, and so on. Apposition is a perfectly general binary operation, unconstrained by category considerations, and by means of it we build our world. The special behavior of logicians lies in the invention and following of special rules of apposition, which impose constraints under which we do not ordinarily work. (The laws of logic are not the laws of thought, any more than the laws of chess are the laws of moving objects on checkered surfaces. If Ayer or Berlin or Popper have black-and-white tiled bathrooms, that does not compel them to walk one square forward and one diagonally.)
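Apposition as just described, an unconstrained binary operation of putting any two things together, can be pictured as bare pairing. The examples below are mine, not the author's:

```python
# Apposition: take any two things and put (and hold) them together. There are
# no type constraints: a name with a person, an English word with its French
# counterpart, a tenor with a vehicle. The examples are illustrative.

def appose(a, b):
    return (a, b)              # a perfectly general binary operation

pairs = [
    appose("Yorick", "a skull"),        # knowing a name
    appose("window", "fenêtre"),        # knowing the French for an English word
    appose("Juliet", "the sun"),        # using a metaphor
]
world = dict(pairs)            # a little "world" built up by apposition
print(world["window"])         # fenêtre
```

The special rules of logic, in these terms, are just constraints on which appositions may be made and kept.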

The special rules of logic are formation-rules that preserve or at least safeguard type homogeneity and transformation-rules that preserve or at least keep track of truth. The chief talent that logic requires is an ability to stick to these rules; the looseness and redundancy, the ellipses and shortcuts of ordinary language give way to a more or less rigorous formalism (which has nothing to do with notation). This talent is rarer than might be supposed, which accounts for the fact that logic (as I have pointed out elsewhere)[1] is too simple for many people who look for subtlety in its elements. It rapidly gets complex, of course, but its complexities always break down into simple elements as the fuzzy complexities of everyday life do not. People are always putting more or less complicated objects and expressions in apposition with each other and one another; this activity is governed by no principles other than immediate utility or intelligibility and the conventions of ordinary language and behavior, and consequently the coherence and even relevance of any element of the resulting structure with or to any other element are not guaranteed, indeed it is normal for this structure to be incoherent and fragmented. These defects do not often appear because different parts of the structure are usually brought into play in fairly specific contexts which do not overlap; when incompatible parts of it are activated at the same time the various psychological phenomena of dissonance are observed, and there are also contexts which activate no parts because they are just not comprehended. (Note that these structures are specific to individuals.) Logical structures on the other hand are such that except under Russellian or Gödelian stress all their parts are mutually consistent and no parts are lacking. The very idea of such comprehensive austerity is well-nigh inconceivable to the ordinary talker or thinker in daily life.

But what the logician does is different only in degree from what ordinary people do, and it too is governed in the end by utility and intelligibility and convention. What the additional constraints of logic make possible are just those logical properties that we think of as characteristic, namely, analytic precision and synthetic open-endedness. Everyday thought is at once grosser and more limited than logical inference: grosser because it works with unanalyzed complex wholes, more limited because these cohere imperfectly with one another and relate for the most part intransitively, so that inferential sequences are always short. Still it is adequate to the conditions of its world and survives because of this adequacy. Its world is an aspect of the world; the temptation we have to resist is that of supposing that because logical refinements enable us to transcend the limitations of everyday thought they also enable us to transcend the actuality of the world. The logical operations we are capable of performing are just some of the things that our evolutionary development has equipped us to do, and like other features of our heredity they can be assumed to reflect a close adjustment to the facts of our environment. If these facts are contingent, then logical laws are also contingent, while if they are necessary the necessity of those laws is still extrinsic to logic and depends on an empirical relationship between it and the world.

In this light we need to look again at the doctrine that tautologies are empirically empty, perhaps to abandon it in favor of the doctrine that on the contrary they are empirically full, if I may so put it.[2] Identity and contradiction aren't just logical rules (although they are that), nor are they laws of thought; they are laws of nature, reflecting just those properties of the world which, according to the analogous form of Mach's principle, determine the logic we are bound to use in dealing with it. They aren't obviously falsifiable (as remarked earlier, our imaginations aren't up to envisaging counterinstances, and by now it should be obvious why not), but their nonfalsifiability is clearly of a different kind from the nonfalsifiability of pseudoscientific or metaphysical claims. For that matter, in an important sense the other inductively established laws of nature aren't falsifiable either. It is one thing to know (to be able to imagine, etc.) what a counterinstance would be like, another to be able to produce one. We can of course construct interesting but as yet useless systems incorporating alternative rules, but we can do that for logic too. Physical laws, like logical ones, function as rules within the theoretical systems to which they belong and acquire the status of laws only on the successful application of those systems to empirical problems.

What holds for the laws of deductive logic holds equally and under precisely the same conditions for the laws of inductive logic. In neither case are we really guaranteed success in advance; if in the deductive case it is claimed that consistency itself requires the outcome, it must be recognized that consistency in language is one thing, consistency in the world another, and that the former, again, reflects the latter. We count heavily on the world's consistency and are perpetually vindicated in doing so. We count heavily also on the world's continuity, on its regularity, and so on, and are vindicated in this too. The really serious difference between the cases lies in the information available in the premises, and from this point of view the current tendency to deductivize inductive problems seems to me entirely appropriate. But here a different distinction emerges, a form of the old distinction between theory confirmation and instance confirmation. In its traditional form the problem of induction focused mainly on scientific laws, which as conclusions of inductive inferences are regularly detached without modal qualifications and used as assertoric premises in making predictions. It would be absurd to say, every time we wanted to calculate quantities of chemical reagents or stresses in airframes, for example, "Well, the law is only confirmed to such and such a degree, so we can't really be sure how much we need," just as it would be absurd to place bets on the truth of special relativity theory. On the other hand in Bayesian estimation problems, statistical computations, etc., there isn't the same need for detachment; we can always go back to the original probabilities or the original figures and start again, and the probability estimate in the conclusion is of course the essential part of it. This difference in the use of inductive inference seems to me crucial. 
It is only in the former case that my version of Mach's principle can be thought of as applying, since in the latter our problem isn't with the behavior of the world exactly, but with the changing scope of our knowledge of it. The reason why the next raven should be black is quite different from the reason why the next ball drawn from the urn should be black.

I conclude by reiterating one or two points. Logic, like science, is a rule-governed human activity which consists in putting things (inscriptions, acts of judgment) in apposition with one another, in spatial juxtaposition, or in temporal sequence. Nobody can compel us to accept its conclusions, as the tortoise taught Achilles, but accepting (and detaching) them is one of the things it is useful to do in our dealings with the world. (Doing logic for no ulterior purpose is a form of having dealings with the world.) Within the systems we construct we can be as formal, as analytic, etc., as we like, but the choice among systems for use in dealing with the world rests, in the deductive as in the inductive case, on empirical (and in the long run on pragmatic) grounds. And there are limitations on the kinds of system we can construct, imposed by the finite scope of our intellect and its prior adaptation to the special circumstances in which we find ourselves. Success in theory, logical or scientific, consists in bringing it into parallel with the world so that it reflects essential features of the world. Some parallels are long-established and practically unquestioned, others are more recent and tentative. Also we can construct systems independently of questions of relevance, but their rules remain merely rules and are not to be confused with logical or empirical laws.


A Quantum Theory of Causality


The statement "x is the cause of y" is usually taken to mean one of two things—either, in commonsense contexts, that the occurrence of x is a "jointly sufficient condition"[1] for the occurrence of y, where x and y are distinct events (e.g., the throwing of a stone and the breaking of a window), or, in scientific contexts, that x and y both belong to a "causal line"[2] and x is antecedent to y, where x and y are states of some system undergoing continuous variation (e.g., the position of the moon yesterday and its position today). There is, it is true, something odd about saying "the moon's being where it was yesterday is the cause of its being where it is today," but this way of putting it is generally avoided by saying instead that the moon obeys a "causal law." Discussions of the philosophy of physics tend to take the latter meaning as paradigmatic: after all, the form of expression of most physical laws is

s_t = f(t)                (1)

where s_t is the state of some physical system at time t and f is a (preferably continuous) function of time. Time here is the independent variable, which means that its passage is taken as conceptually prior to the variations in state described by the law. This point of view, generalized to an S_t which represents the state of the universe at time t and an F which represents the totality of causal laws governing all physical processes whatever, leads to the "single formula"

S_t = F(t)                (2)
referred to by Laplace in his classical statement of mechanistic determinism: "We must thus envisage the present state of the universe as the effect of its previous state, and as the cause of that which will follow. An intelligence that could know, at a given instant, all the forces governing the natural world, and the respective positions of the entities which compose it, if in addition it was great enough to analyze all this information, would be able to embrace in a single formula the movements of the largest bodies in the universe and those of the lightest atom: nothing would be uncertain for it, and the future, like the past, would be directly present to its observation."[3]

Whether or not one wishes to assert the possibility, in principle, of carrying out the determinist program, there remains something puzzling about the use of the word "cause" in this connection. Laplace's statement, in fact, seems to fall into two clearly separable parts—the first sentence, which is a comparatively modest remark about causality, and the rest, which is an extravagant dream about determinism, usually associated by us with the idea of "causal law." This latter part does not really have much to do with causality, although it does have something to do with law, and Russell's remark that "the whole conception of 'cause' is resolved into that of 'law'"[4] is, in these circumstances, very apt. I do not agree that the whole conception of "cause" can be resolved into that of "law," but at least this conception of it can. What Laplace's demon needs for its predictions is a set of lawlike statements about the way in which certain systems behave when they are left alone. It does not make much sense to ask whether or not this behavior is "causal" as long as it is continuous, periodic, etc. It is clear that this is the sort of thing Laplace was thinking of because he goes on to say, immediately after the passage quoted above, "The human mind, in the perfection that it has been able to give to the science of Astronomy, presents a faint outline of such an intelligence."[5]

Taking the first sentence by itself, however, leads to some interesting reflections. "We must envisage the present state of the universe as the effect of its previous state." It would be unfair to burden Laplace with the consequences of this innocent remark, and I do not wish to pretend that he is responsible for what follows. But, supposing ourselves able to take instantaneous readings of all variables and thus to characterize the present state of the universe, what is "its previous state"? If we were to take seriously Hume's statement that "every effect is a distinct event from its cause,"[6] and to use as the paradigm of a causality statement not equation (1) but instead the equation

s_n = f(s_{n-1})                (3)

where n assumes discrete values, what would be the consequences for the philosophy of science? This is the question I wish to discuss here. I shall start by considering cases in which it is obviously possible to identify "the previous state," and then go on to deal with apparently continuous variations where this possibility is not so obvious.


This new paradigm, which I shall call, for obvious reasons, the "quantum paradigm," is rather closer to the commonsense jointly sufficient condition meaning of "cause" than it is to the causal-line meaning. Consider the example of the commonsense meaning given at the beginning—the breaking of a window by a stone. At first we might think of the throwing of the stone as the cause, and the breaking of the window as the effect, but this has several disadvantages. First, it leads to awkward questions about whether the throwing of the stone was really the cause—whether it was not really the delinquency of the student who threw it, or the failure of his parents to show him proper affection, and so on into an infinite disjunction. Second, it overlooks an essential part of the situation, namely, the availability in the first place of an unbroken window. Third, it focuses attention on something which is very hard to observe or describe, the breaking of the window. It is true that we have a preference for the dramatic rather than the prosaic, for events rather than states. But the breaking of the window is over in a moment, and when the flying glass has settled the significant thing is that it is broken. One might say that it had undergone a state-transformation; before, it was whole (s_{n-1}); now it is broken (s_n). Clearly, however, the stone had something to do with it; one might say of it that it provided the condition under which the state-transformation could take place. Perhaps equation (3) ought to be rewritten to take account of this, so that our paradigm for causality-statements would become

s_n = f(s_{n-1}, c)                (4)

where c stands for the condition. This will take care of the majority of cases. On the other hand, the extra term could be avoided by taking s to refer to the state of the whole system (stone + window).

What kind of function is f? It seems likely that, whatever else may be true of it, it will contain elements of probability. Stones do not always break windows, and although one could certainly specify states and conditions (weak glass, stones of certain shapes and sizes thrown at angles and speeds above given values) under which the window would definitely break, and others under which it would definitely not break, still there would be a finite range within which one could only guess whether it would break or not. One might even place bets. For the purposes of insurance against damage to property this probability for the individual event would be concealed under a statistical ratio, but on the level of particular state-transformations such probabilities have become a familiar feature of the world.
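The state-transformation paradigm just described, in which the new state depends on the previous state s_{n-1} and a condition c, can be sketched for the window example. The state names and all numeric thresholds below are invented for illustration:

```python
import random

# The new state depends on the previous state and a condition c (the arriving
# stone): below some speed the window definitely survives, above another it
# definitely breaks, and in between there is only a probability. The state
# names and all numeric thresholds are illustrative inventions.

WHOLE, BROKEN = "whole", "broken"

def f(prev_state, stone_speed, rng):
    if prev_state == BROKEN:
        return BROKEN                  # stones pass freely through broken windows
    if stone_speed < 5.0:
        return WHOLE                   # definitely does not break
    if stone_speed > 20.0:
        return BROKEN                  # definitely breaks
    chance = (stone_speed - 5.0) / 15.0   # the range where one can only guess
    return BROKEN if rng.random() < chance else WHOLE

rng = random.Random(0)
print(f(WHOLE, 2.0, rng))    # whole
print(f(WHOLE, 30.0, rng))   # broken
print(f(BROKEN, 30.0, rng))  # broken: nothing is demanded of a broken window
```

For insurance purposes one would only ever see the statistical ratio over many such trials; the probability lives in the individual transformation.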

In itself this discussion of stones and windows is of little interest to the philosophy of science, but the pattern of causal relation brought out in it is seen to carry over into recent developments in physics. There is a well-known thought experiment due to Landé,[7] in which particles in a state B arrive at a state-filter which passes only particles in another state A. If A and B are unlike, all the particles will be rejected, and if they are like, all will be passed, but if they are fractionally like, some will be passed and some rejected. Once a particle has passed the A-filter, however, it will always pass any A-filter it arrives at; similarly, if it is rejected, it will always be rejected. This state of affairs is explained by saying that a particle in state B jumps to state A (or to the complementary state, not-A) on arrival at the filter: such a state-transformation is of course a quantum jump. If the fractional likeness between A and B is 0.25, say, one-fourth of all B-particles will pass the A-filter, but from the point of view of an individual particle this means that there is a probability of 0.25 that it will jump from state B to state A. This case is clearly analogous to the macroscopic one already discussed. The filter is the analogue of the stone, i.e., it provides the occasion for the state-transformation of the particle; and there is an irreducible probability-relation between state B (s_{n-1}) and state A (s_n). In neither case is it necessary to talk about the time at which the transformation occurs, since the states between which it takes place are relatively stable and obviously qualify as successive in the histories of the particle and the window. But if one asks for a causal explanation of the present state of the system, the answer can only be that previously it was in another state, but that the supervention of a certain condition occasioned—in fact demanded—a quantum jump. (The previous state is an indispensable part of the cause; nothing is demanded by the filter of a particle already in state A, and stones pass freely through windows already broken.)
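Landé's thought experiment can be simulated directly: each B-particle jumps to state A with probability equal to the fractional likeness, and once in state A it passes every subsequent A-filter. The function name and encoding below are my illustration:

```python
import random

# Simulation of Landé's state-filter: a B-particle arriving at an A-filter
# jumps to state A (passed) with probability equal to the fractional likeness,
# here 0.25, and otherwise to not-A (rejected). A particle already in state A
# always passes. The function name and encoding are illustrative.

def a_filter(state, likeness, rng):
    if state == "A":
        return "A"                          # nothing is demanded of an A-particle
    return "A" if rng.random() < likeness else "not-A"

rng = random.Random(0)
outcomes = [a_filter("B", 0.25, rng) for _ in range(10_000)]
frac = outcomes.count("A") / len(outcomes)
print(round(frac, 2))                       # roughly one-fourth pass

# Once a particle has passed, it passes every subsequent A-filter:
assert all(a_filter("A", 0.25, rng) == "A" for _ in range(100))
```

The ensemble ratio (one-fourth) and the individual probability (0.25) are two descriptions of the same irreducible probability-relation between s_{n-1} and s_n.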

Let us now consider another familiar case of state-transformation, namely that between the excited state of an atom in a source of radiation and its ground state. Here I should like to digress for a moment in order to deal with an ambiguity of meaning of the word "state." The Bohr atom (long out of date, it is true, but good enough for the purposes of this discussion) is sometimes said to be like a solar system; the electrons go round and round in their orbits, accompanied by probability-waves or whatever the latest theory demands, and all this activity continues unabated even though, as far as radiation is concerned, the atom has not changed its state at all. On the other hand the state of the real solar system is regarded as changing continuously as, and in fact because, the planets go round and round in their orbits. The two meanings of "state" are quite apparent; in the first case it means "being characterized by a certain set of equations" (the wave-equations for the unexcited atom), and in the second case it means "having parts disposed instantaneously in certain ways with respect to one another." The first represents a judgment external to the system, the second one internal to it. And yet one can certainly say that the solar system is characterized by a set of equations, namely Kepler's laws and their subsequent refinements, and according to some physicists it even makes sense to talk about the instantaneous disposition of the parts of atoms with respect to one another. The latter is the more difficult, however, and at this point in the discussion it is not required. Under the quantum hypothesis we would want to maintain that apparently continuous changes in the relative distribution of parts of a system, such changes being internal to the system, do not constitute changes in the state of the system. This means that the solar system has not undergone any state-transformations for a long time. The fact that the equations do not change with time may be taken as an indication of the stability of the system, although as usual the introduction of time brings its own complications. If the equations were changing gradually with time (for example if the system were "running down") it would always be possible, by the introduction of extra factors, to obtain a set of equations which did not change in this way.

To return to the excited atom: in this case the causal relation seems to revert to the form given in (3), since, although there is the required probability-relation between the excited state and the ground state, no known condition has to obtain before the transformation takes place. These transformations are quantum jumps in the purest sense (we might call what happens at the state-filter an "induced" quantum jump). A similar case is encountered in radioactive decay, when the emission of a particle from a given nucleus does not depend on the satisfying of conditions, although the half-life of the isotope in question may be known with great accuracy. One would nevertheless wish to say, I think, that the emission of a photon from a hydrogen atom was causally dependent on its being in an excited state, and that the emission of an alpha particle from a radium nucleus was causally dependent on the instability of the nucleus. It is true that this is exactly what some physicists have refused to say, maintaining that quantum transitions are in some way acausal; but if the alternatives are to have uncaused events or a new interpretation of the causal relation, it would seem wise to explore the possibilities of the latter before settling for the former.


Kant's remark that "everything that happens has its cause"[8] may not be apodictically certain, but it is not something one wishes to abandon casually.

The only strange thing about quantum jumps is why they should happen when they do, and this question can only be asked from a point of view which makes time prior to the events that happen in it. The principal reason for the rejection of the second meaning of "state" at this level of analysis was that it appealed to the notion of time ("instantaneous dispositions"), and in the quantum paradigm there is no reference to time. Everything that happens has its cause, but in the solar system viewed as a whole (to revert to an earlier topic of discussion) nothing happens. From our limited perspective (which takes account of only part of the system, e.g., the alignment of the sun and the moon in eclipses) we observe changes, it is true, but that is only against a background of local state-transformations which provide us with a time scale. Now I am outside my house, now I am inside; now it is night, now it is day. In the light of transformations such as these (and they are much more basic to our notion of time than the smooth passage of hands round clock faces) I may be tempted to say that the solar system is in a new state, but it is I who am in a new state. In fact it may be taken as fundamental to the position advocated in this paper that in the last analysis state-transformations determine the passage of time, and not the other way round. Time does not pass for a system which is not undergoing transformation, except from the point of view of another system which is, and consequently it makes no sense to ask why quantum jumps take place when they do. There would be no meaning to "when" if there were no jumps.


What then becomes of the notion of "law" into which the second of our meanings of "cause" was to be resolved? Equation (1) now appears to express merely the correlation of the states of one system with the state-transformations of another taken as standard. But the appearance of continuity remains, and the states which are correlated with the standard are still states in the unsatisfactory sense rejected a short while ago. Nobody can deny, however, that such internal rearrangements do take place in otherwise undisturbed systems, and it may be useful to know how they occur. Perhaps the quantum paradigm can be shown to apply to them too. There is really not such a great difference between a change regarded as continuous and a change regarded as proceeding by discrete steps; Ashby remarks that "in natural phenomena the observations are almost invariably made at discrete intervals; the 'continuity' ascribed to natural events has often been put there by the observer's imagination, not by actual observation at each of an infinite number of points."[9] And we are becoming increasingly familiar with processes in which an apparent continuity is known to be the product of very many individual stepwise transformations, in thermodynamics, for example. A "causal law" may therefore be nothing but the smoothing over on the macroscopic level of a large number of causal transformations on the microscopic level.
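That smoothing-over can be illustrated with a small simulation (a sketch with invented parameters, not taken from the text): thousands of particles each change only by discrete unit jumps, yet their ensemble average traces an almost perfectly smooth line.

```python
import random

def ensemble_means(n_particles, p_jump, steps, seed=1):
    """Each particle makes a discrete unit jump with probability p_jump per
    step; return the ensemble-average position after each step."""
    rng = random.Random(seed)
    positions = [0] * n_particles
    means = [0.0]
    for _ in range(steps):
        positions = [x + (1 if rng.random() < p_jump else 0) for x in positions]
        means.append(sum(positions) / n_particles)
    return means

means = ensemble_means(n_particles=20_000, p_jump=0.3, steps=50)
# Macroscopically the mean grows like a smooth "law" (about 0.3 per step),
# though no single particle ever moves continuously.
```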

The quantum paradigm, then, appears to be adequate for all the standard cases of causal relation, and to cover also cases in modern physics which under other interpretations seemed to involve acausal relations. It may be asked what difference its adoption would make in our outlook on the world and on science. Its greatest advantage is to be found, I think, in a directing of attention away from notions like time, law, and so on, which seem to be transcendent with respect to our immediate experience and which have always been subjects of controversy, and a directing of attention towards the present state of the world and its possible modifications. One might envisage the universe as something like a rather large pointillist painting; there are so many spots of blue, so many spots of red, and so on, and what the painting looks like at any moment depends on their distribution on the canvas. The spots are to be regarded as movable, however, and the scene is therefore constantly changing. The old emphasis corresponds to a search for regularities in the movement of some spots with respect to each other and the canvas, such movement being thought of as continuous and taking place in a time through which the canvas itself endures. It would make sense, according to this view, to think of some scene as remaining unchanged for a period of time, which of course is to us the normal behaviour of paintings. But it is not the normal behaviour of the universe. What would it mean for all change to cease? It would entail the cessation of the passage of time; as maintained above, time does not pass unless some change takes place.

Even if, to revert to the painting, one were to see the absurdity of the notion of its existing unchanged through time (or even of its needing a canvas), one might still look for continuous functions ("causal laws") relating the movements of the spots to some regular movement—say an oscillation of one spot in a corner—which was taken as establishing a time scale, and this would be all right as far as it went. But suppose on closer inspection it was found that at least some spots (and possibly the time-determining spot itself) did not move continuously, but jumped from place to place. This would not signal general chaos, since one would not expect to find any particular spot jumping very far, but it might occur to somebody that instead of looking for long-term regularities of an absolute sort we might just as well ask the more limited question about state-transformations: where can this particular spot jump next, and with what probabilities? In order to find answers to these questions we would have to make very minute observations, but instead of taking readings at long intervals (as judged by what the observer does in between) and plotting "best curves" to fit them, it would be necessary to concentrate on just three states, the last state, the present state, and the next state, as elements of a Markov chain.
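A minimal sketch of such a spot as a Markov chain (the transition probabilities below are illustrative assumptions, not taken from the text):

```python
import random

# From its present cell a spot can jump one cell left, stay, or jump one
# cell right; the probabilities here are assumptions for illustration.
TRANSITIONS = [(-1, 0.25), (0, 0.50), (+1, 0.25)]

def next_state(position, rng):
    """Sample the next state from the present one alone (Markov property):
    no earlier history and no long-range law is consulted."""
    r = rng.random()
    cumulative = 0.0
    for jump, prob in TRANSITIONS:
        cumulative += prob
        if r < cumulative:
            return position + jump
    return position + TRANSITIONS[-1][0]  # guard against rounding

rng = random.Random(2)
trajectory = [0]
for _ in range(1_000):
    trajectory.append(next_state(trajectory[-1], rng))
# Every question takes the limited form: given the present state,
# where can the spot jump next, and with what probability?
```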

One advantage of doing things in this way is that the process is continually self-correcting—the refusal to make long-range commitments prevents extremes of error. Its great disadvantage is that it is so myopic; in squinting at details it is in danger of failing to grasp what is happening to the whole. It may be said that, for the whole, there is no meaning to "the last state, the present state, and the next state"; that in fact the counterpart to equation (2) with the quantum paradigm,


is preposterous and absurd. To this it might be replied that it is no more so than equation (2) itself, that any causal law—and therefore equation (2)—can be expressed in terms of discrete state-transformations by taking readings at stated intervals (i.e., whenever some time-determining transformation takes place), and that even the differential calculus does not need the "infinitesimals" of which Berkeley made so much fun, but can conduct all its business in terms of small but finite differences. A literal interpretation of equation (5) would, of course, be stretching matters somewhat; there is no reason for all the spots to jump together. Even for medium-sized aggregates equations like (1) might be practically more useful than equations like (3), so that there is no danger of the demise of "law." But what a quantum view of causality would insist upon would be the recognition that such equations rested ultimately on equations like (3). The whole is built up out of its details, and if the principles according to which the whole operates can be satisfactorily accounted for in terms of those according to which the details operate then there is no need for transcendent principles. It is the contention of this paper that time can be accounted for in terms of the behaviour of the ultimate constituents of matter, that continuity can be accounted for in terms of discrete stepwise transformations, and that these accounts remove some of the traditional mystification from the problem of causality.
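The point about the calculus can be made concrete: a central difference with a small but finite h recovers a derivative to high accuracy, with no appeal to infinitesimals (a routine numerical sketch, not the author's own notation).

```python
import math

def derivative(f, x, h=1e-6):
    """Central finite difference: everything is computed with a small but
    finite h, never with an actual infinitesimal."""
    return (f(x + h) - f(x - h)) / (2 * h)

approx = derivative(math.sin, 1.0)
exact = math.cos(1.0)
error = abs(approx - exact)
# The finite-difference value agrees with cos(1) to better than 1e-8.
```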


A Negative Interpretation of the Causal Principle

The remarks that follow are to be regarded as a modest attempt at speculative therapy. The condition against which they are directed is metaphysical; it is the compulsion to extrapolate limited analytic successes into general synthetic principles. The particular form of it that I have in mind is the elevation of the concept of causal connection, established in brief episodes or in isolated sequences, into a principle according to which past states of the world produce and determine future ones. In this form it afflicts many scientists, and also many philosophers of science who consider themselves liberated from metaphysics. It is chronic and virulent, as can be discovered by engaging such people in conversation about free action, uncaused events, and so on. The prescription is homeopathic, in that it consists of a minimal dose of metaphysics.

One of the most fundamental characteristics of our experience of the world is that one state of it is always succeeded by another. Taken in their totality these states are unique and nonrecurring. Within them, however, stable elements can be identified which do recur, and some of these form repeatable sequences within the nonrepeating sequence of states as a whole. If such a sequence can be isolated from the rest of the world, and if its elements are complex enough so that they remain constant in some respect but change in another, the changes may be summed up in a scientific law, valid under given conditions (i.e., provided the constant background remains constant). It is customary to describe such laws as "causal," since each state element may be thought of as in some way producing or necessitating the next. This, as Hume showed, can properly be no more than a subjective impression, but as a manner of speaking it is harmless as long as it is not taken too seriously.

I say "as a manner of speaking" because the problem of causality arises less on the ontological level (where a solution to it would involve an answer to the question why time passes and indeed why anything happens at all) than on the logical and epistemological levels. If it is thought that one state produces another, it will obviously make sense to try to represent the first by a state-description which will similarly "produce" the state-description of the second. "Production" on the logical side is represented by entailment, a relation between logical antecedent and consequent which enables the latter to be deduced from the former. If a formalism based on entailment can be found which is a suitable representation, when interpreted, of some physical relation, the logical antecedent and consequent can be taken to mirror a temporal before and after.

The success of the physical sciences in finding formalisms of this kind is well known. The model of the causal relation suggested by the sciences I shall call the determinism-entailment model; it is a positive interpretation of the principle of causality, which maintains that past events positively determine present and future ones. Its formulation depended on the possibility of isolating repeatable sequences of state-elements, and it can be applied with justification only to such isolated and repeatable sequences. An isolated element in the sequence can be said to be determined by any previous element as long as the isolation is rigorously preserved; but as soon as the sequence is considered open to other parts of the world the concept of determination within it ceases to be applicable. Such an opening-up is reflected on the practical side as a failure of certainty in prediction: events can be predicted only if they are carefully screened from all influences except those of the prior events of the sequence in question.

The consequences of the success of science in establishing causal chains have not been uniformly beneficial to philosophy. Since the eighteenth century there has been a tendency to extrapolate the notion of causal determination from the limited sequences of isolated state elements for which representative formalisms have in fact been found to the succession of states as a whole. Since, as was pointed out above, each of these states occurs only once, this extrapolation is clearly illegitimate. I do not wish to assert that states of the world as a whole do not cause, or even determine, subsequent states—only that, the concepts of causality and determination being what they are, their application to states as a whole is meaningless. This applies also, by extension, to elements of such states which are essentially nonisolable, i.e., whose character would be altered by any restriction on the potential relevance of any other element (of the present or of any previous state). The most important among these essentially nonisolable elements are human consciousness and human action, which do not have to be excluded from the world in order to be excluded from the field of the causal relation. Consciousness is consciousness of the world as a whole, within the limits of the available system of signals (and of signs); action modifies the world as a whole, subject to similar limitations (the availability of energy, the absence of restraint, etc.). And of course it is not necessary to identify this "world as a whole" with the extent of the physical universe; the analysis applies as long as the content of consciousness and the object of action, taken in their entirety, satisfy the condition of nonrecurrence.

Since philosophers have an apparently incurable urge to extend the range of application of their concepts, metaphorically if not straightforwardly, it might be well to explore the possibility of reformulating the concept of causal connection in such a way that its extension would not lead to undesirable consequences. Some indication as to how this might be done may be gained from an examination of the logical formalisms which may be used to express various propositions about causal relatedness. In order to keep clear the difference between states of affairs related physically and sentences related logically we adopt the convention of referring to the former by capital letters (P, Q, . . . ), and to the latter by small ones (p, q, . . . ), the sentence p being understood to describe the state of affairs P, and to be true if and only if P is the case.

Suppose P and Q to be individual events or states of affairs between which the causal relation is supposed to hold. It is not necessary for the present purpose to specify whether either of them has yet occurred, although if there is to be a lawlike sentence expressing this relation similar events must have occurred and been described before. The entailment-determination model would require that "P is the cause of Q" should be represented by
    p ⥽ q        (1)
where ⥽ is the sign for strict implication or entailment. This expression is given deliberately in nonquantified form, to avoid for the moment a commitment to universality or particularity. Since we begin from individual events, p and q are to be taken at first as singular sentences; as the discussion shifts to the possibility of establishing lawlike connections between events the universal quantifier must be implicitly prefixed. But in a sense it is implicit in every claim to causal connection, since the minimal causal principle would seem to require that if the same situation were to recur in exact detail the same proposition would be true of it. Equation (1) is of course equivalent to
    ~◇(p ∧ ~q)        (2)
"it is not possible that p and (at the same time) that not q," or "P cannot occur without Q occurring." This already brings to light a weakness in the logical representation of causality, since P may very well occur without Q occurring until later; it may never be the case that p and that q at the same time. The indispensable temporal nature of the relationship cannot be captured in a timeless logic. (The fact that time enters the equations of physics as just another variable has encouraged the assumption that the many-dimensional world of space-time and other properties can be considered to exist atemporally and as it were all at once, a fiction which is useful for scientific purposes but cannot be allowed uncritical admission to philosophical argument.) But leaving that problem aside, the modal claim ("it is not possible . . . ") is obviously much too strong, and quite incompatible with inductive modesty. The contingent status of actual scientific laws is better reflected by the simple conditional
    p ⊃ q        (3)
which is equivalent to
    ~(p ∧ ~q)        (4)
This brings us down to the level of Hume's empiricism, and merely asserts the de facto constant conjunction of P and Q. But it shares with the stronger form a serious drawback, having to do with the fact that even in isolated sequences there is rarely a one-to-one relation between causes and effects, and even when there is the relation may not exclude the possibility of interference. It follows from p ⊃ q that ~q ⊃ ~p, whereas it might well be the case that p but that, because of some inhibiting factor (an unexpected meteorite from outer space which shatters the experimental apparatus at the critical moment), not q. Such possibilities can be guarded against by the insertion of a ceteris paribus clause, but in a world as various as this one that really makes it too easy. How difficult the problem is in fact can be gauged from the elaborate physical precautions which have to be taken by experimental scientists in order to be sure that the sequence of events they wish to study is sufficiently isolated from extraneous influences, and from the difficulties even then of getting sensitive experiments to work. It has been pointed out (by Scriven) that what we mean by the cause of something is often only a "jointly sufficient condition" for it, not by itself a fully sufficient one, and it seems wise to make provision for this. And since p ⊃ q is also fully compatible with r ⊃ q, s ⊃ q, etc., it is evident that a jointly sufficient condition may not be necessary or even jointly necessary, which again throws doubt on its adequacy as an expression of what we ordinarily mean by a causal relation.
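These logical points are mechanically checkable. The sketch below runs through all truth-value assignments, with the material conditional written as a small Python function:

```python
from itertools import product

def implies(a, b):
    """Material conditional: false only when a is true and b is false."""
    return (not a) or b

BOOLS = (False, True)

# The conditional is equivalent to "not (p and not q)" and yields its
# contrapositive in every case ...
for p, q in product(BOOLS, repeat=2):
    assert implies(p, q) == (not (p and not q))
    assert implies(p, q) == implies(not q, not p)

# ... but it is fully compatible with a second sufficient condition: if P
# fails and Q is brought about by R instead, both conditionals still hold.
p, q, r = False, True, True
assert implies(p, q) and implies(r, q)

# An inhibiting factor is exactly the case the conditional forbids:
# p true (the apparatus was set up) but q false (the meteorite struck).
assert not implies(True, False)
```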

In order to take care of these shortcomings we might try complicating the situation as follows: Suppose Q to have a number of alternative causes, P1, P2, P3, and so on; we may then replace p by p1 ∨ p2 ∨ p3 ∨ . . . Suppose further that each of these is complex, so that P1 can be broken down into a set of jointly sufficient conditions {P1i}; p1 in turn can then be replaced by the conjunction of the members of a set {p1i}. As suggested above, however, among the less obvious conditions in each case must be the absence of certain potential inhibitors P*, in the case of P1 the set {P*1i}, and similarly for P2, P3, and so on, with their appropriate lowercase descriptions. The final formulation will then be
    [(p11 ∧ p12 ∧ . . . ∧ ~p*11 ∧ ~p*12 ∧ . . .) ∨ (p21 ∧ . . . ∧ ~p*21 ∧ . . .) ∨ . . .] ⊃ q        (5)
Supposing the alternatives to be exhaustive, and the lists of jointly sufficient conditions to be complete, this may fairly be said to sum up the facts about the cause of Q.
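The final formulation can be made concrete as a toy evaluator (the factors and inhibitors below are invented for illustration): Q would be caused just in case some alternative has all its jointly sufficient conditions present and none of its potential inhibitors.

```python
# Each alternative cause of Q pairs a set of jointly sufficient conditions
# with a set of potential inhibitors. The factors are illustrative only.
ALTERNATIVES = [
    {"conditions": {"spark", "fuel", "oxygen"}, "inhibitors": {"retardant"}},
    {"conditions": {"lightning", "dry_grass"}, "inhibitors": {"rain"}},
]

def q_would_occur(facts):
    """True if some alternative is fully present and nothing inhibits it."""
    return any(
        alt["conditions"] <= facts and not (alt["inhibitors"] & facts)
        for alt in ALTERNATIVES
    )

assert q_would_occur({"spark", "fuel", "oxygen"})
assert not q_would_occur({"spark", "fuel", "oxygen", "retardant"})
assert q_would_occur({"lightning", "dry_grass", "spark"})
```

The evaluator is, of course, only as good as its finite lists of conditions and inhibitors, and it is just the impossibility of completing the inhibitor lists that the argument turns on.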

In the process of complication, unfortunately, the whole enterprise has lost any usefulness it might have promised at first. For, from the point of view of explanation, what we usually want to know is not what would or might cause Q under appropriate conditions, but what did cause it; while from the point of view of prediction what we usually want to know is not all the possible ways of causing Q, but only whether the state of affairs P which actually obtains is likely to cause it, or, even better, what P is likely to cause, whether Q or something else. If our P appears among the list P1, P2, etc., then we may assume that it will cause Q—provided it is specified in enough detail to ensure that all the {Pi} are present, and none of the {P*i}. The former condition can perhaps be met in some cases, although in most cases, and in detail, it would be difficult enough; the latter seems completely unattainable, since a list of all the things that would prevent Q if they happened, including bizarre possibilities like meteorites, would be prohibitively long even if it could be constructed in principle. The only way of making sure that all the conditions, positive and negative, for the occurrence of Q have been met is to have Q occur. The ideal formalization of the relation between P and Q, if only the order of the terms did not have to reflect the order of the events, would in fact be
    q ⥽ p        (6)
The strong form can be used because if Q happens, and P is the cause of Q, P (whatever it is) must have happened. But this is if anything more useless than (5).

Nevertheless (6) presents us with a kind of hint, for it yields by contraposition
    ~p ⥽ ~q        (7)
in which the order of the terms is right, although its general appearance is admittedly negative. And this hint, taken together with the observation above that when scientists are anxious to have something happen, they spend most of their energy on making sure that other things do not happen, leads to the reflections which occupy the remainder of this paper.

As remarked earlier, the preoccupation of causal analysis has traditionally been with the way in which events bring other events about. The conviction that events must be brought about, somehow, is I think a piece of pure anthropomorphism, resting on our experience of "making things happen"; it used to be thought that an anthropomorphic God made natural events happen, and when God was no longer appealed to as a principle of explanation (except for events outside the accepted order of things, such as those still called "acts of God"), causal efficacy was invented to take his place. This conviction has engendered a good deal of speculation about the origin of the universe, as well as Heidegger's celebrated question: "Why should there be anything at all? Why not much rather Nothing?" This has always seemed to me an interesting but futile question. That there should now be, or should ever have been, Nothing, given that there is now something, strikes me not as impossible but as unintelligible, although if events have to be brought about one can at least see how somebody with metaphysical inclinations might raise such a question. But suppose they just come about? We may invoke here a time-honored principle, to be found in various forms in Plato, Nicolas of Cusa, and others (including Spinoza and Leibniz), and called by Lovejoy the "Principle of Plenitude," according to which all possibilities strive to come into being, since their failure to do so would leave the universe imperfect. This is at least as plausible as the metaphysical principle of causality. Bearing it in mind we might reformulate Heidegger's question as follows: "Why should there be only something? Why not much rather Everything?"

Unlike Heidegger's, this question permits a ready answer. There is not everything because some things are incompatible with other things, so that their coming into being excludes these others from the possibility of coming into being, and excludes also all the possible consequences (in the traditional sense) of the excluded things. An example may help. If somebody says to me today, "Why shouldn't you be in London tomorrow night?" I may answer that it is quite possible for me to be in London tomorrow night, and if some reason for going presents itself I may indeed go. But if he says to me tomorrow night, "Why aren't you having dinner in London tonight?" I may answer "Because I'm in Washington, and it's too late to get to London tonight." Between today and tomorrow is a point beyond which, if I haven't yet left for London, I won't make it by tomorrow night; from that point on, the universe which has as an element my being in London tomorrow night is eternally impossible. One might say that my not having left Washington causes my not being in London. Of course as the point of decision with respect to London passes, the possibility remains open that I might dine in Chicago; when it is no longer possible to get to Chicago by some specified time, say nine o'clock, the possibility remains open that I may go to Baltimore; when Baltimore is out I may still go to Arlington; at one minute to nine I could make a dash for the restaurant on the corner. But at nine I am here, wherever that is, and every other possibility has been eliminated. The example has been couched in human terms and in terms of technological possibility (the availability of transportation, etc.), but it is easy to see how the idea of the progressive closing off of possibilities could be taken as a model for the behavior of the physical world. The negative interpretation of the causal principle consists in the view, not that past events determine present and future actualities, but that they exclude present and future possibilities. If P is the cause of Q (in the old sense), and if R is incompatible with P, so that
    ~◇(r ∧ p)        (8)
and hence
    r ⥽ ~p        (9)
we have from (7) and (9)
    r ⥽ ~q        (10)
which restores the original entailment-determination relation, only now with negative rather than positive force.
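The London example can itself be sketched as a shrinking possibility set (the travel times below are invented for illustration): nothing determines where I dine; each passing moment merely excludes destinations.

```python
# Hours of travel needed to arrive by nine o'clock; illustrative assumptions.
TRAVEL_HOURS = {
    "London": 9.0,
    "Chicago": 2.5,
    "Baltimore": 1.0,
    "Arlington": 0.25,
    "corner restaurant": 0.02,
}

def still_possible(hours_before_nine):
    """Destinations whose possibility has not yet been closed off."""
    return {place for place, needed in TRAVEL_HOURS.items()
            if needed <= hours_before_nine}

# Possibilities only ever shrink as nine o'clock approaches ...
assert still_possible(1.5) <= still_possible(3.0)
# ... London is excluded first, then Chicago, and so on,
assert "London" not in still_possible(3.0)
assert "Chicago" in still_possible(3.0) and "Chicago" not in still_possible(2.0)
# ... until at nine every alternative to where I actually am is excluded.
assert still_possible(0.0) == set()
```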

Although in some cases the two interpretations may turn out to be equivalent (e.g., if we are able to catch and isolate a "causal chain" in an experiment, which amounts to excluding all the alternatives except one, so that the situation is fully predictive), the negative view is far preferable for most purposes, and especially for general philosophical purposes such as the analysis of free action, where the positive view leads all too easily to a constricted determinism. For a start, all the criticisms of process philosophers like Bergson and Whitehead are taken care of at once, since it is no longer of prime importance to have events crystallized into any particular pattern before they can be said to be causally related. A cross-section taken at any arbitrary time allows us to eliminate a whole family of future cross-sections, but at every moment the universe faces precisely as much possibility as before. The lawlike behavior of the world has to be taken as a datum in either case, but the negative interpretation is not in conflict with it—indeed it allows us to account plausibly for a whole set of laws, namely statistical ones, which have defied causal analysis in the traditional sense and have led to much bewilderment and debate. For under the negative interpretation one can quite easily envisage a situation in which an event is not excluded by any previous event but is excluded, at the last moment (to use a loose way of speaking), by a contemporary event. If the pattern of such exclusions (the statistical ratio) has to be accounted for it can be done by something like Popper's propensity theory. Part of the relief from metaphysical pain afforded by the negative interpretation consists in the realization that obviously important ingredients of the physical or human world, such as chance, freedom, etc., no longer have to be imported apologetically but are found to be naturally present up to the last moment before the passage of time fixes the state of affairs irrevocably. One might indeed take the position that it is just its being fixed irrevocably that constitutes the passage of time, that the way we tell the difference between past and future is by observing what is still open to possibility and what is not.

This is clearly no more than a preliminary treatment of a very large topic. The main point I have wished to make so far can be summed up as follows: If our model of the causal relation is "That past, therefore this future," not only is it very difficult to arrive at tight logical formulations (because of needing to know everything, etc.), but also certain general philosophical perplexities ensue, such as the denial of the possibility of free action and the mysterious conduct of statistical processes. If on the other hand we say, "That past, therefore not those futures," we simplify the logical situation, since we can formalize what information we have and let the rest lie in obscurity, and leave the sense of possibility alive until the actual occurrence of the event in question (or of some other event) provides the only conclusive test of the truth of the prediction that it will occur. And this interpretation applies equally to isolated sequences and to the world as a whole. Every state of the world as a whole excludes entire classes of future states; no state of the world as a whole determines any particular future state. The negative interpretation thus provides a degree of philosophical coherence to which the old, positive interpretation could not attain.




Preface to Part IV:
Machines and Practices

Science may be the matching of structural features of the universe in the domain of mind, but it is necessary to acknowledge that the capacities of mind are limited. From my early days among the General Systems theorists it seemed clear that computers were going to be of central epistemological importance, since while their computational powers promised to supplement the limited powers of mind, those limitations themselves would prevent the person whose mind it was from understanding fully just how its powers had been supplemented. These reflections on the inadequacy of the knower led to others, on the different embodiments of knowledge—in parts of the nervous system other than the central or cognitive (in muscular habits, in practices), in machines perhaps. They also led to the question of what the subject as agent is doing when using machines, whether to supplement the powers of the mind or amplify the powers of the body.

This part of the book is bracketed by two chapters on the general issue of the adequacy of the relations of mind to its world and the ways in which machines may modify those relations. Between them are four chapters on problems in the philosophy of technology. Chapter 12, which is the earliest paper in the book (its original version dates from 1959, when it was read at a meeting of the Missouri State Philosophical Association in Kansas City), introduces for the first time a principle that I was to use later in my work on structuralism, the "optimum complexity principle." This suggested that well-adapted organisms will naturally adjust themselves or their environment until the equilibrium of the systems of perception and thought (whose matching is the intelligibility of the world) falls at a point where they are both within an acceptable range of complexity, to move outside of which, whether towards too little or too much complexity, would subject the organisms to stress. It is, I admit, a vaguely formulated principle, though not much vaguer than the Aristotelian principle of the mean, which (now that I think of it) the optimum complexity principle strangely resembles.

Chapter 13 was written in Paris for delivery in Chicago, during a sabbatical spent mostly in the Bibliothèque Nationale. One curious episode of its production was the ceremony with which, after due consideration and examination, I was allowed to consult (under a librarian's watchful and faintly disapproving eye, in the part of the B.N. known as Enfer) the rather modest Manual of Classical Erotology in which the (purely verbal) image of the prostitute as a mechanism was to be found, when far wilder examples of pornography, or for that matter contemporary embodiments of the mechanism in question, were freely available on the open market a few blocks away.

Chapters 14 and 15 might well have been conflated—they are somewhat repetitive, although they deal with rather different aspects of the same general problem. They might for that matter have been worked into a more systematic version of chapter 16. In this matter of praxis I seem to depend heavily on an image due to Spinoza, whom I quote twice (though I seem to have been using different translations, one of which I recall I tracked down in the Iowa State library and copied just before delivering the paper in which I used it). Also I am rather unfriendly to Kotarbinski, whose work on praxiology I keep calling anecdotal; this is because I was hoping for more from it in the way of theory and less in the way of examples, such being my own bias in these matters. I did not know at this time the important work of Pierre Bourdieu, though my interest in relatively small-scale individual practices governed by conscious ends overlaps only partly with his interest in ritual and other social practices.


Science, Computers, and the Complexity of Nature

The search for simplicity in Nature has at all times been an important part of the activity of science, and the faith that all events might ultimately be explicable in terms of a few simple laws an important part of the motivation of scientists. Newton, for example, was a firm believer: "Nature does nothing in vain," he says, "and more is in vain when less will serve; for nature is pleased with simplicity, and affects not the pomp of superfluous causes."[1] Certainly Nature, in the Newtonian period, did nothing to discourage this view, and among other things the discovery of minimal principles by Fermat, Maupertuis, Hamilton, and Gauss reinforced the conclusion that the world had been designed according to the most economical standards. Maupertuis's own statement affirms the reasonableness of such a conviction. "Here then is this principle, so wise, so worthy of the Supreme Being: Whenever any change takes place in Nature, the amount of action expended in this change is always the smallest possible."[2] In recent years the faith in simplicity has, however, been badly shaken; from that high point in the 1920s when, for a short time, there seemed to be only two kinds of particle (the electron and the proton) in the universe, even physics, traditionally more successful than any other science in extracting symmetries and simple regularities from events, has sunk into something approaching chaos in its foundations—and as for the less "exact" sciences, genuine theoretical simplicity there seems farther away than ever.

New approaches are therefore being tried, particularly by workers in information theory, the theory of games, cybernetics, general systems theory, etc. Ashby, a brilliant pioneer in this field, says, in his Introduction to Cybernetics,

Science stands today on something of a divide. For two centuries it has been exploring systems that are either intrinsically simple or that are capable of being analysed into simple components. The fact that such a dogma as "vary the factors one at a time" could be accepted for a century, shows that scientists were largely concerned in investigating such systems as allowed this method: for this method is often fundamentally impossible in the complex systems. . . . Until recently, science tended to evade the study of such systems, focusing its attention on those that were simple and, especially, reducible. . . . So today we see psychoses untreated, societies declining, and economic systems faltering, the scientist being able to do little more than to appreciate the full complexity of the subject he is studying. But science today is also taking the first steps towards studying "complexity" as a subject in its own right.[3]

In the philosophy of science there has been, of late, a good deal of discussion of "simplicity," associated with the names of Popper, Wisdom, Jeffreys, Goodman, and others.[4] It might be thought that the study of "complexity" would simply be the mirror-image of this, and that once one of this pair of terms had been defined, further consideration of the other would be superfluous. This would certainly be the case if by "complex" we meant only "not simple," but, as I hope to show, the relation between them is not as straightforward as this. The scale from simple to complex does not follow the pattern of probabilities, with simplicity at 0 and complexity at 1; we would rather have to put simplicity at 1 and allow complexity to go to infinity. Nevertheless it may be helpful to pay some attention to simplicity as a preliminary.

Credit for having introduced the methodological criterion of simplicity usually goes to William of Ockham, the familiar form of whose razor is "entia non (sunt) multiplicanda praeter necessitatem." There seems to be some doubt whether he actually said this, although he would certainly have recognized and sympathized with the principle, which he applied with great effect. But it is not accidental that it should have been attributed to him and not to some other mediaeval philosopher. The word "entia" is perhaps misleading, suggesting as it does an economy of things. Ockham would not have minded multiplication of things in the least, and it certainly would not have made his world any more complex. For him the world is neither simple nor complex, consisting as it does of unrelated individuals, contingently dependent on God, who is not to be limited by methodological conventions—in fact Ockham, like Leibniz, would probably have thought the more things the better. What is to come under the scrutiny of the principle is the description or explanation of the world. Ockham is generally regarded as the founder of the nominalist or conceptualist movement, holding that universal terms in propositions do not stand for anything apart from individuals, that relational terms have no reference apart from the individuals related. Economy is to be effected in our thought about Nature—there is no need to invent the relation "father of" if the world contains fathers and sons already—but there is no suggestion that Nature is simple.

It is not, therefore, to be laid to Ockham's account that later philosophers came to regard the world in this way, and the charge could not even have been imputed if there had not been confusion between the associated notions of economy and simplicity. Ockham's razor is a principle of economy, which may or may not lead to simplicity. It will almost certainly lead to an appearance of simplicity—after all, Thales's account of the world seems to be as simple as possible, and exhibits the most rigid economy of entities, but it does nothing to relieve the complexity of events, whereas the ninety-two elements of the prenuclear periodic table lead to an enormous simplification by comparison. It is true that Ockham says "praeter necessitatem," but this only means that in addition to the principle of economy one requires a statement of the objectives of the theory (and, if one of these is simplicity, a separate account of the latter) in order to determine what counts as necessary.

Still it is legitimate to enquire how the notion of a basically simple world arose. All that is needed to plant the germ of such a view is the realization that, although things and events do not repeat themselves, it is very easy to extract from them constant attributes which are repeated. This leads naturally to a theory of universals, and to a language of simple predicates. The use of a small number of simple attributes as a ground of explanation of all events at a material level appears in Aristotle with his pairs of sensed opposites, hot and cold, wet and dry. Properly speaking, of course, the simple predicate is to be opposed to the compound, not the complex. But there is an easy transference of meaning when, in an epistemology like that of Locke, simple predicates are associated with simple ideas, and these are assumed to correspond to simple qualities in the world. A simple predicate may be simple only with respect to the logical system in which it occurs, and this simplicity does not preclude the possibility of its having a complex referent, but a simple quality suggests the impossibility of further analysis by any means.

We now know, of course, not only how extremely complicated a simple datum (such as a monochromatic color) can be on the physical side, but also what highly-developed organs are needed for its reception, and what intricate behavior it stimulates in the cortex; it is evident that a high degree of complexity underlies the production of even one of Locke's "simple ideas." This suggests a solution to the problem of the apparent simplicity of the world. The complexity of the stimulus is in some sense matched by the complexity of the response, and in this way both are concealed. Nature exemplifies the maxim "Ars est celare artem"; the workings on both sides cancel out, and we are left with an experience which is phenomenologically simple.

At this point there are to be distinguished two senses of simplicity and two different senses of complexity. We have on one hand a simple idea produced by a simple experience (or, what amounts to the same thing, a simple predicate describing a simple experience), and on the other a complex natural event, matched by a complexity in the apparatus which we use to detect the event. In accordance with the principle enunciated above I shall speak of all these as kinds of complexity. In the case of ideas or predicates we have logical complexity; the logical complexity of theory is the kind of thing that has received most attention in previous studies of simplicity. In experience, we have phenomenological complexity, and for nature, physical complexity; these require no further clarification. Finally my terminology for the complexity of the sensory neural receptor may need some explanation; I shall call it mechanical complexity, partly in view of recent work on the analogies between brains and machines, partly because our range of perception may be extended by means of instruments without altering the nature of the problem, and it is convenient to have a name which is obviously applicable to this wider case. Mechanical complexity is just the complexity required in the sensory and cerebral apparatus if information about an event is to be received at all. For the neurophysiologist it is clearly a kind of physical complexity, when observed in other people, and in fact this is how we come to know about it; but even in the neurophysiological case the physical complexity of the patient's brain has to be matched by a mechanical complexity in the physician's.

Now science is, in one sense, only an extension of a normal human activity—that which receives stimuli, organizes them into concepts, stores information for future reference, and in general enables us to make our way through the intricacies of the world of sense. In the case of science the variety of stimuli is increased and an element of precision introduced; concepts are deliberately recast and carefully defined (so that they are more appropriately called "constructs"); information is codified and published in tables of constants and formulae so that it may be used for accurate prediction and, eventually, control of events. Just as a simple perception was seen to depend on a matching of mechanical complexity on the part of the organism with the physical complexity of the event giving rise to the perception, so in scientific observation and calculation there will be a need for some parity of complexity between the event under consideration and its representation in theory. The situations are not quite parallel; in the case of perception, learning is automatic, and if awareness enters the picture it is only as an addition which can remain simple and is eliminable in the manner of La Mettrie, T. H. Huxley, and the behaviorists, whereas most of us would feel that science is more conscious of itself, and that the logical complexity of the scientists' concepts becomes an important factor. In neither case do we have direct access to physical complexity, but learn only indirectly whether we have been successful in matching the complexity of our responses to that of the world; as long as the organism survives its reactions must, up to a point, be succeeding in this task, while in the case of the scientist it is a matter of checking predictions. In the latter case a further complication appears: not only must the theory be complex enough to handle the events, but it must also permit calculations to be made and predictions stated in less time than it takes the physical state of affairs to move from the event perceived to the event predicted. It is here that the principle of economy enters once more. It may turn out to be a waste of time to do a calculation in the simplest terms.

There would therefore appear to be an optimum level of complexity in scientific theory if it is to be successful. There are at least two senses in which the theory may be said to be successful: it may simply produce the required predictions and enable us to anticipate and control nature (and this might, in extreme cases, be done entirely automatically) or it may lead us to some kind of understanding, by which I mean the ability to give a rational account of the process by which the prediction is arrived at—an ability which, apart from being desirable in itself, is psychologically satisfying and practically useful. It seems to me that these goals are often confused; certainly the optimum may come at different levels according to which is in mind, and this may lead to argument about the "better" theory. But in either case the question arises as to how we may recognize this optimum.

The development of science may be regarded as progressing in two ways: on one hand there is an increase in logical complexity, and on the other hand there is a decrease in the complexity of the world which the theory is trying to explain. Let me clarify this point. The world of naïve experience is complex in the highest degree; we are confronted with a crowded, constantly changing realm of phenomena in which, even when some kind of order has been introduced by the unconscious formation of concepts, no event repeats itself and regularities are obvious only in the loosest sense. It is the business of science to find units in terms of which this welter of activity, or at least a part of it, may be rationally explained. In so doing it reduces the complexity of the world as understood by us (which is different from the world as experienced by us, and different again from the theory in terms of which we understand it). At the same time the theoretician is working on the combination of simple, not necessarily observed, elements into logically more complex calculi containing defined terms, many-placed predicates, and so on. The optimum will occur when these two processes—the increase in complexity of theory, and the decrease in complexity of the world—meet each other, and the logical complexity is of the same order as the physical complexity it is to explain. This physical complexity is, of course, relative to the units into which the world is analyzed; a forest regarded as composed of trees is a far less complex event than the same forest regarded as composed of plant cells, and the theoretical complexity of plant ecology need not therefore be as great, to deal with the forest adequately, as the theoretical complexity of cytology would have to be. In practice of course the latter would be prohibitive, so that cytologists do not deal with forests, any more than physicists deal with international relations. The question whether it would be possible in principle for the laws of social change to be "reduced" to the laws of physics, and the associated problems of emergence, holism, and gestalt theory, will not be gone into here. Even if this reduction could be done it would certainly be very tedious.

Once the optimum level of complexity is recognized, there remains the question of how it can be achieved. There are several ways of going about this, the most obvious being the development of ever more complex theories, with more primitives, more variables, and equations of higher order and degree, until one comes along which is successful. But theory rests on a protocol, a report of observations, and another possibility is to raise the phenomenological complexity of the units which can be recognized in experience. Some sciences depend more on this than others; it is not much use in physics, where observations are usually extremely simple, consisting as they do of pure space-time coincidences—needles on dials, etc.—but in geology, say, an important part of the scientist's education consists in learning to recognize feldspar, diorite, and the like as phenomenological units, to avoid the necessity of mineral analysis. The approach from this side also involves the isolation of smaller and smaller areas of observation, until a level is reached where interrelations are simple enough to be taken care of by the theoretical tools available. But of course the danger is that the optimum may be overshot. We may analyze the world into units that are too simple for our purpose, and find that we cannot recombine them to the required degree of complexity without sacrificing economy, if indeed it can be done at all. Such a case would occur, for example, if at this stage of the development of biology its analyses were to be conducted in terms of subatomic units. Or we may increase theoretical complexity beyond what is needed, using more variables than are necessary, because of some failure in conceptual attitude.

For practical purposes, however, the optimum is not as critical as I have made it seem. So far the element of time has been ignored, and a simple theory may be able to deal with a complex situation, if the scientist has the leisure to tackle the problem one step at a time. Ashby has pointed out that a system with, for example, twenty degrees of freedom can be handled with only five variables if one takes four readings of each.[5] Treatment at this simple level involves a sacrifice of economy, but that may not be serious. Similarly a system with only five degrees of freedom could, again with a loss of economy, be analyzed by a theory with twenty variables; in this case several could be reduced to one, but the scientist might not realize this, and, again, given time, it would not matter. Problems would arise only when the calculations took so long that predictions could not anticipate the events to which they referred, and this, until recently, never seemed to happen, or, if it did, not much was lost by it.
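Ashby's trade of variables against time can be put in a few lines of code. The sketch below is my own toy construction, not Ashby's example: a state with twenty degrees of freedom is handled by an observer who can attend to only five variables at once, by taking four successive readings.

```python
# Toy illustration (my construction, not Ashby's own example): a system
# with twenty degrees of freedom handled five variables at a time.
state = list(range(20))  # twenty degrees of freedom

def readings(state, width):
    """Split the full state into successive readings of `width` variables each."""
    return [state[i:i + width] for i in range(0, len(state), width)]

partial_views = readings(state, 5)   # four readings of five variables
# the full state is recovered; the sacrifice is time (economy), not information
recovered = [x for view in partial_views for x in view]
```

Nothing is lost except economy: the same twenty values arrive, but in four installments rather than one.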

Now, however, there frequently arise cases in which calculations are too cumbersome to produce useful predictions, if indeed they can reach any conclusion at all in the time at our disposal. This is where computers come in, bringing us back to the quotation from Ashby at the beginning of this paper. There are three reasons why calculations may take too long—first, because the theory is not complex enough (that is, sequential operations with few variables have to take the place of single operations with many); second, because although there is the right number of variables, the relations between them are so involved that the calculation takes too much paper and ink; and third, because the theory is too complex (that is, the problem is presented as one in many variables, when in fact it could be solved with fewer—the classical case being, of course, the astronomical system of Ptolemy). Now in all these cases computers can help; in the first by speeding up the sequential operations, in the second and third by handling more variables at once. In the first two cases the blessing is unmixed, and this represents the genuine sense in which cybernetics helps us to cope with complexities which before seemed hopeless. But the third presents a danger. If we have not realized that the theory is too complex it is certainly better than nothing to get a solution with the help of a machine, but at the same time the fact that we have got a solution may remove the necessity of simplifying the theory.
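The first of these cases can be illustrated with a deliberately impoverished "theory" (a toy of my own, not drawn from the text): a theory that knows only how to add one number at a time must substitute a long sequence of small operations for the single operation that a more complex theory performs at a stroke. A machine helps here simply by running the sequence fast.

```python
def sum_sequentially(n):
    # impoverished theory: only one small operation at a time
    total = 0
    for k in range(1, n + 1):
        total += k
    return total

def sum_by_formula(n):
    # more complex theory: the closed form n(n+1)/2, a single operation
    return n * (n + 1) // 2

# both theories agree: sum_sequentially(1000) == sum_by_formula(1000) == 500500
```

The two agree on every prediction; they differ only in how much sequential labor stands between the question and the answer.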

Suppose someone had been able to offer the use of a computer to a pre-Copernican astronomer, deep in calculation. The need for a Copernican revolution would have been removed at once; eighty-three epicycles are as nothing to a good machine, and the nautical almanac for the next thousand years could have been completed in a few hours. It would have contained mistakes, it is true, but they would not have been immediately apparent, and could have been taken care of by the addition of a few extra parameters as need arose. With a reserve of circuits in the computer it would be easier, in fact, to complicate the theory than to simplify it. One might be tempted to say that this was all right: the predictions work, everything comes out just as it should, and this is the criterion of scientific success. But this would not have satisfied Copernicus—not only, one likes to think, because of his mystical attachment to circles, but because such an answer sacrifices understanding to prediction and control. For the latter all we need is to match mechanical complexity against physical complexity, but for understanding we have to match the logical complexity of our concepts against physical complexity, and this may be much harder. We may have to settle for prediction and control apart from understanding, in view of the fact that there is an upper limit to the complexity of ideas that can be grasped by individuals, and an upper limit to the complexity of the phenomenological unit which can be intelligently apprehended, so that we simply may not be able to reach the optimum which would represent success of a theory from the point of view of understanding. But this is a conclusion that must not be arrived at too soon, and perhaps never finally arrived at. It would be hard for scientists to keep at their research, except for purely practical ends, without some conviction such as that expressed by Einstein in the remark, "God may be subtle, but he is not mean," or by Santayana when he says, "the world I find myself in is irrational, but it is not mad."
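The epicycle point can be simulated. In the toy below (my own construction, not the Ptolemaic model itself), each harmonic of a Fourier-style sum plays the role of one more epicycle: every added term shrinks the residual against the recorded motion, so a fast machine rewards complication rather than reform of the theory.

```python
import math

def observed(t):
    # the "recorded" motion to be predicted: a square wave
    return 1.0 if math.sin(t) >= 0 else -1.0

def epicycle_sum(t, n_terms):
    # each odd harmonic of the Fourier series acts as one more epicycle
    return sum(4 / (math.pi * k) * math.sin(k * t)
               for k in range(1, 2 * n_terms, 2))

def mean_sq_error(n_terms, samples=512):
    ts = [2 * math.pi * i / samples for i in range(samples)]
    return sum((observed(t) - epicycle_sum(t, n_terms)) ** 2
               for t in ts) / samples

# piling on epicycles steadily shrinks the residual on past observations,
# whether or not the model has become any more intelligible
errors = [mean_sq_error(n) for n in (1, 3, 10, 83)]
```

Eighty-three terms fit better than ten, ten better than three; at no point does the machine signal that a simpler theory is waiting to be found.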

In the light of these considerations we may sum up the relationship of the scientist to computing machines as follows: If understanding means anything apart from prediction and control (and I take the view that it does, as suggested earlier), then wherever possible scientists must attempt to match the complexity of their concepts to the complexity of the world, of which the machine is regarded as a part . When this is not possible (and sometimes when it is, for reasons of economy of time, etc.) they may still achieve prediction, apart from understanding, by matching the mechanical complexity of the system scientist-plus-machine to the complexity of the rest of the world. In no case can they suppose the machine to be thinking for them—only economizing their thought. This leaves open the question whether the machine might think for itself, and attempt to match its complexity against that of a world of which the scientists might be regarded as a part; but that falls outside the scope of this discussion.


What does give cause for concern is the possibility that the methods of arriving at predictions and the methods of achieving understanding may begin to diverge radically. It has always been the fashion for proponents of so-called phenomenological theories in science—and for that matter positivists in general—to stress prediction at the expense of understanding, but in a sense that was an excusable emphasis, in view of the fact that words like "understanding" are often used uncritically, and also because understanding and prediction were much more nearly the same thing when the same data could be used to achieve both, the methods of gathering data having been designed with a particular conception of the world in mind. Now, however, very general methods of handling experimental results with a view only to prediction are emerging. The "black box" technique of general systems theory depends, not on working upwards from the simplest state of affairs, which is inevitably the way of the understanding, but on working downwards from the most complex, in the sense that only complete chaos is unmanageable, and if any constraint whatever shows up in the protocol it can be immediately detected. If before one might have said, "The reason why the world is not easy to deal with is that it behaves in complex ways," the new formula would be, "The reason why the world is not impossible to deal with is that it does not behave in all the ways in which it might." Now it is extremely important that such methods should be devised, for the problems which they solve are urgent. But it does not count as understanding the world to be able to press a button, and understanding must at all costs be kept alive independently. Pursued to its extreme, this division of interest might lead to a situation where science carried out by human beings through the medium of ideas stood in relation to science carried out by machines where philosophy stands today in relation to science. It might even be regarded with the same apparent indifference. But its task would be just as essential.
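The detection of constraint in a protocol can be put in miniature. The sketch below is my own, with invented data: one starts from the assumption that anything might happen, and reads any gap between the combinations that could occur and those that actually do occur as constraint, which is to say as the beginning of predictability.

```python
from itertools import product

inputs = [0, 1]
outputs = ["a", "b", "c"]

# an invented protocol: recorded (input, output) pairs from some black box
protocol = [(0, "a"), (1, "c"), (0, "a"), (1, "c"), (0, "a")]

possible = set(product(inputs, outputs))  # complete chaos: anything may occur
observed = set(protocol)                  # what the record actually shows
constraint = possible - observed          # combinations that never occur

# the constraint is what makes the box manageable at all:
# here each input determines its output
behaviour = dict(protocol)
```

No hypothesis about the box's inner workings is needed; the method works downwards from maximal variety, and yields prediction without any claim to understanding.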


Praxis and Techne

In spite of the fact that many people think it was invented at about the time of the Industrial Revolution, technology has a much longer history than either science or philosophy, and the philosophy of technology can be traced back to the earliest philosophers. Aristotle, in the Parts of Animals, recounts that when some visitors surprised Heraclitus in the kitchen he invited them to come in with the remark "for here too are gods."[1] Where modern philosophers would naturally appeal to scientific examples, Plato habitually chooses technological ones; his dialogues are full of references to agriculture, medicine, shipbuilding and navigation, and the training of animals—technologies whose origins go back to neolithic culture. We may speak of them as technologies and not mere techniques because there was a lore that went with the techniques; they were passed on—and no doubt also originated—not only through trial and error and imitation but also through discussion and instruction. We have a tendency to think of civilization before the invention of writing as silent, but it was not. Nor was it irrational. "As for me," says Socrates in the Gorgias, "I do not give the name techne to something lacking in reason."[2]

Technology, after all, is not merely the theory of the practical arts; it is the practical arts themselves, regarded as an activity of reason—the logos in the techne, rather than the logos of the techne.[3] Nevertheless, technology has continued for the most part to be relegated to the kitchen, and even philosophers who are beginning to pay attention to it do not always manage to avoid the patronizing tone of those who, to their surprise, have found hidden talent below stairs. The assumption, all too readily made and accepted, that technology is to be defined as the practical application of scientific theory is symptomatic of this. The opposition of theory and practice, and the scorn of the latter, also has ancient roots, but we have perhaps not understood them. It was certainly not, as is sometimes supposed, a simple question of slavery and nobility; rather it was perceived that there are two different and independent manners of relating to the world. Thus Aristotle in the Nicomachean Ethics says, "For a carpenter and a geometer investigate the right angle in different ways; the former does so in so far as the right angle is useful for his work, while the latter inquires what it is or what sort of thing it is; for he is a spectator of the truth."[4]

There is no evidence that the Greeks despised carpenters. There is plenty of evidence that they admired geometers and indeed mathematicians in general, but the reason for this can easily be traced back to religious motives. Pure contemplation in Aristotle is a function of the divine; God is the archspectator, the theoros of himself and the world, not merely—like ordinary theoroi—of limited events like the games or the consultation of oracles. The essential distinction is between the perishing and the unchanging, between the transient and the eternal. Mathematics, the paradigm of theory, gives access to another world, where the soul is released from the body, the geometrical right angle from the wooden one, a world which in Plato almost certainly derives indirectly from an older philosophical tradition in India. It is significant for our purposes that when Plato's God turns his attention to our world he does so in the guise of a craftsman. (It may also be significant that some connotations both of techne and of its more commonplace relation praxis were extremely down to earth. One of the meanings of praxis given in the standard lexicons is "sexual relations," and there existed a minor branch of learning known as erotike techne, exemplified in a work of Paxamus called Dodecatechnon because it dealt with "obscene positions to the number of twelve.")[5]

I mention all this not just in order to observe the usual philosophical pieties towards the Greeks, but in order to stress that the concepts invoked by the philosophy of technology are embedded in a linguistic tradition that we ignore at the risk of talking nonsense. Our problems are not nearly as new as we think. A further development of the linguistic tradition, and one to which we ought to be sensitive, has taken place in the last hundred years or so. If the theoretician often tends to look down upon the merely practical, there has grown up since Marx an opposite tendency to look down, in a moral sense at least, on the merely theoretical. The fact that the widespread use of the term praxis in an exclusively political sense rests upon an etymological mistake (since it was the plural form pragmata that chiefly carried this connotation in Greek) does not blunt the force of the argument that a balance may need to be struck between the luxury of theoretical detachment and the utility of practical involvement. This is not to say that practical involvement can ever be a criterion for the truth or adequacy of theory in its own domain. But human society has other values besides truth, values which in the end truth may rightly be expected to serve without compromising its status as truth: it is rather a question of choosing how much of one's life to devote to the pursuit of truth, and which truths to pursue.

Mao Tse-tung, whose works it would be quite wrong, I believe, to dismiss as merely demagogical (they are quite as much pedagogical, presenting surprisingly orthodox philosophical views in a simplified form accessible to the young or ignorant) offers in his essay On Practice a typically Marxist inversion of the theory/praxis relationship, reminiscent of Marx's own treatment of the money/commodity relationship in Capital. Money, Marx maintains, began its career quite reasonably as a form of mediation between commodities (C-M-C), but in the capitalist economy it becomes an end in itself, while commodities are reduced to mediating between its manifestations (M-C-M).[6] Similarly, for Mao the proper role of theory is as a form of mediation between praxes (which might be represented analogously as P-T-P), whereas the overintellectual and overtheoretical habits of the West elevate theory to the dominant place and put (experimental) praxis at its service (T-P-T).[7]

My own conviction, which is of long standing, is that the attempt to establish conceptual priority between theory and praxis is futile; the relation between them is dialectical in the strict sense, in that both historically and conceptually they alternate in the development of knowledge and of its applications. But if we are to understand this dialectic, its elements need to be sharply distinguished, not run together with one another. Therefore, I should now like to explore the senses in which theory and praxis can be thought of as parallel and autonomous. An incidental but striking tribute to this parallelism is provided by an illustrative analogy used by Spinoza in his essay On the Improvement of the Understanding:

In the same way as men in the beginning were able with great labour and imperfection to make the most simple things from the instruments already supplied by nature, and when these were completed with their aid, made harder and more complex things with more facility and perfection, and thus gradually proceeding from the most simple works to instruments, and from instruments to other harder pieces of work, they at last succeeded in constructing and perfecting so many and such difficult instruments with very little labour, so also the understanding by its native
strength makes for itself its intellectual instruments wherewith it acquires further strength for other intellectual works, and with these makes others again and the power of investigating still further, and so gradually proceeds until it attains the summit of wisdom.[8]

This citation serves my purpose in another way because it stresses the slow and complex character of the evolution of human knowledge and competence, which I believe to be a constant characteristic, although it has been concealed in recent history by quantitative increases in the capacity and facility of use of external storage systems and effectors, the speed and capacity of communication and its channels, the absolute numbers of human beings, and the proportionate numbers of them engaged in production and research. It is necessary to remember that if, as some anthropologists and linguists maintain, the complexity of the human mind is relatively constant over all known cultures and hence over a very long period of cultural development, human beings have since early prehistory confronted the world on roughly equal terms. The difference between us and prehistoric men and women is that we find in our world a great many things—buildings, clothes, books, and instruments—left in it by our predecessors, and that our immediate ancestors—parents, teachers, and the like—take pains to introduce us to the use and sometimes to the meaning of these things, as well as to the use and meaning of various activities, like speaking a language, that they learned from their immediate ancestors. We also find in the world, of course, many other people already versed in all this, with beliefs, prejudices, and the rest.

Our basic relation to the world, I repeat, is constant. In order to make this clear, it is necessary to complicate matters slightly. On the practical side as well as on the theoretical side, we need to distinguish between a mode of immediate interaction of subjects as knowers or agents with the world, and another mode of activity that is independent of such immediate interaction and which in both cases I would call intellectual. On the practical side, I associate praxis as immediate with technology as mediated by the intellect. In parallel on the theoretical side, the notion of the empirical (from empeiria, meaning, roughly, an experimental acquaintance with things) provides the immediate basis for the intellectually mediated activity of theory. There is no point in trying to make all this too perfect and symmetrical. Empeiria, for example, is not wholly passive; a case might be made for appealing to episteme rather than theoria, and so on. My point is only to stress that coming up against the world physically, on the one hand, and looking at and talking about it, on the other, represent two complementary and to some degree separable kinds of involvement with it, and that each
leads to its own variety of mental activity, which emerges again on a higher level in a more complex form, in the first case technological and in the second case scientific.

The parallel can be, and has been, drawn out to considerable lengths. An explicit version of it is to be found in the late-nineteenth-century work of Alfred Espinas, Les origines de la technologie, in which the sequence sensation-perception-knowledge-science is matched with the sequence reflex-habit-custom-art or technique.[9] Philosophy has concentrated almost exclusively on the theoretical side, of course, as the relative states of development of the philosophy of science and the philosophy of technology clearly show. The philosophy of science would never have reached its present level, however, without the basis laid down by the analysis of empirical knowledge carried out by epistemologists since the seventeenth century.

It is my belief that a developed praxiology is just as essential to the philosophy of technology as epistemology has proved to be to the philosophy of science. Praxiology, however, is in its infancy. Apart from the works of Marxists, who interpret the concept in an arbitrarily narrow way, and of pragmatists, who mistakenly suppose that the problem is really after all an epistemological one, hardly anything has been written except the disappointingly anecdotal treatment by Kotarbinski.[10] That is why the philosophy of technology can scarcely as yet be said to exist; what passes for it usually amounts to no more than inserting technology, taken straightforwardly as the application of scientific theory, the proliferation of machines, and the like, as a boundary condition into some other branch of philosophy, such as value theory or political philosophy. Some philosophical disciplines, like logic and automata theory, have been greatly stimulated by technology, but to regard them as its philosophy is, in my opinion at least, to underestimate the philosophical interest of technology.

Let me then explore the parallel between the practical/technological and the empirical/theoretical at its most humble level, the level at which the epistemologist would be talking about simple perceptions or basic observation statements. A problem arises for the praxiologist which is analogous to that of deciding among phenomenology, sensationalism, and physicalism, and I shall suppose it resolved much as standard empiricism resolved it on the epistemological side. That is, I shall assume a capacity to recognize things and shall not insist on beginning with pure sensory elements such as resistance to touch and the like. This, however, presumes a prior learning. Just as physicalism in epistemology involves the learning of a descriptive language, an insertion into a culture of names, so physicalism in praxiology involves the learning of habits of manipulation, an insertion into a culture of objects. It requires
a certain effort to see past the transparency of our habitual praxes, but if we reflect for a moment on the vast repertoire of elementary cultural praxes we have all acquired—things as banal as buttoning buttons, tying shoelaces, using knives and forks, opening and closing drawers or doors or boxes, brushing teeth, shaving, and the like—and on the way in which we acquired it, the parallel with language learning should be obvious enough. Special technological praxes, like special scientific terminologies, are acquired subsequently in the context of special training.

An elementary praxis, like an elementary observation, picks out a bit of the world and operates on it. The great difference, of course, is that while the observation leaves the world as it is, the praxis alters it. (At quantum levels, therefore, the distinction gets blurred, but that does not mean that it is not a perfectly good distinction.) A simple but often neglected corollary is that the world must be in a quite specific state before praxis becomes operative. People will not be found buttoning buttons unless they are confronted with unbuttoned buttons and with buttonholes to button them into; they will not be found doing it if buttons have not yet been invented, or if they are wearing clothes that do up with zippers, or if their buttons are already buttoned. Also, even if all the conditions are right, they will not be found buttoning buttons unless they want them buttoned. This inevitable incursion of value into questions of praxis—and a fortiori of technology—is familiar enough. It does not, however, change the character of the praxis; it only decides whether or not it will be practiced.

Every praxis, in other words, has a domain, a domain of ordinary, recognizable macroscopic objects that can be altered, arranged, connected up, stored and the like. Things become interesting from the point of view of our present discussion when the objects in the domain of the praxis are not natural objects or simple cultural objects but things like light switches, air-conditioner buttons, gearshifts, or triggers. Such praxes obviously depend on a prior technology, where by "a technology" is meant a planned, purposive, relatively complex, probably collaborative, structured sequence of praxes. The important thing to notice is that people who turn on lights or shift gears need know nothing whatever about all this; all they need to know are the practical effects of their actions. For all they care, the device might have come into being naturally. It is a remarkable truth, when one comes to think of it, that an artificial eye, had we constructed one out of carefully replicated tissue, would work exactly like an organic eye; so an organic watch, were one to grow accidentally or miraculously in some improbable metal-rich environment, would work exactly like an artificial watch (artificial in the literal sense of being an artifact; the complexities of the ordinary-language behavior of this term need not distract us). There could be no technology, in other words, if there were no laws of nature. But this, of course, is a quite different claim from the claim that there could be no technology if there were no science. The difference that science makes is a difference of efficiency, both in selecting what technologies to try and in deciding how to go about trying them. If you know how the relevant laws of nature operate, you go straight to the desired result and do not need to spend millions of years trying this and that. The point is, again, quantitative.

For the consumer of technology, in fact, the relation between a given praxis and its outcome may be wholly magical. People who know nothing about the internal combustion engine, when starting automobiles on cold mornings, seem to me to be in a position strictly comparable to that of dancers dancing for rain: sometimes the ritual, which consists of a certain learned sequence of pulling out chokes, turning keys, and depressing accelerators, pleases the god, and sometimes it doesn't. Apart from the complexity and predictability of our environment—the range of different options, and the facility with which the right choice leads to the right result or the wrong choice to the wrong one—we are pretty much in the position of our remotest human ancestors. We still have to decide what to do. The difference that technology makes is to give us greater freedom of choice, and greater responsibility. The risk of unexpected consequences—getting cancer from smoking or killing lakes with industrial effluents—is more dramatic and yet, once explained, more tractable than similar risks in past times. Such consequences as denuding whole territories through cultivation and irrigation simply could not have been predicted on the basis of prehistoric knowledge.

It is, I think, pure irresponsibility to claim that technology has made an essential difference in the condition of the human subject as knower and agent. Even the difference it has made in the environment is a quantitative rather than a qualitative one, and I do not believe that the dialectical law of the transition of quantity into quality applies. The moral and political problems that result from technology are not, I repeat, problems in the philosophy of technology. The trouble is that in claiming that human beings have been changed by technology we encourage them to think themselves at its mercy. In fact they are not. They may be at the mercy of other human beings who misuse technology, but the remedy for that lies outside technology, and a Luddite solution is not, in the end, satisfactory.

In conclusion I wish to turn back from this set of external problems to the internal, conceptual ones that constitute, in my mind, the essence of the philosophy of technology. In particular, I want to consider the
transition from praxis to technology proper as analogous to the transition from empirical acquaintance with the world to theory.

A technology was said earlier to be a planned, purposive, relatively complex, probably collaborative, structured sequence of praxes. The technologist knows which praxis to carry out when and under what objective conditions, what resources of material and energy it will require, and what its outcome within the technological context will be. All this has been learned from other technologists or practitioners. The question is, what sort of activity is it? It is my contention that technologies in this sense are just as much an evidence of human intellect as scientific theories are, and that our tendency to think of the latter as superior is just cultural prejudice arising out of the dominance of the verbal among the leisured classes. (We no longer, as a rule, have the religious excuse I earlier attributed to the Greeks.) It seems to me clear that technology, in this sense, represents a form of insight into the workings of the world, a form of practical understanding, that takes as much talent and application as the most rigorous theoretical calculation.

The most dramatic cases come from other cultures. I will cite only one, that of the Truk navigators of the Pacific. These islanders, working from traditional recipes which consist of a few lines drawn on a piece of bark, are capable of making landfalls within a half mile or so after voyages out of sight of land for hundreds of miles, in variable weather and at all seasons. They learn during a long training how to take in and process staggering quantities of information—not only winds and currents, the feel of the boat and of its rigging, and the appearance of the sky, but at any point the sequence of these things over the previous course of the voyage in relation to their value at that point.[11] (It is no argument against this achievement to say that migratory birds, sea turtles, and salmon perform feats just as staggering. In the latter cases it is a species-specific activity, not a culture-specific one. Also, the birds and turtles may be more intelligent than we think.) In our own culture, it is clear that the whole development of the plastic and performing arts is an example of this kind of thing. The point is that what is involved is a form of representation inside people's heads of the behavior of things in the world, a representation that is not descriptive but rather, we might say, operational. It is not just a question of manual dexterity, since there are well-attested cases—J. J. Thomson was one—of people who can see how the thing goes without being able to do it. The story is that Thomson's laboratory assistants refused to let him touch the apparatus, because whenever he did so it broke, but that they depended on him to show them what to do with it. It is a question of making our way about in a world whose physical and causal properties we know not only or even mainly by catching them in formulae, but
also by the daily practice of a form of learning embodied in a structure of behavior that has been accumulated and transmitted over many millennia.

Along lines like these I think we might hope eventually to come to some philosophical understanding of technology. One contrast I have tried to stress, and one that is essential if the whole subject is not to fall into confusion, is between the relatively unintelligent praxis of button-pushers and the true technology of engineers. Button-pushers have nothing interesting to do with technology, and their use or abuse of it, even if it has disastrous consequences, does not touch its essence. A moral question is thereby posed for engineers: the question for whom they are working. There are, after all, risks involved in offering buttons to be pushed. It was remarked earlier that people will not button buttons unless they want them buttoned, but they may be tempted to push buttons without having considered very carefully what they want, or whether there is anything they really want. The trouble is that the results of button pushing are out of proportion to the effort, and this is likely to produce also a disproportionality between decision and purpose, whereas in elementary praxes such as button buttoning these things are generally in a kind of natural balance.

Also, people may want the wrong thing. These are problems that technology poses, although they are not technological problems, and the philosophy of technology cannot by itself deal with them adequately. There is an overlapping of the philosophy of technology with the theory of value, as with the philosophy of purposive action in general; technology, indeed, might be represented as the systematic working out of the hypothetical imperative, putting into the consequent of the hypothetical anything we can lay our hands on—scientific theory if we have it, but other forms of knowledge and competence as well. So technology is not value free, but there is still a sense in which it is value neutral. The antecedent of the hypothetical remains to be filled in as we collectively prefer; nothing in technology itself compels us one way or the other.


On the Concept of a Domain of Praxis

In this paper I shall attempt (a) to clarify the concept of praxis, (b) to show how it can be treated homologously with the concept of theory, and (c) to define the concept of the domain of a given praxis and examine its relationship to the domain of its associated (or some other) theory. While the concept of praxis has a long history, beginning with Aristotle's distinction between doing, making, and observing (praxis, poiesis, and theoria), it has never attained to the rigour of other philosophical concepts (such as theory itself); its use in Marx, while clear enough for his purposes, does not rest on any extended analysis, and the one major treatment of the concept outside political philosophy (Kotarbinski's Praxiology) is disappointingly anecdotal. (In political philosophy the concept is either too specialized to form the basis of a general treatment or, as in the case of Mao Tse-tung, too simplified to lead to any interesting development on the analytic level.)

I begin by distinguishing two exhaustive although not entirely exclusive species of human behaviour, which I will call linguistic behaviour or simply language and nonlinguistic behaviour or simply behaviour. Under language I include speaking and writing and other nonspecified modes of signifying, under behaviour moving things about, constructing things, and so on, but also eating, sleeping and the rest. The nonexclusive character of this distinction arises partly from the ambiguity of the notion of "signifying" and partly from the fact that some forms of linguistic behaviour (especially what have come to be called "performatives") are also special cases of behaviour in the other sense. But this lack of precision in the categories is not serious. I shall wish to locate
theory in the category of language and praxis in the category of behaviour, but even in these more restricted cases there may be some overlap (cf. Marx: "the praxis of philosophy is theoretical"). By and large, however, the obvious senses of theory as doing a certain kind of thing with words and praxis as doing a certain kind of thing with nonlinguistic objects will be adequate.

To deal first with nonlinguistic behaviour: I start with this rather general class of episodes in the life of human beings in order to emphasize that the subclass of actions which I immediately introduce is only a small province of behaviour (characterized in the usual way as intentional, goal-directed, and the like). Praxis I take to constitute a subclass of actions, and to come in two varieties which I shall call theory-related and non-theory-related. In order for an action to belong to a praxis it must be an element of a coherent set of actions, ordered in some way (e.g., as having to be done one before another, or simultaneously, etc.) and collectively serving some end, such that one or another action or sequence of actions belonging to the set is performed according to the state of affairs that obtains, the result of the previous actions, and so on. The special case of medical practice offers a useful paradigm, and the concept of a practitioner as one who knows how and when to perform the actions in question also has a familiar use. But the ordinary extension of the term "practice" is at once too narrow and too wide, since on the one hand it includes many actions that are merely preparatory to praxis ("practising" on a musical instrument), while on the other it does not include a great many praxes properly so called.
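The definition just given—a coherent set of actions, ordered in some way, with the next action selected according to the state of affairs that obtains—can be sketched schematically. The following fragment is purely illustrative and forms no part of the analysis itself; all of its names are invented for the occasion. It models the humble button-buttoning praxis considered earlier: the praxis supplies a rule for choosing the next action from the current state, and becomes operative only when the world is in the appropriate state.

```python
# Illustrative sketch (not the author's formalism): a praxis as a rule
# selecting the next action from the current state of affairs.

def buttoning_praxis(state):
    """Given the state of a shirt's buttons (a list of booleans,
    True = buttoned), return the next action the praxis prescribes,
    or None if its preconditions are absent (nothing left to button)."""
    for i, buttoned in enumerate(state):
        if not buttoned:
            return ("button", i)  # the action ordered next by the praxis
    return None                   # the world is not in the requisite state

# The actions form an ordered sequence, each performed according to
# the result of the previous ones—and each really alters the world:
shirt = [False, False, True]
sequence = []
while (act := buttoning_praxis(shirt)) is not None:
    sequence.append(act)
    shirt[act[1]] = True          # the praxis changes the state it acts on
```

Nothing in the sketch is propositional: the rule asserts nothing and so cannot be refuted, only found inapplicable when the preconditions fail—which is the asymmetry between praxis and theory developed below.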

Most praxes are non-theory-related: daily complexes of action such as getting dressed and eating, climbing stairs, finding one's way about, playing games, performing various kinds of work, and so on. This does not mean that they are not capable of perfectly rigorous specification, although in most cases of this sort it would be otiose. Among those that are theory-related, however, the relation may take one or both of two forms: the praxis may have been determined by the theory (in which case it can be learned just like any other praxis by people who are ignorant of the theory) or it may be part of the determination of the theory, e.g., constitute its experimental basis. It is this last case that is of interest for the philosophy of science.

Now theory itself can be represented as bearing to the realm of language the same relation that praxis bears to the realm of behaviour, i.e., the two concepts can be seen as homologous to one another. Just as we distinguished a subclass of behaviour, namely action, of which praxis in turn was a subclass (understanding all three terms, of course, as classes of events or episodes), so we may distinguish a subclass of language, namely assertion, of which theory in turn will be a subclass.
A theory may or may not be related to a praxis; if it is not we may say that it is a pure theory, if it is we may say that it is an empirical or an applied one. Because language almost certainly developed in an empirical context it will not be surprising to find that pure theories are often refinements of empirical ones; thus pure mathematics grew out of practical arithmetic and geometry, although once the principles of pure theoretical construction had been grasped it became possible to develop pure theories without empirical origins. In the realm of pure theory we will find it natural to include a great deal of what is usually called "literature"; the defense of this inclusion does not come under the scope of the present paper, but the point of mentioning it is to acknowledge that theory construction is an imaginative activity and that it uses (under certain constraints of rigour and coherence, to be sure) the same linguistic medium as does literature. There is nothing exceptional or special about the linguistic resources of scientists, although their skills may have been developed along somewhat different lines.

The question I now want to raise concerns the relation between empirical theories and theory-related praxes. For this purpose it is necessary to clarify the notion of the "domain" of a theory or a praxis. The idea of the domain of a theory is familiar but unnecessary; that of the domain of a praxis is unfamiliar but necessary. Strictly speaking we need not distinguish the domain of one theory from that of another; indeed there is a sense in which the domain of any theory that aspires to be empirically true is the universe as a whole—there can be no competition between true assertions, and the fact that empirical theories refer directly or indirectly to observable facts or events, and contain the names of identifiable entities that enter into those facts or events, means that the selection of objects from the common domain is, as it were, built into the theory. Nevertheless for practical purposes (using "practical" here in its ordinary sense) we may speak of sets of entities—elementary particles, hereditary material in cell nuclei, people requiring psychiatric treatment—as constituting the domain of particle physics or genetics or psychopathology. These domains represent different cuts into the available material construed as the aggregate of objects of theoretical interest. The theoretical aspects of the object, however, bear to the totality of its aspects a proportional relation that varies according to the case: virtually everything that can be said about fundamental particles is of theoretical interest to particle physics, but some properties of the hereditary material in cell nuclei are of interest to other branches of biology or chemistry in addition to genetics, and very many things that might be said about psychopathic individuals fall entirely outside the scope of psychopathology.
It is clear that the attempt to make the notion of the domain of a theory precise raises a number of difficulties, only hinted at in the illustrative cases considered above, at any rate if a specification is sought in terms of objects recognized independently of the theories that refer to them. There is in other words nothing in nature that compels us to divide it up as we do between different sciences; to speak of animate and inanimate objects, of chemical or biological properties, of the physical sciences and the social sciences, is already to have imposed a set of theoretical distinctions. And although these distinctions have come to be built in to our view of the world we have to realize that there were not first electrons and then physics, but first physics and then electrons. This does not mean, however, as it is sometimes thought to do, the abandonment of the distinction between the observable and the theoretical, since if the notion of theory is not to be debased beyond all usefulness there must be allowed a level of ordinary naming—of people, dogs, trees, stars, and the like—not yet coloured by theory. Theoretical considerations enter only when things begin to be classified according to criteria other than those of obvious and immediate perceptual similarity, and when this happens the reference back of the theory to the world at once effects a segmentation into domains; indeed, one might plausibly regard theoretical activity, even of the most advanced kind, as a complicated sorting procedure designed to get everything into the right theoretical domain.

In the case of praxis the situation is quite different. For the object acted upon is not named, classified, or redefined by the action; it may be moved or altered, but this is something that really happens to it in "real time," not something that, like theory, can be carried on in the absence of the object or at a distance from it. Praxis is not referential, as theory is—it is rather, one might say, participatory. Thought experiments belong to theory, not to praxis. In the specification of a domain of praxis, then, we cannot resort to theoretical criteria except at the price of reducing the praxis to a mere appendage of the theory, which while satisfactory for the theoretician never reflects the true state of affairs even in theory-related praxes of the second kind discussed above. For it is the praxis that takes precedence: if something odd happens in an experiment, for example, it endangers the theory but not the praxis, for the simple reason that the praxis makes no claim, so that a change in it does not count as a refutation. The theory is obliged to take account of anomalies in the praxis, but not vice versa.

The domain of a praxis can and must be specified in terms of ordinary macroscopic objects that can be arranged, connected up, manipulated, stored, and the like. The contents of the domain will be those objects or kinds of object on which or by means of which the actions proper to
the praxis in question are performed, and the basis of the recognition of an object as belonging to the domain will be the "obvious and immediate perceptual similarity" referred to above. This does not mean that such recognition does not have to be learned (by initiation into the praxis) or that some of it may not be achieved by "labeling." Labeling is an important and neglected subject in itself, which the limitations of this paper do not permit me to develop; I will only remark that if some component of a primitive pharmacopeia, requiring a certain preparation, comes as a white powder labeled with an exotic name and is known to indigenous medical praxis as a cure for warts, this need involve no theory whatever. Beginning with flints and domestic animals, the human race has engaged since prehistoric times in a continuous interaction with objects and materials, using crude tools to make more precise ones, perfecting and transmitting increasingly specialized skills. Scientific praxis is to be considered as a recent refinement of this long-established activity. While some theory-related observations and experiments may still be made on domains of objects unaffected by technological developments (ecological and ethological studies, for example) scientific praxis has come more and more to involve the manipulation of relatively sophisticated instruments, so that the domain of the theory-related praxis of experimental particle physics tends to include vacuum pumps, high-energy generators, fluorescent surfaces, counters, and the like—it certainly does not include elementary particles.

This divergence of the domains of a theory and its related praxis raises a number of interesting questions. First of all, it is clear that the historical development of praxis, even in cases that we think of as theory-related, has been relatively independent of theory. It has often happened, in fact, that the stimulus for a certain line of experimental enquiry has come less from a crisis in the associated theory than from the availability of an improved apparatus or a new technique. The history of science as the history of its instruments is lent added importance by the reflection that, as suggested above, the praxis is never wrong—it may at most be inappropriate for a given theoretical purpose, and even if this is the case it may well be appropriate for another—whereas the development of theory is a history of abandoned errors. One reason for this asymmetry is, of course, that (as we suppose) the world does not change even though our theories about it do; the outcome of an experiment may be surprising but it cannot be mistaken. (We may understand it mistakenly.)

Secondly, however, if we assume that our theories are in some sense becoming more adequate to the world, the fact that a given theory is not the theory of the domain of its related praxis, but the theory of a more remote domain to which the domain of the praxis affords it an
entry, as it were, may be an indication of the theoretical profundity of the science in question. It is striking that a small advance in an empirical theory, e.g., again, genetics or particle physics, may now involve the common effort of very many different praxes. And this consilience of praxes may be as important as the consilience of hypotheses in the confirmation of theories. But further enquiry is clearly needed into the mode or framework of our knowledge of the world that is constituted by praxis, particularly in the specification of domains of elementary praxis. The concept of praxis has been confused by its too ready application to complex domains (such as political action) without the preliminary analysis that such enquiry might furnish.


Individual Praxis in Real Time

The philosophy of praxis, although it has seen a great deal of activity in the last century, and especially since the emphasis that was laid upon it by Marx, remains seriously defective as compared with what might be called the philosophy of theory—that is to say, the philosophical analysis and criticism of theoretical systems. Whereas in the latter case we can follow a more or less continuous ascent from the empirical point of contact of theories with the world in perception to the most abstract logical considerations, in the former a great gap separates the philosophy of basic or elementary actions (raising one's arm, for example) from the philosophy of historically significant political activity with which the concept of praxis has chiefly been associated. In this paper I wish to examine some of the reasons for this philosophical underdevelopment, point out some of its dangers, and recommend some lines of work that might be followed to help correct it.

Part of the difficulty lies in the theoretical character of philosophical praxis itself, which Marx pointed out in the notes to his doctoral dissertation.[1] As a self-reflective discipline philosophy offers an example of theoretical activity to its own examination; it has no such readily available model of praxis construed as distinct from theory. Marx indeed considered that the result of philosophical intervention in the practical life of the world would be an abandonment of philosophy itself: "The consequence, hence, is that the world's becoming philosophical is at the same time philosophy's becoming worldly, that its realization is at the same time its loss."[2] I have argued elsewhere[3] that we need not accept this conclusion, which has the force of depriving politics of any contribution from an unprejudiced philosophy, but it must be admitted that, on the other side of the question, philosophical inquiry may well be affected by the physical, social, or political context the description of complex practical action necessarily invokes.

Is there any way of dealing with praxis in general theoretically, apart from the ways in which the various sciences (physical, social or political) do so? What is the characteristic philosophical task with respect to praxis? One frequent strategy is to construe praxis as purposive action, so that its philosophical treatment becomes the formulation of ends and the criticism of the results of action in terms of its success or failure in reaching these ends—both standard propositional activities. (The bimodal distribution alluded to above may derive from our tendency to neglect the analysis of purposive activities that fall between those aimed at modest private goals on the one hand, and those aimed at global public goals on the other—between personal self-realization and universal peace or justice. Intermediate cases have by and large seemed banal, and few philosophers—with the notable exception of Dewey—have been inclined to take account of them.) Another strategy is to construe praxis as the following of theoretical instructions; this has the effect—especially in the case of the advanced and ramified complex of praxes known as technology—of prejudicing the issue by reading a propositional structure into it a priori.

The former of these two strategies has, it is true, etymological sanction, since prasso is "to pass through," and hence "in common usage, to achieve, bring about, effect, accomplish."[4] But if we retreat from these derivative meanings we may ask, what is it that passes through, and through what does it pass? It is worth noting an indirect connection with the notion of the empirical, the root "p-r" linking prasso through perao to peiro and peira, although the dependence probably goes in the opposite sense: for a passing-through to be a trial or a test, to constitute experience, it must first be passed through and then formulated as an experience. The challenge is to catch the praxis before it has thus been rendered propositional or theoretical. And the question arises whether there is anything that can properly be called praxis independently of some propositional anticipation or recollection.

If there were such a thing it could not of course be articulated propositionally without at the same time being compromised as nonpropositional. The role of discourse in dealing with it could only be external, a pointing to or a setting of something that could be shown but not expressed. Here we rejoin a familiar philosophical problematic whose two moments are on the one hand ineffability and on the other ostension. When we begin to speak about something the thing referred to remains unspoken: language is the expression of thoughts, not of things. Things as such are, strictly speaking, inexpressible, although they may be described, located, even analyzed in language. And praxis is irrevocably on the side of things rather than of words. In the case of praxis, however, we are not shut out as we are from things by a relation of mutual exteriority, because the thing chiefly involved in it is one that we inhabit, namely our own body.

Each of us has a private relation of interiority to his or her own body. There could not be a private language, because language is essentially social; by contrast there can in elementary cases be only a private praxis, because praxis is essentially individual. I cannot walk another person's walking any more than I can die another person's death—at best I can do some things for someone else, but it is I who do them. With language and thought the matter is different—there is a sense in which propositions, i.e., units of linguistic structure, are genuinely transmissible, in which we can think the same thought or utter the same sentence, and this is much stronger than the sense in which we can perform the same action. In cases where the action forms part of a social praxis (e.g., a game) so that it is unintelligible except in its "grammatical" context, the difference seems minimal—if I serve at tennis I am performing the same action as my partner and as thousands of other players; the crucial point, though, is that the action is not shared, it is only replicated. Watching another person perform an action is not like hearing her utter a sentence, since in the latter case I may be engaged as fully as she is in the propositional content of the sentence, whereas in the former I cannot be engaged at all in the intentional content of the action.

This assertion may seem to be contradicted by the obvious possibility of collective praxis, in simple cooperation or in more complex forms of technological collaboration. In these cases, however, individuals play an additive role. If many people join forces, they can lift heavier objects than any one of them can lift alone, but if many people join in the utterance of a proposition that does not make it any truer than if one of them uttered it alone. To the extent that speaking is also a form of praxis it may be additive in a practical sense (e.g., if many evangelists preach the same gospel to different audiences), but the contrast between the practical on the one hand and the propositional or theoretical on the other holds good. Hence my insistence on "individual praxis" in the title. The individual agent "passes through" his or her own praxis, that is, through bodily motions directed intentionally at practical ends. (Note that the intentionality of praxis is essential—contrast the expression "to go through the motions" as meaning inauthentic praxis.) Now I wish to maintain that praxis constitutes an immediate mode of our cognitive relation to the world, indeed the fundamental mode of that relation, and that it precedes the propositional formulation of the contents of cognition. Praxis answers to the properties of material things and the regularities of their behaviour, and these are represented in it in the sense that, given the conditions in question, they could be inferred from it (for example, if I wish to break a stick, the muscular exertion I bring to bear on it is a practical measure of its strength). We know a great deal of the world in this mode still, in spite of the advances of science and technology; children and primitives may know it chiefly in this mode.

The reason why praxis has remained philosophically underdeveloped follows from its double character as individual and nonpropositional, although these characteristics do not exclude the possibility of its philosophical treatment; they tend only to obscure it. It would be quite feasible to show, not only how elementary praxes come to incorporate knowledge of properties and regularities of familiar things and events, but also how participation in more complex praxes results in partial forms of knowledge. In the absence of investigations of this second kind lies a danger for theoretical understanding of complex social processes. The call for "critical-revolutionary praxis" that originated with Marx presupposed a theoretical understanding of social, political, and economic conditions. He seems to come close to the view expressed above when he says that "the senses have therefore become theoreticians immediately in their praxis,"[5] but this remark is preceded by the observation that "the eye has become a human eye, just as its object has become a social, human object derived from and for man"[6] —in other words, we already have to have a conception of the human before the senses can be, in his terms, emancipated. The danger, then, is that the practical form of political cognition achieved in revolutionary activity may seem to sustain a theoretical interpretation that is in fact presupposed by it, if independent philosophical reflection on the cognitive character of praxes both simple and complex is not brought to bear on the problem.

What form might this reflection take? It would itself require a practical basis, in that it would have to be preceded by actual experiences of praxis of a suitable kind to provide paradigmatic instances. In the case of elementary praxes this condition is automatically fulfilled, since our knowledge of the world rests in the first instance on our acquaintance with simple properties of material things encountered in practical dealings with it. The difficulty is that the learning of language makes it possible to proceed at a relatively early age to the acquisition of propositional knowledge, through instruction rather than experience. While this is an essential feature of acculturation and education, without which it would be impossible for individuals to reach the levels of theoretical understanding they habitually do reach in literate societies (and indeed without which it would have been impossible for knowledge to advance at all), it has the result that a great deal of knowledge of complex matters, even those that purport to represent practical aspects of social life, has no correlation to the lived states of affairs which constitute its object, but only to certain statistical or anecdotal features of them.

One of the reasons why the transition to propositional knowledge is advantageous is that, by the processes of instruction, complex synchronic propositional structures can be built and retained in the memory, and this can be done in such a way that from a relatively small number of propositions all the other elements of the structure may be deduced. Praxis, on the other hand, is essentially diachronic, and takes place furthermore in "real time," rather than in the condensed time of reading or the timelessness of the synchronic. Praxes do not form a deductive system, although there may be relations of complementarity or compatibility among them, depending on their intentional context. The utility of having passed through them lies not in the construction of a propositional system purporting to represent them, which as we have seen would be a contradiction in terms, but rather in the qualifications (or disqualifications) they may bring to bear on propositional systems theoretically elaborated on other grounds. It follows from what has been said that there may be room for something like an experimental method in philosophy, especially the philosophy of the social sciences, which would have the task both of limiting and correcting systems of theoretical assertion on the one hand, and of freeing them from undetected practical presuppositions on the other. This would involve a genuine division of labour, in that, while theoretical systems can in principle be mastered and shared by everyone, praxis can be passed through only individually and in real time, so that nobody could encompass more than a fraction of its varieties. Some way would therefore have to be found of making its findings cumulative, without the pretense that its contents had been shared. In this admittedly sketchy proposal lies a challenge to the philosophy of the next decades.


Towards a Philosophy of Technology

As the philosophy of technology develops, it takes its place among a number of disciplines, each known as the "philosophy of x"—including the philosophy of science, the philosophy of law, the philosophy of art, etc. The philosophy of x, whatever x may be, provides a way of particularizing philosophy in general (although one could think of philosophy in general as the philosophy of x, where x is everything there is).

Philosophy in general has a number of standard subdivisions, such as logic, epistemology, and ethics, and questions deriving from each of these subdivisions are likely to be posed about the x in question. So we might ask, what are the principles of articulation of discourse about x, or in the field of x? What criteria are there for the acceptance of assertions? What values govern the activities in the domain and the analysis of these activities? And what are the moral imports of such activities? But in some fields there comes to be a stress on one of these subdivisions of philosophy rather than on another: for example, the philosophy of science has been almost exclusively the logic and epistemology of science. Courses in the ethics of science are not taught as part of the philosophy of science as it has traditionally been conceived, although we have recently become highly aware of the ethical implications of science. Similarly, the philosophy of art has, by and large, dealt with values of one sort or another, but not much with logic or epistemology.

Thus when there is a new instantiation of the expression "the philosophy of x"—when, for example, the philosophy of technology begins to take shape, and to be recognized by departments of philosophy as a reputable part of the field—we have to be on our guard against the ambiguities that may be generated by the tendency to stress unequally the different parts of philosophy. The philosophy of technology might be one of two quite different things: it might deal with value-laden questions about industrial alienation, urban squalor, pollution, one-dimensional humanity, moral decline, and other supposed undesirable consequences of technology; or it might deal with analytic questions about people and machines and the relations between them, algorithmic computability, the relation of collective means to individual ends, and the dialectics of theory and practice.

A present danger is too narrow an assumption of the direction or form that the philosophy of technology ought to take. It is evident that the philosophy of technology, in the minds of many people, consists of questions about values which are challenged, modified, or denied by the advance of science and technology. This may obscure both sets of issues—those that concern values and those that concern technology. No doubt some questions of value, which have arisen in contemporary society, are responsible for drawing technology to popular attention, but this attention is often, as it turns out, accompanied by some ignorance about technology and its sister discipline, science, as well as about the relations between them. Also, focusing on the supposed origins in technology of the crisis in values may lead to a neglect of developments in social and political philosophy which might be capable of dealing with the value issues directly, independently of the question of what it was that precipitated the crisis—which may after all have been not technology but something else that technology made possible.

Four Misunderstandings

Before discussing what direction a philosophy of technology should seek to follow, it is important to deal with a set of misunderstandings called here the industrial, handmaiden, moral, and juggernaut views of technology. The first of these identifies technology with industry, and with what has happened since the industrial revolution; the second holds that technology is to be understood as the application of science to practical problems; the third, already alluded to, contends that the principal philosophical questions about technology are questions of value; and the fourth sees technology as an impersonal and autonomous force into whose clutches the world has fallen.


Technology and Industrialization

If one chooses to understand by "technology" exploitative and large-scale industrialization on the capitalist model without regard to humanity, nature, or future generations, then of course the pressing questions would be value questions and the concern is reasonable. But this seems to me a rather uninteresting use of the term, and the concept I have just sketched is more perspicuously defined by calling it exploitative and large-scale industrialization on the capitalist model without regard for humanity, nature, or future generations. One is bound, I suppose, to have some sympathy for people into whose awareness technology has come by that route. A case in point is the editor of a recent book called The Sciences, the Humanities, and the Technological Threat,[1] for whom, given the use of the definite article in the last part of his title, technology is obviously a rather fearsome thing. (In fact what the book seems to fear is less technology itself than lapses in the conventional moral order which reached its height in Victorian England.)

A more straightforward and less rhetorical interpretation of the term would rely more heavily on its linguistic origins: the logos of the techne, or the logos in the techne, these terms having their usual meanings in Greek—"word" or "reason" and "art" or "skill." In the Iliad, techne is used to mean "shipbuilding"; in the Odyssey, it is used to mean "metalworking." In fact, the term belongs to a cluster of terms, the interrelations among which are worth attention: theoria, praxis, poiesis, on the one hand, techne and episteme on the other; a set of activities and a pair of acquirements.

The term theoria is of special interest, deriving as it does from the verb theorein, "to observe," which denoted the activity of the theoros, the official observer sent to the games or to the consultation of oracles; this last allusion yields the primary meaning, since theoros is derived from theos, "God," and ora, "care." It is not wholly inappropriate, given the value we correctly attach to theoretical knowledge, that our name for it should evoke, however indirectly, a concern for the divine. This remark has no tincture of religion—it alludes rather to an immediate relation between vulnerable humans and the physical world, by turns beautiful and awesome, that is still preserved in the spirit of modern science.

Theoria is often contrasted to praxis, rather as spectators of a sport are contrasted with participants in it. Praxis is a doing of something, the carrying out of some practical strategy. Poiesis, on the other hand, is a making. There is a close relation between poiesis and techne, since poiesis also was originally used to mean the making of ships (and perfumes). Later on, of course, it came to mean making things with words, i.e., the activity we now call poetic.

More significant for us, however, is the opposition between episteme and techne. Episteme, translated into its Latin equivalent scientia, gives us our word science; techne, standing for the kind of knowledge involved in art or skill, is something people have in their hands rather than, as in the case of episteme, in their heads. Sometimes these two terms are themselves related; for example, in Herodotus the expression epistesthai ten techne occurs, meaning "to know one's craft," that is, to know it not casually but in the special way that episteme represents. All these interconnections seem to me provocative; one should not place more weight than is justified on etymology, but language is a rich source of suggestion in the clarification of ideas. The full-fledged term technologeo occurs in Aristotle, where it means bringing something under the rules of art, systematizing those rules; here we have not just the skill, nor even the special or precise knowledge of the skill, but the possibility of articulating and formulating what is done when the skill is employed. It is worth maintaining this set of classical connections and connotations in the contemporary use of the word "technology," if it is not too late to rescue it.

Technology as Handmaiden

The second misunderstanding with which we must deal is the identification of technology entirely with the application of science to practical problems. Technology and science, of course, are closely related. But it is possible to think of an independent history of technology, one which frequently anticipates scientific results. We know that things work without knowing how they work.

The notions of "knowing that" and "knowing how" correspond nicely to the terms episteme and techne, respectively, although there is a crossing-over of sense: to say that one knows something works without knowing how is equivalent to an assertion that one knows how to use a certain device or information without having the "knowledge that," which corresponds to the principles of its operation. This has happened in many cases, a cogent example being that of electricity. While hypotheses existed (e.g., Ohm's law), the theoretical understanding of electricity was achieved only after its technology had reached the commercial stage. In fact, the ordinary language of electricity—terms like "current" and "condenser"—indicates a belief about its nature (that it was a fluid) that was discredited only after electricity had become a familiar feature of practical life.

We are, of course, able to produce results in the world by manipulating things without any theoretical understanding at all, and many of the fruits of human ingenuity have come about just this way. This is frequently true even of scientific research: Faraday, realizing that wires carrying electric current moved in magnetic fields, spent a great deal of time looking for the conditions under which moving a wire in a magnetic field would produce a current; he did this by trial and error, and without any clue as to the theoretical relationships between electricity and magnetism. It was a question rather of feeling something in the world, or seeing into it directly, than of understanding.

The concept of "knowing" is to be taken seriously even in the context of "knowing how," and not only in the context of "knowing that," since knowing how really counts as knowledge of the world through a device or a muscular movement; it is a mode of acquaintance leading to mastery even if it does not include analytic understanding. There is a passage in Hume which might be used to reinforce this point if read in a slightly unorthodox way. Hume says, in the course of his criticism of the principle of causality, "My practice, you say, refutes my doubts. But you mistake the purport of my question. As an agent I am quite satisfied in the point, but as a philosopher who has some share of curiosity, I will not say scepticism, I want to learn the foundation of this inference."[2] As an agent I am quite satisfied in the point: Hume has no practical doubt about the reliability of causal relations; he is not really afraid that the sun will not rise tomorrow, but he understands that, if we make claims to knowledge formulated in the mode of "knowing that," we run up against insuperable obstacles. It seems to me worth looking at the matter afresh, from the agent's point of view, and asking whether there might not be a wisdom of the agent, independent of the philosophical problems that confront the traditional epistemologist. Knowing how need not follow from or depend on knowing that—or, to put it in the terms of the present discussion, technology need not follow from or depend on science. If it is true that much modern technology would not have come into being without modern science, it is equally true that most modern science would not have come into being without modern technology, and there has been some attempt (particularly in the Soviet Union) to approach the history of science, for a change, in terms of the history of its instruments rather than in terms of the history of its theories.

Technology as a Moral Issue

Now let us turn to the third misunderstanding, namely, that the philosophical problems of technology are essentially questions of value. There is, to be sure, a crisis of value, but to attribute this to technology is to conceal the ethical issues, in a narrower sense, that may be involved. The strength of our values is mediated by the magnitude of the strains to which they are subject, and it is probably a delusion to think that accepted values are commonly stronger than required to meet stresses usually encountered. The technological situation may be disastrous for people whose values are not exceptionally strong: put devices into their hands that enable them, by pressing buttons, to eliminate their enemies, and they will be more likely actually to do that than if it had required elaborate actions.

It is likely that our technology will put us in a situation where our values are strained, but the fact that the values break down should not signify that guilt attaches to technology; the fact that our values came into being at a time when such strains were not present means that they now need strengthening, not that technology is vicious. It is true that technology has made it possible for people to be foolish and evil in ways not previously possible; still, it does not seem reasonable that all this should be laid to technology's charge.

Technology has not always had the negative connotations, even in the domain of values, that it has recently acquired. In the early days, when machines were made of brass and were attended by enthusiasts who kept them polished, there were many people who found immense aesthetic satisfaction in the structure and power of technological devices. There has been a recent contribution to this tradition on the part of a contemporary writer, Robert Pirsig, in his book Zen and the Art of Motorcycle Maintenance,[3] in which he shows in a convincing and often moving way how the technical precision of machines is a thing of beauty and also of great human potentiality.

This may seem to be an argument for the value-neutrality of technology, and to some extent, this is so. Such a position may be viewed as naïve on the grounds that technology is part of a complex social structure, that we cannot escape involvement with it, and that every action has moral and political implications. But the whole point of philosophy as an analytic device is to neutralize that kind of consideration in order to be able to reinsert the concepts concerned, in a clearer form, into a more complex argument later on. Philosophy does not take a position in questions of value; if it did, it would become a form of moralizing or of the arbitration of taste. Similarly, it does not take a position on what is true of the world; if it did, it would become a form of science. Instead, what philosophy tries to do is to clarify the definitions of the terms in question and the conditions of their application, the meanings of the relevant propositions and the conditions of their acceptance. It is true that philosophy may establish its criteria in such a way that, of the various alternatives offered to us, one and only one is selected while the others are rejected, but this does not mean that philosophy has itself taken a position—it has simply enabled us to take a position more intelligently than otherwise. The position chosen might then be said to have a philosophical warrant, but it would not, for all that, have become a part of philosophy.

Technology as Juggernaut

In order to deal with the juggernaut view, it is necessary first to argue that, to some extent at least, technology, like science, is a distributive phenomenon. There would be no science if it were not for the heads of the scientists who know it; technology is similar in that it requires an army of humans to sustain it. The situations of technology and science differ in that while the head of a dead scientist ceases to be useful, the machine of a dead technologist may continue to function; technology is therefore less directly reducible to the activities of the involved people. Further, a technologist may feel a certain loyalty to the technological system he or she has helped create or helps to run. It is tempting, therefore, to regard the development of technology as autonomous, and to assume that if something is possible it will in fact be realized and exploited, to feel that as individuals we are powerless in the face of the technological system.

But the juggernaut view underestimates the tenacious role of the individual and the time scale of development. There is a long run ahead, and voices like Schumacher's are raised to insist that the tendency to bigness and uncontrolled automatism needs to be reversed.[4] The SST decision in the Congress may have been a turning-point back towards a more rational system of control. The only way to reverse the trend towards unthinking and deleterious developments is by a massive campaign of education, but it is very important not to inhibit this by counsels of desperation, by a readiness to believe that the whole thing is already out of hand and that technology represents an unconquerable force for evil in the modern world. There is no serious reason for accepting any of these propositions.

A New Direction: Words and Things

How should we regard the philosophy of technology, and what direction might it reasonably take? The most promising approach would appear to be to work out the opposition between episteme and techne (which, when episteme is translated into Latin and techne is made articulate, transforms into the opposition between science and technology) in a much more radical way than has yet been done—to follow it down to the opposition between two ways human beings have of dealing with the world, one in terms of speech and of words, the other in terms of actions and of things. One thinks again of Hume, satisfied as an agent, dissatisfied as a philosopher. There is another classical source in Spinoza, who, in his essay On the Improvement of the Understanding, proposed, in a remarkable though casual way, the opposition I now wish to exploit. (Spinoza is the most technological of modern philosophers, with his lens-grinding, dependent as it was on the technology of glass manufacture and casting, and of geometrical measurement, and abrasives and polishes.) He argued that somebody might claim the need for a method of discovering the truth, and the need for a method to discover this method, and so on into an infinite regress. He said,

The matter stands on the same footing as the making of material tools, which might be argued about in a similar way. For, in order to work iron, a hammer is needed, and the hammer cannot be forthcoming unless it has been made; but, in order to make it, there was need of another hammer and other tools, and so on to infinity. We might thus vainly endeavor to prove that men have no power of working iron. But as men at first made use of the instruments supplied by nature to accomplish very easy pieces of workmanship, laboriously and imperfectly, and then, when these were finished, wrought other things more difficult with less labor and greater perfection; and so gradually mounted from the simplest operations to the making of tools, and from the making of tools to the making of more complex tools, and fresh feats of workmanship, till they arrived at making, with small expenditure of labor, the vast number of complicated mechanisms which they now possess. So, in like manner, the intellect, by its native strength, makes for itself intellectual instruments, whereby it acquires strength for performing other intellectual operations, and from these operations gets again fresh instruments, or the power of pushing its investigation further, and thus gradually proceeds till it reaches the summit of wisdom.[5]

The point of citing this passage has nothing to do with reaching the summit of wisdom: it is to draw attention to the quite natural way in which Spinoza suggests that there has been an autonomous development of the technological, a mode of dealing with the complexity of the world through tools and ever more complex devices, owing nothing to our theoretical understanding. This would be a way of becoming adequate to our environment or, even more importantly, of becoming adequate to ourselves in that environment, reckoning with matters of life and death in a way quite different from the one made possible by intellectual instruments.


On the other side of this practical relationship to the world, philosophy has some extraordinary gaps, which the philosophy of technology might have the ambition to fill. There is a great gap between basic action theory on the one hand (what happens when I raise my arm, whether in so doing I do one thing or many, etc.) and questions of political praxis on the other (how political action is best organized, how it leads to revolution, etc.). The former does not rise to any great level of complexity; the latter begins at a level so complex as to defy lucid analysis. We need to pay attention to the intermediate range, to apparently banal forms of everyday behavior such as buttoning buttons, writing, and manipulating scientific apparatus, which are found everywhere but have been the object of very little philosophical scrutiny. These activities on the side of "knowing how" are surely just as significant for human society as activities of language in a similar range, to which philosophers have paid a great deal of attention.

The parallel between these two sides has some philosophical history of its own. In 1897 Espinas published a book called Les origines de la technologie, in which he matched a hierarchy of concepts on the practical side to another on the theoretical side:



[Espinas's table matching the two hierarchies has not survived in this transcription; only the first entry on its practical side, "Art or technique," remains.]
He also anticipated certain present concerns in speaking of "the pessimistic aspect of this philosophy: the powerlessness of man." This, however, was countered by its "optimistic aspect: the Arts, gifts of the gods."[6]

On the practical-technological side we begin with a philosophy of praxis, which will become a philosophy of technology only when a certain complexity has been reached. What has been missing in recent attempts to define a philosophy of technology is just this basis in praxis, and the development of the intermediate region which separates the simple from the complex. The analogue of sensation may be just as rudimentary as sensation itself—muscular effort, elementary motions, and the like—but the analogue of perception (which, after all, involves discrimination between relatively complicated aspects of the world) is to be found in much more pointed and refined praxes which may presuppose a considerable degree of learning and which represent a sophisticated mode of knowing a highly organized world. In other words, we need a serious and developed praxiology as a propaedeutic to the philosophy of technology, something more adequate than the rather anecdotal work of Kotarbinski.[7] Praxis stands to technology as elements do to systems, technology being a consciously articulated sequence or bundle of praxes designed to reach a consciously apprehended end on a relatively large scale. This distinction between praxis and technology seems to be a useful and necessary one, as contrasted with the tendency, for example, on the part of Ellul, to lump everything together under the term "technique."

If we now associate the theoria-episteme side of the opposition with words, and the praxis-techne side with things, the obvious suggestion presents itself that there might be a philosophy of things; the analysis of the articulation of things in the world seems just as worth pursuing as the analysis of the articulation of words. There are some hints of this too in the literature: Lévi-Strauss talks in The Savage Mind of a "science of the concrete," which he attributes to primitive peoples, meaning by this a knowledge of the material resources of their world which is directly expressed in their relation to things and only secondarily in their language.[8] The science of the concrete is a science of natural things—plants, animals, and the like—but we have a similar relation to artifacts, with which, indeed, our lives are surrounded.

The artifacts that populate our lives express the human mastery of the world as much as, if not more than, propositions do. We might even be tempted to say that, if we had adequate artifacts, we could do without the propositions. Elsewhere I have even suggested that the propositional content of science, namely, scientific theory, may be historically anomalous, and that its chief function may be to make itself unnecessary by incorporating itself into devices.[9] It is perfectly true that if we have the appropriate devices—from change-making machines and automatic cameras to computerized mass spectrographs—we can get results quickly and unthinkingly that a more theory-dependent praxis would have had to labor over for a long time; and it is a sobering thought that if the device had been produced magically (or naturally, which amounts to the same thing) it would have worked exactly as it does when produced to theoretical specifications.

In a way this line of thought, too, is part of a long tradition: the Encyclopedists of the eighteenth century attached great importance to the plates which accompanied articles about technical or industrial procedures, such as glass making, weaving, and the like. The plates were often divided into an upper and lower portion, the upper a realistic representation of the process in question, the lower a set of engravings of the various parts of the machinery, broken out and separated like the grammatical elements of a proposition. Such illustrations provided independent access to the knowledge the various articles were meant to communicate, and enabled the reader to see into the structure of the world, and to grasp directly the spatial and temporal and causal relations involved in a nonpropositional, nonverbal way. A more recent contribution to this general domain is Baudrillard's book Le Système des objets, which in a roughly structuralist-Marxist way analyzes the system of representation that is constituted by objects in the daily world—the furniture and ornamentation of houses, the apparatus of commerce and transportation and industry.[10] Things as they enter into the world—as they are produced, consumed, arranged—constitute an intelligible system which has some affinities to the linguistic system but is not in fact a system of language.

As we develop this parallel between words and things, it is to be remembered that the things are no longer merely the referents of the words, but entities in their own right, forming their own systems of articulation and expressing the fruits of human knowledge acquired practically. Of course, to maintain a rigorous separation of the two sides would be quite artificial—the progress of either side (towards science or towards technology) would have been impossible without the other. But in a certain respect activities traditionally supposed to lie on the side of words and of scientific theory are being taken over by things and technological practice. Earlier in this paper an association was noted between production and prediction; the latter at least has usually been taken to be a theoretical activity, but it is now more and more frequently the case that, instead of some conjectural state of affairs being worked out theoretically, the task is entrusted to computer simulation. This is a way of producing new knowledge as the end product of a process involving things rather than words, without any cognitive monitoring of the intermediate steps. It may be argued that computers in fact do make use of languages, and it cannot be denied that their parts are arranged according to the logic of a "language." But what puts the simulation process on the technological rather than the scientific side is the fact that there is no theoros, no spectator intelligence to attend and guide the process; and there is also a sense, as I have argued elsewhere, in which the logic of our language is a necessary reflection of a prior logic of things.[11]

Values Independent of Technology

I return to the concept of value, to emphasize values held independently of technology, which it might or might not serve. The notion of independence is important, and controversial, as a brief reference to some recent arguments will show. Heidegger sees in technology the spirit of the modern age and links it with modern metaphysics:[12] for him this is not a compliment, since he thinks that metaphysics since Plato has been utilitarian and that indeed Plato represents the wrong turn in the history of philosophy. If technology is an instrumentality, so on this account is philosophy (and, Heidegger would say, just as great a threat to our values). I would prefer to say not that technology expresses our collective purpose but that, unlike philosophy, it is a means for the practical realization of this purpose.

Marcuse, on the other hand, regards technology as part of ideology, and hence believes it to have a repressive force in the contemporary world:

In this universe, technology provides the great rationalization of the unfreedom of man and demonstrates the "technical" impossibility of being autonomous, of determining one's own life. For this unfreedom appears neither as irrational nor as political but rather as submission to the technical apparatus which enlarges the comforts of life and increases the productivity of labor. Technological rationality thus protects rather than cancels the legitimacy of domination and the instrumentalist horizon of reason opens on a rationally totalitarian society.[13]

The real problem here seems much more one of technocracy than of technology; only if social, political, and economic structures are allowed to create an irreversible dependence on technology for the very survival of the individual, or if through education and propaganda individuals are conditioned to see the character of any acceptable survival in technology-dependent terms, is the trap Marcuse describes effective. Technocracy is not so much a problem for the philosophy of technology as for political philosophy in its more familiar sense. The challenge is to incorporate the technological into the pattern of our social and political organization without allowing it to determine the essential features of that pattern. But even to speak of allowing it to do this, or not allowing it, is to fall into one of the errors I initially deplored, that of regarding technology as an impersonal force which has to be confronted as something alien.

Technology is in fact a human product; so is political organization. We need to master each conceptually before we can safely deal with their interaction. If a great many unthoughtful people, who have not sufficiently meditated upon their own political organization or the ends they wish it to serve, are offered technological solutions, they are likely to accept them, wherever they are available, because they represent the easiest temporal solution. If, likewise, they have not meditated upon the technology, it will not be surprising if disaster and recriminations follow. If, of course, we wanted to claim that we were, individually or collectively, simply not up to coping with the advantages of technology, we might wish to adopt a Luddite view, but this seems a cowardly attitude. What we must do, rather, is to work on the articulation of the social and political values we wish to realize, so as to be able to adopt technological solutions intelligently without giving up other values we wish to preserve.

To repeat a point made earlier: If technology runs away with society, this is a condemnation of society and not of technology. There is a kind of futility in the attack on technology, rather like kicking a car after one has backed it into a lamppost. An old proverb, suitably generalized, is applicable in the present context: "A bad workman always blames his tools." It is true that in this generalized case the workman is collective, and that we very often have a feeling of individual helplessness in the face of technology—it appears to be getting away from us, if not from society as a whole. But the same thing is true of language; we do not have individual control of the means of expression. We might say that we are at the mercy of language; and yet we can use the available language to express what we want to say, just as we can use the technology there is to do what we want to do. If larger-scale uses take deleterious turns, as they occasionally do—in industry on the side of things or in literature on the side of words—remedies in action become, as one might expect, political.

Habermas contrasted institutional frameworks characterized by symbolic interaction, which he takes to be the mode under which we are to be emancipated from inherited structures, with autonomous systems of purposive rational action, such as armies, commercial and industrial firms, universities, and the like. Under the former we can confront one another openly and take account of similarities and differences of ends or concerns, and this mode has to be made dominant over the latter if we are not to be a prey to impersonal economic structures rooted in class inequalities.[14] Again, however, it is not necessary to follow him in identifying technology as one of these autonomous structures, "autonomous" signifying here self-interestedly concerned and disdaining a more general law. It is much more likely that we will find the deleterious principles in the economics or politics than in the technology. The demiurge of the modern world is after all the collectivity of human beings—individual human beings, not some abstract representative Man—a distributive collectivity of people like us (for if the event transpires it will be one of us, not some abstract force, who presses the fateful button). Not that it is impossible for the juggernaut to come into being; indeed, it may already be grinding its way towards the end of the world, in which case nothing any of us says can make any difference. But this is not a necessary consequence of technology if we come to terms with it in full understanding. It would take a considerable degree of concerted malice, of which, it is true, the great powers show themselves capable, to encompass the technological destruction of civilization.

Technology offers, for the first time in history, the possibility of an almost complete command over the natural world. One is reminded of the passage in Hegel's Phenomenology of Mind in which he discusses the relations between the Master and the Slave: in the end it is the slave who comes to dominate, because he is the one who deals with the world practically and not theoretically; he serves the master with his hands, but in so doing conquers the world by work, and thus achieves his own emancipation not only from the master but also from many of the limitations of the world itself.[15] Marx, following Hegel, was one of the first to perceive the relations between the social structures we unconsciously perpetuate and the economic and technological forces which transform those structures, and he has been called by Axelos the "philosopher of technology."[16] Since technology offers this command of the world, it is up to us to formulate with due care the moral and social imperatives that are to be satisfied in achieving it. But technology should not be identified with this task; its philosophical problems are its own.


Scientific Theory as an Historical Anomaly

Up to a point "science" and "scientific theory" can be taken as roughly synonymous, and "scientific discovery" as the discovery of theoretical truths. These are selected from among many propositions that might generally speaking be called "theoretical," including speculative or even fictional ones. The propositions that science rejects, while they may not have the virtue of truth, have other virtues, perhaps, which will lead to their selection for other purposes—the enlargement of the imagination, the encouragement of kindness, and so on. Now the ordinary linguistic practice of human beings will presumably always include the production of propositions of this sort, but it is reasonable to ask whether it will always include the production of truths of scientific theory. A great deal of everyday discourse incorporates propositions that would once have been considered "scientific" but now express commonplaces rather than discoveries, and that will no doubt continue to be true. But what will the future development of scientific theory as such, the special province of a limited community of scientists, be like?

Science, in its practical aspect—in which it has been defined by Gilles-Gaston Granger as "the construction of effective models of phenomena" (emphasis added)—contributes not only or even mainly to our understanding of the world but also, and in the long run more significantly, to our control of it. The origins of science lie in various forms of practical knowledge (agriculture, the domestication of animals, pottery, metallurgy, navigation, land surveying, etc.) acquired in prehistoric times as solutions to problems of necessity—not that the necessity need have been perceived as such by their first practitioners, rather that we can see them as having been necessary to the perpetuation of the race and the emergence of civilization. While more purely theoretical interests soon developed (e.g., for religious and philosophical purposes), this practical side has never been absent in any period of scientific advance.

Scientific theory, in other words, has been one of the means by which human responses to the environment have been rendered more adequate to the challenges it poses—threats to survival, or obstacles to progress towards desired goals. I shall describe this process of winning functional control over the environment on the part of a living (in this case human) organism as the adequation of the organism to the environment. I use this term rather than, say, adaptation because the latter might describe an adjustment in which there were no goals except survival itself and in which the organism, while well adapted to normal conditions, could not cope individually or collectively with extraordinary changes. Adaptation occurs at all levels, and can be seen wherever species have stabilized themselves, whether they are viruses or people. Adequation on the other hand has overtones which suggest that a species which had merely adapted might be missing all kinds of opportunities presented by features of its environment to which it was not equipped to respond, might as it were not be doing justice to its environment. This possibility of "higher" forms of interaction is what allows evolution to proceed even when perfectly adapted species are already established. (Whether the human organism is yet adequate to its environment in these senses remains to be seen. All sorts of things may be going on around us that we will eventually be able to exploit but of which we have as yet no inkling.)

If we examine the problem of relative adequation for organisms in general—what it is that makes one organism more adequate to its environment than another—we see that it rests on adaptation and that it is possible to distinguish a number of levels. I shall discuss these in terms of types of responses that the organism may make to changes in its environment. (The notion of "response" can easily be extended at the appropriate point to any action taken for any end—the stimulus need not be positive, i.e., a challenge from the environment, but may be negative, i.e., a deficiency in it with respect to some desire.) At the most rudimentary level survival requires a set of innate responses which are functions of the structure of the organism and do not involve memory. These include the various reflexes as well as homeostatic controls, e.g., of body temperature. In organisms with a memory, learned responses may come into play; the organism's reaction to new changes is a function of its structure plus its previous experience of similar changes. Most animals do not rise above this stage. In higher animals, when, in addition to individual memory, interaction between individuals is possible, a third category of what I shall call taught responses is added; the organism is dependent for its survival on a social relationship, with a parent or a herd, which lasts through infancy and thanks to which it acquires responses—by imitation, training, etc.—which it would have been incapable of acquiring alone merely through experience of the environment.

The impulse to behave socially and to educate the young may itself be innate in developed species, so this new level does not require any degree of awareness of the conditioning process on the part of the organism. But it does make possible a form of cultural evolution, since for the first time the transmission of the response in question or of the conditions for acquiring it is not genetic but is externalized in the group. As long as some members who have the skill in question survive, and can instruct or be imitated, the behavior will be passed on from one generation to the next, even though if the continuity were broken its revival would be problematic. (Since such a response would have been invented at some earlier stage it could always, in principle, be invented again, but the conditions under which its invention occurred might not be reproduced; if accidental, their reproduction would be very unlikely.) Learned and taught responses may be either direct or mediated —that is to say either addressed immediately to the stimulus condition, or designed to set in motion a causal sequence that will cope with it. The latter, again, need not be conscious—the organism may not know why it does what it does, even though this may be for the sake of some quite remote effect—and the direct and unconsciously mediated types of learned and taught responses, once the learning and teaching have been achieved, become indistinguishable from innate responses in that they follow immediately upon what triggers them (cf. the concepts of second nature, habit formation, etc.). There is however a kind of mediated response in which alternatives are weighed and possibilities projected before the final choice of a course of action, and this can only, if the terms involved have their usual meanings, be a conscious process. 
What intervenes in this new case between the problem and its solution, or the desire and its satisfaction, is something of the nature of deliberation or inference, and to this intervening process I shall give the generic name "calculation."

Calculated responses, then, are those which are deferred while calculation proceeds: the organism's reaction now is a function of its structure, experience, and education, plus whatever algorithmic resources may be at its disposal. (I use "algorithm" here in its wider sense as "the art of calculating with any system of notation.") The employment of these resources will be conscious, but it still may not be intelligent, in the sense that the organism may not know why the calculation is effective—it may be a method that has been learned uncomprehendingly. (In this case a calculated response would be just a rather complicated kind of taught response.)

Now, for every mediated response the question can be raised as to how we know that the action will produce its desired effect. In the case of taught responses we just know it, by experience. But for calculated responses—the most obvious cases of which are precisely the human use of mathematics and science—somebody, even if not the person who actually carries out the calculation, will ordinarily be in a position to explain why the action it leads to produces the effects it does, by appealing to a theory that entails them; it is no longer a question of expecting results, but of predicting consequences. The calculation exemplifies some part of the theory, and the theory as a whole provides an effective model of the phenomena in the context of which the response is appropriate. Scientific theory, then, must be counted among the algorithmic resources referred to above; the characteristic mark of a scientific age is its use of calculated responses rather than merely innate or learned or taught ones, and the most developed form of calculated response involves grasping the elements of the situation to be dealt with under the terms of a theory, and by various techniques of measurement, computation, the devising of hypotheses and the testing of their consequences, etc., arriving at the practical specifications of relevant action. The history of science is the history of the origin and development of such algorithmic resources and their associated concepts, vocabulary, and techniques.

I said above that a calculated response might be taught but not understood, i.e., that we might profit from a theory without ever coming to know it. I wish now to raise the related question, whether it might ever happen that we came to know a theory and then forgot it, retaining however the responses that it made possible. It seems clear that this does happen: we work out a method or a strategy on theoretical grounds, but thereafter it becomes habitual: its theoretical justification might at some later time require considerable effort, in the limit involving the relearning of the theory. Examples of this kind of thing might be found in medical diagnosis or therapy, in engineering practice, etc. I make no comment on the lapse of professional standards involved in forgetting the theoretical justification; my point is only to indicate the possibility, initially on the ontogenetic rather than the phylogenetic level, of the emergence and subsequent disappearance of a theory, which however might leave behind a trace in the form of changed behavior. One might say, to put it in the most general terms, that the organism had been obliged to reach up into consciousness to learn a certain


adequacy to its environment, but that once this had been learned it could afford to lapse into unconsciousness again.

This association of the theoretical with the conscious is fully warranted by the root meaning of the term "theory" as suggesting spectatorship. But it has been suggested, by Schrödinger, that there may also be an essential link between consciousness and novelty, as in the emergence of changed modes of behavior:

Any succession of events in which we take part with sensations, perceptions and possibly with actions gradually drops out of the domain of consciousness when the same string of events repeats itself in the same way very often. But it is immediately shot up into the conscious region, if at such a repetition either the occasion or the environmental conditions met with on its pursuit differ from what they were on all the previous incidences. . . . The gradual fading from consciousness is of outstanding importance to the entire structure of our mental life, which is wholly based on the process of acquiring practice by repetition. . . . Now this whole state of affairs, so well known from the ontogeny of our mental life, seems to me to shed light on the phylogeny of unconscious nervous processes, as in the heart beat, the peristalsis of the bowels, etc. Faced with nearly constant or regularly changing situations, they are very well and reliably practised and have, therefore, long ago dropped from the sphere of consciousness. Here too we find intermediate grades, for example, breathing, that usually goes on inadvertently, but may on account of differentials in the situation, say in smoky air or in an attack of asthma, become modified and conscious. . . . What material events are associated with, or accompanied by, consciousness, what not? The answer that I suggest is as follows: what in the preceding we have said and shown to be a property of nervous processes is a property of organic processes in general, namely, to be associated with consciousness inasmuch as they are new. . . . I would summarize my general hypothesis thus: consciousness is associated with the learning of the living substance: its knowing how is unconscious.[1]

If this is plausible—if it is the standard pattern of evolution to incorporate the consciously learned into the unconsciously performed—then one might expect to find theory, the quintessential form of conscious understanding, active mainly at the frontier of science and technology. The question now becomes in what form the theory leaves its traces behind it when, with respect to some domain, theoretical activity slows down or ceases.

The practical effect of scientific development is as much a change in the environment itself as a change in our ability to understand it or cope with it (indeed human evolution, in the sense in which the term is used in the preceding paragraph, may have come to a halt precisely because of this power of adapting the environment to the species, thus eliminating the necessity for the species to adapt to the environment). A great deal of theory is already incorporated in the world, as it were hidden there so that people are quite unaware of it. The writer who uses a pen does not need to know the principles of capillary action, the reader who turns on a light does not need to know the principles of the generation and distribution of electricity, and the case is even more acute in our habitual use of electronic devices, aircraft, and the like. And it is not only in such everyday utilitarian contexts that this externalization and objectification of theory has occurred; it is now quite common for chemical analysis to be carried out by programmed devices, and it is clear that the navigation of spacecraft would be impossible if it were left to the astronauts, even supposing them to have the most advanced scientific knowledge and observational skills, because of the very complexity of the necessary observations and calculations.

Now it is, again, commonplace to say that computing machines enable us to effect in a few minutes tasks that would take the unaided scientist centuries, i.e., tasks that would be strictly impossible without them; it is less common to ask what this signifies for the future history of science. It seems clear to me that it marks the end of a period, the period during which the scientist's conscious involvement in the theoretical process was indispensable. There are now whole areas of scientific research, including many purely theoretical ones (such as quantum mechanics), in which results simply could not be obtained at all if every step of the calculation had to be consciously worked out by the researcher. In a sense, of course, this situation is new only in degree, not in kind; scientists have always externalized the intermediate steps, using pencil and paper as a primitive computing device and as a short-term memory, tables of logarithms and constants as somewhat more sophisticated and long-term versions of the same things. But in these cases the dynamics of the process was still provided by the brain, the sequence of steps had to be followed consciously. Now it is not at all unusual to entrust whole sequences to the computer.

We comfort ourselves with the thought that the principles and programs involved in these computations are of our own devising. It is worth reflecting, however, that the device would work, if built that way, whether we understood the principle or not, and that, in spite of our protests that we are not satisfied until we understand, people have always been quite content to use devices that yield the right result in ignorance of their operating principles. If everybody forgot the theory of electricity, generators would still produce current and lights would respond to switches; the same would be true even if the theory had always been mistaken . In fact electricity was commercially prosperous
long before the discovery of the electron; the primitive notion of electric fluid that determined the ordinary language of the trade (current, accumulator, condenser, and the rest) has long since been abandoned, but it was that notion that presided over the transition from the idea to the externalized reality. Similar points could be made for any number of other theories.

A response that invokes an objectified and externalized theory, embodied in whatever device, I call an automated response. It is one of the most striking features of the present age that whole classes of response, which at the high-water mark of scientific theory (i.e., in the first half of this century) were calculated, are now becoming automated. I have mentioned one or two examples; let me add another striking although banal one. It was necessary in the early days of photography for amateurs to learn some elements of theory—film speeds, f-numbers, focal planes, and the like. This is less and less the case: coded cartridges, light-sensitive cells, and sonar take care of everything, and all that is left is to release the shutter. I now put forward the speculative hypothesis that the principal utilitarian task of scientific theory is to render itself unnecessary by ensuring the transition from the calculated to the automated. Once this is accomplished, so that the condition in the environment that once necessitated the theory is routinely dealt with by some device or set of devices, the theory may conveniently be forgotten.

Not, it may be said, by the people charged with the maintenance or improvement of the devices. But there again the theory is embodied in the device and its descendants (cf. the use of the concept of "generations" in connection with computers) so that for somebody also to know it is in a sense redundant. The notion of devices that maintain or even develop their own capacities is no longer merely conjectural. We have not, in fact, forgotten many theories, apart from those that were superseded by better ones, and as long as adequate information-retrieval systems exist, no theory need be forgotten beyond recall—somebody can always look it up, relearn it, and enter into its point of view, provided that its logical and mathematical complexity is not too great to be grasped with the brain capacity and in the time available. But serious questions of a purely utilitarian kind begin to arise when the number and complexity of theoretical propositions grow rapidly with respect to the number and brain capacity of available scientists. That the growth of theory and the growth in numbers of working scientists should go hand in hand is entirely understandable when we remember that, strictly speaking, there cannot be a theory unless there is a head to entertain it. But if the number of scientists is limited they may have to choose which theories to entertain, and clearly those which
can be replaced by automated devices need not be dwelt upon by human beings.

Human beings, in fact, think for the most part only reluctantly. Their evolution did not involve them in thought at all until the complexity of the environment made survival impossible in the absence of the kind of generalized ability that thought represents, until it compelled solution at the individual level of problems that had up to that time been solved at the species level. We do not have to think in order to breathe, regulate body temperature, circulate blood, hear, see, or move our limbs, or even (paradoxical as it may sound) in order to think. The devices that do these things for us—lungs, heart, ears, eyes, brain, musculature, and the rest—would, if constructed artificially, be considered to embody prodigious amounts of theory; indeed, the crude imitations of them we are beginning to manage are hailed as scientific triumphs. Yet they worked perfectly before any theory had been thought of, and it is simply a misuse of language to claim that they do embody theories; they work because of the properties of their parts, not because of our understanding of those properties.

Now as far as its operation is concerned it is a matter of complete indifference whether a device happens to be grown or constructed, developed in the embryo or inserted into (or hooked up with) the mature adult. Imagine a race of beings without a sense of balance, who sent out missions on heavily computerized bicycles rather as we send them out in spacecraft. Imagine further that some of their scientists, specialists in a domain known as the theory of balance, were to develop a device called a "semicircular canal," a pair of which, fitted into the head of an adult, made it possible to dispense with the computer when riding a bicycle. It had been the function of the computer to accept from special sensors measurements of the bicycle's deviation from the perpendicular, and to compute the restoring forces necessary to maintain its dynamic equilibrium, which were then applied by appropriate mechanisms; now, however, anybody equipped with semicircular canals could just hop on and ride off. Imagine finally that a reliable method of reproducing and inserting semicircular canals were developed that could be applied by people who had no acquaintance with the theory of balance. What, in practical terms, would be the disadvantage of forgetting the theory?
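
The onboard computer in this thought experiment, which accepts sensor readings of the bicycle's lean and computes restoring forces, can be sketched as a toy feedback controller. Everything below, from the inverted-pendulum model to the gains and the function names, is an illustrative assumption rather than anything in the text:

```python
import math

def balance_controller(theta, omega, kp=30.0, kd=8.0):
    """Compute a restoring torque from the measured lean angle (theta, radians)
    and lean rate (omega) -- the job the thought experiment assigns to the
    onboard computer before semicircular canals are invented."""
    return -kp * theta - kd * omega

def simulate(theta0=0.2, steps=2000, dt=0.005, g=9.81, length=1.0):
    """Toy inverted-pendulum model of a bicycle's lean. Without control the
    upright position is unstable; with the controller in the loop the lean
    angle is driven back toward zero (dynamic equilibrium)."""
    theta, omega = theta0, 0.0
    for _ in range(steps):
        torque = balance_controller(theta, omega)
        # angular acceleration: gravity tips the bike over, the torque rights it
        alpha = (g / length) * math.sin(theta) + torque
        omega += alpha * dt
        theta += omega * dt
    return theta

print(abs(simulate()))  # very close to zero: the initial lean has been corrected
```

Deleting the call to `balance_controller` makes the same model topple, which is exactly the difference the implanted semicircular canals are imagined to make: the computation still happens, but nobody need know the theory behind it.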

The analogy should be plain enough. For human beings to know theories, to be consciously aware of their beliefs about the physical world, is of course an indispensable bridge between the long history of biological evolution, with its complicated devices, and what may be an equally long history of the evolution of human-machine systems, not to speak of systems formed of machines alone. But the history of such
conscious knowledge occupies a comparatively short period of a few thousand years, towards the end of which we now find ourselves. This is what I mean by calling scientific theory an historical anomaly; in the long history of the adequation of the species to its environment the possession of theoretical knowledge in any form usable by individuals may turn out to have been exceptional.

It may be argued that we will always need to keep theory in reserve in case the machines break down, since the consequences of our coming to depend on them in the absence of such a theoretical safety-net, to speak metaphorically, might in the case of catastrophe be very costly. On this point there are two things to be said. First, we have already in practical terms become irreversibly dependent on technology; if a catastrophe occurred the costly consequences would already have taken place by the time the theoreticians had designed the new machines. Secondly, even if we could for a time sustain this reserve of theory it is not clear, for reasons already suggested, that it could be kept up in the long run without seriously hampering technological progress. It is after all only about a century since the death of Babbage, and half a century since the first functional computer; the chief applications of computer technology to theoretical problems (as opposed to computations that were not theoretically problematic but just tedious) belong to the last few decades. The decision as to what to remember and what to forget may be put off, but not for long. Up to now it has been all right for scientists to forget Newton because they know Maxwell, or to forget Maxwell because they know Einstein. This absorption of earlier theories by later ones in the case of science has often been remarked upon, in contrast to the case of philosophy, or of the humanities in general, in which earlier theories continue to be read alongside later ones; it has been explained in terms of the function of science, which is to discover the truth about nature (so that it can be controlled) rather than to concern itself with forms of understanding. What I have been suggesting is that when Einstein is forgotten it may not be because we know a better theory, but because some human-machine system can do a better job of control. And this possibility is one that has to be faced squarely.

But of course scientific theory is also a form of understanding. Up to now the development of understanding has run roughly parallel to the progress of control, but there are several reasons why this may cease to be true. As we have seen, the necessities of control for which automated devices have been constructed already outstrip our private capacities, operationally if not in principle, and there are devices whose principles of operation are understood by a few people only, perhaps some not fully understood by anybody. What is more important, however, is that as yet very few people have entered into the scientific understanding we already have (which may already be adequate for most human purposes, though as suggested above new ones may come along), and such understanding is surely a human good. In this latter sense I do not think of science as ever superseded, but in order to have its effect it will have to take its place where as theory it has really always belonged, namely, among the humanities.




Preface to Part V:
Scientific Knowledge—Its Scope and Limits

In this part I revert to the knowing subject and the nature of his or her knowledge, beginning in chapter 18 with some well-known epistemological challenges (such as the Gettier counterexamples to knowledge as justified true belief). The goal here is to arrive at a robust definition of (scientific) knowledge as a characteristic of the knower. It turns out, if I may so put it, to be an ability rather than a commodity. The definition of knowledge given in this chapter is already to be found in The Philosophy of Science: A Systematic Account, but here it is modified and strengthened.

The other chapters in this part look at some generally assumed characteristics of scientific knowledge and question their adequacy. Chapter 19 examines the assumption, underlying a great deal of thought about the "exact" sciences, that science has privileged knowledge of quantitative properties of things. The relation between the quantitative and the qualitative is, however, one of the least well understood in the domain, and I give what is perhaps a new view of it.

Chapter 20, as remarked in the preface to part III, is more speculative. Relativity theory has long been understood to imply that known characters of objects—particularly quantitative ones!—change at relativistic distances and speeds, but again what that really means has not always been clearly thought through. In particular, what does it mean to make an adjustment here in the properties of an object there? Observers there would not have the sense of moving at a very great speed relative to us, any more than we here now have the sense of moving at very great speed relative to them. So what does "relative motion" amount
to, in relation to our local understanding of motion? The issue here is one that recurs in the last chapter of this part, and is taken up briefly in an article that could not be included in this book because it is an entry in an encyclopedia, the Encyclopedia of Physics: "Our perceptions and naive thoughts," I say there, "are adapted to the scale of our bodies, our days, and our lives; relativistic and quantum phenomena have no direct bearing on them, and being equipped to envisage such phenomena would have been of no evolutionary advantage." The term "phenomena" is used in a loose sense here—strictly speaking, relativity and quantum theory offer us nothing phenomenological.

In chapter 21 I take up the problem of the self-enclosed character of scientific knowledge, another variant on the theme of hypothetical realism, and give reasons why circularity in knowledge is not necessarily vicious.

The final chapter is of a different order from the others—it is devoted to the work of a French philosopher of science and of literature, who saw as clearly as anyone has what is involved in the restriction of the imaginable to the local and macroscopic. "Imaginable" is used in a strong sense, in keeping with the meaning of the cluster of terms in French built on this root; we tend to use "imagination" to include the having of bold ideas of any kind (for example, one of the formulations in Science and the Theory of Value was "science is imagination controlled by experiment"), but, like the term "idea" itself, it has lost in English its close association with the visual image. Bachelard stresses throughout his work on the philosophy of science that it is reason, rather than imagination in the strong sense, that gives access to the theoretical structures of science, but in doing so he makes room for a different function of the imagination, namely, a poetic function, whose correlative status to the activities of science is not stressed by writers in English because it falls for most of us altogether outside the domain of inquiry. He is one of those very rare practitioners of the human sciences whose contributions to the understanding of the humanities on the one hand and of the hard sciences on the other are in a working equilibrium, and he deserves to be better known to the profession than he is.


Is There (Scientific) Knowledge? Who Knows?

The title of this paper is complex—it packs in several layers of suggestion by the use of typographical devices. The first suggestion is that if "scientific" is an optional adjective for knowledge, then if there is any knowledge—which is the question at issue—some of it needn't be scientific. Or, to put it the other way round, I am asking the question about ordinary as well as scientific knowledge. The second suggestion, however, is that if the adjective "scientific" is an optional part of the question at issue, then the two kinds of knowledge, scientific and nonscientific, stand or fall together: if there is knowledge then there can be scientific knowledge; if there can't be scientific knowledge there can't be any knowledge. In other words, to be scientific is a permanent possibility of knowledge, if there is any. The third suggestion, taking "who knows?" in its colloquial sense, is that there is some doubt as to whether there is knowledge or not, and that this doubt isn't particularly easy to dispel, so that one might be inclined to throw up one's hands over the question. But the fourth suggestion, taking "who knows?" in a plain and straightforward sense, is that the answer to the question whether there is knowledge has something to do with the particular individuals who have it.

Epistemology is the central discipline of philosophy, and every philosopher has to come to terms with it. The stakes are high. Recently the very idea of knowledge as a reflection of the way the world actually is has come under attack from neopragmatists like Richard Rorty, who wish to discourage us from thinking that we are getting closer to the truth through the efforts of scientific inquiry, and to encourage us instead in a kind
of benign floating in the stream of culture, from which we cannot escape. Actually, this program is rather appealing, especially in the later stages of a decaying civilization, which ours may well be; at least it does nobody any direct harm, which can hardly be said for bombs and industrial effluents. But it gives up too easily, and in my view for inadequate reasons, an old hope that knowledge can not only be reliably acquired but also put to safe use for the benefit of people who really need better control of their world.

One of the things that has made the old standard of confirmed scientific knowledge vulnerable to the criticisms of the neopragmatists has been its conceit, its assumption (or the assumption of the people who thought they had it) that it would conquer everything, produce the answer to all problems, be total or absolute. It was no doubt the risk of this God-like pretension that the ancient Jews had in mind when they told the story of the Tree of Knowledge. But the fact that human knowledge is limited hardly seems a good reason for trying to discredit it altogether. What we need is a genuinely modest but at the same time sturdy conception of knowledge that will avoid the extremes of megalomania and despondency but serve us adequately in the middle region between nothing and infinity, between birth and death—it is after all our knowledge that is in question, not some abstract entity or substance independent of us.

Suppose we start with this idea of what is in question, and ask, what sort of thing have people thought they were talking about when they used the word "knowledge" or its ancestors? Here I want to take the ancestors seriously. It is not always helpful to ask where a word of ours came from, since that may have very little to do with what it means now, but there are some things about the derivation of the word "knowledge" that help to get it in perspective at least. First of all the "kn-" at the beginning: this puts the word in a whole family sharing an ancient root with Greek "gno-"; I think of them as the g-n (or c-n or k-n) words. One possibly unexpected member of this group is "noble"; the g-n shows up in its negative, "ignoble," the two words meaning in effect who's known, and who's beneath notice. Another is "king," the king being in ancient times wise as well as strong; another is "cunning," a kind of low knowledge, but still part of the family.

Greek gnosis means a kind of knowledge of the sort that we might call acquaintance, and indeed "acquaintance" is part of the family too (call it a q-n word), having come via Latin cognitio, somewhat distorted by a passage through Old French. (We have this Latin root in a less distorted form in "cognition.") However there was another Greek word for knowledge, episteme, which translated into Latin as scientia, meaning in both cases not knowledge by (casual) acquaintance but careful
or serious knowledge. It is tempting to try to see a connection here between Greek episteme and Latin scientia on the one hand, and Greek temno and Latin scindo respectively on the other, both the latter meaning "to cut," thus to divide into categories, to distinguish—distinguishing among the things with which we are acquainted being an important step on the way to more adequate knowledge. In this way the "sci-" of "science" and the "sci-" of "scissors" would be related and a point easily made about the sharpness and exactitude of scientific knowledge. But this connection is uncertain. In any case Latin scio, "to know" in the sense of scientia, gradually gave way to another verb, sapio, originally meaning "to taste" and thus eventually in its own way "to distinguish." Remnants of that root are scarce in English, though Homo sapiens is familiar enough.

At all events we have a double history here, whose two parts are however related to one another: becoming acquainted with things in the world on the one hand, and finding out about them more carefully and exactly on the other. "To know" and "knowledge" in English carry both burdens, helped out in the latter case by the intensifier "scientific," scientific knowledge being an especially knowledgeable kind of knowledge.

But it is worth looking once again at the word "knowledge" itself, since it has a component that does not derive from Latin or Greek. The suffix "-ledge" seems to come from Old English "-lac," which survives in only one other modern English word. The suffix denoted a kind of action or proceeding, and it often had playful connotations, as in games. The basic idea seems to be of a state or condition into which one enters which enables (or entitles) one to engage in a certain sort of activity or practice, sometimes serious, sometimes not. The other survival is in "wedlock."

Now again I do not wish to burden you with surplus etymological baggage, but I have a feeling that the Old English were on to something when they assimilated knowledge to a class of activities including at the time dancing, fighting, robbing, pledging (the original meaning of "wedlock"), etc. Knowing is an activity, it involves skill and can be done well or badly, it can be celebratory or destructive, it can commit to consequences. (There is a line in Eliot's "Gerontion": "After such knowledge, what forgiveness?") I mention all this so that it will be vivid as we plunge into some of the grey matter of philosophy.

What have philosophers in fact said about knowledge? There is a whole history here and I will not enter into it but begin fairly recently. Some time ago it was a standard move in analytic philosophy (that is, philosophy that engages in the analysis of concepts, which all philosophy ought to do at least some of the time) to offer as an analytic equivalent of the concept of knowledge the concept of "justified true belief." Roughly speaking the argument was that belief is an attitude we have to propositions when we think they are true, and if we think we know something we must at least think it is true; we may of course be doubtful, realizing that many of the things we think true may not be, but at all events we aren't going to offer as a candidate for knowledge something we think isn't true. If something we think true turns out actually to be true then we'll move it up from the status of mere belief to the status of the special kind of belief we call knowledge, and we won't do this otherwise. But how can we be sure that it actually is true? We have to have some basis for this conclusion, be able to offer a justification. Hence knowledge as justified true belief.

In 1963, in a very brief article in Analysis,[1] Edmund Gettier blew this view out of the water with a couple of telling counterexamples. I might believe something, it might be true, and I might be justified in believing it was true, but it might turn out that I didn't know it after all. Gettier's cases hinged on contingencies and ambiguities but they were telling nonetheless. The kind of strategy he uses can be illustrated as follows: Suppose I claim to know that my car is parked opposite my house. I believe this to be the case, it is the case, and I have a justification for believing it to be the case, namely, that I parked it there this morning. However, unknown to me my wife used the car to run an errand at lunchtime, and she returned the car to a slightly different spot, still opposite the house. So the three conditions are still met; yet nobody would claim that I know the original proposition.

Since 1963 a good many people have had a crack at this problem. In 1984 Richard Kirkham published an article in Mind[2] in which he claimed that no "analysis of knowledge can be found which is (a) generous enough to include as items of knowledge all, or most, of those beliefs we commonly regard as knowledge, and (b) rigorous enough to exclude from the class of knowledge any beliefs held in real or hypothetical cases which we would agree on reflection are situations where the epistemic agent does not know the belief in question." If Kirkham is right—and I think he is—then either we need a new analysis of knowledge or we'll have to conclude we don't have any. Kirkham takes the latter position, but he says it shouldn't bother us as long as "we remember that a belief or proposition does not become less valuable merely because we can no longer apply the 'hurrah' word 'knowledge' to it. Only the discovery that it had less justification than we thought it had can cause it to lose epistemic value."

Now something very odd is going on here, something quite characteristic of some recent moves in philosophy, which can throw light
on the comfortably skeptical neopragmatist phenomenon to which I referred earlier. The problem is that nothing can be absolutely nailed down, so firmly that it can't be budged by anyone—or rather that's not the problem, since I don't see how we could possibly expect, knowing what we know (and I use the words deliberately), that anything ever could; the problem is that because things can't be nailed down absolutely, people tend to throw up their hands and assume that nothing is even approximately in place. This is sometimes called a crisis in the foundations, and the position to which pragmatism opposes itself is called foundationalism (I drop the "neo-" here because it is clumsy and because the old pragmatism made the same claim for the same reasons).

For myself I'm not too much concerned about foundations; since Copernicus we've had to get used to the idea of being freely suspended in physical space, and I think there is a lesson for the intellectual domain in that. Pragmatism in fact seems to me to be an essentially foundationalist move; the situation is like that of theism and atheism—as Sartre once said of an atheist friend, he was "a God-obsessed crank who saw His absence everywhere, and could not open his mouth without uttering His name, in short a gentleman who had religious convictions."[3] Pragmatists keep saying that we should give up the old silly ways of talking about the differences between knowledge and conjecture, between facts and interpretations, between rationality and irrationality, and so on, thus emphasizing the very concepts they reject. But if we can still make sense of them there seems no reason why we should follow this advice. One might turn the tables and say that historically foundationalism was an essentially pragmatist move: there was a problem about certainty, and trying to make knowledge fixed and absolute seemed like a good solution to it, especially when the means were at hand (thanks to the belief in God) to do so convincingly. Spinoza has as I recall an argument to this effect in his essay on the improvement of the understanding.

At all events we are dealing with something we all think we have some of, with respect to which however our confidence has been shaken because of devious and cunning counterexamples devised by tricky philosophers. I seem to be making light of their work; in fact I respect it highly, but want to get it in the kind of perspective that the lively and even playful attitude to knowledge we encountered in Old English would facilitate. The kinds of objection to the possibility of knowledge that I have already outlined are supplemented by objections to general truths in science because of the skepticism about induction that we owe originally to Hume; this has led careful philosophers of science as soundly empiricist as Hempel, for example, to admit that there are no scientific explanations, only explanation sketches (because in a strict
explanation the explanans would have to be true and contain a general law, something that can never be known to be true).

I think Hume was right about induction: we don't, in the end, know why the things that go together in Nature ultimately do so. (For complex things we sometimes know it in terms of the properties of their parts; in the end, at some level, we can only accept the fact that the parts behave as they do, we cannot explain it further.) But that doesn't mean that we have to give up the word "knowledge" as correctly characterizing something we have and can do. Let me dwell for a while on the philosophical situation in which we find ourselves. For some perverse reason it keeps reminding me of an unfortunate Scottish lady my family knew when I was a small boy. Her name was Mrs. Catterall. One of Mrs. Catterall's misfortunes was to discover in a local store, and to buy, something that was labeled "unbreakable china." This was a misfortune, in fact, only relatively to a second misfortune: that of being married to Mr. Catterall. Mrs. Catterall brought home her purchase and said excitedly to Mr. Catterall, "James! Only think! Unbreakable china!" "Unbr-r-reakable!" said Mr. Catterall, seizing some of the china and dashing it with all his force against the tiled floor of the kitchen, where it unthinkably broke.

The point of this story, of course, is that, while "unbreakable china" is as audacious a designation as "certain knowledge," Mrs. Catterall wasn't altogether silly for buying it, only for telling Mr. Catterall what it claimed to be. What "unbreakable china" meant was that it wouldn't break under normal or even rigorous conditions of use, conditions under which ordinary cheap china might break, not that it couldn't be broken by a large and irate Scotsman if he put his mind to it. Of course there's some hyperbole in the "unbreakable" and if we made china we'd be disinclined to call ours that, just as we might be disinclined to call our knowledge "certain." But in the places and under the conditions in which we have to use it—building machines, curing diseases, etc.—our knowledge seems to function pretty well and we don't want to be told we don't have it. And we might feel, as G. E. Moore came to feel, that the principles on which the radical criticism of knowledge rests are in fact themselves far less certain than the principles on which the knowledge itself rests.

How can we characterize a kind of knowledge that might withstand radical criticism and justify our reliance on it under normal conditions of use? One move is to concede everything the radical critics want and then go patiently back to where we left off. Descartes was the modern initiator of the radical move, which took the form for him of doubting everything except the fact that he doubted. We might be asleep, he said, we might be mad, God might be deceiving us. We might be brains
in vats, some more recent thinkers have suggested. Well, suppose we grant that possibility: in our sleepy or crazy or deceived or wired state we can still ask the question whether there is a difference between what we know and what we do not know, and ask how we know the difference. And, until we wake up or get cured or undeceived or unplugged, philosophy can proceed as before.

Another possible move is to reject any claim for our knowledge that takes it further than what is before our eyes, unless we have carefully argued grounds for extrapolation. This was the original strategy of what came to be called positivism: refuse to assert anything you don't positively know to be the case. It is very hard to do this, since the most elementary judgments about states of affairs take us beyond the immediate, and call for resources of logic and language we certainly didn't learn from observation. Still we can profit from the idea behind the strategy. Suppose we resolved not to extend our claims to knowledge, in the first instance, beyond the kinds of thing and event we are directly acquainted with, beyond the places and times of our familiar life, and then built very cautiously out from the initial claims towards wider ones, holding ourselves ready to modify them at any time in the light of new evidence? That would seem modest and safe enough. What would be wrong with it?

In some people's eyes, I suppose, its very modesty would be what is wrong with it. We want knowledge of the whole universe, and are tempted to claim it on the slimmest of evidence. Science seems to do this systematically. Newton adopted as "rules of reasoning in philosophy" (by which he meant what we call science) that "to the same natural effects we must, as far as possible, assign the same causes," and that "the qualities of bodies . . . which are to be found to belong to all bodies within the reach of our experiments, are to be esteemed the universal qualities of all bodies whatsoever." And these have been essential assumptions for the development of science. But in fact, in spite of their sounding imperialistic, they themselves embody just the kind of modesty we are looking for: "as far as possible," says Newton, "within the reach of our experiments"[4]—if it isn't possible, when our experiments reach further, we'll be happy to change our minds. (Newton has taken a lot of criticism from antiscientists because his view of science is taken to have locked us into grandiose and inhuman claims; such claims have been made by others on the basis of what he worked out but they need not be imputed to him.)

It did indeed look for a long time as if scientific knowledge would prove to be unlimited. The turn away from this hope has been partly due to a misunderstanding. It was thought that the knowledge we already had would extend to the limits of the universe; when it became clear—thanks to Einstein and Planck and Heisenberg and others—that it didn't, this was taken to be a blow to science. But actually the discovery that it didn't was itself a scientific discovery. The general point to be made here—and to be learned from the positivist program—is that we can't jump to the limits and work back; we have to start from the middle and work out. But that was what science, properly understood, always did. The paradigm case of scientific knowledge (I use the term "paradigm" in its old sense, not in Kuhn's sense) is for me something that lies at the very beginning of the development of modern science: it is Galileo's demonstration of the relations between distance and time for bodies moving in a gravitational field (not that he called it that). He says he will find an equation that actually describes what happens, and he does. The equation matches the behavior of the moving body; the behavior of the moving body matches the equation. What happens happens in Galileo's laboratory in Padua; the equation he writes down, he writes down there.

The lesson I want to draw from this case is this: that in the first instance all knowledge is local. It involves a matching of a description and a state of affairs. The relation between the two is open to radical challenge—there might be something wrong with our eyes, we might not be able to count straight, we might be brains in vats. Or again we might not—the claim that any of these things is the case is at least as implausible as the claim that all is normal. So we put that challenge courteously aside and get on with our work. From what is established locally we extrapolate at our own risk, and provisionally. How badly do we need to do so? Well, that depends on the application we wish to make of the knowledge in question. Perhaps we would like to apply it globally—but what is the motivation here? This is where we have to learn restraint. There is a parallel situation with meaning, which gets people into all sorts of psychological difficulties. I learn meaning locally, in episodes of effort or enjoyment or human contact; forthwith I am tempted to require that the universe as a whole should have the kind of meaning I have learned locally, that my life should have it, that life in general should, or history, or human striving. When I discover that they don't, I may come to think that the local episodes didn't have meaning either. But this would be a sad mistake—it is a sad mistake, made sadder by the fact that so many people make it.

A concept I have found useful in dealing with these questions is the concept of what I call the "flat region." The floor of my room is flat; localities generally are, or their declivities can be measured in relation to a flat surface. I learn the geometry of things, the earth-measurement, in the flat region. As I now know—thanks to other people mainly, I have to admit, and this is a point to which we will have to return—what I learn here won't work if I try to extrapolate it for more than a few miles; I'll have to correct for the curvature of the earth. But that's only if I want to talk about some distant place while staying physically here. If I actually go off around the earth in search of its curvature, I discover a curious thing: wherever I stop, it's flat again. Of course if I can get off into space and look back at the earth I'll see it as curved, but for the purposes of my metaphor that's cheating, although we could extend the metaphor to accommodate it—space is curved too, and four-dimensional, but however far I go looking for that curvature, my spacecraft will remain "flat" in three dimensions.

The metaphor of the flat region applies to other domains as well. The flat region is where we are, locally, in the middle of things; if we push to the edges, to microscopic or cosmic dimensions, speeds near that of light, etc., the things we've learned locally won't apply. Why should we ever have thought they would? Up to a point, when we'd had no experience at all of anything nonlocal, the expectation was understandable, but that was a long time ago and by now there's no excuse for it. Yet people keep exclaiming over the fact that at the quantum level things don't look and behave like macroscopic objects. Are they waves? Are they particles? Why can't we measure their position and momentum at the same time? But waves and particles, positions and momenta, are things we learned about in the flat region; off at the limits we can by now expect things to be different. Even in logic and mathematics something like this occurs; locally, with ordinary proofs and other inferences, things work perfectly well, but when we push to limits of consistency or completeness we run into self-referential or self-descriptive problems.

The conclusions that are drawn from these altogether expectable failures of flat-region concepts to work after we've pushed inquiry over the edge remind us again of pragmatist pessimism about knowledge. Because of Heisenberg people wanted to throw out physical causality, because of Gödel they wanted to throw out logic. It is true that the advocates of these drastic revisions generally had an axe to grind, about freedom or the inadequacy of language, although it also usually turned out that they could have got the results they wanted without recourse to spurious technicalities. But they hung tremendous weight on what seem to me fairly banal conclusions, to the effect that middle-size people like us, who become acquainted with the world in a middle-sized context, don't learn in the course of coming to terms with their middle-sized world all the refinements they are going to need when they set off towards the very large, the very small, the very distant, the very complicated, and so on.

Let me return to a point I made just now, about the earth's being flat wherever on its surface I happen to be. The general observation to be made here is that I regularly take my flat region with me. That is because I can't myself go very fast or become very small or very big; it's always the other fellow who is moving or distant, never myself. I speak of relativistic motion here, though we could with a bit of perversity sustain the view for local motion—in the morning the University rolls in my direction, at night my house does. Some medieval thinkers, notably Nicholas of Cusa, had exactly the same idea, which is the essential point of relativity theory, centuries before Einstein. We are fixed in relation to the world, always here, always now; where we are is always in the middle of the perceived universe for us, however far from home we may be.

What does all this have to do with knowledge? I said earlier that in the first instance all knowledge was local. The point of the flat region analogy is to insist that the failure of a form of knowledge to carry undistorted to the edge of the universe is no argument against its adequacy in the local context. The fact that definitions of mental illness, for example, break down in borderline cases doesn't mean that we're in any doubt as to the insanity of the patient in four-point restraints. So we've laid down two lines of defense for our sturdy local concept of knowledge—one against radical, brain-in-the-vat type criticisms, and another against inadequacy-in-limit-cases type criticisms. We can now go back once more to square one, this time with some hope of being able to get to the straightaway without being tripped up, and ask once again the old question: what is knowledge, assuming it to be possible?

Our Old English friends wanted to make knowledge an enabling or entitling, an ability to engage in some practice. Let's say, taking a hint from the justified-true-belief school, that the ability in question is telling the truth. Having knowledge means that I can tell the truth (if I want to; nothing prevents me from lying, or from just keeping my mouth shut). So knowledge is an ability, in the first instance an ability to assert true propositions. But "telling the truth" has a double meaning, not just telling people true things but also telling what is true from what is false. This is the "justification" clause of justified true belief, and it will eventually shift the discussion to the concept of truth. Meanwhile, however, the Gettier objections really are, as I said earlier, telling in their own way; how should we deal with them?

Knowledge is an ability to assert true propositions and defend their claim to truth. "Defend" here means, among other things, against the Gettiers of this world—that is, we have to be able to come back again and again to the defense when challenged, until we drop if necessary, build in safeguards against accidental fulfillment of the epistemic conditions, and so on. Also we may have to specify the kinds of knowledge claim we are prepared to defend in this way (excluding perhaps as not worth the trouble anecdotal assertions about unspecified members of groups who have coins or tickets in their pockets, but including certainly claims about the regular behavior of significant classes of object in the flat region). What this means in effect is that we must be prepared to justify our justification, up to as many levels as may be required; if the justification breaks down at level n we will have to accept as a consequence that the knowledge all the way down to level zero is wiped out, but, at least for small n, we won't let that happen until we've tried level n + 1. The series of levels of justification constitutes a system of the adequacy of the mind to things, to use an old formula, and the proposition whose true assertion entitles us to claim knowledge of what it asserts has to belong to the system if the claim is to be valid.

This view gives incidentally an answer to some relativists who claim that we are culturally biased in what we accept as scientific method; a favorite counterexample to our so-called "Western" concept of knowledge is a method, used among the Azande for determining the truth in vexed cases, known as the chicken oracle. The Azande know a poison that is marginally fatal to chickens; to consult the oracle a standard amount of this poison is administered to a chicken. If it kills the chicken the answer is yes or no as the case may be (I forget which); if the chicken survives, the answer goes the other way. And why, ask some self-critical Western social scientists, shouldn't the Azande believe their oracle just as we believe our scientific oracles? The answer is that you can't reliably ask the oracle to pronounce on its own reliability (that's not the sort of question you can put to it, since it is used mainly to ferret out witches), whereas scientific method belongs to a complex of argument and inference in which it is possible to raise the question of its reliability, and of the reliability of our estimate of its reliability, and so on up for as many levels as you like (not too many, generally, since the higher-order questions have been debated by philosophers of science for whole classes of cases).

Does this whole edifice lie open to radical skepticism? Of course it does. By now, does this disturb us? It does not. The usefulness of radical skepticism lies in the fact that it forces us to stare it down. The disagreement between us is not, as far as that goes, as deep as it looks. It amounts (to go back to the language of the article by Kirkham cited above) to a difference of judgment as to when it is necessary, if ever, to say "hurrah!" Skeptics have a view of a philosophically perfect kind of knowledge; they think we can't have it, though if we did it would be worth saying "hurrah!" about it; meanwhile nothing else will do, everything falls short, so we should stop claiming to have any knowledge. I agree that we can't have that kind of knowledge, but I think that only a thoroughgoing Utopian would ever even dream of having it; meanwhile it seems to me quite reasonable to claim as knowledge, until further notice, whatever, having earned its place in the system of justification, enables us to play our part in the truth-telling game. Of course we'll have to be sensitive to different possible moves in the game, to judge prudently how far out we may venture on excursions away from the flat region; in home territory, however, we are entitled to a certain confidence.

From here there are several directions in which we can go. Recall that ordinary knowledge and scientific knowledge were said at the beginning to stand and fall together; the kind of care science compels us to bring to the formulation of our knowledge can be exercised with respect to any subject-matter whatever, though there are many cases in which it would hardly be worth the trouble. One thing worth noticing, though, is that the natural sciences on the one hand and the social sciences on the other result from special care exercised on two different kinds of everyday knowledge: one of states of the world that are independent of our interest in them and one of states of the world that are to some degree created by our interest in them. To this distinction correspond two different conceptions of truth. Truths in one category are accepted as such because they are forced upon us by observation: they obey Tarski's semantic criterion. Truths in the other are forced upon us because they are required in order to preserve the fabric of intelligibility in discourse—they can't not be true on pain of the incoherence of our whole scheme. That this desk is hard or this room illuminated belong to the first category; that it is Monday or that John is a friend of mine (or even that he is John) belong to the latter.

The person known as "John" is not John in the way that the desk is hard. The desk is called "hard" by a linguistic convention, and hence by something our interest created—that is true—but it is independently what we call "hard" (and what the French call "dur," etc.), whereas John isn't independently anything called "John" in the same way; there is no property of Johnness he could have or fail to have and still remain himself (though after acquaintance with a social object we may begin to attribute properties of this kind, saying, for example, "John isn't himself today," and so on). However, if we doubt that he is after all properly called "John Smith," we pose a radical challenge to the stability of the social structure, just as if we doubt that this day is properly called "Monday, January 13," we challenge the whole worldwide system of names and dates. There is nothing about this day, as I look around in it, to label it Monday, January 13. The fact that I'm beginning a course of lectures tonight is confirmation that it is, since the first lecture is announced for this date, but the alignment of earth and sun that makes this day rather than night, winter rather than summer, is supremely indifferent to my lecturing schedule. We have to do things to make days into what they are for us, but we don't have to do anything to make the table hard, once it is (that somebody made it means that its existence as a table is a social fact; its hardness however isn't a social fact but a natural one).

This distinction is useful when it comes (as it often does) to charges of the cultural relativism of knowledge. It is in fact the failure to keep clearly in mind the distinction between the natural and the social sciences—a distinction that for several perverse reasons nearly everyone has been at pains to suppress—that has led to a lot of the confusion about the possibility of knowledge. If we talk about the truths of society or history or any branch of what Sartre so usefully called the "practico-inert," then of course these change from culture to culture, from generation to generation, although even in those cases there are some striking continuities in the mainstream from its beginnings in Greece and Judea until our own day. But if we talk about the truths of science or nature then however different cultural formulations may be they prove in the end to converge, to be intertranslatable.

This casual assertion on my part goes against a whole recent tradition that casts doubt on the convergence of scientific discovery, on the grounds that there have been revolutions, that Einstein has displaced Newton, etc. However, let me repeat that I still have my feet firmly planted in the flat region—indeed I'm confident I'll never leave it—and the kinds of truth I'm talking about are not megalomaniac claims about how the whole universe is but are the stuff of the sturdy local knowledge on which I rely when I go to the dentist or use my word processor. This is not a regression to some sort of naïve realism—indeed, it is consistent with a quite radical theory of perception—but it does involve the claim that it's pointless to refuse the title of knowledge of truth to what has always counted as such in our long adaptation as curious and discursive organisms to the local conditions in which we evolved. This view implies no rejection of or even disrespect to scientists whose business it is to push inquiry far from the flat region, towards big bangs or quarks, but it does—to repeat what has already been said—insist that difficulties encountered only there need cast no doubt on the reliability of local knowledge.

This is the obvious point at which to deal with an objection that has no doubt occurred to many readers. My radical opposition of the natural to the social sciences, in terms of their objects as (to put it succinctly) mind-independent as opposed to mind-dependent, may seem to collapse because physicists have been saying for a long time that at the quantum level observation partially determines what is observed, etc. But again this is a problem of a region of extreme curvature, in which all that is available to us in the way of claims to knowledge involves complex theoretical reconstructions (undertaken, I may remind you, by macroscopic scientists in macroscopic offices and laboratories). The paradoxes that have led some physicists to posit one form or another of an "anthropic principle," according to which a condition for the development of the physical world was the eventual appearance of human beings capable of observing it, seem to me extreme cases of the disproportionality I have already referred to, hanging global consequences on small discrepancies. The discrepancies mean, to be sure, that our present knowledge isn't yet, and may never be, absolutely and completely and universally valid. But we gave that up a while ago in favor of knowledge as relatively and partially and locally valid. This is not, however, to be interpreted in a minimal sense—on the contrary, these deficiencies are acknowledged only out of a principled concession to modesty. They are marginal, not central.

There is room here for a version of the old legal maxim, "hard cases make bad law"—limit cases cannot be allowed to overturn principles established centrally. Of course this does not mean that no other central principles are conceivable—that would be to play into the hands of critics like Feyerabend who want to make all received views into forms of fascist oppression. But until such conceptual replacement actually occurs I want to conclude that the flat region remains Euclidean, Newtonian, causal, etc. and that we know this as well as we know anything. We know it, I repeat once more, of the flat region , and it is our knowing it there that makes departures from flat-region principles intelligible, when we move inquiry in the direction of the limits. Classical physics makes modern physics possible. But this is getting repetitive and that suggests that it is time to go on to a conclusion.

Most scientific inquiry in the history of the race has been conducted in the flat region. In fact the analogy with the surface of the earth is inexact in its proportions, since we don't really have to go very far before plane Euclidean geometry becomes a bad basis for geographic surveys, whereas we have to go very far indeed before relativistic or quantum considerations impose themselves more than marginally. "Ignoring second- and higher-order terms" is a standard and perfectly safe rubric for most actual computations. Nothing in biology, and in practice hardly anything even in chemistry or physics, requires us to resort to anything other than normal science (here Kuhn's term is unobjectionable, as long as we realize that most scientists live practically as if the revolution had never occurred). Nothing a scientist can personally do goes beyond the limits of normal science, only what instruments sometimes do (and usually the most expensive instruments—there is an exponential relation between distance from the flat region and the cost of research). Even space travel, so far, has been entirely within the flat region; no navigational computations for any spacecraft to date have required the insertion of relativistic terms.
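The claim about flat-region computations can be given a rough number (a sketch; the orbital and airliner speeds are typical illustrative figures, not from the text): the relativistic correction factor at any speed a vehicle actually reaches departs from 1 only minutely.

```python
import math

# Lorentz factor gamma = 1 / sqrt(1 - v^2 / c^2): the factor by which
# relativistic effects depart from the "flat" Newtonian picture.
def lorentz_gamma(v_m_per_s: float, c: float = 299_792_458.0) -> float:
    beta_squared = (v_m_per_s / c) ** 2
    return 1.0 / math.sqrt(1.0 - beta_squared)

# Typical low-Earth-orbit speed (~7.8 km/s) and a jet airliner (~250 m/s);
# both figures are illustrative assumptions.
orbital = lorentz_gamma(7_800.0)
jet = lorentz_gamma(250.0)

print(f"gamma - 1 in orbit:       {orbital - 1:.3e}")
print(f"gamma - 1 in an airliner: {jet - 1:.3e}")
```

Even in orbit the correction is a few parts in ten billion, which is why "ignoring second- and higher-order terms" is safe for most actual computations.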

Also we all live in the flat region, and it is our knowledge we are talking about. This brings me back to the last element of my title. I want to claim that there is no such thing as knowledge in general, only someone's knowledge, and that each knower—I will take myself as the paradigm case—has not only two kinds of knowledge of the world, but knowledge of two kinds of world. I distinguish between my world, which will die with me, and the world, which I suppose to have been there before I was born and which I expect to remain after my death. I learn my world and carry it with me; it is my locality; it is, in my metaphorical sense of the term, flat, although I could probably distort it to some degree, if I wanted to, by ingesting mind-altering chemicals. Thanks to the fact that there are other people in my world (the sense in which they are "in" it needs to be elucidated by the theory of perception to which I referred just now, but that will have to be on another occasion) and thanks to the evolutionary accumulation of the knowledge they pass on to me (each bearing some part of it, or directing me to elements of the practico-inert on the basis of which I can reconstruct it), I come to learn a good deal about the lineaments of the world, at least locally, where it too is flat.

That the world is locally flat has in fact to be a truism, because "flat" for me just means conforming, again thanks to evolutionary adaptation, to local conditions. Whether I ever want to push my knowledge off in search of the curvature of the mind-independent world depends on my inclination; most people don't. But I won't be able to do that unless I have come exactly to terms with the structure of my world, which is to a first approximation the local structure of the world. (What we know of the world can only be structural—there is no reason to think it shares the vividness of the material contents of our worlds.) And that means making some of my knowledge scientific, that is, exercising care in its formulation and attending to its empirical adequacy and its logical consistency.

Nobody can do that for me, although I can profit readily enough from what they have done for themselves, especially if I am lucky enough to have access to them and it—which is exactly what universities exist to make possible. So the question "Is there scientific knowledge?" really has to be posed differently; it should be "Is any of the knowledge I have scientific?"—that is, have I cared enough about exactitude and consistency to be willing to do the work necessary to make it so? For if there is to be scientific knowledge, if it is to survive and have the useful effects it is capable of producing, individuals will have to continue to choose to do that work, to attend carefully to what they know, to organize and perfect it. We ought not to discourage them from doing so by belittling the possibility of knowledge. Indeed we ought—but with this I would need to start another lecture—to get them to pay such careful attention to knowledge in domains not generally thought scientific: religion, politics. At all events we should exercise such care ourselves, making our work in these domains at least commensurate, in the level of seriousness and responsibility we bring to bear on it, with the work of the natural sciences. For scientific knowledge, if it is not the answer to everything (and it isn't), does at least set a standard.


The Law of Quantity and Quality, or What Numbers Can and Can't Describe


Before there was writing, any culture carried by language had to be transmitted orally. People memorized poems that incorporated the knowledge that was to be passed on to future generations. A poem is something made (poiein is "to make"), something made with words and remembered, not just words uttered for an occasion and forgotten. Now, we are accustomed to think, things have changed: there are texts and chronicles, and the art of memorization has gone almost entirely out of use. We don't need it for the storage or transmission of knowledge, and the old chore of learning poems by heart in school has been almost entirely dispensed with. Feats of memory, outside some technical contexts (in the theater or in medicine, for example), have become curiosities, useful to intellectuals who are unexpectedly imprisoned and need something to keep them sane, but otherwise merely freakish or decorative.

It is worth noting, though, that in fact there are still at least two poems that everyone who has the most rudimentary education learns and remembers. Learning them indeed is a condition for participation in the literacy that makes the old feats of memory unnecessary. One of them is the alphabet, and the other is the series of names for the integers.[1] They don't look like poems, but on reflection they obviously are poems: words that belong together, to be remembered and recited in a given but not intuitively obvious order. The order is important, and must be learned exactly; later on it will seem intuitively obvious, but that will be only because it was thoroughly learned before the concept of the obvious (or not) had been acquired.

The elements of these poems have iconic representations, in our case respectively Roman and Arabic—a significant detail, this, and relevant to the separation of the quantitative from other predicates in our scheme of concepts. The Greeks and Romans used letters for numerals; in Greek they were accented, but in both cases it was clearly enough understood that the combinatorial rules were different as between literal and numerical uses, whether ordinal or cardinal. We however learn different poems and not merely different rules, so that they seem from the beginning to belong to different domains, mixing the elements of which creates awkwardness, though it is easier for us in the ordinal than the cardinal case. We may identify, and if desirable order, paragraphs, buses, telephones, postal codes, registered automobiles, etc., alphabetically or numerically or by a combination or alternation of the two, and be comfortable with this, but the alphanumeric notations sometimes used in computer programming (such as the hexadecimal, which inserts A through F between the usual 9 and 10, 1A through 1F between the usual 19 and 20, 9A through FF between 99 and 100, and so on) still seem intuitively strange to most people.
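The hexadecimal interleaving described above can be exhibited with Python's built-in base conversion (a sketch; `format(n, "X")` renders an integer in uppercase hexadecimal):

```python
# Hexadecimal counting interleaves the letters A-F into the decimal poem:
# after 19 come 1A..1F, then 20; after 99 come 9A..FF, then 100.
def to_hex(n: int) -> str:
    return format(n, "X")

# The six values between hex 19 and hex 20:
between_19_and_20 = [to_hex(n) for n in range(0x19 + 1, 0x20)]
print(between_19_and_20)  # ['1A', '1B', '1C', '1D', '1E', '1F']

# And hex 99 is followed not by 100 but by 9A:
print(to_hex(0x99 + 1))  # 9A
print(to_hex(0xFF + 1))  # 100
```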

Of course it is not only alphanumeric notations that perplex—so do purely numeric ones to bases less than ten. That is because the number poem is a poem to base ten; the sequence 1, 10, 11, 100, 101, 110, 111, 1000 in the binary system would have to be read "one, two, three, four, five, six, seven, eight," not "one, ten, eleven, one hundred," etc., in order to refer correctly in ordinary language to the numbers in question, and this strains the intelligibility of the written characters. Something of the same sort happens with Roman numerals—most of us have to make a more or less conscious translation of MDCXLVII into 1647 as we read it off, much as we do with familiar words in an unfamiliar script, Cyrillic for example (try reading "CCCP" as "SSSR").
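Both of these translations can be mechanized (a sketch; the subtractive-notation parser below is a standard routine, not anything from the text):

```python
# Reading binary strings back into the base-ten "poem":
binary_poem = ["1", "10", "11", "100", "101", "110", "111", "1000"]
print([int(b, 2) for b in binary_poem])  # [1, 2, 3, 4, 5, 6, 7, 8]

# Roman numerals: a symbol's value is subtracted when a larger one follows it.
ROMAN = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

def from_roman(s: str) -> int:
    total = 0
    for ch, nxt in zip(s, s[1:] + " "):
        value = ROMAN[ch]
        total += -value if ROMAN.get(nxt, 0) > value else value
    return total

print(from_roman("MDCXLVII"))  # 1647
```

The explicit loop is the "more or less conscious translation" the text describes: unlike place-value notation, the Roman string has to be scanned for subtractive pairs before it yields a number.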

So far these considerations are purely discursive—they do not bear on the properties these two systems of representation may serve to articulate, but only on the existence of the systems themselves, and the ordering and legibility of their elements, the letters and numerals. But it is evidently not just a curiosity that these systems should exist, and it is worth reflecting on what brought them into being. Letters were the issue of a long evolution of modes of representing what could be conveyed in speech, pictorially and then pictographically and then hieroglyphically. (No doubt at the same time speech itself developed to express distinctions that had shown up graphically.) At some point the connection between the system of representation and the content of what was said gave way to a connection between the system of representation and the sound of what was said. This reinforced a separation between discourse and the world that had begun far earlier with the abandonment of any necessary connection between sound and sense, a move from motivated sound elements to merely differential ones.

With the numerals the story was somewhat different. They seem to have been invented (if etymology is to be trusted at all) in connection with a special social activity, the acquisition and distribution of goods (Latin numerus is connected with Greek nemo, to deal out, dispense, thence to hold, possess, etc.; one of the derivatives of this verb is nomos, meaning among other things a law that assigns lots and places to people and things, from which in turn philosophers of science have derived "nomological," thus indirectly reinforcing the connection between mathematics and the laws of nature). This activity necessarily involved on the one hand gathering and counting, on the other dividing, apportioning and so on, and one can imagine the closeness of the attention paid to the sizes and quantities of things in these processes. The concepts of more and less are attached to powerfully affective modes of relating to the world, involving property and justice, security and self-esteem. It has been noticed by educators among others that people with apparently undeveloped mathematical talents may be quantitatively knowledgeable or even sophisticated when their interests in fair shares or sums of money are engaged.

Two Kinds of Predicate

There is an interesting difference in the uses of these two systems, hinted at above in the remark about ordinality and cardinality. Either system can be exploited for the purposes of ordering, on the basis of the conventional structure of its poem: we know that K comes before L just as we know that 11 comes before 12. But the development of the alphabetic system goes in the direction of arbitrary associations of letters and sequences of letters with sounds, and thence of the arbitrary association of sounds with conceptual contents, that is, in the direction of language and its "double articulation." The development of the numeric system, on the other hand, goes in the direction of systematic combinations of numbers and thence of their systematic interrelations among themselves; insofar as they are associated with conceptual content this remains external. Words mean by referring to things in the world; numbers do not—they mean only themselves, though they can be attached to and modify the referents of associated linguistic elements. If I say "ten grey elephants," the terms "grey" and "elephant" refer to each of the entities in question or to their properties, but the term "ten" doesn't refer to any of them or even to all of them as the entities they are; it refers only to the cardinality of the collection to which they happen to belong. If I had said "ten grey owls," it would make sense to ask, of "grey," whether it was the same grey as in the case of the elephants or a different grey; but it wouldn't make sense to ask if it was the same ten, or a different one.
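The asymmetry can be put in programming terms (a toy sketch; the Elephant class and its colour field are invented for illustration, not anything from the text): "grey" is checked member by member, while "ten" is a fact about the collection alone.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Elephant:
    colour: str  # a property each elephant bears individually

herd = [Elephant("grey") for _ in range(10)]

# "Grey" is a predicate of every member...
all_grey = all(e.colour == "grey" for e in herd)

# ..."ten" is a predicate only of the collection: no single elephant
# "is ten", and no elephant need change for the herd to go from ten
# to eleven--another elephant merely arrives.
herd_size = len(herd)
bigger_herd = herd + [Elephant("pink")]

print(all_grey, herd_size, len(bigger_herd))
```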

Is "ten" an adjectival property of "ten grey elephants" in the same sense that "grey" is? That, in a nutshell, is the problem of the qualitative and the quantitative. It certainly looks as if there is a radical difference here: they couldn't be elephants if they weren't grey but they could certainly be elephants if they weren't ten. Well, they couldn't be ten elephants, but that sounds tautological. Wait a minute, though—why not say similarly that the only thing ruled out by their not being grey is their being grey elephants (they might still be pink elephants)? Even so we are tempted to feel that the greyness (or pinkness as the case may be) inheres in the elephants in a way that the quality of being ten does not; going from ten to eleven is a contingent and external move, requiring nothing more exotic than the arrival of another elephant, whereas going from grey to pink seems like an essential and internal move, requiring a general metamorphosis on the part of all ten elephants.

"The quality of being ten"—this expression sounded natural enough when I used it a few lines back. It wasn't a quality of the elephants exactly, but rather of the collection they happened to constitute, which however might quite as well have been constituted by penguins, nebulae, or abstract entities. Call it a set: students of elementary abstract set theory have to get accustomed to the irrelevance of the obvious properties of the members of sets as individuals, to dealing with sets whose only members are, say, {Napoleon, and the square root of minus one}, or {the empty set, and the Lincoln Memorial}, and to recognizing that the cardinality of these sets, which we call two, is the same (and the same as the cardinality of the set that contains both of them—and of the set that contains {the empty set, and the set that contains both of them}).
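
The indifference of cardinality to the nature of a set's members can be sketched in Python; the members below merely stand in for the book's examples (a string and a complex number for Napoleon and the square root of minus one, `frozenset()` for the empty set, since Python set members must be hashable):

```python
# Cardinality ignores what the members are; only how many there are matters.
a = {"Napoleon", 1j}                   # stand-ins: Napoleon and sqrt(-1)
b = {frozenset(), "Lincoln Memorial"}  # stand-ins: the empty set and the Memorial
c = {frozenset(a), frozenset(b)}       # the set containing both of the above sets

print(len(a), len(b), len(c))          # prints: 2 2 2
```

The point survives any substitution of members: nothing about the members as individuals, only their number, enters into the cardinality.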

The quality of cardinality is something that only sets have: what it permits is an unambiguous classification of sets according as they have more or fewer members than, or the same number of members as, other sets. Perhaps I should have said, the qualities of cardinality, since two is different from three and both are different from 10^10. At the lower end of the scale of cardinals (it doesn't have an upper end) these qualities are perceptible and have common names: pair or couple, triad or threesome, etc. Other names for numbers (dozen, score) are generally survivals from alternative poems rather than directly descriptive predicates: applying them correctly normally requires counting out. In special (pathological?) cases the perception of cardinality can apparently go much higher: the neurologist Oliver Sacks recounts (an expression that in itself reflects the overlapping of the qualitative and the quantitative in ordinary language) an episode in the lives of a pair of idiot savant twins in which someone drops a box of matches and they both say at once, "111!" When asked how they could count so quickly, they say they didn't count, they saw.

They seemed surprised at my surprise—as if I were somehow blind: and John's gesture conveyed an extraordinary sense of immediate, felt reality. Is it possible, I said to myself, that they can somehow "see" the properties, not in a conceptual, abstract way, but as qualities , felt, sensuous, in some immediate, concrete way?[2]

The choice of words here reinforces the suggestion that the rest of us too think of small numerical attributes as qualitative, and that they become properly quantitative only when the numbers are too large to be attributed without counting.

If we now revert from speaking of sets as such to speaking of their members, we say that there is a quantity of them—but they don't thereby acquire any new qualities. Things get complicated, though, when this habit of switching attention back and forth from sets to members of sets follows the development of the number system from the integers or natural numbers, in connection with which the idea of cardinality was first defined, to rational, real, or even complex numbers. By introducing the concept of unit, which makes some standard embodiment of a quality such as length or weight (the standard meter, the standard kilogram) the sole member of a set of cardinality one, and specifying a rule of matching (laying end to end, piling up in the scale of a balance) that will generate sets of higher cardinality whose members will be units (fractions of units being relegated to fractional scales, where the new units are fractions of the old: a tenth, a hundredth, etc.), cardinality comes to be attached by courtesy to other objects embodying the quality in different degrees. Instead of "longer" and "shorter" we now have "11 meters" and "10.3 meters," which define whole classes of longers and shorters among indefinitely many such possible classes. Our interest in "11 meters" as a defining property of some object was initially, no doubt, a desire to know what it was longer or shorter than, or the same size as, but "11" came to attach to it as a predicate along with "blue," "soft," "glutinous," and whatever other qualities our postulated 11-meter object may be supposed to have. And before we knew it, our language was stocked with ratios, averages, angles, temperatures, coefficients, dates, times, indices, prices, and other numerically-expressed predicates as familiar and useful, in our commerce with things in the world, as any other qualities by which they might be distinguished from one another.

The specifications of degree among objects sharing a given quality, which quantitative predicates make possible, have been available in some technical contexts for a long time, but their general invasion of daily language is relatively recent. That the temperature should be "in the sixties" has of course been a possible determination only since the invention of the Fahrenheit scale and the general availability of thermometers, i.e. since the early eighteenth century. But a temperature "in the sixties" has nothing to do with the number 60 or the cardinality it represents; it has to do with spring and light coats, while "in the twenties" means bitter cold and "in the nineties" intolerable heat. Note that the expressions "in her twenties" and "in his nineties" coexist with these unambiguously, as indeed do "in the twenties," "in the sixties," and so on, as applied to the years in a given century, but that these in their turn mean young and beautiful or old and wizened, flappers and flower children, rather than anything quantitative. It is interesting to find that although these latter expressions have been available for much longer than is the case with the weather, birthdays and calendars having been marked by cardinals for centuries, they were not in fact used until about the same time; whether it be temperatures, ages, or years, the first occurrences of the expressions "twenties," "thirties," etc., up to "nineties," are all given by the Oxford English Dictionary as falling between 1865 and 1885.

It was at about this time, in 1878 to be exact, that Friedrich Engels, in Herr Eugen Dühring's Revolution in Science (commonly known as the "Anti-Dühring"), gave popular form to the principle, introduced by Hegel and utilized by Marx, of the passage of quantity into quality. Hegel speaks of "nodal lines" in nature, along which incremental quantitative changes are accompanied, at the nodes, by qualitative shifts. Such a shift is "a sudden revulsion of quantity into quality," and Hegel offers as an example "the qualitatively different states of aggregation water exhibits under increase or diminution of temperature."[3] Engels too cites this as "one of the best-known examples—that of the change of the state of water, which under normal atmospheric pressure changes at 0°C from the liquid into the solid state, and at 100°C from the liquid into the gaseous state, so that at both these turning-points the merely quantitative change of temperature brings about a qualitative change in the condition of the water."[4]

This "brings about," however, is highly misleading. It gives the impression that temperature is a property of water that is causally related to its state: change the (quantitative) temperature, and the (qualitative) state will change. The fact is that at the boiling and freezing points the temperature can't be changed until the state has changed. What happens is this (I will take the case of boiling, which applies mutatis mutandis to freezing also): steadily supplying enough heat energy to water will raise its temperature to 100°C; at this point supplying further energy will not change the temperature but will dissociate the molecules from one another so that they become steam at 100°C; when all the water has been changed to steam then, assuming a closed system, the supply of still further energy will raise the temperature of the steam above 100°C. But if the process begins at room temperature it will take about seven times as long to change all the water into steam as it took to raise the water to the boiling point.
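
The "about seven times as long" estimate can be checked from standard handbook values; this is a sketch assuming a specific heat of about 4.18 J/(g·K), a latent heat of vaporization of about 2260 J/g, a starting room temperature of 20°C, and a constant rate of heating (so that ratios of energy are also ratios of time):

```python
# Energy per gram to heat water from room temperature to boiling,
# versus energy per gram to turn boiling water into steam.
SPECIFIC_HEAT = 4.18      # J/(g*K), approximate handbook value
LATENT_HEAT = 2260.0      # J/g, latent heat of vaporization at 100 C
room, boiling = 20.0, 100.0

heat_to_boiling = SPECIFIC_HEAT * (boiling - room)   # about 334 J/g
ratio = LATENT_HEAT / heat_to_boiling                # about 6.8

print(f"{heat_to_boiling:.0f} J/g to reach boiling; "
      f"vaporizing takes {ratio:.1f} times as much energy")
```

At a constant rate of heat supply, the energy ratio of roughly 6.8 is also the time ratio, which rounds to the "about seven times" of the text.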

So there are two things wrong with the Hegel-Engels account: first, it isn't changing the temperature that changes the state, and second, the change is not sudden. As I have pointed out elsewhere,[5] when water boils because it is heated from the bottom, the change of a small amount of it into steam makes dramatic bubbles, and this is not a bad analogy for repressed change, which was one of the popular senses in which the dialectical principle of quantity and quality came to be understood: history will accumulate exploitation and repression incrementally, until crisis and revolution suddenly ensue. And this may indeed happen—only the quantity/quality distinction has nothing to do with it. Water froze and boiled long before temperatures were thought of, and when we talk about "the boiling point" and attach a number to it (note by the way that it is impossible to measure the boiling point at standard atmospheric pressure in degrees Celsius, since 100°C is defined as the boiling point of water at standard atmospheric pressure), the number by itself does not refer to anything that is true of the water, but (as before) only to the cardinality of a collection of units.

This point can be driven home in various ways. One of the remarkable and useful features of the exact sciences is that quantities can be measured and the measurements plugged into computations. The qualities whose degrees are attended to in the process of measurement (or predicted by the outcome of the computation) are sometimes thought to enter into the computations. Thus in the most elementary case of a freely falling body initially at rest we have the equation:

s = (1/2)gt^2

which means "the distance fallen is equal to half the acceleration of gravity multiplied by the square of the time elapsed." But a moment's thought will show that this can't possibly be what is meant: times can't be squared; only numbers can. Nothing can be multiplied by an acceleration. The expression is only a shorthand way of saying that measurements of the distance, the acceleration, and the time, using compatible units, will yield numbers that stand in the required arithmetical relation. In the algebraic expression given above s isn't a distance at all, it's a variable that can take numerical values, and so for the other elements.
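
The separation the text insists on can be made explicit in a short sketch: the units and qualities stay outside the computation, which operates on bare numbers (g taken as 9.8 in meter-second units and t as 3 seconds; the values are illustrative):

```python
# The free-fall relation operates on numbers, not on times or accelerations.
# The units live outside the computation, in how the numbers are read.
g = 9.8          # the NUMBER obtained by measuring the acceleration in m/s^2
t = 3.0          # the NUMBER obtained by measuring the elapsed time in seconds

s = 0.5 * g * t**2   # pure arithmetic: squaring a number, not a time
print(f"{s:.1f}")    # prints: 44.1 -- read back into the world as "44.1 meters"
```

What gets squared is 3.0, a number; the result 44.1 becomes a predicate of the falling body only by the reverse step of attaching the unit again.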

The coincidence of Engels's popularization of dialectical doctrine on the one hand, and the emergence of numerical expressions as descriptive in ordinary language on the other, suggests that the latter paved the way for the general confusion represented by the former. We can use numbers to describe things, but unless the thing described is a set or collection with a given cardinality, they won't be functioning as numbers, just as predicates to be defined in the ordinary way and eliminable by substitution. Their use will be a metaphorical use. Yet in the last hundred years or so people have thought of themselves as getting hold of a special numerical or even mathematical feature of things when they use numbers in this way, a quantitative feature at any rate. And when the numbers change concomitantly with some notable qualitative change we have all the appearances of a passage from quantity to quality.

Notable and Just Noticeable Differences

The idea of concomitant change ("concomitant variation," to use Mill's phrase) is basic to the scientific enterprise: we want to know, if we make some change in the world, what else will also change, so that we can achieve or avoid it. Changes can be large or small, dramatic or marginal. Group sizes change by the addition or subtraction of members, other properties by augmentation or diminution, intensification or dilution, etc., or by outright metamorphosis, one property being replaced by another. Cumulative marginal changes, each of which is hardly noticed, may eventually result in states so altered that they require altogether different descriptions. But this phenomenon is context-dependent and works on both sides of the qualitative-quantitative boundary. If a large surface, a wall for example, has always been red, but suddenly overnight is painted yellow, the change is startlingly obvious, but if its red color is modified very slowly, through an imperceptible shift in the direction of orange and progressively through lighter and lighter shades, until finally the last trace of red has vanished and the wall is pure yellow, the fact that it has changed at all may dawn only slowly, and then only on an observant witness with a good memory (imagine the change stretched out over centuries, so that in any one witness's life it was just an orange wall). Psychologists speak of "jnd's" or "just noticeable differences" as a measure of the refinement of perception (similar to "resolving power" in optics), a threshold below which changes cannot be perceived, so that several subliminal moves may be possible before anything is noticed—and indeed if they are made at suitable intervals nothing may ever be noticed.

Something very similar happens on the quantitative side if the sets in question are sufficiently large. If one person is in a room and another enters, the change is obvious enough, and similarly if a third joins a couple, but if forty people are watching a parade, let us say, the arrival of the forty-first may go entirely unremarked. Still if people keep coming, one by one, sooner or later we have a huge crowd, a demonstration, a triumph—and when exactly did this happen? There is an ancient paradox called The Heap: a grain of wheat is set down, then another grain, and so on; eventually there is a heap, but which grain was it that turned a scattering of grain into a heap? This paradox was presumably intended to remain paradoxical—no empirical research was done, as far as I know, to find out when impartial observers would start to use the term "heap" without prompting. (My guess is that four grains, in a tight tetrahedral array, would qualify as a very small heap, whereas if the procedure were to scatter randomly over a given area, say a square yard, there would be a range of many thousands of grains over which the status of the accumulation as a heap could be disputed.) The point the paradox makes is that categoreal boundaries, for example, between "scattering" and "heap," are fuzzy, but that surely comes as no surprise and hardly makes a very convincing foundation for philosophical doctrine, whether metaphysical or revolutionary.
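
The guess about heaps can be put in the form of a toy classifier with an explicitly disputed band; the thresholds below are illustrative stand-ins for the empirical research that, as noted above, has never been done:

```python
# A sorites-style classifier with a fuzzy middle band. The thresholds are
# invented for illustration, echoing the author's guesses in the text.
def classify(grains, scattered=True):
    if not scattered:
        # the tight tetrahedral array: four grains already make a small heap
        return "heap" if grains >= 4 else "not a heap"
    if grains < 2000:
        return "not a heap"
    if grains > 20000:
        return "heap"
    return "disputed"        # the range over which observers would disagree

print(classify(4, scattered=False))   # prints: heap
print(classify(5000))                 # prints: disputed
```

The code makes the philosophical point mechanical: the boundary is not discovered but stipulated, and different stipulations would serve equally well.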

The dialectical law of the passage of quantity into quality, like its companions, the law of the interpenetration of opposites and the law of the negation of the negation, is thus seen to be an entertaining but nonessential red herring. There are cases in which cumulative imperceptible changes in x lead to the emergence of y, and there are cases in which they just lead to more x—and either x or y can be indifferently qualitative or quantitative predicates; everything depends on the particular case, and can only be learned by looking. Adding atom after atom to a lump of uranium 235 eventually produces an atomic explosion and an assortment of vaporized fission products; adding atom after atom to a lump of gold just produces a bigger lump of gold. Water when refrigerated changes into ice; iron when refrigerated gets colder but doesn't change into another form. No general law can be established that would be of any reliable predictive value; as in any empirical situation, the correlations cannot be generalized in advance but must be learned for each case or class of cases. That solids will eventually melt on heating, and liquids vaporize, can be expected within limits, but even there other forms of dissociation may take place, and nothing whatever is gained by claiming these phenomena as examples of the dialectic in nature.

The contingency of the relation between quantitative and qualitative change, its dependence on the state of the system, can be illustrated by the following thought-experiment, in which A is a pedestrian walking slowly towards the edge of a cliff C:

[Figure: A walks step by step along level ground, in the direction of an arrow, toward the cliff edge C.]

Cumulative quantitative displacements of A in the direction of the arrow will lead to a dramatic qualitative change in his situation at point C (call it the "falling point"), but nobody would seriously think of attributing this to the quantitative change as such, only to its taking place near the edge of the cliff.
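
The thought-experiment can be rendered as a toy simulation, with the cliff's position and the step size as arbitrary choices; every quantitative increment is identical, and the qualitative change depends only on where the edge happens to be:

```python
# Identical quantitative steps; the qualitative change is fixed not by the
# steps themselves but by the state of the system (the cliff's location).
CLIFF_EDGE = 10.0     # meters from the start; an arbitrary, illustrative choice

def state(position):
    return "walking" if position < CLIFF_EDGE else "falling"

pos, step = 0.0, 1.0
while state(pos) == "walking":
    pos += step                # every increment is the same size and kind

print(pos, state(pos))         # prints: 10.0 falling
```

Move `CLIFF_EDGE` and the "falling point" moves with it; nothing in the sequence of displacements themselves predicts where the qualitative shift occurs.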

These considerations do not abolish the differences between qualitative and quantitative but they do suggest fresh ways of thinking about them. In particular it is not clear that they need be accepted as dividing the field when it comes to determinations of the state of the world in various respects. Both derive from members of a family of Latin adverbs beginning with "qu-," all of which have interrogative uses, whose form was presumably determined by the verb quaero, to seek, ask, inquire. So qualis?, from which "qualitative" derives, means in effect, "I ask: what sort?" while quantus? similarly means, "I ask: how much?" We may think of this "qu-" prefix as a kind of question mark, and translate qualis and quantus respectively as "(?)sort" and "(?)degree." However there are lots of other possible questions, and Latin provides for them: (?)manner will give quam or quomodo; (?)time, quando; (?)elapsed time, quamdiu; (?)reason, quia or quare; (?)distance, quoad; (?)place, quo; (?)number, quot; (?)frequency, quoties; (?)number in series, quotus, and so on. Why should there not therefore be quamitative, quanditative, quaritative, quotative and quotitative inquiries, as well as qualitative and quantitative? And yet these last two are the only survivors to have made it into our ordinary language, and this means, if we are to take Austin seriously, that only one difference or opposition out of this whole crew was important enough to be preserved. The question is, what opposition was it?

I shall suggest that it was not the sort of opposition that divides the world into a part that is qualitative and a part that is quantitative, or that allows the transition of one sort of predicate into the other according to any law, no matter how dialectical. The world is as it is and its states are amenable to description on condition of our having a suitable language at our disposal; every element of every state invites the question of what sort of thing it is, what sort of thing is going on. Let this be the general question, the descendant of qualis?, to which the answer may be in diverse modes: spatial, temporal, causal, numerical and so on. If the last among these rather than the others singles itself out for special attention, why might this be?

Separation of the Mathematical Apparatus

It should be noticed at once that something is slipping here—if numerical properties had been the issue surely quot rather than quantus should have been the root of our own expression. This slippage indicates, I think, where our own confusion lies. The questions "what sort?" and "how much?" are both required if the entity or event under investigation is to be estimated correctly in relation to other things; both are differential questions, and the answers to them provide the coordinates that locate the object in an array of types and magnitudes: the first distinguishes it from other objects of different sorts, the second compares it with other objects of the same sort. The latter purpose, however, can be served in diverse ways—within a given category there can be more than one dimension of variety. So a series of possible orders may be envisaged, in which the members of the category might be arranged, and for each order an ordinal sign may be assigned to each member. For this purpose we are not unlikely to call upon one of the poems with which we began. And the discovery that if we choose the number poem we may also be able to make use of cardinality, and even perform computations that will accurately predict some features of the ordering in question, will come as a surprise and a revelation.

It is just this formal and computational aspect of the matter that brings in the quantitative as it has generally come to be understood. One of the earliest discoveries along these lines was made by the Pythagoreans, who correlated the ratios of lengths of stretched strings with musical intervals. They thought this discovery sacred, and indeed it is hard to imagine the awe and astonishment it must have produced. I suspect (indeed I remember) that something like it can happen in childhood when elementary mathematical truths suddenly dawn, but that is an expected step, an entry into a known domain, not as for them the opening up of something novel and incredible. Pythagorean doctrine concluded that the world was at bottom numerical, which involved a category mistake but nevertheless set the tone for a long tradition. The beginning of modern science was marked by Galileo's resolve to make the "definition of accelerated motion [i.e., its mathematical expression] exhibit the essential features of observed accelerated motions,"[6] a scrupulous formulation that seems unnecessary to us, because obvious, but that required new clarity on his part. The comparable claim in his case was that "the book of nature is written in the language of mathematics," which does not involve a category mistake but does assume a parallel between an intelligible domain (the book and its mathematics) and a sensible one (nature); here also Galileo was scrupulous and clear, though his remark has frequently been interpreted as meaning that "nature is mathematical," which brings back the mistake.

These episodes represent steps in a process of realization that reached its full formulation with the Turing machine: the realization that all relations between exactly specifiable properties of all the things in the world can be modeled to as close an approximation as desired in logico-mathematical language. This development is recounted with great perspicuity in Husserl's The Crisis of European Sciences and Transcendental Phenomenology, in which he speaks of "Galileo's mathematization of nature," and in a brilliant image describes a tendency to "measure the life-world—the world constantly given to us as actual in our concrete world-life—for a well-fitting . . . garb of symbols of the symbolic mathematical theories."[7] The success of this program of measurement however leads to "the surreptitious substitution of the mathematically substructed world of idealities for the only real world, the one that is actually given through perception, that is ever experienced and experienceable—our everyday life-world."[8]

The properties of things in the life-world are what we would normally and generally call "qualities," and the only qualities that permit of direct mathematical expression are precisely the properties of sets or collections already discussed; all the others have to be translated into sets or collections, through the specification of units and combinatorial procedures. This process has been called "substruction" by Paul Lazarsfeld, independently, I take it, of Husserl's use of the term (cf. the quotation above); it "consists essentially in discovering or constructing a small number of dimensions, or variables, that underlie a set of qualitative types."[9] The actual carrying out of the process will involve distinctions between ranked and scalar variables, discontinuous and continuous scales, ratio and interval scales, etc.;[10] fitting the life-world with its mathematical garb is a busy and demanding industry.


Only sets or collections, properly speaking, can be said to have quantitative properties, and these in the end will all turn out to be numerical—Husserl speaks of the "arithmetization of geometry," of the transformation of geometrical intuitions into "pure numerical configurations."[11] (This claim is no doubt oversimplified—there may be topological features, such as inclusion or intersection, that have nonnumerical expressions, though these would not normally be called quantitative.) What are thought of as quantitative properties of other entities, such as length, temperature, density, etc., are so many qualitative properties with respect to which however an entity may change its state over time, or otherwise similar entities may differ from one another. Such differences are themselves qualitative, though they may be given numerical expression. It is important to realize that, for example, the difference in height between someone five feet tall and someone six feet tall is not a numerical difference, even though the difference between five and six is a numerical difference. At every given instant every entity is in the state it is in, with the qualities it has. These may include vectors of change or becoming. Whether such vectors essentially involve quantities—that is, whether becoming involves at every infinitesimal moment a change in the size of a collection—is a question as old as Zeno, which however need not be answered in order to characterize a momentary state.

Of course collections may change their cardinality with time, and we can over suitably large time intervals make other changes into changes in the cardinality of collections by choosing to represent them numerically. In counting and measuring we have two ways of generating numerical predicates out of determinate qualitative situations. The numbers so generated can be inserted into more or less complicated mathematical expressions and made the objects of computation; the numerical outcome of the computation may then by a reverse process be applied to a new qualitative feature of the original situation, or to the same feature of a transformed situation. The rules according to which all this is done (the generation of the numbers, the computations, and the application of the results) have to be learned empirically, as Galileo realized; in this way a number of mathematical relations and formulae are selected from the potentially infinite store of such things and given physical meaning by courtesy. But the mathematical work is entirely carried out within mathematics; measurement shifts attention from quality to quantity, crossing the boundary between the sensible and the symbolic. This shift corresponds to what Braithwaite, in his Scientific Explanation , called the "separation of the mathematical apparatus."[12]


Qualitative and Quantitative Revisited

Qualitative and quantitative do not divide up a territory; they both cover it, overlapping almost totally. But one is basic and the other optional. Everything in our world is qualitative; but virtually everything is capable—given suitable ingenuity on our part—of generating quantitative determinations. Whether we want to expend our ingenuity in this way is up to us. The United States Bureau of the Census, whose main business might seem to be quantitative, has nevertheless an interest in questions of "the quality of life," and has devoted a good deal of attention to efforts that have been made to translate expressions of satisfaction or dissatisfaction into numerical measures. The standard trick is to develop an ordinal ranking and then assign cardinal values to the positions within it for the purpose of drawing graphs, performing statistical computations, etc. The SIWB scale, for example (the initials stand for Social Indicators of Well-Being), assigns the integers 1 through 7 to "terrible," "unhappy," "mostly dissatisfied," "mixed," "mostly satisfied," "mostly pleased," and "delighted."[13]
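
The "standard trick" can be sketched directly from the scale as described; the labels are from the text, while the sample responses and the averaging are invented for illustration:

```python
# An ordinal ranking mapped onto the integers 1-7 so that statistics
# can be computed over what began as qualitative judgments.
LABELS = ["terrible", "unhappy", "mostly dissatisfied", "mixed",
          "mostly satisfied", "mostly pleased", "delighted"]
SCORE = {label: i for i, label in enumerate(LABELS, start=1)}

# Invented sample responses, purely for illustration:
responses = ["mixed", "delighted", "mostly satisfied", "mostly pleased"]
values = [SCORE[r] for r in responses]
mean = sum(values) / len(values)

print(values, mean)   # prints: [4, 7, 5, 6] 5.5
```

The mean of 5.5 falls "between" two labels that no respondent can actually report, which is exactly the sense in which the cardinal values are assigned by courtesy rather than measured.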

One possible use of the results of inquiries on such bases (or improved ones—the Census people seem realistically aware of the shortcomings in the state of their art) might be to produce correlations between these measures and quantities that permit of objective assessment, such as income, energy consumption, cubic feet of living space, number and horsepower of automobiles, etc. These might throw light on some aspects of our common systems of value. But it is worth noting that the starting-point here is not an experimental procedure but an appeal to the judgment of an individual. The individual does not need the quantitative apparatus, only in the first instance an awareness that better or worse conditions are possible, and a subjective conviction of distress or euphoria as the case may be. This is what I mean by saying that the quantitative is optional: our lives would be in some important respects just what they are if we did not know the date or the time or the temperature, or perhaps even our ages or bank balances or IQ's or cholesterol counts. In some significant respects they might be better. I do not mean this as a regressive criticism of measurement or computation, without which we would be at the mercy of old forces from which they have helped to deliver us, but rather as a comment on the use of the metaphorical language of number.

The French used to make fun of tourists who insistently wanted to know the population of this city, the height of that building, by calling them hommes chiffres, "number people." It is worth asking what use is to be made of numerical information. Sometimes numbers are reassuring or threatening, as when they mean that I can expect to live a long time, or that I run such and such a risk of having a certain sort of accident. Sometimes they give me a sense of solidarity with a community, sometimes a sense of inferiority or superiority. Sometimes there is an effect of scale, as when the numbers of people killed at Hiroshima or in the Holocaust boggle the imagination—genuine cases, perhaps, of a psychological transformation of quantity into quality (and with nothing metaphorical about the numbers either). But in every case, even these, I or other individuals must prosper or suffer singly. The quality of pain or terror or despair involved in a quite private injury or death or betrayal may match anything any individual can feel or have felt in a mass event.

The value of genuinely collective measures—aggregates, averages, and the like—remains unquestioned, but the question as to the role of numerical determinations in the descriptive vocabulary remains open. Part of my argument has been that when these come about as a result of measurements they are to be understood not as quantities but as disguised qualities. Their use as such has drawbacks as well as advantages. There is a short story of Hemingway's, "A Day's Wait," that may serve as a closing illustration. An American child who has lived in France falls ill, and overhears the doctor telling his father that he has a temperature of 102°, upon which he withdraws into himself, stares at the foot of the bed, and won't let people near him. Only at the end of the day does it dawn on his father that he takes this 102 to be in degrees Celsius, a scale on which he has been led to believe a temperature of 44° to be surely fatal, and that he has been quietly preparing for death. The story ends on a happy if shaky note. But in a world where plunges in the stock market index have been known to provoke plunges from high windows there may be room for the renewed cultivation of quality unmediated by quantity, leaving the quantities to do their undeniably useful work in their proper domain.
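
The arithmetic behind the boy's mistake is the ordinary Fahrenheit-to-Celsius conversion; a brief sketch:

```python
# The same numeral, two scales: the boy hears "102" (Fahrenheit) and
# reads it against the Celsius scale he learned in France.
def f_to_c(f):
    """Standard Fahrenheit-to-Celsius conversion."""
    return (f - 32) * 5 / 9

temp_f = 102.0
print(f"{temp_f} F is {f_to_c(temp_f):.1f} C")   # prints: 102.0 F is 38.9 C
# Read as Celsius, 102 lies far beyond the 44 C he had been led to believe
# was fatal; hence the day spent quietly preparing for death.
```

The number 102, detached from its scale, carries none of the quality (a worrying but survivable fever) that it was meant to convey.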


On Being in the Same Place at the Same Time

"Nobody has ever noticed a place except at a time," says Minkowski, "or a time except at a place."[1] One might add, "and nobody has ever noticed a place except here, or a time except now." With this addition, what was meant as an innocent argument for the interdependence of space and time becomes a serious obstacle to all cosmologies in the traditional sense. This paper is an attempt to draw out some of the philosophical consequences of the fact that the observer must always be located here and now. It is a commentary on some aspects of the theory of relativity which seem still, after half a century, to be misunderstood.

The quotation from Minkowski is taken from his paper on space and time in which he introduces the postulate of the absolute world: "the substance at any world-point may always, with the appropriate determination of space and time, be looked upon as at rest." His decision to use the term "absolute" to describe the four-dimensional world of space-time seems curious, since the theory which led to this view of the world was a theory which promised freedom from absolutes and their replacement by relativistic determinations, but it illustrates just that ambivalence in the theory of relativity with which this paper will be concerned. The loss of absolute rest and motion in absolute space and time was a serious shock to physics, comparable to the loss (in more recent developments) of certain aspects of conservation and symmetry; in both cases the immediate reaction was to look for some way of restoring, in a slightly modified form, what had become psychologically indispensable. In the relativistic case the new version appeared to be even better than the old; the effect of Minkowski's world-postulate is,


as Cassirer points out, that "the world of physics changes from a process in a three-dimensional world into a being in this four-dimensional world."[2] The world-postulate has its first and most obvious application at the place and time where the observer happens to be, but it was of course assumed that it might be applied equally well anywhere else in the universe. The observer may be considered at rest, but then for purposes of argument any other point may just as well be considered at rest. In fact, however, considering other points as at rest is only a game—for serious scientific purposes the observer must be considered at rest. There is no such thing as a moving observer.

This conclusion was foreseen by as early a thinker as Nicholas of Cusa. "As it will always seem to the observer," he says, "whether he be on the earth, or on the sun or on another star, that he is the quasi-motionless center and that all the other things are in motion, he will certainly determine the poles of this motion in relation to himself. Thus the fabric of the world will quasi have its center everywhere and its circumference nowhere."[3] The last sentence is as succinct a statement of the theory of relativity as could easily be found. For Cusa the quiescence of the observer poses no problem, but that is because he too believes in an absolute, namely, God, in whom all opposites are reconciled—motion and rest, center and circumference, maximum and minimum. Apparent contradictions are tolerable when there is a divine guarantee of their ultimate resolution. But such mystical resources are no longer available to us, and the denial of the possibility of a moving observer—the claim that such a being is a contradiction in terms—is intended here as something more than an exemplary paradox.

The assertion appears paradoxical, in fact, only because we are all conditioned to Newtonian modes of thought. In a homogeneous three-dimensional universe all vantage points will be equivalent, and motion from one to another is possible without any distortion of phenomena. To put the same thing in another way, observers are interchangeable. And in the Newtonian system God is over all, the generalized observer whose omnipresence is a guarantee of the universality of the laws of motion. Every Newtonian observer could take God's point of view (i.e., any point of view removed from his or her own) and from it regard the world, observer included, sub specie aeternitatis. The reference to Spinoza is deliberate; Newtonian mechanics was a physical counterpart of Spinoza's ethics, and each rested on the possibility of seeing the world in God. Unfortunately a belief in this possibility has persisted; although contemporary scientists would hardly describe it in quite that way, many of them feel that the task of science is to give an account of the world which shall be independent of any particular perspective. But this is quite impossible.


The reason why the theory of relativity was widely thought to provide another absolute account was that it did in fact offer a formulation of the laws of nature invariant between observers, whatever their state of motion with respect to one another. (It is to be remembered that it is always the other observer who is moving.) Laws of nature had always been thought of as rules obeyed by the universe as a whole, and an invariant formulation of them was taken to be a new and more compendious way of saying what the universe, as a whole, was like. But with the new theory came a new insight into the nature of scientific law. A law (and this is by now so familiar that it seems hardly worth repeating) is simply a generalized relationship between observations, each made at a particular time and at a particular place; and the invariance of a formulation of such a law means simply that it can be applied to sets of observations taken in different times and places with equal success. But these observations can never be mixed, and if we wish to insert data from an observation at A′ into calculations based on observations made at A, they will first have to be transformed according to some set of transformation equations appropriate to the shift from A′ to A. There is no law which is capable of application to the universe as a whole.
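For observers in uniform relative motion the transformation equations in question are the Lorentz transformations of special relativity. A minimal sketch, restricted to one spatial dimension (the function name and the one-axis restriction are mine, not Minkowski's):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def lorentz(x, t, v):
    """Transform the coordinates (x, t) of an event, as recorded by an
    observer receding from me at velocity v, into my own rest frame.
    One spatial dimension only; a sketch, not the full formalism."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return gamma * (x - v * t), gamma * (t - v * x / C ** 2)

# An event one light-second away, one second after the origin, reported
# by an observer receding at 60% of the speed of light:
x_new, t_new = lorentz(C, 1.0, 0.6 * C)
print(x_new / C, t_new)  # approximately 0.5 light-seconds and 0.5 seconds
```

Only after such a transformation can the other observer's data be inserted into calculations based on observations made here.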

Such a law, if it existed, would in any case be far too powerful for any practical purposes. The function of laws is to provide explanations, and there is only one world which calls for explanation, namely, my own world. It would be presumptuous to suppose that that constitutes more than an insignificant fragment of the world as a whole. In my world I am always at rest. Other bodies move about, and I get information about their movements from observations made, as always, here and now; the larger their velocity with respect to me, the odder the transformations they undergo—increases in weight, the slowing down of time, the contraction of lengths, etc. I should find such changes extremely inconvenient, and it is fortunate that I am not called upon to experience them. It is not that I do not move fast enough, but that I do not move at all (for the relativistic effects of small velocities are only quantitatively different from those of large velocities, and are equally inadmissible). I may get reports from other observers who are in motion relative to me, but I do not accept them until they have been transformed according to the equations mentioned above. Oddly enough, these reports never make any reference to the inconvenient consequences of motion from which I congratulate myself on being preserved; these appear only if I make observations from my own point of view on the physical system of the other observer regarded now not as an observer, but as an object of observation. Occasionally, it is true, other observers attribute to me anomalous states of motion, etc., but


these attributions are contradicted by my experience and are soon corrected by applying the appropriate transformation equation.
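The claim that the relativistic effects of small velocities differ from those of large velocities only quantitatively can be made vivid with the dilation factor γ = 1/√(1 − v²/c²), which exceeds unity for every nonzero velocity, however slight. A sketch (the two velocities are chosen for illustration only):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def gamma(v):
    """Dilation factor 1/sqrt(1 - v^2/c^2) for a body moving at speed v."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# A car at highway speed and a particle at 0.9c: the same effect,
# differing only in magnitude, and strictly zero in neither case.
print(gamma(30.0) - 1.0)     # on the order of 5e-15 -- minute, but not zero
print(gamma(0.9 * C) - 1.0)  # roughly 1.29 -- conspicuous
```

That the factor never quite reaches unity for a moving body is exactly why motion, however slow, is "equally inadmissible" for the observer.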

These considerations bring up in a novel form the whole question of the relationship of theory to observation. In theory, theoretical observers may be in motion; one well-known cosmological theory refers in fact to fundamental observers moving outwards from a point of mutual origin in such a way that none of them is at rest. Such theories are, however, purely hypothetical, and they have nothing to do with the real world except insofar as their consequences are projected upon the real world. To quote Minkowski again, "only the four-dimensional world in space and time is given by phenomena, but . . . the projection in space and in time may still be undertaken with a certain degree of freedom."[4] To say that the four-dimensional world is given by phenomena is, however, to use the term "given" in a special sense, since a complex process of reasoning separates the conclusion that the world is four-dimensional from the observational evidence for it. According to more familiar usage, what is given by phenomena is what has to be explained, and this is done by taking a projection of a theory which is precisely not given by phenomena, but which is freely constructed by the scientific imagination. For an observer at (x, y, z, t) a theory is confirmed if its projection at (x, y, z, t) agrees with observations made there, i.e., if it satisfies the boundary conditions at (x, y, z, t). It is the possibility of different projections of the same theory, according to the different space-time situations of different observers, which Minkowski asserts in the passage quoted.

But the real world can never be the world of theory; only parts of the real world may coincide more or less exactly with parts of the world of theory when the latter are submitted to boundary conditions. And this imposition of boundary conditions has to be done afresh every time an observer makes an observation. Retrospectively, the fit of theory to the real world is remarkably good, on account of the fact that (at least in principle) those elements of theory which do not fit are discarded. But every future application of theory, even of a theory which has proved itself without exception in the past, has to be validated at the time when it is made. Such validated bits and pieces of theory remain, nevertheless, the best way of grasping the real world as it presents itself to me in bits and pieces; for my experience, while it validates theory cognitively, validates reality existentially. "The world can not exist," says Sartre,

without a univocal orientation in relation to me. Idealism has rightly insisted on the fact that relation makes the world. But since idealism took


its position on the ground of Newtonian science, it conceived this relation as a relation of reciprocity. Thus it attained only abstract concepts of pure exteriority, of action and reaction, etc., and due to this very fact it missed the world and succeeded only in making explicit the limiting concept of absolute objectivity. This concept in short amounted to that of a "desert world" or of "a world without men"; that is too a contradiction, since it is through human reality that there is a world. Thus the concept of objectivity, which aimed at replacing the in-itself of dogmatic truth by a pure relation of reciprocal agreement between representations, is self-destructive if pushed to the limit.[5]

This would still be true philosophically even if our world were really Newtonian, but in that case a self-consistent theory of physical objectivity would be possible, and a useful reinforcement of the philosophical point lacking.

The appeal to Sartre is again deliberate. It is not, I think, too fanciful to say that, just as Spinoza was said to be a moral counterpart of Newton, Sartre is a moral counterpart of Einstein. Both Spinoza and Newton devised absolute deductive systems; Sartre, like Einstein, recognizes the necessity of reducing all questions to the level of the individual observer. The data of science, no less than those of ethics, require phenomenological analysis, since human beings in their capacity as knowers depend on their bodies for entry into the physical world just as basically as, in their capacity as agents, they depend on them for entry into the moral world. As a matter of fact, most intuitive objections to the thesis of the immovable observer rest on phenomenological grounds; what makes it implausible is not the theoretical possibility of motion but our frequent consciousness of it. But this "motion of the observer" always takes place with respect to a more or less confined framework, an environment which is itself taken to be at rest and which is always of modest and human dimensions. This is part of our psychological orientation to the world which we inhabit, and only goes to show that we need to feel anchored and located in a setting which is, by comparison with ourselves, stable and enduring. The change of attitude characteristic of the shift from Newtonian to relativistic science reflects a change in the answer to the question whether the comforting characteristics of this familiar and local world can be extrapolated beyond it. The belief that they can turns out historically to be tantamount to a belief in God.

In the light of contemporary science the conclusion seems inescapable that human beings, condemned to carry their own perspective on the world always with them—to be each (not all!) in the same place at the same time—are denied the vicarious view of a domesticated universe once provided by God. The trouble is that the scientists always


lag behind the philosophers in their understanding of the relation of human beings to God. While Newton clung to his conception of the "Lord over all, who on account of his dominion is wont to be called Lord God pantokrator, or Universal Ruler,"[6] Spinoza had already arrived at his Deus sive natura; and when Sartre had come to recognize that a universal consciousness of universal being was a contradiction in terms, Einstein still held explicitly a Spinozistic view of "a superior mind that reveals itself in the world of experience."[7] It would of course be foolish to take this disparity of outlook too seriously, especially since some philosophers as well as scientists share Einstein's pantheistic conviction of the intelligibility of the world in an objective sense, i.e., independently of the perspective from which we view it. But if scientific theory is only the means of rendering intelligible the world as it appears to me from my irreducibly singular point of view (and any stronger claim seems to entail quite unjustifiable assumptions) then nothing is gained by putting into theory the possibility of my own motion except a spurious and slightly megalomaniac feeling of all-inclusive understanding. Nothing is lost by it either as long as the motion is slow compared with the velocity of light; the ordinary language of local movement does not have to be given up. The foregoing argument is addressed to the relativistic case. The immobility of the observer can be carried through for local motion too, but for relativistic motion it must be.


On a Circularity in Our Knowledge of the Physically Real

In this essay I wish to raise a comparatively innocent-looking problem and explore the consequences of taking it seriously. The problem, briefly stated, is this: is there an essential circularity in our knowledge of the physical world? If so, does it matter? That is, does it have a systematically self-defeating effect on our attempts to understand that world? It will be seen as we proceed that a similar question can be raised for all claims to knowledge, but for the time being I restrict my attention to the epistemology of science. By way of an approach to the problem, consider an example that embodies it. Suppose we take some book about the physical world, for example, Henry Margenau's The Nature of Physical Reality.[1] It is an object in the physical world and has all the properties of such an object—location, cohesion, relative impenetrability, mass, motion, and the rest. Also, it consists of an ingenious and compact ordering of plane surfaces, about 1.7 × 10⁵ cm² of them, that allows the display of an arrangement of some 9.5 × 10⁵ marks, of roughly 75 basic types, by means of a technique of impregnating the surface at the appropriate points with a preparation that changes its reflective power. These marks constitute a code that can be decoded by certain other physical objects, namely, human beings, which share the same basic properties—location, cohesion, and the rest—but have in addition special facilities for receiving and analyzing visual signals and processing and storing information. The book that I hold in my hand is a member of a class of books (called The Nature of Physical Reality) which is in turn a member of a class of such classes of books (called just books). It is a characteristic of the members of the class of


books called The Nature of Physical Reality that they are virtually indistinguishable from one another, except for accidental marks of individuation—stains, the yellowing of pages, tears, marginal notations, dedications. By contrast, it is a characteristic of the members of the class of human beings that they are essentially distinguished from one another, i.e., that each is a unique individual. This contrast may not be as fundamental as it appears, however, depending as it does largely on differences in the mode of production in the two cases: if books were still copied out by hand they would have much greater individuality; if humans underwent some standardizing or screening process that straightened out differences in genetics or education they would have much less. A more fundamental difference between books and people than their degree of individuation (and one that largely explains that difference) is the fact that books, once produced, are inert and do not change, either in themselves or with respect to one another, except by the operation of adventitious forces, whereas human beings not only start out with marked genetic differences but also continue throughout their lives to change in themselves and with respect to one another as well as being acted on by adventitious forces. Also, they are far more sensitive to the adventitious forces, being affected by changes in their environment and by incoming stimuli to which books remain completely indifferent. It is just because the sequences of internal changes and especially of adventitious forces are never the same for two individuals (although they may be closely similar in the case of identical twins), and because the changes are for the most part irreversible and the effects of the forces cumulative, that progressive physical individuation of human beings goes on continuously. 
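The figures quoted above for the book (about 1.7 × 10⁵ cm² of plane surfaces, some 9.5 × 10⁵ marks) admit a rough consistency check. A sketch; the page dimensions below are an assumption of mine, not taken from Margenau's publisher:

```python
# Rough consistency check of the figures quoted for the book.
# The page size (14 cm x 21 cm per printed side) is a hypothetical assumption.
total_area_cm2 = 1.7e5       # quoted figure for plane surfaces
marks = 9.5e5                # quoted figure for printed marks

page_side_cm2 = 14 * 21      # one printed side of one leaf (assumed)
sides = total_area_cm2 / page_side_cm2
print(round(sides))          # about 578 printed sides, i.e. a book of ~289 leaves
print(round(marks / sides))  # about 1643 marks per side -- a plausible page of type
```

On these assumptions the numbers cohere: a volume of some three hundred leaves carrying a little over sixteen hundred characters to the page.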
I pursue perhaps to the point of absurdity this insistence on the common physical status of books and human beings because the interaction between them to which I now wish to draw attention is purely physical in nature, consisting as it does of the scanning of the pages of the book, under suitable conditions of illumination, by the eyes of the human being, and the transmission of the coded sequence of arrays of light and dark surface areas to the brain, where it is processed by means of rapid chains of electromagnetic and chemical events in and between some of the 0.5 × 10¹⁰ neurons normally found there. It is the configuration of the links and potential barriers between these neurons that determines the individuality of the person whose brain they constitute, as apart from the individuality of that person's body (which is more like the individuality of a book). The person is a mental structure. This structure is made progressively definite by what we call the person's "experience," which can be thought of as a long syntagmatic sequence of sensory inputs, some of them coming into the brain from the rest of the body more or less


directly (as in proprioception, or in looking at one's own hands, hearing one's own voice, touching one's face, etc.), but the greater proportion coming from external objects in the form of light, sound, convection or radiation of heat, pressure, and the like. In the case of humans who have grown up in a social context, an extremely important, indeed the dominant, part of this syntagma consists of spoken or written or printed words, that is, arrays of marks or sequences of sounds that occur in repeating patterns and set off, singly or in combination, essentially similar neural reactions on each occurrence. Depending on the state of the mental structure at the time, the other types of input that arrive simultaneously, and so forth, these reactions may in turn trigger others, and the cumulative influence of these inputs and reactions determines eventually the structure of the actions that the person carries out. Let me refer once again to the present case: my own verbal input syntagma has included two readings of Margenau's The Nature of Physical Reality, one almost twenty years ago, the other quite recent. Partly as a result of these experiences, and partly under the stimulus of other inputs—reading philosophy and talking about it, feeling gratitude in respect of certain events in my life and wishing to cooperate in a particular enterprise, receiving letters and telephone messages from editors, etc.—it came about at a quite definite time and in a quite definite location that the physical body with which I am associated sat down, took up a physical pen, and began to make marks on a plane surface by running an inked ball over it that left a trace, detectable because of the differential reflecting powers of the surface and the ink. "In this essay," I wrote, "I wish to raise a comparatively innocent-looking problem and explore the consequences of taking it seriously."

I have now constructed a circle in this essay, and it is easy to see how it mirrors a circle in the processes of our knowledge. For all knowledge that can be recognized and defended as such eventually finds its way into written (or at any rate spoken) form, and the physical character of the mode of its expression will locate it in the world alongside the things of which it constitutes knowledge. The Nature of Physical Reality is not, it is true, a physics book, so that the allusion to "things in the world," of our knowledge of which it constitutes the expression, may be needlessly confusing. I would be prepared in another context to defend the view that there is only one world, namely, my own, and that everything that I can know finds its place there—and to argue further that this does not involve solipsism—but for the time being wish to concentrate on the case of physical knowledge: books about the physical properties of physical objects are themselves physical objects, and may indeed be composed of some of the physical objects—fundamental particles, atoms, molecules, etc.—about which they speak. This


is clearly self-referential, but its essential circularity may not be so evident. That, however, is because we have to include ourselves in order to complete the circle. We believe that our eyes and brains, as well as the ink and paper, are made of physically real particles, but clearly we would not be able to know about physically real particles if it were not for the use of our eyes and brains, of the ink and paper.

Now it is a mistake to think of all circularity as being paradoxical or even undesirable. In fact, there is a long tradition in philosophy which suggests that circularity in argument is inevitable. An early hint of it is given in the Euthyphro, where every attempt at clarity seems to result in bringing the argument back to its starting point;[2] at the other end of the historical scale Wittgenstein, in the preface to the Philosophical Investigations, speaks of traveling "over a wide field of thought crisscross in every direction. . . . The same or almost the same points are always being approached afresh from different directions."[3] The fact is that the world in which philosophy operates is a closed world; every attempt on the part of sentient beings to understand the world in which they find themselves is bound to be circular if it is carried far enough. This inevitability arises out of the fact that, whatever the starting point of the inquiry which is to lead to understanding, sooner or later the starting point itself will become an object of the inquiry. Circularity can, it is true, always be refused by resort to a priori assumptions; the trouble with these, however, is that the resulting linear argument may not intersect with another linear argument constructed by somebody else on different assumptions. If my assumptions contradict yours, our conversation is ended before it is begun. (There are people who are not troubled by this kind of standoff, but this I suspect is either because they are too lazy to be interested in alternatives or because they are too dogmatic to entertain them.)

The question is, what is the character of this world which is thus closed in upon itself? Is there room for physical reality in it? In the first instance it is a world marked by discourse, but since there are parts of it that have not been adequately articulated, discourse cannot be the whole story. The best way of characterizing it may be to say that it is a thought world—not a world of thoughts, but a world every element of which is thought by somebody: in my case, by me, in yours, by you. The arguments for closure have still not been better put than by Berkeley in the Three Dialogues between Hylas and Philonous.[4] All sensation, as well as all conception, all language, and so on, belong on the side of mind, as is clearly evidenced by the fact that in the absence of all mental activity no trace of the world remains. This is not to say that no world remains; it is only to remark on the thorough assimilation of whatever world may independently exist into mental form for purposes of human consumption. In the thought world there is clearly no room for material, if by material we mean something that is not thought.

But there is no need to identify the physical with the material; physis means, after all, nothing more than the nature of things naturally generated. And yet I think that most people who use the word mean by the "physical" something that is at any rate independent of their minds, and the independence of the material is no more problematic than, in Berkeley, the independence of the mind of God. Even for Hegel we are only the Absolute knowing itself, and its knowing itself does not in any way entail our knowing it. The problem of physical reality remains as acute as ever for the individual in his or her private circle, and the philosophical circle can be described in idealist or in materialist terms indifferently. I started this essay with a book as an object in the world, but I could equally well have started it with the concept of the book—the point is that in both cases the exercise was an exercise in thought, a conclusion not to be escaped because it happened to be an exercise in discursive thought and even in written form.

The choice of the medium, then—ideas or matter—leaves the question of reality untouched. The problem of the real, as I understand it (and this is not, as we shall see, exactly the way that Margenau himself understands it), is the problem of clarifying the status of what there is, insofar as this is independent of my experience. My experience itself is of course also real, but this is not problematic, nor has it been since Descartes—that is, since the establishment of the distinction between the actuality of experience and its significance. To say that the problematic part of the real is the part of it that is independent of my experience does not mean that none of it could be in my experience, only that it would be as it is whether it were in my experience or not. But by far the larger part of it—everything that happened before I was born, everything that will happen after I die, everything that now happens elsewhere, or behind my back, or beneath the surface of things—is and remains outside my experience. Reality is infinitely richer, ontologically speaking, than my world (even though my world can contain a conception of the whole of reality).

I wish now to deal with two kinds of attempts that have been made to solve the problem of the relationship between the content of my experience on the one hand and reality on the other. One strategy has been to try to construct the elements of reality out of the elements of experience, or at any rate out of classes of elements, typical examples of which are found in experience. The most elegant expositions of this strategy are to be found in Russell and Carnap. Russell poses the problem as "the construction of physical objects out of sense-data," and he arrives at the notion of things as classes of their aspects. We have seen


the table from a few points of view, under a few conditions of lighting, etc.—imagine how it would look from all possible points of view under all possible conditions, and we have an aggregate that will exhaust without remainder all possibilities of perceptual knowledge of the table.[5] Carnap's program is more ambitious: from the perceptions of a single observer he constructs, in ascending order, autopsychological, physical, heteropsychological, and cultural objects.[6] (He uses for the purpose a fictional character A, the first half of whose life is spent in experiencing the world without analysis, the second half in analyzing the data thus gathered, without further experience.) A third attempt along somewhat similar lines—although all three have radical differences from one another—is that of Whitehead, who by his method of extensive abstraction sought to isolate real space-time points as the termini of converging series of experienced space-time regions.[7]

The second strategy is that of Margenau himself. He approaches the question from a diametrically opposite point: assuming, in effect, that if the real shows up in our experience at all it will bear the marks of its independence (its coherence, its connectedness, and so on), he limits the ascription of reality to those parts of experience that have been successfully systematized by means of scientific constructs. "To us," he says in The Nature of Physical Reality, "reality is not the cause but a specifiable part of experience."[8] Now both these strategies—Russell's and Carnap's on the one hand, Margenau's on the other—seem to me unsatisfactory, in the former case because of ontological extravagance, in the latter case because of ontological poverty. On the one hand, Russell thinks nothing of introducing infinite sets into the real, and he does so with an abandon that would have made Ockham shudder. Margenau, on the other hand, excludes from the real undeniable facts of experience, if these have not been "normally standardized into scientific knowledge." "As yet," he says, "they have not been united into an organized pattern comparable with the structure of physical reality, and it would be pardonable for the scientist to suggest that the name reality be at present denied to them."[9] If we subscribe to the Cartesian principle referred to above, this puts him in the odd position of saying that reality is only a part of a part of itself. Of course, the motivation for introducing a limited and specific meaning for the expression "physical reality" is quite clear in Margenau's work, and if we view it as an epistemological rather than as an ontological limitation his strategy becomes perfectly acceptable. And yet from the ontological point of view it does seem overcautious.

My own inclination is to try a completely different approach, by asking whether there could be anything with respect to which we could know that it lay outside our experience as well as being independent of


it. If it were just a question of our confronting the physical world as individuals, there would be no possible way in which we could know that anything lay outside our individual experience, and anyone who wished stubbornly to maintain that view throughout the following argument could perfectly well do so. There is, however, a class of events of which we can come to have knowledge that would strike all but the confirmed solipsist as satisfying my criteria: I mean the contents of other people's experience. We come to know these by being told about them, and we have to admit that, for the most part at any rate, our existence makes no difference to them and their immediacy is out of our reach. Other people report to us what happened in the past, and what happened elsewhere, and what happened when we were not looking, and so we extend the basis of our knowledge of the real. I cannot here go into the details of the process according to which scientific knowledge is generated out of this collective resource; suffice it to say that the result is not a monolithic body of scientific theory or even of knowledge of the everyday world, but a series of free-standing conceptual schemes each associated with a single knower, yet overlapping in such a way that groups of people can in various times and places be taken to embody such theories and such knowledge.

Every conceptual scheme is closed upon itself just in the way in which experience was earlier said to be—in fact, every such scheme is precisely the consequence of a certain experience—and so is the aggregate of conceptual schemes, incoherent as this is bound to be even if limited to the domain of a particular science. Yet, when the scheme of an individual knower has been critically rethought, when out of intuitive systems scientific constructs have been formed, when some part of the scheme has been converted into a system , the overlapping with other people's schemes proves to be more precise than before, so that among the practitioners of a given science it becomes reasonable to speak of an isomorphism between individual conceptual systems. And it is this that leads us back, in the end, to the idea of an external and independent physical reality. "It is forced upon us by the constant pattern of the isomorphic systems. But this in itself does not necessitate the postulation of an external: perhaps the isomorphic systems just hang together that way. All of them, however, except the postulated natural reality, are in mind; they constitute a universe of thought. The reason why it seems to me sensible to postulate a universe of being also is that in principle the universe of thought could be entirely capricious, and it is not. The simplest explanation of this constancy of form between the isomorphic systems is the existence of another system isomorphic with all of them—rather such that they are all imperfect isomorphs of it—underlying them and proceeding independently of them."[10]

The standard objection to this formulation is that it makes physical reality hypothetical, and that this hypothetical status then extends to every element of it, even those we take to be reflected in our experience. This leads to the paradoxical consequence that the real components of our experience are made up of hypothetical parts. But this is just a set of confusions. The real elements of our experience are not made up of anything, unless of parts really experienced at the same time (there is nothing paradoxical about an experienced person's having experienced arms and legs). What is hypothetical is the object as a whole under its real, as opposed to its experienced, modality, and there is nothing odd about a hypothetical object's having hypothetical parts. What science enables us to see is that there are more hypothetical objects than experienced ones: electrons lie in the hypothetical real world, and not in the real world of experience—but the formulation of that sentence is meant to underline the proposition that we are not talking about two worlds, only about one, most of which, however, remains outside our experiential range.

The upshot of this analysis is that the cycle of our knowledge, from perceptions to hypotheses and back to perceptions, while it is itself inscribed in the real, does not enclose the reality it claims to know. Reality lies outside the circle, rather in the way that tracks lie outside the train. (Imagine a train running on a single circular track, and passengers able only to look horizontally out of the side windows—they could tell by the repetition of the landscape that they were on a closed track even if they could never actually see it.) The idealist temptation is to identify reality with the circle, but this seems unnecessarily limiting as an ontological principle, besides being immodestly self-centered. Assuming a reality infinitely vaster than our knowledge of it, we can still make at least one assertion about it with complete assurance, albeit a negative assertion: namely, that the real has not been such as to require other experiences than those we have in fact had. In closing I offer two formulations of a somewhat more positive kind, each of which sums up the result of the argument, and the choice between which can be left to philosophical temperament. The first is skeptical, and yet conditionally affirmative: we do not know the real at all, but what we assert about it would be knowledge if there were any way of getting out of the circle . The second, which I prefer, has obvious affinities to Margenau's position, but avoids some unfortunately relativistic aspects of that position: the real is the objective correlate of those aspects of conceptual structure that most people have in common or in such form that it is translatable into a common structure . This assumes that the development of science is on the right track, not necessarily that it has arrived. The possibility remains, of course, that we might all be mistaken together.

I conclude, then, that there is an essential circularity in our knowledge of the physical world—that every line of demonstration sooner or later turns back upon itself. But I conclude also that this is not a self-defeating circularity, and that two things about it even give grounds for philosophical satisfaction. One is that, as always happens in philosophy, the state of the knowers , if not of their knowledge, changes as they go around the circle, so that they understand each proposition anew in the light of the propositions that have intervened since their former understanding of it. The other is that if the relative sequence of the propositions remains unchanged—if they are always there, as it were, when the argument comes round to them—that in itself may be evidence of an underlying stability in the real, of which, after all, the propositions and the argument and for that matter we ourselves are an integral part.


Truth and Presence:
Poetic Imagination and Mathematical Physics in Gaston Bachelard

In the stacks of the Sterling Library at Yale University, thirty years ago, I happened as a graduate student in philosophy to be reading Gaston Bachelard's L'activité rationaliste de la physique contemporaine while my closest friend at the time, a graduate student in French, was reading his L'eau et les rêves . This coincidence was gratifying, although it did not seem remarkable; neither of us found the other's interest alien. I refer to it not from romantic nostalgia but because it now occurs to me that this personal conjunction of science and the humanities antedated by five years C. P. Snow's The Two Cultures and the Scientific Revolution ,[1] an essay which suggested that it ought to have seemed remarkable, since according to Snow a great gulf was, if not fixed, at least being busily dug, between the domains to which these works belonged. Of course Snow believed rather complacently that he himself embodied a rare and difficult combination of the two, but he seems not to have realized how thoroughly his problem had been anticipated, or how satisfactorily it had been solved, by a professor at the Sorbonne who had begun his career as a provincial French postman.

All translations from Bachelard (except those specifically cited in English translation) are my own.

As far as that goes my own double interest, in science and in poetry, antedated by many years my encounter with Bachelard. Bachelard somewhere acknowledges a debt to his father in the matter of building fires; I owe a debt to mine both for his habit of reciting Milton and for his curiosity about the sciences, especially astronomy. He possessed some of the works of those great popular writers, both distinguished scientists, Sir James Jeans and Sir Arthur Eddington, and I read them while I was still in school; in the latter's The Nature of the Physical World is a passage that Bachelard may have known and would certainly have liked. "One day," says Eddington, "I happened to be occupied with the subject of 'Generation of Waves by Wind.' I took down the standard treatise on hydrodynamics, and under that heading I read," (and there follows a paragraph of mathematical symbols):

And so on for two pages. At the end it is made clear that a wind of less than half a mile an hour will leave the surface unruffled. At a mile an hour the surface is covered with minute corrugations due to capillary waves which decay immediately the disturbing cause ceases. At two miles an hour the gravity waves appear. As the author modestly concludes: "Our theoretical investigations give considerable insight into the incipient stages of wave-formation."

On another occasion the same subject of "Generation of Waves by Wind" was in my mind; but this time another book was more appropriate, and I read:

There are waters blown by changing winds to laughter
And lit by the rich skies, all day. And after,
      Frost, with a gesture, stays the waves that dance
And wandering loveliness. He leaves a white
      Unbroken glory, a gathered radiance,
A width, a shining peace, under the night.

The magic words bring back the scene. Again we feel Nature drawing close to us, uniting with us, till we are filled with the gladness of the waves dancing in the sunshine, with the awe of the moonlight on the frozen lake. These were not moments when we fell below ourselves. We do not look back on them and say: "It was disgraceful for a man with six sober senses and a scientific understanding to let himself be deluded in that way. I will take Lamb's Hydrodynamics with me next time." It is good that there should be such moments for us. Life would be stunted and narrow if we could feel no significance in the world around us beyond that which can be weighed and measured with the tools of the physicist or described by the metrical symbols of the mathematician.[2]

Eddington suggests here that the business of life will draw one's attention now to the scientific side of things, now to the poetic; there is no thought that the two functions will be exercised by different people, or belong in the life of the same person to separate periods, say, youth and maturity.

Critics are fond of chopping great thinkers into two, the early and the late, and this is nearly always misleading, as the most obvious examples show (Marx, Wittgenstein, and Sartre come immediately to mind). Some people have tried to do this with Bachelard, as if he turned from science to poetry, but even the sequence of published works is more complicated than that. If it is necessary to identify periods there are at least four, the first two overlapping: (1) an initial preoccupation with scientific thought, from Essai sur la connaissance approchée (1928) to La philosophie du non (1940); (2) the working through of the elements and the corresponding forms of the imagination, from La psychanalyse du feu (1938) to La terre et les rêveries du repos (1948); (3) a reconsideration of the thought processes of science in the light of a new rationalist epistemology, which includes Le rationalisme appliqué (1949), L'activité rationaliste de la physique contemporaine (1951), and Le matérialisme rationnel (1952), three works that Roch Smith has called a "trilogy," a view supported by Bachelard himself ("je considère que [ces] trois livres . . . ont une unité de vue");[3] and (4) the new poetics of the three last works, La poétique de l'espace (1957), La poétique de la rêverie , and La flamme d'une chandelle (both 1961). So at the beginning of this paper I state my confidence in two kinds of unity: that of Bachelard's career, and that of the possible embodiment of both science and poetry in a single individual that that career exemplified.

In the stacks of the Sterling Library, however, the rest of the Bachelardian corpus was still in my future. I was reading L'activité rationaliste for a quite specific reason, namely, to advance an inquiry into the ontological status of fundamental entities in physics. Electrons, protons, and the rest are never observed directly, so they remain theoretical constructs; what we observe are the consequences of interactions in which we suppose them to have participated—bubble chamber tracks, clicks from Geiger counters—and these consequences are always macroscopic and more or less familiar. This is still a topical problem, though not in the form of bewilderment about waves and particles that Eddington dramatized with his "wavicle," which was a wave, as I remember, on Mondays, Wednesdays, and Fridays, and a particle on Tuesdays, Thursdays, and Saturdays. Now, with the benefit of hindsight, I would rather be inclined to say: why did we ever suppose that the habitual images experience equips us with in the local "flat region" of macroscopic observation would be adequate to remote reaches of physical reality—the microscopic, the cosmological, the relativistic? Getting physical theory right means being ready to leave the comforts of the flat region, to depart from the simple image.

Now two things about Bachelard seem to me particularly memorable and important: on the one hand the tenacity of his rootedness in what I am calling the "flat region," the familiar, the everyday, the down-to-earth, but on the other hand the audacity of his speculative departures from this solid base, his persistence in following his arguments where they led, whether into the gloom of psychoanalytic depths or the vertigo of relativistic speed and distance. The polarity of his work between science and poetry is, as I have already noted, notorious; I find no less remarkable the polarity between the postman and the philosopher. On the whole it seems to me that it would be a good thing for more philosophers to have been postmen. The métier may not be accidental: apart from the letter-scales Bachelard refers to as having given him his idea of weight, there is a hermetic side to the postman's activity—he is the point of contact with the world beyond, he brings sealed messages from distant origins, there is no knowing what marvels or portents they may not contain; at the same time nothing can surprise him, he is the very image of persistence and reliability, of local intimacy and homely order. And when the postman himself leaves for the outside world—for Dijon, for Paris—he takes with him this imperturbable sense of the familiar, and his concern continues to be with the firm materiality of the world, now from the scientific point of view.

It is, however, the point of view of a new science, a "nouvel esprit scientifique," one of whose effects is gradually to undermine that materiality. The old science, beginning with Galileo, say, made its object the mathematical representation of observable relations; Newton added the modern concept of force, but that had its own familiar representation in muscular effort. Microscopes and telescopes, etc., merely extended the flat region; they did not lead outside it. It was towards the end of the nineteenth century that the existence of entities hitherto unsuspected, with entirely new properties, began to force itself on scientific attention. The electron was discovered when Bachelard was eleven, and he was a young man during the heady days at the beginning of the century when relativity and quantum theory were undergoing their dramatic development from marginal conjectures to fundamental disciplines of physics.

The initial reaction to the opening of these new domains was sometimes overdone, and Bachelard did not escape the temptation to which so many of his contemporaries succumbed of making a mystery out of the absence of an imaginable substantiality at the quantum level. In Le nouvel esprit scientifique he says,

Instead of attaching properties and forces directly to the electron we shall attach quantum numbers to it, and on the basis of the distribution of these numbers we shall deduce the distribution of the places of the electrons in the atom and the molecule. The sudden dissolution of realism should be clearly understood. Here number becomes an attribute, a predicate of substance . . . . Thus chemistry, which was for a long time the "substantialist" science par excellence , finds the knowledge of its own matter progressively dissolving. If we judge the object according to the proofs of its objectivity, we must say that the object is mathematizing itself and that it manifests a singular convergence of experimental and mathematical proof. The metaphysical gulf between mind and the external world, so unbridgeable for a metaphysics of immediate intuition, seems less wide for a discursive metaphysics that attempts to follow scientific progress. We can even conceive of a veritable displacement of the real, a purging of realism, a metaphysical sublimation of matter. Reality first transforms itself into a mathematical realism, and then mathematical realism comes to dissolve itself in a sort of realism of quantum probabilities. The philosopher who follows the discipline of the quanta—the schola quantorum —allows himself to think the whole of the real in its mathematical organization, or better, he accustoms himself to measure the real metaphysically in terms of the possible, in a direction strictly the inverse of realist thought. Let us then express this double supremacy of numbers over things and of the probable over numbers by a polemical formula: chemical substance is only the shadow of a number (l'ombre d'un nombre ).[4]

This is terribly confused. It is simply misleading to suggest that there are numbers in the objective world and that they somehow replace a materiality that has dissolved away. If the world ever was material, it has not ceased to be so just because we can't picture its materiality. Before, we could have a pictorial representation as well as a mathematical one; now we can manage only the mathematics, but it is no more constitutive of the world than in the former case. The epistemological basis of science is still in ordinary macroscopic objects; our immediate world is still Euclidean and Newtonian; but we have learned that the rough-and-ready world-picture of the flat region, with its colours and sounds, its solids and spaces, is inadequate for the representation of basic physical truths.

What gets in the way of a relaxed and uncomplicated acceptance of this limitation seems to be a need on our part to have an image of matter. It is difficult to attribute reality, materiality, or substance to the world there physically is without attributing to it the imaginative contents that have hitherto accompanied these ideas. There is no way of getting rid of these imaginative contents, but their existence poses a problem for scientific understanding. The fact that La formation de l'esprit scientifique and La psychanalyse du feu were published in the same year is not accidental: in the former Bachelard is concerned not only with the proper formation of the scientific mind but also with the fact that it is de formed by its habitual expectations, while in the latter he looks at a particular case, the habitual association of substantiality and fire. "In this book when we talk of our personal experiences we are demonstrating human errors," he says in the Introduction to La psychanalyse du feu , and he continues:

Our work is offered, then, as an example of that special psychoanalysis that we believe would form a useful basis for all objective studies. It is an illustration of the general theses put forward in our recent book, La formation de l'esprit scientifique . The pedagogy of scientific instruction would be improved if we could demonstrate clearly how the fascination exerted by the object distorts inductions . It would not be difficult to write about water, air, earth, salt, wine and blood in the same way that we have dealt with fire in this brief outline. . . . If we succeeded in inspiring any imitators, we should urge them to study, from the same point of view as a psychoanalysis of objective knowledge, the notions of totality, of system, of element, evolution and development. . . . In all these examples one would find beneath the theories, more or less readily accepted by scientists and philosophers, convictions that are often ingenuous. These unquestioned convictions are so many extraneous flashes that bedevil the proper illumination that the mind must build up in any project of discursive reason. Everyone should seek to destroy within himself these blindly accepted convictions. Everyone must learn to escape from the rigidity of the mental habits formed by contact with familiar experiences. Everyone must destroy even more carefully than his phobias, his "philias," his complacent acceptance of first intuitions.[5]

It is clear from this passage, among other things, that Bachelard's project at this time was a full-fledged deconstructionism avant la lettre .

There are now two directions in which the Bachelardian work must obviously go—towards the dissolution of the scientific image, and towards the exploration of what this turn uncovers, namely the richness of the material image in its own right, and not just as an obstacle to scientific understanding. What led to the other works on the elements was just the realization, which dawned after (but no doubt as a result of) the writing of La psychanalyse du feu , that the domain of the imagination has its own constructive materiality ("quand j'ai écrit le Feu je ne me rendais pas compte du rôle de l'imagination matérielle").[6] The former direction is taken in La philosophie du non , and leads from the image to the concept, not now as a mathematized abstraction but as a postulated object more real than anything merely imaginable. Just as in surrealism (in which Bachelard at this time was deeply interested, to such a degree that Breton called him "the philosopher of surrealism"), the domain of the everyday is transcended, by an appeal to the unconscious, towards the poetically marvelous, so in Bachelard's "surrationalism" the familiar image is transcended, by an appeal to critical reason, towards the physically fundamental.

In one way or another, what is cut away from the image has to be found in the rectified concept. We could therefore say that the atom is exactly the sum of the criticisms to which its first image has been submitted. Coherent knowledge is a product not of architectonic reason but of polemical reason. By its dialectics and its criticisms, surrationalism in a certain way determines a surobject . The surobject is the result of a critical objectification, of an objectivity that preserves of the object only what it has criticized. As it appears in contemporary microphysics the atom is the very paradigm [type ] of the surobject. In its relations with images, the surobject is exactly the nonimage. Intuitions are very useful: they are good for destroying. In destroying its first images, scientific thought discovers its organic laws. The schema for the atom proposed by Bohr a quarter of a century ago has in this sense behaved like a good image: nothing remains of it.[7]

(I translate "surobjet" as "surobject" rather than as "superobject" to maintain consistency with "surrealism"—and hence "surrationalism"—even though it is a rebarbative term. The use of this prefix in recent thought presents some interesting contrasts: "Über-Ich" in German becomes "surmoi" in French but "superego" in English, which seems right—but if "surréalisme" had by the same token become "superrealism" I cannot help feeling that the understanding of the movement would have been very different, perhaps indeed improved.)

But if for science nothing remains of the image, the images that nevertheless remain lose nothing of their poetic value. Since this is the aspect of Bachelard's thought that has become the most familiar, I can afford to dispense with a catalogue of what those images are and concentrate on some problematic aspects, with the remark however that if he had done nothing but identify the species of the material imagination, that would have been enough to establish him as one of the century's seminal figures in the domain of poetics. It is perhaps not without significance that this work had its origins in a therapeutic situation, the psychoanalysis of fire described in an earlier citation.

Fire is the least material of the elements, and its elemental status is the most obviously unscientific. If we ask what fire is, the scientific response is quite straightforward: it is the hot and therefore visible gaseous product of an exothermic chemical reaction, usually one of oxidation; and this is as far as it could possibly be from the poetic response, in which it is warmth, passion, domesticity, life. The two poles do not interfere. What this means is that it is relatively easy to perform the required psychoanalysis; we are not really aux prises with materiality (indeed as remarked above the material imagination is not in play at the time of La psychanalyse du feu ). However as Bachelard works through the elements things get stickier, as it were, and by the time of La terre et les rêveries de la volonté there is a kind of collision of matter and imagination that seems to compromise the distinction between science and poetry. "Reverie that looks for substance under ephemeral aspects," confronted with the three lighter elements (fire, water, and air or sky),

was in no way blocked by reality. We really confronted a problem of imagination ; it was a matter precisely of dreaming a profound substance for the fire, so lively and so brightly colored; it was a matter of immobilizing, faced with running water, the substance of this fluidity; finally it was necessary, before the counsels of lightness given us by breezes and flight, to imagine in ourselves the very substance of this lightness, the very substance of aerial liberty. In short materials no doubt real, but mobile and inconstant, required to be imagined in depth, in an intimacy of substance and force. But with the substance of the earth, matter brings with it so many positive experiences, the form is so evident, so striking, so real, that it is hard to see how to give body to reveries touching the intimacy of matter. As Baudelaire says, "The more positive and solid matter is in appearance, the more subtle and laborious is the task of the imagination."[8]

The resolution of this conflict is to be found in the admission that the substantiality of earth is just as imaginary as the substantiality of any of the other elements—that is, material and imagination belong together on the side of poetry, neither has anything to do with science. To the question whether images of density, hardness, massiveness, substantiality, etc., tell us anything at all about how the physical world really is, the brutal answer is no. They tell us about our world, with its vertigo and its viscosity, but not about the world science has to deal with. This doctrine is hard to accept because we want science to be about ordinary objects, not "surobjects" inaccessible to us, or accessible only through the operations of reason, and because as Bachelard says the impression of contact with the real material of things is so strong. But science is under the rule of reason and it does compel us to conclude that the physical world is beyond the reach of the material imagination; and Bachelard believes that this conclusion has to be accepted according to what he calls

the cogito of mutual obligation, [which,] in its simplest form, should be expressed as follows: I think you are going to think what I have just been thinking, if I inform you of the episode of reason which has just obliged me to think beyond what I previously thought.[9]

What we have to "think beyond" is, once again, the image. It is not just images of materiality that are suspect; in contemporary physics nothing is given to the imagination, not even something "hidden"—what there is seems less discovered than invented. In the works of the trilogy "surrationalism" gives way to "applied rationalism," a more modest way of handling the same problem, and the atoms of an earlier citation from La philosophie du non have been generalized into particles, but the message, though expressed differently, is by now familiar:

Particles are situated at the boundary between invention and discovery, just where we think applied rationalism is active. They are precisely "objects" of applied rationalism. When we studied matter in an attempt to resume it in its four elements, in its four kinds of atom, phenomenology offered seductive images: fire has a spark, water a drop, earth has a grain, air can be felt in the movement of dust. Here, nothing. No natural "corpuscularisation." Nothing, absolutely nothing in common knowledge that could set us on the track of the isolation of a particle. And all the images are deceptive [et toutes les images sont trompeuses ].[10]

By now the point seems sufficiently established. Yet there is something unsatisfactory about it even from the scientific point of view. It is as if, in looking for the truth about the world, which is now to be expressed in formal rather than materially imagistic terms, we had somehow forgotten that it was there . The parts of the world—its particles—are yielded only by the application of reason and only when I am attending to them with a certain concentration of thought and from a particular point of view. But all the while the rest of the world is there, as it were, peripherally; I can't, precisely, be attending to it , and yet its being there is a condition of my having anything to attend to in the first place.

In a remarkable paper delivered to a philosophical congress in Lyon in 1939 Bachelard speaks of "the idea of the Universe [which] presents itself as the antithesis of the idea of the object," and introduces the lapidary formula: "The Universe is the infinite of my inattention." The truth about objects has to be complemented by the presence of the world, immediately and globally; our sense of this presence is a matter of intuition rather than of knowledge, it comes not from the accumulation of facts but from a kind of phenomenological totalization.

Experience of the Universe, if we admit that this concept has a sense, prepares no multiplication of thought; as far as I am concerned the idea of the Universe immediately and definitively dialectizes my objective thought. It breaks my thought. The I think the world ends for me with the conclusion: therefore I am not .

In other words, the I think the world puts me outside the world . Meditate on the other hand on the axiom of the philosopher of the universe: everything is in everything. Listen to him sing, like a poet, his Einfühlung among the forms and the light, the breaths and the perfumes. Look at him in his paradoxical attitude: it is in opening his arms that he embraces the world! But—strange conclusion—this Universe that totalizes all qualities keeps none of them as a specific quality. Or at least if it does keep one, one soon sees that it is only as the valorization of a reverie.[11]

This is where the image comes back into its own. The quality of the Universe is in effect the quality of the moment of my apprehension of it, not now with scientific concentration but with poetic openness; it is the product of the nonspecific awareness that Bachelard calls reverie, waking but not active, alert but not intentional. The image, specifically the literary image, offers us this kind of relation to the world, or rather offers us a new content for it. Literature is significant, and its significance derives in part from its lending new significance to the world. In Bachelard this process goes through three stages, in which the image is first directly signifying, then metaphorical, and finally a creator of its own "unreality." The first is found in L'air et les songes :

How can we forget the signifying action of the poetic image? The sign here is not a reminder, a memory, the indelible mark of a distant past. To deserve the title of literary image it has to have the merit of originality. A literary image is a sense in the state of being born; the word—the old word—comes to receive from it a new signification. But this is not yet enough: the literary image must enrich itself with a new oneirism . To signify something other, and to make for other dreams, such is the double function of the literary image.[12]

"To make for other dreams": it is not that we needed the image to have dreams in the first place, to live the reverie that yields the Universe in the mode of presence rather than (scientific) truth, but it offers us a renewal of that presence under a different sign. However, the relation between signs that this originality of the literary image generates is nothing other than metaphor, and some years later, in this passage from La terre et les rêveries du repos, Bachelard suggests that poetry gives access through its metaphoric shifts to something like a true dream, a truth of its own:

In all its objects, Nature dreams. From this point, if we faithfully follow the alchemical meditation of a chosen substance, a substance always gathered in Nature, we arrive at this conviction of the image which is poetically salutary, which proves to us that poetry is not a game, but rather a force of nature. It elucidates the dream of things. Thus we understand that it is the true metaphor, the doubly true metaphor: true in its experience and true in its oneiric thrust.[13]

The imagination here, however, is still, as Bacon might have said, "hung with weights," held down in this as in the other earth book (cited above) by the evident reality of the material, convinced by its experience rather than freely adventuring. It is only in the period of the last poetics that the imagination is given a power of its own, liberated not only from the burden of experience but from metaphor itself. Thus, in La poétique de l'espace, Bachelard says,

Academic psychology hardly deals with the subject of the poetic image, which is often mistaken for simple metaphor. Generally, in fact, the word image, in the works of psychologists, is surrounded with confusion: we see images, we reproduce images, we retain images in our memory. The image is everything except a direct product of the imagination. . . .

I propose, on the contrary, to consider the imagination as a major power of human nature. To be sure, there is nothing to be gained by saying that the imagination is the faculty of producing images. But this tautology has at least the virtue of putting an end to comparisons of images with memories.

By the swiftness of its actions, the imagination separates us from the past as well as from reality; it faces the future. To the function of reality, wise in experience of the past, should be added a function of unreality, which is equally positive, as I tried to show in certain of my earlier works.[14]

Such a "function of unreality" is clearly incompatible with scientific truth, whose concern must in the end be with the real even if on the way to its formulations it passes through the philosophie du non. But it is not incompatible with presence, especially if we construe the prae of praesens as temporally before; the future is axiomatically unreal, but it is the task of the imagination to face it, not in the mode of knowledge and the determination of parts but in the mode of creativity and transcendence towards the whole. So Bachelard quotes with approval these words of Jean Lescure: "Knowing must be accompanied by an equal capacity to forget knowing. Non-knowing is not a form of ignorance but a difficult transcendence of knowledge. This is the price that must be paid for an oeuvre to be, at all times, a sort of pure beginning, which makes its creation an exercise in freedom."[15]

The poetic presence to the world that is always a pure beginning transcends scientific knowledge but does not thereby belittle or annul it. I revert now to the duality from which I began, between science and poetry, in the light of Bachelard's itinerary. We left the truth about the real, some pages back, in the care of a strictly unimaginable but mathematically compelling "applied rationalism," in order to pursue the power of the image towards an immediate presence to being. This presence is characterized in La poétique de l'espace as a possession of the subject by the image, as a reverberation that constitutes a "veritable awakening of poetic creation . . . in the soul of the reader."[16] These two extremes—on the one hand mathematics with no image at all, on the other an image that fills the whole space of subjectivity—seem to stand in complete opposition to one another, to have nothing in common. For Bachelard, however (as for Eddington), they are clearly not opposites but complementaries. It may be helpful in closing to consider their complementarity through the mediation of language.

Language is a common resource of science and of poetry, but the roles it respectively plays in them illustrate at once their separation and their continuity. Language—the language of logic and of mathematics—is the only medium we have for representing the truth about objective physical reality, inaccessible as it is to the imagination. On the other hand language is incapable of representing the immediacy of presence, which is yielded only by the imagination, although in poetry it can as it were prepare the imagination for presence. Language, in Heidegger's terms, is "the house of Being," by which we are to understand that if we make (poiein ) a place for being, by means of poetry, Being may come to dwell in it. Presence to Being however is not linguistic, it is not the same as presence to poetry—the latter is merely propaedeutic to it. Bachelard seems to have had an independent understanding of this in his doctrine of the reverberation of the poetic image, the image that "has touched the depths before it stirs the surface."[17]

These two functions—the discursive ground of science that is constituted by language and the unspoken intentionality of poetry that is prepared by it—are both eminently human functions. The subject does not vacillate between them but occupies their intersection, an intersection that is not a point but a place , the place where our life, with all its scientific complexity and poetic intensity, takes place. What Bachelard reminds us, in his person no less than in his writings, is that the complexity and the intensity are departures from, and equally rooted in, the familiar materiality of the simple image; that, given a willingness to do the necessary work, whether rational or imaginative, scientific truth and poetic presence are both accessible, to postmen as to philosophers.




Preface to Part VI:
Science and Subjectivity

In this final part the emphasis is on the subject in his or her subjectivity. This is obviously what unites two aspects of the work of Bachelard; as a practitioner of the human sciences it is he himself who holds them in equilibrium, as subject and agent. Chapter 23 links this part to the previous one through its association with Bachelard, who as remarked in chapter 22 was sometimes called the "philosopher of surrealism"; it deals with a well-known movement of ideas in France whose connections were mainly literary, though the resources of the subject on which its particular form of awareness depended—the freedom of the imagination, the refusal of a priori limits, the spirit of adventure—seem to me to be just those required for creativity and understanding in the human sciences, including the philosophy of science itself. One detail here: the minor surrealist poet Chausson was invented by me, as was his book and the press that published it. I think that at the time this was a gesture in the direction of the surrealist game; there now seems less point in it, but the invented quotation remains apposite, and there would be even less point in rewriting it all than in leaving the artifice in place.

The subject is embodied: could this happen in any cases other than those of human beings or the higher animals? In chapter 24 I try out the hypothesis of an ascent from primitive sensitivity to full subjectivity, sketched in the previous chapter, in connection with an inquiry into that by now hoary issue of the possibility that machines might think. But I give a fresh turn, it seems to me (which means "I think," a point developed in the chapter), to the notion of thinking. In another paper as yet unpublished, destined for another encyclopedic work, this time a French dictionary of poetics, I try a similar tack with the creation and judgment of works of art, with similar results: nothing that we know about thought, or subjectivity, or art, rules out the eventual performance by machines of appropriate, and as far as we can tell authentic, functions in these domains.

Chapters 25 and 26 represent the fullest development of the argument implied by the subtitle (and, in the case of the eponymous chapter 26, the main title) of the book: the dependence of science on, or its embodiment in, individual knowing subjects. The status of subjectivity has been a major philosophical problem since Kant and Kierkegaard, but it was the introduction of the concept of intentionality by Brentano, and its extension by Husserl, that made it possible to understand the world-making power of the subject. The issue is much misunderstood: the world the subject makes—or the world that is made for it by its powers of intentionality—is not the hypothesized real world, the physical universe, but the life-world, the one that is born with the individual and dies when he or she dies. This is belied for most people—and for many otherwise careful philosophers—by the vividness of the apparently stable features of the life-world, the objectivity of which seems to be confirmed by the agreement of others, so that we think of ourselves as inhabiting a perceptual world in common. However, other people are only encountered in one's own life-world; they, and all the apparently stable features of that world, have been constituted as objective by the adaptive strategy that uses intentionality to mediate the real world (as environment) to itself (as organism) in such a way as to ensure the survival of the species. Science itself can be thought of as part of that strategy.

Do I mean here a conscious evolutionary strategy? Do I mean "uses intentionality" in a purposive sense? Of course not, as some of the foregoing chapters will have made clear. But that is the way in which it is tempting to think, to a first approximation, as we try to make sense of the life-world, helped by features of it (such as language and other cultural objects) that we borrow or inherit from other people. Not even such objects guarantee a common world, because the mediation of the real world to itself, just referred to, need never—given the complexity of the systems involved—take the same form twice, at least not in a population as inconsiderable, relatively speaking, as the human population to date (only a tiny fraction of which partakes in any cultural sharing above the local and primitive).

The life-world is our only route of access to the real world—which can figure in it (to belabor a point, perhaps—but it is a delicate and crucial point) only as hypothetical. It is thus a paradoxical fact that while on the one hand my world occupies only a small corner of the universe, the universe on the other hand occupies only a small corner of my world. I am a minuscule fragment of the hypothesized real world—but the hypothesis of the reality of that world is only occasionally the focus of my attention, which tends to be preempted by more mundane objects. The way in which I come to have that world is the main topic of chapter 25, and some of the ways in which this kind of conjecture has been adumbrated, especially in connection with the theory of perception, are the main topic of chapter 26. It will be seen that in the hypothetical real world I subscribe to an uncompromisingly causal theory of perception, but that in the life-world I find affinities with theories that appear to take a diametrically opposite view.

The life-world contains more than its perceptual contents—much more, though the perceptual will dominate if given the chance (which is one of the reasons why television is such a mixed blessing). Reflective and affective life in it—as contrasted with more straightforwardly active or reactive life—involves objects of an entirely different order: persons (as distinct from their embodiments), artworks (with a similar proviso), theories, theorems, narratives, personages, ideas, ideals, ideologies, societies, communities, nations, cultures. Does the fact that we do not require a realist hypothesis for these objects—that indeed we cannot propose one without metaphysical extravagance—mean that they cannot be treated by the methods of science? The final chapter of the book is devoted to the thesis that knowing subjects can sustain, with appropriate theoretical rigor, sciences not only of the natural world but also of the human world—sciences that will complement but need not imitate one another. (If the social sciences had not felt a need to imitate the natural sciences they would have made much greater progress.)

Whether knowledge is scientific—to hark back to the conclusion of chapter 18—is a question less of what is known than of how the knowing subject acquires and uses its knowledge, whether naively or reflectively, whether casually or systematically, whether as opinion or as judgment, whether as borrowed or as earned. At a time when knowledge of nature has so far outstripped knowledge of the human—as is evidenced by the enormous discrepancies that are manifest everywhere between technical power and social understanding—there is a greater need for serious work in the human sciences than ever before. And yet those sciences, as stable forms of knowledge, are in their infancy, as can be seen from the squabbles and recriminations of competing schools, of which the old-established analytic and the upstart deconstructionist are among the most notorious (even though their names, suitably unpacked, mean the same thing).

I would now be inclined to go even further than I do in the last chapter as it was originally written, to point out that in the domain of the human sciences the temptation to fall back on a realist hypothesis has traditionally taken a theological form, and that the (relatively) stable world-views that have resulted, while comforting to their believers, spell eventual—perhaps imminent—disaster for humanity because they are incompatible with one another but cannot hope to resolve their differences, as the natural sciences naturally do, on the basis of a common underlying ontology. The convergence of the natural sciences over the brief span of historic time is an overwhelming fact, in spite of current pragmatist and relativist opinion to the contrary. It rests on a persistence of the real: this is the great background hypothesis, the refusal of which makes any collaborative activity, even that of relativists, unintelligible. The convergence of the human sciences, because of the very different nature of their objects, has to be constructed. This is an imperative that might well set the intellectual and political agenda for the next millennium.


Science, Surrealism, and the Status of the Subject

My aim in this essay is to explore some conceptual relations between surrealism on the one hand and philosophy and science on the other. I shall not however be talking about particular scientific theories, or about the surrealists' reactions to them, but rather appealing (briefly and indirectly) to a possible scientific program. The common thread in this exploration will be the philosophical problem of the subject, which I shall treat first in the context of existentialism and phenomenology (in Kierkegaard, Sartre, and Husserl) and then from the point of view of a kind of evolutionary ontogeny. In undertaking this exploration I do not mean to reduce surrealism to the level of a theoretical view among others; if one talks about it at all one must at least acknowledge and respect the passionate difference of its founder, who after all once said, "For me everything is subject, nothing is object." I do however take it in its earliest form, as a movement of liberation through the power of the imagination, leaving aside the difficulties in which it became entangled when it moved from the level of individual or group embodiment to that of public and political involvement.

The problem of the subject is one of two chief limit-problems of philosophy. By a limit-problem I mean one that cannot be encompassed within philosophy but forces it to acknowledge its limits; if there is a solution to the problem it will not be a philosophical solution, and by parity of argument if only a philosophical solution is acceptable then the problem remains insoluble. The problem of the subject has this character because no problem can be posed except by a subject, and the subject cannot attain the exteriority with respect to itself that would be necessary to encompass the problem.

Translations from works cited in French are my own.


The other chief limit-problem of philosophy is the problem of the world as a whole. It is correlative to the problem of the subject and is problematic for a similar reason: no problem can be posed unless there is a world, and the subject as in the world cannot attain the exteriority with respect to the world that would be necessary to encompass it as a problem. In both cases the last gesture of philosophy can only be a pointing: it can as it were zero in on the place where subjectivity is likely to be found, draw its circumference, and point inwards; or it can reach through phenomena in the direction of transcendence, draw (but only approximately and partially—the boundary is both indeterminate and infinite) their periphery, and point outwards.

The first of these pointings is the encounter with existence, the second the encounter with being. Between them stretches the domain of objects, things, res, the reality of ordinary macroscopic everyday life, troubled neither by the absoluteness of being nor by the anguish of existence. The real world is a world in which people can live comfortably enough as long as they are not thrust up against what Jaspers already called Grenzsituationen, "limit situations," like love and pain and death. These are not without philosophical tractability, but elements of them reduce to our two main problems: love is the impossible encounter with another embodied subjectivity, pain the encounter of subjectivity with implacable objectivity, equally embodied, death the end of subjectivity because the end of its embodiment. That subjectivity should be embodied is the first observation on the way to locating it in the world.

But whose subjectivity is in question? It can only, for me, be mine. If I say "the subject" as if it were a category of thought, of metaphysics or ontology or epistemology, I hypostatize it as an object of my own thought and so precisely sacrifice its status as subject.

Objectively we consider only the matter at issue, subjectively we have regard to the subject and his subjectivity; and behold, precisely this subjectivity is the matter at issue. This must constantly be borne in mind, namely, that the subjective problem is not something about an objective issue, but is the subjectivity itself.[1]

Kierkegaard remained in subjective despair over this problem, unable to make what he considered the necessary leap to something posited as objective, whether God or Regina or just everyday life, an outing to the Deer Park for example. For that, faith was required. Not that the ordinary realistic bourgeois was shut out from these things, church or marriage or outings to the Deer Park—but without faith, they would not be entering into them as subjective individuals. With faith, on the other hand, they would be indistinguishable from the ordinary bourgeois, at least as far as external indicators were concerned. But Kierkegaard himself, unable to rise to faith, was at the same time unwilling to renounce his subjectivity.

Since Kierkegaard, up through the surrealists and the second (or Sartrean) wave of existentialism, the subject has been a constant preoccupation in Western thought, if sometimes a thorn in its flesh, to such an extent that even Sartre, and after him some of the structuralists, have tried in different ways to suppress it, by depersonalizing it or relegating it to "absence." Sartre had a horror of the "inner life" and went to a great deal of trouble, in The Transcendence of the Ego, to extirpate the subject from philosophy. I will not follow his whole argument but I cite from the conclusion of the book,

The subject-object duality, which is purely logical, [should] definitively disappear from philosophical preoccupations. The World has not created the me; the me has not created the World. These are two objects for absolute, impersonal consciousness, and it is by virtue of this consciousness that they are connected. This absolute consciousness, when it is purified of the I, no longer has anything of the subject. It is no longer a collection of representations. It is quite simply a first condition and an absolute source of existence. And the relation of interdependence established by this absolute consciousness between the me and the World is sufficient for the me to appear as "endangered" before the World, for the me (indirectly and through the intermediary of states) to draw the whole of its content from the World.[2]

But this "absolute consciousness" constitutes a far more difficult problem than the subject itself. Sartre's answer to the question of the origin of the subject is to have it emerge from a sort of prepersonal field of consciousness. He adopts this solution in order to avoid the charge of escapism that he levels against Husserl's doctrine of the transcendental ego, insisting that the ego must not take refuge from the world but be out in it, among things. Unfortunately this was just the point on which he most gravely misunderstood Husserl. Husserl inverted the subject-object relation (or the subject-world relation) so thoroughly that the transcendental ego, rather than being in retreat from the world, altogether contains it. "The Ego himself, who bears within him the world as an accepted sense and who, in turn, is necessarily presupposed by this sense, is legitimately called transcendental."[3]

This clearly allows the world of things, of reality, to continue in existence without me—only it is not my world, not the world that has sense for me. Husserl does not fall into Hegel's megalomaniac identification of subjectivity with the Absolute; when the latter says, as he does in the Preface to the Phenomenology of Mind, that "everything depends on grasping and expressing the ultimate truth not as Substance but as Subject as well,"[4] he does not mean the individual subject (the one I know because I am it) but a World-subject, thus personalizing and as it were subjectifying the dualism of Spinoza. Hegel, as Kierkegaard often remarks, has forgotten, in giving absolute reality to the System, that he is an individual subject.

The systematic Idea is the identity of subject and object, the unity of thought and being. Existence, on the other hand, is their separation. It does not by any means follow that existence is thoughtless; but it has brought about, and brings about, a separation between subject and object, thought and being. In the objective sense, thought is understood as being pure thought; this corresponds in an equally abstract-objective sense to its object, which object is therefore the thought itself, and the truth becomes the correspondence of thought with itself. This objective thought has no relation to the existing subject; and while we are always confronted with the difficult question of how the existing subject slips into this objectivity, where subjectivity is merely pure abstract subjectivity (which again is an objective determination, not signifying any existing human being), it is certain that the existing subjectivity tends more and more to evaporate. And finally, if it is possible for a human being to become anything of the sort, and the whole thing is not something of which at most he becomes aware through the imagination, he becomes the pure abstract conscious participation in and knowledge of this pure relationship between thought and being, this pure identity, aye, this tautology, because this being which is ascribed to the thinker does not signify that he is, but only that he is engaged in thinking.

The existing subject, on the other hand, is engaged in existing, which is indeed the case with every human being.[5]

The problem is to give a sense to this "existing" which will do justice to the fact that the subject is a thinking subject, without absorbing it into thought taken as adequation to (or identity with) being. The couples subject/object, thought/being, are thought by a subject, in this case here and now by me, by you. And this necessitates the invasion of the perfect and eternal unity of the system by a new dimensionality that I have elsewhere called "orthogonality"[6] and which I understand as the incursion of time into structure.

Sartre, who (as he himself later admitted) was only temporarily and perversely anti-Husserlian, introduces such a temporal dimension in an exceptionally brief and lucid text, "Intentionality: a Fundamental Idea of Husserl's Phenomenology":

Imagine for a moment a connected series of bursts which tear us out of ourselves, which do not even allow to an "ourselves" the leisure of composing ourselves behind them, but which instead throw us beyond them into the dry dust of the world, on to the plain earth, amidst things. Imagine us thus rejected and abandoned by our own nature in an indifferent, hostile and restive world—you will then grasp the profound meaning of the discovery which Husserl expresses in his famous phrase "All consciousness is consciousness of something." . . . Being, says Heidegger, is being-in-the-world. One must understand this "being-in" as movement. To be is to fly out into the world, to spring from the nothingness of the world and of consciousness in order suddenly to burst out as consciousness-in-the-world. When consciousness tries to recoup itself, to coincide with itself once and for all, closeted off all warm and cozy, it destroys itself. This necessity for consciousness to exist as consciousness of something other than itself Husserl calls "intentionality."[7]

The difficulty here is that, as Kierkegaard feared (but in a different way), the existing subject evaporates, this time as it were away from itself in the direction of the intentional object. "All consciousness is consciousness of something" requires perhaps to be complemented by the symmetrical claim, "All consciousness is somebody's consciousness," meaning by "somebody" a fully embodied existing individual, not an impersonal absolute springing out of nothingness. Sartre's idea of prepersonal consciousness seems to me merely mystifying. It is true that very great difficulties attend the question of how personal or individual consciousness emerges (I prefer "individual" to "personal" because it is precisely not a question of the "persona," the way the individual appears to others, but of biological individuality), but it is not helpful to invent a common source of subjectivity in "absolute consciousness." Whose is it? It is true also that when I become conscious of the world, but admittedly much later, it is not at first as an individual but as a consciousness. But it is I just the same; if I do not yet know who I am, it is not that I am confusing myself with someone else.

It seems to me in fact that the refusal of the idea of the subject, on the part of subjects, whether by Sartre or by the structuralists, is (like the refusal by Derrida of the idea of the book, in the introduction to a book) a red herring, unless indeed it should be a kind of surrealist game ("ceci n'est pas une pipe"). In any other spirit the utterance of the words "I am not a subject" would require elaborate preparation, like the utterance of the words "I am dead" in Poe's story about M. Valdemar. For the moment at least we are still ourselves subjects.

Even if everything I have said so far is provisionally acceptable there still remains the question of the nature and provenance of the subjects we are (or strictly speaking, as before, the question for each of us of "the subject I am"). I take this question up now from an entirely different point of view. To reflect on subjectivity is necessarily to engage in autoreflection, to lend oneself to reflexivity. In the rhetoric of Quintilian there was a figure called subiectio, which was defined as "giving the answer to one's own question." In full subjectivity I give myself as the answer to my own question. But what kind of being must I be in order to be able to do that?

Perhaps some light could be thrown on this by a genetic approach, something like Condillac's device of the statue but without the prior assumption of interior organization. We might think in fact of several stages in which the interior-exterior relation takes progressively different forms. In the case of a merely material object there is no interior-exterior relation in the required sense. In the case of a very primitive but merely reactive organism there is such a relation but of the simplest kind—its interior state is required to match, in an objective sense, the exterior, and if this match fails it will move, essentially at random, until it finds a new and more closely matching exterior situation. In the case of an organism whose nervous system permits what we might call sensation there comes to be an interior representation of the exterior situation, and it is the matching of the interior state with this representation, rather than with the exterior situation directly, that determines its motion. A yet more complex organism which has a representation not only of the exterior situation but also of its own interior state might be said to have reached the stage of consciousness; its motion is now determined by the matching of the two representations. Its consciousness lies in the awareness of similarity and difference between these representations; but one might say that it is not aware of this awareness. Finally we arrive at what Sartre called the prereflexive cogito; in this last stage subjectivity enters, a kind of conscious monitoring of consciousness that consists not in the matching of the two representations but rather in the matching of the interior state with a representation of itself.

This is of course sketchy and merely programmatic, but I think something like it must hold in principle. If we now ask the question, "What is the conscious organism's world like?" we get all sorts of interesting answers from biologists: specialized creatures have specialized worlds (in the case of frogs, for example, only what moves counts). On reflection it becomes clear that my world cannot be more complicated than the internal structure by means of which I represent it; the aspects of the exterior world that are not reflected or matched (i.e., anticipated) by this structure do not and cannot exist for me. What hides this truth from us is that we have, of all the animals, the most general-purpose structure, and the most complicated, so that it takes a serious effort of the imagination to comprehend it: ten million cells in the retina, ten billion in the brain, all multiply connected, and so on.


Now without pursuing this idea into its furthest physiological ramifications we can take one more plausible step and say that our interior world may be—and in fact must be—far more complicated than the external world in which we live. In order to match a given external situation I must have at my disposal a repertoire of interior representations that encompasses and surpasses—by far—the sum of the elements of the given situation. It is a bit like reading (and indeed is a kind of reading): if I am to be able to read a text I must know all the words it contains—and all the others in my language as well. This concept of the "prepared reader" is very far-reaching and I will not pursue it further for the moment. But I will point out that it easily explains, in its generalized form, the experiences of drug addicts who think they have come upon a new world; what they have come upon is a state, artificially displaced from the normal state, that would have corresponded to the world if the world had been different. If we are able to "read" the world in which we do in fact live, we must have the materials for reading many others, nearer to it or more remote (up to a point)—we must, that is to say, have the materials for the construction of the internal representations that are constitutive of states of consciousness other than the "normal" state, and these materials may be constructively employed, as we know they are, under special circumstances—sleep, illness, intoxication, etc.

In fact, the complexity of the apparatus at our disposal is such that "reality" pales in comparison with it. Not that in the first instance it takes precedence over reality; indeed, as Sartre suggests, a lot of what we are comes from the external world. The structure of the apparatus is determined, however, not only by the physiology of the nervous system and by experience in the usual sense of the word, but also and most strikingly by reading, and especially by the reading of texts. Furthermore, this is cumulative: reality is as it now is only now, but there remains in me something of what I read yesterday, last year, in childhood. Subjectivity, I will say, is what animates this complex structure, what scans it. The structure is idiosyncratic in each one of us, partly because of genetic differences, but mainly because of radical differences of experience and reading. And it is indefinitely extensible. If I pick up a book I have not previously read I borrow its structure, I lose myself in it, or as we sometimes say, I am "buried" in it. This is what André Breton clearly saw when he said, in Surrealism and Painting,

Nothing prevents me, in this moment, from arresting my gaze on some illustration in a book—and lo! what was all about me no longer exists. In the place of what surrounded me there is now something else because, for example, I participate without difficulty in an altogether different ceremony.[8]

The reality, then, of "what is all about me" is for the most part much less interesting than the reality of what I read—or find in myself, in my unconscious, my dreams, in hypnagogic images or phrases. As a subject I have a whole domain to scan, if only I can find the keys to it. I also have deep resources of action, even violent action, which can be tapped in automatic writing or other exercises, such as shooting at random in the street. But where then am I? It is not that I inhabit my body; I am it insofar as I am a conscious subject. But I am it differently according to circumstances. Often I animate only its physical structure, or the borrowed structure of the immediate environment as it is delivered to me in perception. But I also animate other structures, borrowed or created, those of books or of the imagination itself. Let me cite Breton again, from the First Manifesto:

For today I think of a castle half of which is not necessarily in ruins; this castle belongs to me, I see it in a rural situation not far from Paris. . . . I will be convicted of poetic untruth: everyone will go about saying that I live in the rue Fontaine, and he won't swallow that. Parbleu! But this castle of which I do him the honors, is he sure that it is an image? If this palace existed, for all that! My guests are there to answer for it; their fancy is the luminous route that leads there. It is in truth at our fantasy that we live, when we are there.[9]

Now we are always somewhere and we are always there morally, as it were, on the basis of what we are physically, even though this truth is for the most part hidden and may well remain so. There is enough going on in us for us to have no excuse for boredom. Among other things there are second-order activities like philosophy and criticism—including the question of the subject. If the subject, as I have suggested, scans or traverses the labyrinthine structure of the me, this means that it endures through time and traces out, so to speak, a line through this structure, through this network of available subjective states. To the idiosyncrasy of the subject we can then add its linearity, thus invoking the doctrine of the sign—unsurprisingly, perhaps, since we remember from Husserl that the ego, the subject, contains its world precisely as a "unity of sense." The concept of matching to which I referred earlier has since Saussure been a special mark of the sign, and that the life of the subject should be a life of significance is altogether appropriate. (I find the notion of a "life of significance" far more acceptable than that lure of bad theology and metaphysics, the "significance of life.") As the minor surrealist poet Denis Chausson points out in his essay Les lumières coincidentes, "If the surrealist life has a single power, it is to be able to make of the highest significance whatever is presented to it, in whatever disorder, by the chance of the everyday."[10]

One further step: we can add also to the idiosyncrasy and linearity of the subject what the latter, in its Saussurean context, inevitably suggests, namely, the arbitrariness of the subject. That these subjectivities should be associated with these bodies, in this place and at this moment, has no reason and no explanation. They are not only our means of access to this reality, they are this reality; they contain their world, as Husserl said they did. But that leads to the final question: what is the relation between the reality the subject contains and the "external" reality with which we began? As we have already said, the projection of a reality from the side of the subject, out of its resources of conscious and unconscious structure, may seem in the end far preferable to the usual entrapment in the ordinary. And yet it is perhaps unrealistic to try to use that as an escape route from reality. To quote Surrealism and Painting once more,

Everything I love [says Breton], everything I think and feel, inclines me to a particular philosophy of immanence according to which surreality would be contained in reality itself, and would be neither superior nor exterior to it. And vice versa, for the container would also be the contained.[11]

The question is complicated, however, for reality seems to exercise an obscuring function. In his essay on the first Dali exhibition, in Point du jour , Breton suggests that what is thus obscured is of paramount importance and that it can be recovered by surrealistic strategies:

It remains to suppress, in an unquestionable fashion, both what oppresses us in the moral order and what "physically," as they say, prevents us from seeing clearly. If only, for example, we could get rid of these famous trees! and of the houses, and the volcanoes, and the empires. . . . The secret of surrealism resides in the fact that we are convinced that something is hidden behind them. Now we have only to examine the possible modes of the suppression of trees to perceive that only one among these modes is left to us, that in the end everything depends on our power of voluntary hallucination.[12]

The language of this passage has a curious echo which takes us back to Kierkegaard. For what was traditionally recommended as a means of getting rid of volcanoes, or at any rate of moving mountains, was precisely faith, the goal for which Kierkegaard strove so continually and, according to his own account, so unsuccessfully. If we had faith, then the real and the surreal would indeed be interpenetrating, just as Breton's earlier citation requires. Consider the Knight of Faith in Fear and Trembling:

No heavenly glance or any other token of the incommensurable betrays him; if one did not know him, it would be impossible to distinguish him from the rest of the congregation. . . . Towards evening he walks home, his gait is as indefatigable as that of the postman. On his way he reflects that his wife has surely a special little warm dish prepared for him, e.g. a calf's head roasted, garnished with vegetables. . . . As it happens, he hasn't four pence to his name, and yet he fully and firmly believes that his wife has that dainty dish for him. . . . His wife hasn't it—strangely enough, it is quite the same to him. On the way he meets another man. They talk together for a moment. In the twinkling of an eye he erects a new building, he has at his disposition all the powers necessary for it.[13]

Here is faith at work all right—and yet is it in the world? Are the container and the contained symmetrical? Has something not appeared behind the "physical" obstruction of reality? Kierkegaard remained ambiguous about this; he could not manage to be this humble Dane, the Knight of Faith who takes an outing to the sea shore or the Deer Park—did he really think it desirable? Precisely the same ambiguity, strikingly enough, is to be found in Breton. Consider the following passage, in which Nadja speaks:

"A game: Say something. Shut your eyes and say something. Anything, a number, a name. Like this (she closes her eyes): Two, two what? Two women. What are they like? They're in black. Where are they? In a park. . . . And then, what are they doing? Come on, it's so easy, why don't you want to play? Well, me, that's how I talk to myself when I'm alone, I tell myself all sorts of stories. And not only pointless stories [de vaines histoires ]: it's even altogether like that I live."[14]

And Breton adds a footnote: "Doesn't one touch here the extreme of surrealist aspiration, its strongest limit-idea?"

But he doesn't want to play—or he can't. Like Kierkegaard, who lamented his own inability to claim Regina, Breton in several places reproaches himself for not being up to the surrealist challenge, for not speaking to such-and-such a woman, for not allowing himself to drive blindfold with another woman who happened to be Nadja. Or rather in this latter case it is just an observation: "No need to add that I did not accede to this desire."[15] Perhaps in the end—and the parallels are far from exhausted in this brief paper—the subject in surrealism as in phenomenology cannot let itself go wholly in the one direction or the other, to a solipsistic retreat or to a total surreality. Perhaps pure surrealism, like pure faith, is an unrealizable limit-idea. But as in the case of the subject itself, that limit-problem, the failure to reach (let alone to transcend) the limit does not, as it turns out, invalidate the enterprise.


Subjectivity in the Machine

Thinking and Subjectivity

In this paper I shall propose what I take to be a timely shift of attention from the question of whether a machine can think to the question of who, if anyone, a thinking machine is. But there is life in the old question yet, and one way of looking at it provides a bridge to the new question. This is the strategy of treating "to think" as a deponent verb, and it opens up the whole issue of thinking as behavior versus thinking as experience.

Deponent verbs, which in Latin grammar were passive verbs with an active sense, were so called because their original passive sense had been "put aside," deposed as it were, in favor of the new active one. So they involved a history, the history of a transition from something that happened to people to something they did. If this can be traced within a living language it must have been very recent (though the whole history of language, indeed the whole of history, is recent in evolutionary terms). In the case of thinking such a transition does seem to have taken place—indeed, it is still incomplete, the passive sense being preserved in many locutions in current use: "it seems to me," "it occurs to me." We have dropped the passive form "methinks," but it remains in the language as an archaism; it was current only yesterday, as it were. And we can get a sense of what the transition means by trying it out on another form: take "it seems to me" and transform it into "I seem it to me." That is what "I think" in fact amounts to—from having thought contents occur whether we want them to or not, we move to a position of control, we choose what to think, we "make up our minds" (consider the transition from the old form "I have a mind to . . . ," or its current descendant "I've half a mind to . . . ," to "I've made up my mind to . . . ").

Thinking has an authentic etymological connection with seeming or appearing; what I do when I think is to bring something "before" me, in a sense yet to be specified—the expression that comes to mind (the passive position again!) is "before the mind's eye," whose usefulness as a metaphor is however limited because of the tendency of the visual to preempt the field of attention. I will return to this point. The main thing to notice here is that the shift from passive to active takes us from the object position to the subject position. And this is something that happens gradually, ontogenetically as well as phylogenetically, and that requires our participation. Being able to do it is what it means to be a subject in the active sense, to be an agent in the process of thinking rather than a patient. But doing it does not mean the end of thinking in the other sense.

The two possible ways of construing "thinking," then, are as a process that goes on, that happens to us, and as an activity undertaken, that we engage in. Only the latter involves subjectivity essentially. But just what do we understand by this "subjectivity"? And, since understanding and explanation are correlative in the philosophy of science, is there an explanation of subjectivity? To deal with these questions in order: by "subjectivity" in this paper I mean the condition of being a subject, or being in the subject position, in relation to objects known or acted upon—being situated, that is, at one pole of a vector of attention or intention, the pole characterized by noetic consciousness as opposed to the noematic contents of consciousness. I do not mean "subjectivity" to be contrasted with "objectivity" where the latter is used, as it sometimes is, to mean a commendable detachment from affective influences on judgment. If I am to preserve a scrupulous objectivity I cannot allow "subjective" factors to influence my conclusion, so it looks as if I have to keep something called "subjectivity" at bay. But I have to be a subject , hence to have subjectivity in the sense in which I use the term here, if I am to preserve or conclude anything, or know that anything has been preserved or concluded. Another way of putting this is to say that objective knowledge presupposes subjectivity because any knowledge does, knowledge being just such an intentional relation.

Can subjectivity in this sense be explained? There are two main ways of going about the business of explanation, which I will call working out (from an intuited center) and working up (from a postulated base); their paradigm cases are respectively phenomenological and hypothetico-deductive explanations. Ideally these processes meet in the full explanation of a given object, process, or event. This shows not only how the entity in question fits into a causal network but also how we have access to it in experience. But a systematic difficulty arises in any attempt at an explanation of subjectivity: we must start with it in working out, but we can't get to it by working up. This is, again, for the obvious reason that even working up presupposes it: subjectivity is intentional, and because both the explanandum and the explanans are intentional objects it is required if they are to be evoked.

This is a crucial point. I use "intentional" here in Brentano and Husserl's sense, not in Dennett's sense,[1] which, although it catches admirably how systems with Brentano-Husserl intentionality might behave (or how their behavior might be explained), does not succeed in showing—and, to do Dennett justice, does not try to show—that a system the explanation of whose behavior requires the "intentional stance" needs to have intentionality in the Brentano-Husserl sense. Dennett doesn't think this matters, but I think it makes a tremendous difference. Intentionality is just the feature of subjectivity that directs its awareness towards objects (or is aware of its direction towards objects). "Intentions," in the sense of "purposes," are a familiar case of this, though they account for no more than a small fraction of intentional activity; attention, to which I shall return below, is a special case in which the object is presented (normally in perception), so that its constitution as what it is requires no active contribution from the subject.

Of course the only subjectivity presupposed in this way is my own. Its intentionalities, however, are what make my world and my project and make them meaningful. Or perhaps having a meaningful project in a meaningful world is just what it is to be a subject. Quintilian's definition of the rhetorical form subiectio, as "the suggesting of an answer to one's own question," suggests a possible definition of subjectivity: to be a subject is to be in a position to suggest oneself as the answer to one's own question. One of the things I think is myself. But I think myself differently from the way I think objects, and that is one of the difficulties in the way of a general explanation of subjectivity. Freud to the contrary notwithstanding, the subject can't be made an object; I can't give my own subjectivity as the answer to anyone else's question, nor can anyone else give theirs as the answer to mine. But if we can't produce subjectivity as an explanandum on the basis of an explanans we can perhaps at least locate it, and indicate the kinds of structure and experience that are concomitant with its occurrence.

We have no evidence at the moment of any cases of subjectivity other than our own, and (conjecturally) that of some of the higher mammals. All of them are associated with the advanced development of central nervous systems. But central nervous systems can become extremely complex without the emergence of subjectivity. What we find we want to say is that there cannot be subjectivity without the full activity of thinking, and that brings us back to the old question. The distinction between thought as process and thought as activity closely parallels the distinction between thought as behavior and thought as experience: any system can behave, whether conscious (or subjective—but the terms are not synonymous) or not, but only a conscious subject can have experience. (The "ex-" of "experience," though in its origin a plain "ex-" like any other, shares with the "ex-" of "existence" the sense of a standing-out from something: ex-perience is a coming-out from a going-through, but there has to be some continuity, some substrate, for it to be cumulative.)

Thinking as Behavior

"Thinking as behavior" is itself ambiguous. It has two senses, one relatively straightforward, the other at once more trivial and more profound. The straightforward sense construes the behavior associated with thinking as a behavioral output that is taken to result from thought. It helps in circumscribing the place of thought in the process if there has been an input which has triggered it by offering something to think about. So the paradigm case is the answer to a question, and the classical form of questioning is Turing's imitation game.[2] A machine that plays the game proficiently, so that its "opponent" thinks it is a human being, has to be admitted to have been thinking, or at any rate doing something that, if a human being did it, would count as thinking.

Turing argued that there could be no reason to deny thought to the machine if it satisfied the test by which we attribute thought to human beings. Is the imitation game such a test? There is clearly something it tests: it is by his or her answers to my questions (or responses to my conversational moves—outright interrogation is not the norm of social intercourse) that I judge whether my interlocutor is awake, English-speaking, intelligent, knowledgeable, witty, thoughtful (which is only one mode of thinking), gifted at languages or mathematics, a compatriot, a fellow-enthusiast, a professional colleague, etc. Mostly of course I assume these things and am disappointed when he or she turns out to be inarticulate, slow-witted, or fraudulent.

The imitation game in fact seems to be less a test of thinking than of the membership of one's interlocutor in one or more of a number of linguistic communities, all the members of all of which are assumed to be capable of thought. But on reflection it does not seem obvious that, if a machine passes the test of admission to such a community, this assumption necessarily operates in its favor too. What does the assumption rest on? How do people learn the language of thought—not the "language of thought" in Fodor's sense but the ordinary language by which they refer to thought?

They precisely can't learn it by behavioral cues except in the trivial sense: furrowing brows, banging foreheads, etc. But the more profound implications of this sense—the behavioral concomitants of thought, rather than its behavioral consequences—are connected with the realization that sometimes it is just the absence of behavioral cues that indicates thinking. A child who disturbs an immobile and silent parent and is rebuked for it—"Can't you see I'm thinking?"—may learn to be silent and immobile too at tricky junctures and may thereby learn to attend to something in itself that would not otherwise have been attended to.

An anecdotal example of the point at issue here was recently provided by a striking scene in a play called "Whose Life Is It Anyway?" Its main character was a hospital patient paralyzed from the neck down. At a climactic point in the play a judge, who had to decide whether to grant the patient's wish to be allowed to die and who was in perplexity about this decision, walked downstage and stood motionless for what was in dramatic terms a very long time, his hand to his chin, obviously deep in thought. It was understood that in this moment he was actively exercising his highest human capacities, his high judicial function; and yet in so doing he was as it were symbolically paralyzed. He wasn't doing anything that the patient couldn't do, and the spectator was led for a moment to reflect that if the highest mode of functioning as a human is compatible with immobility, then the patient's immobility was no bar to his functioning at the highest human level.

That wasn't how the play came out, but it serves my purpose by sharpening the question of what is going on while we are thinking, what attention to the process of thought reveals, and whether attending to it has anything essential to do with the process. It might turn out to be the case that we are thinking beings who happen to be able to observe some of our thinking processes consciously even though there is no need for us to do so, like passengers on a ship who are allowed to go up on the bridge and watch the captain and hear his commands, though the ship would sail on just the same if they stayed below. And it might turn out that machines can't do this, in which case they would be thinking all right but there still might not be anyone there. On the other hand it might turn out that our role on the bridge is as captain, forming an essential link in the causal chain of command. And if there were some maneuvers that couldn't be done without the captain's intervention, and the machine managed those too, then we would have to ask what in it corresponded to the captain.

Before anyone raises the obvious objection to this metaphor I had better do so myself. It sounds like a homunculus theory, and even Descartes saw that that would not do. He uses much the same image: "I am not only residing in my body, as a pilot in his ship, but . . . I am intimately connected with it, and . . . something like a single whole is produced."[3] So it is not a question of a part of me observing another part, but of my exercising a reflexive or self-referential capacity that may or may not be essential to the process of thought. This "essential" remains problematic though—it might turn out that the captain himself served a merely decorative function, and that the ship would sail smoothly on even if he stayed below.

Attention and Intentionality

Certainly there are functions of what we all recognize as thought that we can't attend to even if we try. If someone asks me for the product of six and seven and I say "forty-two," there is just no way in which I can catch any thought-content between the question and its answer. I can of course complicate things so as to provide one—six sevens is the same as three fourteens, which is three tens plus three fours, two-and-a-half of which will already make another ten, and so on. But that is only like climbing the steps at Lourdes on one's knees so as to draw attention to the process.

The concept of attention keeps cropping up and itself needs to be attended to. It makes a pair with intention and the contrast between them is instructive. This has something in common with the contrast between discovery and invention, in that we attend to what is already there but intend precisely what is not yet or is no longer there, or what never was or never could be there (golden mountains, round squares). It is the difference between finding something and creating it. Subjectivity, in the phenomenological sense, is as we have seen intentional by definition: every consciousness is a consciousness of something, and intentionality points along the axis from noesis to noema. This pointing however is an activity of the knowing subject, which sustains the object of thought; it suggests that the subject is also agent. The standard cases of intentional objects, just referred to, don't, we suppose, present themselves; they need to be thought of. (Their claim to ontological status consists wholly in the fact that they are objects of thought.)

What if subjectivity were merely attentional? Consciousness would still have a content all right, but the question of agency wouldn't arise; we would have to assume that thought proceeded automatically and that the various contents of consciousness associated with it, including the feelings of deliberation, agency, etc., were just given, as we are normally convinced some of them really are (for example, hypnagogic images and phrases). This is in effect the position familiarly known as epiphenomenalism. If it is correct then there can't be any way of knowing whether there could be subjectivity in the machine, but on the other hand it won't matter much.

Epiphenomenalism is about as uninteresting, philosophically speaking, as strict determinism, since if either of them is correct, there's nothing we can do about it; in fact there's nothing we can do period—it all just happens. It may be philosophy, it may be sex, it may be pain, it may be madness—we're just along for the movie. Perhaps someone wrote the script for the movie, perhaps not, it makes no difference, we're strapped into our seats, no climbing out of this cave. And these views might in fact be correct (I know of no argument that can block that possibility in either case), but if so my conjecture that they are or aren't correct has no weight whatever: it's just something that got conjectured in my movie, I can take no credit for it, I'm not in it. Possibly the machine is watching its movie. If it is, as I shall suggest in a minute, it may or may not be enjoying itself, and that might have implications for us as machine-builders—but not in an epiphenomenalist world; there, there aren't any implications of anything for us, just movie-implications at best, as unreal as screen kisses or champagne.

Still once one thinks of it one has to admit that a lot of human experience is merely attentional. So the capacity for conscious attention might be an evolutionary dead-end, a freak of which we happen to be the complacent (or on reflection the astonished) beneficiaries. But it seems just as likely that it conferred some evolutionary advantage, that attention made intention possible. The attention-intention loop constitutes, one might say, a conscious version of the sensorimotor loop, and may indeed be inserted in it, although this is not necessary either to the success of some sensorimotor processes or to the meaningfulness of some intentional responses to attention.

Sensation and Consciousness

This way of putting it provides a clue as to the emergence of subjectivity out of mere consciousness. First we have peripheral reflex arcs, then sensorimotor coordination through the central nervous system: so far no consciousness. Then the complexity of sensorimotor activity, its multidimensionality, its necessary swiftness, make it desirable to give the relevant inputs analog form as what we call the "senses," and to represent the state of the ambient world by mapping its features into a visual space into which the position of the body and of its parts can also be mapped. (More accurately, the independent development of such an analog representation makes greater complexity, speed, etc. of sensorimotor processes possible for the organisms that have it.) Visual awareness, let us suppose, is just this mapping. It has its attentional and its intentional aspects: the input is something that as we say "catches" our attention, the output is the directed reach, first of sight (we look rather than just notice) and then of the appropriate muscular complex. The sum of sensory awarenesses and their derivatives will constitute the content of consciousness.

Any sensorimotor loop, it might be argued, involves representation or mapping of some kind: at the very least the visual (or auditory or olfactory or tactile) space from which the input arrives will map on to the motor space to which the output is directed. However, we need our system of representation to recognize the mapping (and the spaces between which it holds) as such—otherwise it's just a bit of causal machinery. In more complex cases, when the loop doesn't put its signals straight through but engages some more complex neural structure, possibly involving time delays that make strategic computations possible, we may find what look to us like internal representations that map into both spaces, sensory and motor. But do they make the input and output spaces "look like" the same space to the organism in question? Is computation here also deliberation of a sort? No simple answer, it seems to me, can be given to these questions. In our case it is so, which suggests that something similar probably holds for other organisms of comparable complexity.

Once the analog representation is in place (let us call it the "sensorium"), the organism can do all sorts of interesting things with it, especially if a good memory is also available. But the development, from the immediate and vivid sensorium (largely preempted by perceptual contents, which when available tend to overwhelm competitors for attention—this explains why the library in which I am writing is quiet, and decorated in muted colors), of the intentional domain required for thinking and subjectivity, must have been long and slow. Once it had learned to attend to the contents of perception (selectively no doubt—at first to movement and change, for example, rather than to constant features), the organism could begin to attend to sensory contents remembered or abstracted from memory. Later on—and here is the transition to agency—it could intend them also (imagine, project, etc.). At this stage there need be no actual sensorimotor involvement at all: hence the immobility of the thinker. This activity is by definition conscious, and there can be little doubt that many animals share it with us (dreaming dogs, for example). The question is, how much of the activity of thought necessarily goes on there, rather than going on elsewhere and being reported there, or not, as the case may be?

I am suggesting that some forms of complex sensorimotor coordination may have required the insertion of an attentional-intentional segment, that this may have been what made them possible. Or perhaps it is what made learning them possible. In this connection Schrödinger's conjectures as to the evolutionary role of consciousness are relevant. Schrödinger thought that consciousness had a phylogenetic function: "I would summarize my general hypothesis thus: consciousness is associated with the learning of the living substance: its knowing how is unconscious."[4] But once we know how there seems to be nothing to prevent our programming complex coordinations into the machine without requiring it ever to be conscious; we could give it the appropriate transform between visual space and motor space instead of making it establish the transform for itself, as seems to be the case with us (although even with us a lot of that seems to happen automatically—binocular vision, for example—only learned refinements requiring conscious monitoring).

The Ascent to Subjectivity

It is tempting to try to establish a developmental sequence, an ascent from the inertia of matter to the reflexive consciousness of the subject. I offer here one possible account of such an ascent, in terms of states (S), representations (R), and an operator (~) which I shall call the "matching" operator. Matching, as I have explained elsewhere,[5] is not a simple correspondence but has an active sense—among things that match are equal numbers and identical colors, but also left and right gloves, keys and locks, musical phrases and their repetitions or transpositions, bits of jigsaw puzzles and the spaces they fit into, and so on. Above all, in language the signifier matches the signified (indeed I consider matching to be the fundamental phenomenon of signification).

The stages of the ascent are represented as (a) through (f) in figure 1. The states are states of some individual entity, which at stage (c) and above is an organism. At stage (a) the entity is not differentiated from its environment, so that there is no sense to a contrast between internal and external states. At stage (b) there is differentiation but no metabolism, so that nothing about the environment matters to or affects the entity. Metabolism is the minimum condition for the entity's functioning as an organism. At stage (c) and above, then, the external state Se is the state of the immediate environment as it affects the organism, ignoring irrelevant conditions even though these might seem, from some other point of view, the most salient features of that environment.

Figure 1

Stage (c) itself represents a situation in which the organism reacts to something in its environment, salinity or temperature, for example, so that it moves in such a way as to bring its internal state Si into equilibrium with the external state and then stops moving until either Si or Se changes with time, in which case movement recommences. The matching takes place across the boundary between the organism and the environment. But at stage (d) I postulate a causal process that forms within the organism a representation of the relevant state of the environment. There is obviously no need for the representation to resemble anything in the environment, as long as changes in the environment produce corresponding changes in the representation. Now the matching becomes internal to the organism, and a mismatch, which leads as before to motion until it is corrected, is no longer a transaction between organism and environment. I suppose that at this stage the organism is sensitive, and that the mismatch, constituting as it now does a state among others of the organism itself, is felt, perhaps in the limit as pain—indeed almost certainly as pain, since presumably the avoidance of gross mismatches is of evolutionary significance.
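These two stages can be caricatured in a few lines of code. The linear transducer and the proportional update rule below are my own illustrative choices, not part of the account; the point is only the structural difference between matching across the boundary (c) and matching between internal representations (d), where the residual mismatch is itself a state of the organism:

```python
def transducer(s):
    # Stage (d): a causal process forming a representation R(S); it need
    # not resemble the state it represents, only covary with it.
    return 2.0 * s + 1.0

def step_c(s_int, s_ext, rate=0.3):
    # Stage (c): matching across the organism/environment boundary;
    # movement is driven directly by the mismatch with the external state.
    return s_int + rate * (s_ext - s_int)

def step_d(s_int, s_ext, rate=0.3):
    # Stage (d): matching internal to the organism; both terms are now
    # representations, and the residual mismatch, being a state of the
    # organism itself, is "felt".
    mismatch = transducer(s_ext) - transducer(s_int)
    # Divide by 2.0 to undo the transducer gain when converting the
    # represented mismatch back into movement.
    return s_int + rate * mismatch / 2.0, abs(mismatch)

s_c, s_d, felt = 0.0, 0.0, None
for _ in range(40):
    s_c = step_c(s_c, 10.0)
    s_d, felt = step_d(s_d, 10.0)
# Both organisms settle into equilibrium with the external state, and the
# "felt" mismatch dwindles as they do.
```

Either way the organism stops moving at equilibrium and starts again if Se changes; the difference is only in where the matching takes place.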

It has to be admitted that here (as at every point in the ascent) my assertions are wholly conjectural, but of course just such conjectures, which if true would account for the conditions they are intended to explain, are the stuff of scientific discourse. At stage (e) I envisage a double representation, not only of the state of the environment but of the state of the organism that requires to be brought into equilibrium with it: the monitoring of the match between these representations I take to signal the emergence of conscious awareness, though this is still tied to immediate sensory contents. A centrally important special case at this and the subsequent stage will be the case in which the internal state represented by R(Si) is an element of memory; the availability of large numbers of memory-states is part of what makes consciousness and subjectivity as complex as they are. Finally at stage (f) the dependence on momentary sensory inputs is overcome, and consciousness is free to attend to matchings of pairs of internal representations. This is the condition for subjectivity: consciousness can build up an internal history, can lead a life of its own independently of the external state of affairs, provided the latter does not obtrude in the form of representations on a lower level that demand attention because of painful disequilibria. The number of available internal states can, with time, grow very large, and the life of the subject be almost wholly internal.

The Place of the Subject

In speaking of "monitoring" or "attending to" matchings of representations there is a risk of misunderstanding that I would like to try to dispel, though since the corresponding form of understanding can at best be metaphorical, this may not be easy. My subjectivity is of course engaged in the writing of this text, as is the reader's in reading it; as we look at the representations of the hierarchical stages, for example, at (e) the vector of our intentionalities is roughly orthogonal to the plane of the page—we attend to the postulated matching of two states but our own subjectivity is a third element in the situation. But in (e) and (f) I do not mean to suggest that the consciousness or the subjectivity of the organism is a third element alongside the two matched elements; on the contrary, it is their matching. In the perceptual case it will look to the organism as if there is only one state, and that external, perception being of objects in the environment. But all the organism has subjective access to is a representation of the external state, and all representations are internal; what is happening is that the internal representation (not necessarily resembling the external state but determined by it and responsive to it) is being intended (or attended to) by the organism, which for the purposes of conscious action or reaction is as it were lining it up with, sighting it through, correlating it with, a representation of an internal state, a memory-state for example. All these images of comparison are unsatisfactory because they invoke two entities being manipulated by a third, whereas the subject subsists in the dynamic relation of the two.

The net result of the matching that produces the subject is the intending of objects, of a world. In an earlier text I defined the subject as "what permits the integral, continuous, and possibly repeated apprehension of the object, in the moment of this apprehension and abstracting from purely physiological conditions of perception . . . 'Integral' does not require a total integration of the object in itself . . . and 'continuous' does not require a very long time—but enough. . . . Continuity implies, one might say, a repetition from one moment to the next; the further possibility of the repetition of a whole episode of apprehension, the recognition of the same object after a more or less prolonged absence, implies the 'genidentity' of the subject as an individual and of its own point of view."[6] So for the subject in the diagram the vector of intentionality lies along the line, in the plane of the page; at stage (e), the first at which talk of intentionality makes any sense, one can imagine its pointing from left to right, sighting the external world through its representation, as it were. In that case we might be tempted to locate the subject itself as a kind of virtual origin of intentionality somewhere to the left. But this would be misleading at best, and by the time we reach (f) the assignment a priori of such directionality makes little sense. In general we might suppose that the vector goes from signifier to signified, but in the ramified network of representations that exists at this level it may become problematic or meaningless to perpetuate that distinction.

The subject in other words is coincident with the matched elements. The somewhat elusive nature of this relationship was anticipated (in what may seem an unlikely quarter) by Jean-Paul Sartre in his doctrine of the "prereflexive cogito."[7] Sartre's argument in effect was that if I am conscious of something, I am at the same time conscious of being conscious of it. So in the matching of representations I am aware of the matching as well as of what is represented and the form under which it is represented. This double awareness is I think an essential feature of subjectivity. (It can of course be more than double, since the structure of the prereflexive cogito lends itself to recursion, though in practice I suspect that there is a limit to the number of terms in the series that can be attended to at once, and that two terms—the first two—are the norm.) Saying this however is of no help in explaining how subjectivity is possible. As I suggested at the beginning, I think that such an explanation is unavailable to us in principle. The fact of subjectivity is, in the strict sense, absolute: as a problem it does not admit of solution.

Note that in the ascent described above "representation" is in a somewhat similar situation to "intentionality" in the early part of the paper: a specification of the relation of representation might be drawn up, that would be met by cases in which subjectivity was present, but at the same time the fact that a case met the specification would be no guarantee whatever that subjectivity was in fact present. Indeed, such specifications have been drawn up, by Churchland, Pylyshyn, and others, and they turn out to be nothing more than sophisticated versions of mapping, just as Dennett's intentionality turned out to be a sophisticated version of explanation. It seems to me unlikely that we will understand representation in thought until we have understood presentation in perception—and then (perhaps) in thought too.

The Sensorium as Monitor

What exactly is "present to the mind" in thought? I come now to an even more conjectural part of my paper, which belongs to the higher reaches of the ascent, after all the stages hinted at above. Note, again, that we might go up an ascent like this one (e.g., one that mimicked it behaviorally, with delayed reactions that looked like "allowing time for thought") without activating the sensorium or having genuine cases of subjective presentation or representation. One reason for this is that an organism (or a machine) can have sensors without having a sensorium. The old Turing problem, this time a bit closer to realization, comes up implicitly in Valentino Braitenberg's genial menagerie of "vehicles."[8] His second simplest vehicle—with two sensors and two motors—already exhibits "fear" or "aggression," depending on whether the sensor-motor connections are parallel or crossed over. This reinforces the view that there is more to thinking than meets the behavioral eye (or less: the immobile parent lost in thought may in fact be merely daydreaming), but again it's we who interpret the behavior as fearful or aggressive.
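Braitenberg's point is easy to reproduce. The sketch below is my own toy kinematics, not Braitenberg's text: two light sensors drive two wheels, wired either in parallel ("fear": the side nearer the source drives harder, turning the vehicle away) or crossed ("aggression": the nearer side drives the opposite wheel, turning the vehicle toward the source). All constants are illustrative.

```python
import math

def intensity(x, y):
    # Light source at the origin; the reading falls off with squared distance.
    return 1.0 / (1.0 + x * x + y * y)

def run_vehicle(crossed, steps=150, dt=0.05, k=5.0, wheelbase=0.5):
    # Start at (5, 1), facing roughly toward the source at the origin.
    x, y, theta = 5.0, 1.0, math.pi
    for _ in range(steps):
        # Two forward sensors, offset +/- 0.5 rad from the heading.
        lx = x + 0.3 * math.cos(theta + 0.5)
        ly = y + 0.3 * math.sin(theta + 0.5)
        rx = x + 0.3 * math.cos(theta - 0.5)
        ry = y + 0.3 * math.sin(theta - 0.5)
        s_left, s_right = intensity(lx, ly), intensity(rx, ry)
        if crossed:
            v_left, v_right = k * s_right, k * s_left   # "aggression"
        else:
            v_left, v_right = k * s_left, k * s_right   # "fear"
        # Differential-drive kinematics.
        v = (v_left + v_right) / 2.0
        omega = (v_right - v_left) / wheelbase
        theta += omega * dt
        x += v * math.cos(theta) * dt
        y += v * math.sin(theta) * dt
    return math.hypot(x, y)   # final distance from the source

d_fear = run_vehicle(crossed=False)
d_aggression = run_vehicle(crossed=True)
# The crossed vehicle steers in and closes on the source; the parallel
# one veers off. Nothing in the wiring is "afraid" or "aggressive";
# those are our descriptions of the trajectories.
```

The simulation makes the textual point concrete: the difference between "fear" and "aggression" is a single swap of two wires, and the affective vocabulary belongs entirely to the observer.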

I will begin this further conjectural development by likening the sensorium to the display of a computer, a very fine-grained display with something on the order of 10^7 pixels. The display can be used to map a visual field corresponding to perceived features of a world, but it can just as well be used for text. (Of course text can be and often is found in the perceived visual field, since reading and writing are the dominant sensory and motor activities of organisms above a certain level of acculturation—note that "sensorimotor" would be inappropriate here since there is normally a wide separation, and an extremely complex correlation, if any, between the two activities.)

Text in the sensorium will consist of sound-images (in Saussure's sense) or visual images of letters, etc., and it will coexist there with other sensory elements, with complexes of which textual complexes may be matched, in the first instance just as Saussure says they are in his theory of the linguistic sign. These matchings, achieved by an ability that I have called "apposition,"[9] might be stored—the mechanism of this storage is for my purposes a matter of indifference, although any attempt to realize this scheme would have to pay careful attention to it—in a memory capable of expansion, in such a way that the appearance of one element in the display might cause the other to be retrieved. The process of retrieval would not be conscious, since the display itself is the analogue of consciousness; the appropriate response would just appear in consciousness, as I suggested above that "forty-two" does when somebody asks for the product of six and seven. We will have to suppose a basic repertoire of possible computational moves to have been genetically installed in apparatus of which we are wholly unconscious.
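A minimal sketch of such a store follows. The class and its two operations are my own illustrative framing of "apposition" as symmetric association, and deliberately say nothing about mechanism: matched pairs are stored so that the appearance of either element in the display can cause the other to be retrieved, with the retrieval itself happening outside consciousness.

```python
class AppositionMemory:
    """A toy associative store for apposed (matched) elements."""

    def __init__(self):
        self._links = {}

    def appose(self, a, b):
        # Store the match symmetrically: either element can evoke the other.
        self._links.setdefault(a, set()).add(b)
        self._links.setdefault(b, set()).add(a)

    def evoke(self, element):
        # Retrieval is not itself conscious; only the retrieved partner
        # "just appears" in the display.
        return self._links.get(element, set())

m = AppositionMemory()
# A Saussurean sign: sound-image apposed to sensory complex.
m.appose("'tree' (sound-image)", "tree-percept")
# A learned textual apposition, as with the product of six and seven.
m.appose("'six times seven' (text)", "'forty-two' (text)")
```

Here `m.evoke("'tree' (sound-image)")` returns the apposed percept; the dictionary stands in for whatever storage mechanism one prefers, which, as the text says, is for present purposes a matter of indifference.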

It is admittedly far from clear just how the contents of the display are presented to consciousness. I mentioned earlier that the use of the sensorium for properly sensory presentation—presumably its earliest use—tends to overwhelm other more conceptual options, to preempt the space of representation; its use for symbol or text manipulation seems to be secondary and derivative. The history of the familiar computer display, at any rate in its popularly accessible mode, offers an instructive parallel: originally everybody (apart from back-room boys with cathode ray tubes) thought of the display as normally providing visual representations, video images in fact, and only later did it become usual to use it as a monitor for computer operations. But in the case of the "display" in this model of mind it isn't necessarily the case that its contents are obviously sensory at all; they aren't necessarily images in the sense in which that term has been used, e.g., by Kosslyn.[10] It seems plausible that only after the development of sensory awareness could conscious thought have emerged, but its intentional domain, though as it were derived from the sensorium, isn't properly speaking a sensorium (we might perhaps call the sensory-perceptual domain the primary sensorium, the intentional domain of thought the secondary sensorium, in order to preserve the sense of the suffix as a place where something is going on).

The reason why this display model is so appealing, given what we now know about computers, is that in fact even the contents of the primary sensorium turn out to be computed rather than just given—perception is the outcome of a computational process, not the mere transmission of data but the construction of a world out of them. This "computational model" is still being argued, though it has been obvious enough for a long time to anyone familiar with the relevant physics and neurophysiology. It is a delicate problem to explicate, to be sure. Consider, for example, Schrödinger's remark at the beginning of the work already quoted: "The world is a construct of our sensations, perceptions, memories. It is convenient to regard it as existing objectively on its own. But it certainly does not become manifest by its mere existence. Its becoming manifest is conditional on very special goings-on in very special parts of this very world, namely on certain events that happen in a brain."[11] Here "world" needs to be disambiguated: the contents of my sensorium are my world, even though I take there to be a world that exists objectively on its own that I will call the world, whose local features I suppose help to determine my world. But the events in my brain are not in the "very world" whose perceptual apprehension they make possible.

The essential point, though—expressed succinctly by the English systems-theoretical eccentric Oliver Wells as: the brain computes the world—is that perception is a computational mechanism whose output to the display is the world we perceive, or in Wells's words, after a discussion of Gibson:

What we propose is that the visual system be considered as a computing device which computes from overlaps of . . . different scenes the stable, continuous, unbounded configuration of the room.—What we "see." Note that this can only be done when there is movement; the head has been turned, and optic information on the retinas has changed. Without this movement there could not be any computation. It is as in mathematics—the computation of an invariant under successive transformations.[12]

It may be worth noting en passant that Cassirer had this idea of perception as invariance under transformation as early as 1938.[13] In the generalized or secondary-sensorium case we may say similarly that thought is a computational mechanism whose output to the display is the world we apprehend, or grasp, or understand, in its structure and not merely in its appearances. Much of the input here, while carried by perception, will be purely textual or relational and thus transparent to its mode of embodiment—which is why the same thought content can be conveyed in different words, or different languages, or different symbolic modes.
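Wells's "computation of an invariant under successive transformations" admits a one-dimensional toy illustration. In the sketch below (the registration-by-overlap method is my own simplification, not Wells's), two partial "retinal" views of a scene, displaced by an unknown head turn, are registered against each other and stitched into the single stable configuration that neither view contains by itself:

```python
def register(view_a, view_b):
    # Find the shift at which the two views agree on their overlap --
    # without the movement (the shift) there is nothing to compute.
    for shift in range(1, len(view_a)):
        if view_a[shift:] == view_b[:len(view_a) - shift]:
            return shift
    return None

def stitch(view_a, view_b):
    # The stable, continuous configuration: invariant under the shift.
    shift = register(view_a, view_b)
    return view_a + view_b[len(view_a) - shift:]

scene = "WINDOW.DOOR.SHELF.DESK"
before = scene[:14]   # what the "retina" gets before the head turns
after = scene[6:]     # ...and after the turn
```

Calling `stitch(before, after)` recovers the whole scene from the two bounded, shifting views, which is the sense in which the stable room we "see" is computed rather than received.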

If this model is plausible, if thought has available to it both display and memory, both what Churchland[14] has called "topographic maps" or "state spaces" (among which three-dimensional visual space is the most easily envisaged, although the spaces of higher dimensionality that he describes for other senses—which of course we won't apprehend as spaces in the same way—might well function similarly) and a storage mechanism that assigns addresses to items of structure as they are encoded (cross-connections between experiential items in different state spaces, for example, but also and mainly connections between such items and textual ones, or between textual items), then a great many puzzling facts about brain structure and memory might fall into place and some neurological pathologies be readily explained. The question about the consciousness of thought then transforms into the question of how much thinking goes on in the display and how much is hidden in circuits that draw on stored information, whether learned or innate.

The Subject as Agent, The Machine as Subject

What we have to assume (if we are not to fall back into the epiphenomenalist position) is that there is an intentional agency capable, if not of summoning material from storage (although it probably does that too), at least of attending selectively to what happens to be in the display, whether it shows up there on the basis of sensory experience or emerges when sensory experience is to some degree suspended or shorted out (the immobile thinker again). This selective attention will evoke the appropriate connections and thus build intentional structures. Full subjectivity, I argue, requires a reflexive intentional structure that represents on the one hand the genidentity of the agent from his or her past to the present and from the present to at least a proximate future, and on the other the coherence of his or her sensory and textual embodiment (though there are methodological obstacles to the conclusiveness of any such argument). It also requires just such an agency of selection or evocation. Here I think we are going to need a whole new way of looking at action as "letting-happen," as well as a theory of the dynamics of action, with respect to which I find some interesting hints in what I take to be a new reading of Lacan's gloss on Freud's theory of the drive. However, that is another story; I will remark here only that when the computer is turned on it is on and stays on, whether or not one happens to be doing any computing, and in our case when it is off we're dead.

I argue also that the embodiment of subjectivity is transparent to its structure, which means among other things that although so far there seem to be no cases of subjectivity otherwise embodied than in biologically developed "wetware," nothing we know as yet excludes the possibility that subjectivity might be embodied in hardware. The specific character of any system is presumably to be found in its structure, that is, the complex of relations that it embodies. In principle one might then suppose that the structure is indifferent to its physical embodiment. Thus, for example (at a more primitive level of complexity), the reaching and bearing functions of an artificial joint are in the limit behaviorally indistinguishable from those of its natural counterpart, so that the structure of the patient's behavior is unaffected when a natural element is replaced by a sufficiently sophisticated artificial one. This is what I mean when I say that the embodiment is transparent to the structure. We are thoroughly familiar with numbers of ancillary or corrective or prosthetic devices that are transparent in this way: automobiles and telephones have become just as transparent to the structure of purposive behavior as pockets or eyeglasses. The banality of the examples is significant. It reminds us that intentionality and purpose are everyday matters, not special states we have to work up to deliberately.

If a machine were to develop a reflexive intentional structure of the required kind (and I would want to specify some constraints: the structure is always double, the system of matchings is not, as in the case of language, initially arbitrary, and so on) there would be no reason to deny it subjectivity. However a number of difficult issues need to be surmounted before such a point can be reached. First, consciousness is a necessary precondition of subjectivity, so that the analogue of the interior display has to be provided for the machine. The monitor that we can see won't do, but nor will a monitor that the machine can see—what has to be provided is a way of its seeing, not the monitor, but what the monitor displays. So "interior display" here has to mean "display as seen from within." It is not clear that this condition can ever be known to be met, though we would have to admit that if it were (that is, if we built a machine like us in all relevant respects) there would be no reason to deny the machine the attribute of consciousness.

Second, the interior display has affective components in human beings which have been tuned by millennia of evolutionary selection to be neutral to perception and cognition within normal limits. If we were to make a machine complex enough to rise to consciousness we would have no guarantee that its first experience would not be one of intense pain; it ought therefore to be a matter of course to provide the machine with a mechanism for voluntary anesthesia if not suicide. Third (a point already made eloquently by Turing in 1950), we could not expect the machine to give evidence of subjectivity any earlier in its ontogenetic development than human beings do—which I believe often to be never, but in any case hardly ever before the age of seven or eight, and that after intense socialization and acculturation.

And when all was said and done we would still be liable to Cartesian scepticism about the reality of the machine's subjective experience, even if it told us elaborate stories about that experience. But then we are liable to this scepticism about one another. And in the end we would have to grant it the same benefit of the doubt that we grant each other, and assume that at the origin of its first person utterances stood an intentionality and an agency. The answer to the question "Who is it?" is essentially: the intentional agent who says "I." But is there any reason to expect that we will understand the relation between this "I" and its embodiment any better in the case of the machine than in our own case?

Perhaps we should say: we will understand both, or neither. The problem is that our understanding anything involves our use of the mechanism of thought, our occupying the subjective standpoint. That is why I find something comical about doctrines like those of Lévi-Strauss and Foucault which claim to have dispensed with the subject (except, in Foucault's case, the subject in the sinister sense of being subjected to social and political forces), and why doctrines like eliminative materialism, for example in Churchland, strike me as perverse. I claim to be a materialist, but there isn't much about me that I want to eliminate, certainly not my feelings and appetites. In the case of eliminative materialism I want to ask: what is eliminated, and from where? If we can have a representation of thought without any elements of "folk psychology," well and good—except that thinking that representation, having it as the content of my intentional domain, brings in my subjectivity again. Churchland writes: "I gradually became comfortable in the idea that there really were quite general ways of representing cognitive activity that made no use of intentional idioms."[15] One might ask—what does "comfortable" mean here? Doesn't it require elimination in its turn?—and then point out that a representation of cognitive activity which makes no use of intentional idioms makes use of intentionality just the same: that of the thinker, comfortable or otherwise, for whom it is a representation.

For what we are doing now, thinking about machines and about their thinking (and if thinking means computation, or just producing answers to any questions, computational or not, then machines have been doing it all along—that can't ever have been the problem), is all taking place in our individual sensoria, primary or secondary. The question is, could machines think in that way too? And the answer is, why not? Perhaps it will turn out that the structure of thought isn't transparent to its embodiment, that there's something special and unreproducible (except by biological means) about wetware, but what evidence there could possibly be for such a view is far from clear. And there is something narcissistic in the thought that we are the only machines to have the experience of thinking as distinct from its behavioral manifestations. At all events I think there are likely to be, before too long perhaps, some machines that it would be morally prudent to treat as ends only, and never merely as means. Doing so at least is what it will take to make me feel comfortable. But that opens up a different argument.


Rethinking Intentionality

In this essay I wish to float a conjecture that I think may have relevance to the debate about intentionality that has been conducted over the last couple of decades in an arena common to several domains whose interests have come to overlap in a striking way: phenomenology, semantics, artificial intelligence.[1] In doing so I shall be suggesting that some other domains have overlapping interests with these and should be represented in the arena: structuralism, psychoanalysis, the philosophy of literature—though these seem as yet, and in their popular forms, philosophically peripheral. Part of my point is that, suitably reformulated, they have a significant contribution to make to the debate.

Let me begin with a small point from Stoic linguistics, and a solution it suggests to a problem in the philosophy of literature. I take the example from Seneca, via Benson Mates.[2] To understand what is going on on the occasion of a given utterance, say, "Cato is walking," three distinct entities need to be invoked. First, there is the sound of the utterance itself: "Cato is walking." Second, there is Cato, who is walking or not—if he is walking the utterance is true; if not it is false. But then there is a third thing that is brought into being by the utterance as an object of attention, namely, Cato-walking, something that might now be called propositional content but which the early Stoics called the lekton: what is said or meant or picked out by the utterance.
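The triple can be caricatured in a few lines. The encoding below is of course my own, not the Stoics': the utterance-sound and its lekton are fixed together, while truth is evaluated separately against a world, so the same lekton is picked out whether Cato happens to be walking or not.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Utterance:
    sound: str      # the utterance itself, e.g. "Cato is walking"
    lekton: tuple   # what is said: here, (subject, predicate)

def true_of(utterance, world):
    # Truth depends on the world; the lekton does not.
    subject, predicate = utterance.lekton
    return predicate in world.get(subject, set())

u = Utterance("Cato is walking", ("Cato", "walking"))
world_a = {"Cato": {"walking"}}   # Cato is walking: the utterance is true
world_b = {"Cato": {"sitting"}}   # Cato is not: the utterance is false
```

The point of the sketch is only that `u.lekton` is the same object in both worlds; what varies between them is the verdict of `true_of`.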

Roughly speaking, in more contemporary terminology, these three correspond to the sign, its reference, and its sense. To say "sense," though, opens up a large domain of argument, particularly about what Frege meant by Sinn, that I wish to avoid. I have found it useful in developing a theory of literature simply to retain the Greek term, partly because—given that in literature the focus is on utterances written and read, rather than spoken and heard—there is an instructive (and etymologically justified) connection between the reader and the sense of what is read, between lector and lekton, but partly because this move avoids entanglement with current controversies in semantics and frees us to attend to the status of the lekton from the reader's point of view. What is it exactly that is brought into being by the act of reading? Not the words, for they have been there on the page, or in memory, all along (I can't read a word unless I know it, generically at any rate); nor the thing referred to, which is there (or not) independently of my reading.

The reason why this question is especially interesting in connection with literature is that whenever there is obviously something actual to refer to, as so often in nonliterary contexts, considerations of reference tend to contaminate the lekton, so that its peculiar difficulties do not have to be confronted. It is tempting to think that no object of attention is "brought into being" when I read about the President, or the stock market—it is just that I am led to attend to those things themselves, indirectly and at a distance to be sure but not in such a way as to require any ontological mediation. If I'm present when Cato is walking, if he's present to me ("presence" just means "being-before," which is why it's a symmetrical relation), then I tend to think that it's his walking, then and there, that is picked out for attention by the utterance "Cato is walking." But it isn't, because taken simply as itself the (noncontextual) utterance has to pick out the same thing whether Cato is there or not. It's true that if we contextualize the utterance with some indexical particle: "Look! Cato is walking!" (he's been paralyzed; the evangelist has just said, "Rise up and walk!"), the urge to conflate lekton and referent becomes overwhelming. But still it ought to be resisted, not only because the utterance might still be false ("Just kidding," someone says; the evangelist moves on, crestfallen) but because, just as in the Saussurean theory of the sign, we need the lekton in order to know that Cato, walking, is the referent.[3] I shall call this tendency of reference to blanket the lekton "the dominance of the referential."

When I read about Hamlet or Emma Bovary the matter is not quite so simple. They have, it is true, tenuous connections to a referential domain, historical in the one case, journalistic in the other—but we know there's more to Hamlet than the original Prince of Denmark, whereas in the straight referential case the surplus (let's hope) is in the other direction: there's more, that is, to the real President than the report of him in the news. And most of the time the characters in fiction bear no more than an accidental resemblance to anybody, living or dead. The book I happen to be reading, these days, for reasons having nothing to do with this paper (the reading of which I've suspended in order to write the paper), is Molloy. I open it at random. "The truth is, conaesthetically speaking of course," writes Beckett, "I felt more or less the same as usual, that is to say, if I may give myself away, so terror-stricken that I was virtually bereft of feeling, not to say of consciousness, and drowned in a deep and merciful torpor shot with brief abominable gleams, I give you my word."[4] There's a powerful lekton here, but assigning reference in any straightforward way would be problematic.

If fictional objects and characters aren't sustained (or overwhelmed) by referential attachments to some objective world, what world, we may ask, do they inhabit? The kind of answer that often seems wanted to a question like this will involve some hypothetical or metaphysical domain—of Ausserseienden, of mental representations, of the imagination, etc. All this seems to me, the engaged reader, a very heavy way of dealing with the obvious, which is that they inhabit my world—for as long as I'm reading, or whenever I think back on what I read. My world is a fairly complicated place, and its elements do not lend themselves to neat categorization; it contains, at the moment, both the pad I'm writing on and the thoughts I think as I write, the ticking of the clock and the anxiety of the deadline. Lurking in the background are Molloy, Moran, and the rest, ready to take center stage again when I get back to my reading. Then they'll fill my world; indeed, it may seem for a time that I've moved into theirs—I may be wholly absent from my normal surroundings, "lost" or even "buried" in the book, as we sometimes suggestively say. But in fact I won't have gone anywhere, they'll have come to me: I'll read, and there they'll be. That's what the lekton is in the case of reading.

But how does reading produce the lekton—or the "megalekton," as one is tempted to call it in the literary case, the whole world of the text? Just in the way that seeing and hearing produce the perceptual world. It seems clear by now that language functions like a sensory medium, processed in the cortex much as sight and hearing are.[5] The fact that it's normally conveyed by sight and hearing only means that the cortical pathways to the language-processing areas have to pass through the visual and auditory areas, but their destination is elsewhere (and probably not as sharply localized, since language in one form or another uses a good proportion of our working brain capacity). In deaf people using sign language these pathways traverse a different part of the visual cortex, dealing with body-sized movements rather than (or as well as) the fine surface discriminations required for reading, and don't get involved with the auditory at all; in blind people they pass through tactile areas as well as auditory ones, etc.

So language presents a whole world, or a part of one if other sensory activity hasn't been blocked out or suspended (as it often is, locally at least, when we are engaged in linguistic activity). Sometimes that world appears with imaginative vividness, as if it were painted, and that's one of the oldest techniques of literature. A classic case is Homer's description of Achilles' shield in the Iliad; such a visually evocative passage was known as an ekphrasis, a "telling-out." Later on ekphrasis came to be an academic exercise, describing paintings in words (when copies weren't easily come by), but the term still seems appropriate as referring just to those uses of language whose purpose is to present a visual thought-content. Coleridge, in Kubla Khan, says that if only he could manage it

with music loud and long
I would build that dome in air,
That sunny dome! those caves of ice!
And all who heard should see them there

—a line I emphasize because it expresses so well the ekphrastic utterance-lekton connection, taking music as metaphoric for poetry. (The question of the lekton for music proper, taking it now to be a kind of language in its own right, is a good test case for theories of intentionality.)

However, the "seeing" that hearers (or readers) do need not be like that done with the eye. Seeing is a common metaphor for all kinds of understanding; thought-contents can be brought before the mind (not "before the mind's eye"!) and metaphorically "seen" in ways other than the visual, indeed other than the sensory (in the narrow sense) altogether. One still might want, though (on the principle of language as a category of extended sense), to think of these thought-contents as located in a "sensorium"; that will be the space of the world language presents, and it will be as really my world as the more vivid perceptual sensorium. The relation between myself as subject and the contents of this world is of course that of intentionality: a stretching-out towards, and a holding-in, meanings derived respectively from Latin tendo and teneo, both referring back to Greek teino, one of the senses of which is "to hold out, present"; in the case of intentionality I hold out or present something to myself. Not exactly that I choose to do this (though within limits I can, when the potential furniture of the sensorium is rich enough), but it's obviously a capacity I'm naturally furnished with: not only can I direct attention selectively to perceptual


contents when they are present, but I can evoke them when they are absent, and can also attend to, and evoke, thought-contents of a nonperceptual nature, such as those presented by nonekphrastic language. These thought-contents, under certain conditions of definiteness in conception and description, can qualify as "intentional objects" in the sense of Brentano and Husserl. The lekton is a domain of intentional objects.

I have suggested elsewhere that the nonperceptual sensorium might be called a "secondary" sensorium,[6] and it and its thought-contents certainly seem secondary, as do those of memory or evocation (no matter how skillful the ekphrasis), in comparison to the primary vividness of the bright outer world, of color and sound, that I inhabit when attending to the immediately perceptual. But this contrast may be misleading. In order to clarify that question it is necessary to examine the relation between the vivid content of perception and the pale content of thought, between the data of the five senses (especially that of sight) and those of the sixth, as we may as well denominate language. Hamlet and Emma Bovary, I said earlier, just inhabit my world, sometimes to the point of extinguishing the brightness of the primary sensorium, of whose details I become unaware. Of course I'm helped in achieving this abstraction by protecting myself against intense sensory inputs, which is why I'll read by moderate light, in a quiet comfortable place, decorated for preference in muted colors. Those are good conditions for thinking, too—and, as far as that goes, for sleeping, a fact that may prove relevant in the sequel.

It is, when one thinks of it, an extraordinary fact that literature, philosophy, mathematics, and so on can occupy the foreground of our attention, and its background too (the phone rings and I have to attend to primary matters, but I keep thinking about this argument), that indeed the contents of the secondary sensorium can dovetail with those of the primary, the lekton can coexist comfortably with the furniture of the everyday world. Of course I can tell the difference between them (not being able to do so is a pathology we'll come to in a minute), but so I can between different environments in either world. If I have Madame Bovary in my hand I'm in Yonville, or Rouen; if Hamlet, in Elsinore; if I have the menu in my hand I'm in the restaurant; if the telephone directory, in the office. In fact, as I suggested earlier, I live in one world, not many, though its aspect and its furniture change as I shift my body or my attention. I can distinguish in it, up to a point, what is perceptual from what is intentional (though there are problems here when what I perceive is laden with affect), but I'm not inclined to say that some of it, for example, is "mental" and some "physical,"
though this seems to be taken by many writers on intentionality as an unproblematic division.

Which is developmentally prior, perception or intentionality? This moves the argument into more speculative territory. The usual account would, I think, maintain that perception gives us a store of experience and that we learn, as we acquire this store (through the agency of memory), habits of attention, and eventually intentional capacities, that enable us to manipulate, as it were, its contents intelligently and in absentia (Chisholm, following William James, calls the problem of nonreferring intentionality—that is, the intending of nonexistent objects—"the problem of presence in absence"[7]). But how does this process get started? The old empiricist doctrine of memory and imagination as decayed or decaying sense doesn't do justice to the strength and interest of the intentional domain. Here I find matter for conjecture in some work on dreams, on the one hand Freudian, on the other embryological.

Freud, in "Creative Writers and Day-Dreaming,"[8] likens the imaginative writer to a "dreamer in broad daylight," and remarks that daydreams are especially the province of children, who create worlds of their own in play, inventing places and playmates with creative resourcefulness and certitude. These worlds, as anyone who has observed children knows, are at least as real to them as the world of adult perception that will eventually have to be learned. They are, without question, intentional domains; sometimes they incorporate lekta from children's stories, sometimes elements from the perceptual world, but the activity of intending them seems far more persistent than can easily be accounted for by the hypothesis that intentional objects are learned after the pattern of perceptual ones. The process of instruction consists largely in getting the child's attention away from this domain, compelling it over time to yield to the insistence of the perceptual. Eventually the "normal" adult loses the ability to intend alternative worlds in this strong sense—except in dreams.

Freud himself suggests that waking life is the norm, governed by the reality principle, and that dreaming is in effect a form of psychosis.[9] But in recent years it has been shown that unborn children in utero spend a great deal of time in REM sleep, which sleep studies in the last few decades have associated unequivocally with dreaming. A standard question, when people are told of this, is "What can they be dreaming about?" It seems odd to say that fetuses are psychotic, the idea of psychosis being defined in relation to a postulated normality. This is the point at which I float the conjecture I promised at the beginning of the paper. It is that intentionality is one of the basic functions of the
human nervous system, that it develops independently of and prior to perception, that dreaming is a normal exercise of intentionality free of the constraints of perception, and that dreaming in utero is to be construed as the laying-down and rehearsal of the function of intentionality. A lemma to this conjecture is that once perception comes on line, as it were, postnatally (though there are anticipations of this also in utero) it tends eventually to preempt the domain of attention and to become the norm for unreflective introspection.

This latter tendency might be called, in parallel with the dominance of the referential introduced above, "the dominance of the perceptual." But until it happens the Freudian order is reversed: dreaming is the norm and waking life the derivative state. It is plausible to assume that waking life has to be developmentally acquired, and some pathologies may be explainable as failures (or refusals) to manage this. If the chaos of images and voices, and even of actions, characteristic of dreaming were to carry over into daily activity, it would produce a plausible imitation of some forms of autism or schizophrenia or mania, and it may be that that is just what is happening in those cases. Alfred Schutz characterizes the highest and most aware form of subjective involvement in the world as "wide-awakeness," citing Bergson's concept of a series of planes of interest in life, from dreaming at the lowest end of the scale to action at the highest, and the transition from infancy via childhood to adulthood might be construed as the gradual ascension of this scale.[10]

The ability to intend a world, like the ability to speak a language, is, I am claiming, a competence genetically provided. Without it the child would never come to consciousness or subjectivity at all. (The subject is what is present to the objects it intends, that are present to it—as I remarked above, presence is a symmetrical relation. The subject-object relation is something that has to be brought into being, that comes into being at some stage of embryonic development; it might be thought of as emergent in a manner analogous to that of pair-creation in physics.) The normal world that is eventually lived (like the standard language that is eventually spoken) is determined by experience, in this case perceptual experience. The innate grammar permits the child to learn English or Japanese or Kwakiutl—which one depends on its linguistic environment; the innate categoreal structure permits it to learn space, time, and causality. (This echo of Kant is not accidental.)

There is of course a difference between the language case and the world case, and it is a strikingly instructive difference. Not all children have to learn the same language, but there is a sense in which they all have to learn the same world, namely, this one with its day and night,
its warm and cold, its times and distances and bodies—including their own bodies. (It is of course possible that different bodies might learn the world differently.) Yet as in the case of language it must have been possible to learn a different world: it would be astonishing from an evolutionary point of view if the mechanism of world-making had been adapted in detail only to this one. A general adaptation might be expected, yielding what we think of, often with satisfaction, as necessary truths, but there is a range of states more or less adequate to this world—and some psychoses might be explained, again, by the failure of possible intentional states, for a given subject, to coincide with any of them. I take it that it was the availability of these alternate states that persuaded some early drug users, who managed chemically to wrench the brain into producing a different world-configuration, that they had discovered a new reality.

These considerations suggest a reversal of the usual way of thinking about the relations between thought and perception. Rather than saying that thought is of a different nature from perception—while admitting as everyone does that some forms of thought, such as memory and imagination, have something in common with a diluted perception—I want to say that perception is an involuntary but very vivid form of thought, of just that form of thought we know in its attenuated form as imagination. Imagination isn't decayed sense; sense is intensified imagination, which is forced upon the subject once its body is thrust into the outer world, where it is no longer protected from the onslaught of light and sound, of heat and cold, of touch and movement.

I use "imagination" here to stand for the function of intentionality that produces images; that it is an intentional function was recognized by Sartre early on, soon after his exposure to phenomenology, and his account of it in L'imagination and L'imaginaire still seems to me worth attention even though it fails to work out precisely the relationship between perception and imagination. Sartre carries—and can't help carrying—the burden of the old belief that makes imagination on some level derivative from perception: the image is of the object, though indirectly, while perception is of it directly. If, however, we are to think of the intentional function that produces the image as preexisting its activation by perception, then it will have to be possible for the subject to intend images it never perceived.

This takes us back to the dreaming babies. I have no doubt that once perception assumes its dominant role, the contents of the imagination come to consist largely of what has been taken in perceptually: the elements of the imagined centaur are indeed parts of the perceived man and the perceived horse. But that there may be imagined elements of a basic kind—geometrical shapes, or even some organic ones; colors;
forms of movement—that are independent of perception, and perhaps very generally shared by nascent consciousnesses the world over, is not an original conjecture: it corresponds to, and easily explains, some features of what Jungian psychologists have called archetypes, though I would wish to keep a careful distance from the weight that is placed on them in that tradition.

The supposition that, just as in the case of language, evolution may have selected for some disposition to construe experience in one way rather than another (and as in Chomsky's argument about the linguistic case it may be observed that the purely inductive learning of the spatiotemporal world would be an amazing feat for the first few months of life) does not require the postulation of anything as melodramatic as a "collective unconscious." Of course, "collective" is ambiguous here, and unobjectionable if all it means is unconscious structures we happen to have, individually and distributively, in common, because they are determined by a common genetic inheritance. Among the Jungians, however, it usually seems to mean a shared reservoir of some transindividual sort into which we all tap, and for this I can see no evidence, nor even a remotely plausible model.

Two questions pose themselves at this point, one as to the nature or mechanism of the intentional activity that begins in utero and is then largely taken over—commandeered, as it were—by perception, and the other as to the nature of the intending subject that originates at some point (when?) during fetal development and is presumably genidentical (if not simply identical) with the mature subject the individual becomes. I will content myself with a programmatic treatment of each in order to leave time at the end to return to the metaphilosophical point with which I began.

The best model for the required mechanism is I think the Freudian drive, especially as expounded by Lacan. In The Four Fundamental Concepts of Psycho-Analysis, Lacan points out that for Freud, "the characteristic of the drive is to be a konstante Kraft, a constant force. He cannot conceive of it as a momentane Stosskraft [a momentary impulse]."[11] "The first thing Freud says about the drive is, if I may put it this way, that it has no day or night, no spring or autumn, no rise and fall."[12] Later on he quotes Freud as saying, "As far as the object of the drive is concerned, let it be clear that it is, strictly speaking, of no importance. It is a matter of total indifference."[13] The drive is a tendency-towards of a wholly nonspecific kind; it is a tendency that seeks satisfaction (even dreams in utero may be wish-fulfillments!) but that seeks it nonspecifically—and furthermore will never find it. This frustration need not be rendered in melodramatic terms; it is not a tragedy—but it is enough to keep the drive going, for a whole life in fact.


For I take it that the intentionality is switched on, as it were, when the embryo begins to dream, is never switched off except by death (though it may be suspended in dreamless sleep). In one sense its final satisfaction may be death—that would certainly be an acceptable reading for Freud, a final acknowledgment of Thanatos. But while it remains on it is at the service of Eros—as a side-effect, one might almost say.

In casting about for a suitable image here I find two candidates, from the very early and very late industrial revolution. The late one would be of the central processing unit of a computer; it is what gets switched on at the beginning of the day (indeed it may be left on for the working life of the system), and it is at the service of whatever functions one may want the machine to perform. But I don't wish to reinforce a computer model of mind (computers as we know them can only be part of the story, though I don't doubt they are a part), and I prefer the earlier image, which would be of one of those huge factory steam engines of the sort that can still be seen in museums, which drove a single shaft from which every machine in the factory, by means of belts, derived its motive power. Everything the subject does is powered by what we may think of as the intentional drive, which directs it out and towards its objects, which may be engaged with more or less force according to the nature of the involvement, sleepy or wide-awake, normal or neurotic.

What of the subject itself? I have suggested that it may come into being after the manner of pair-creation, the sudden (or in this case perhaps gradual, like a developing image) emergence of a subject-object polarity defining an intentional vector. What thus comes into being at the subject pole is perhaps the hardest thing to specify in the whole of philosophy. It is surely already in some rudimentary form Dasein; it ventures forth as Existenz; yet it is a Nothingness at the heart of Being. It is the condition of experience and of meaning. One understands how tempting many philosophers have found it to suppose that, the conditions for its functioning having been produced organically, the subject itself arrives from some other order of being, spiritual perhaps, as an adventitious supplement to the organic—and at the same time how inexcusable it is to yield to that temptation on the available evidence.

I have myself tried some formulations of the nature of subjectivity, and the closest I have come to anything even minimally satisfactory is this: "Subjectivity is the animation of structure."[14] Unfortunately this definition is not fully intelligible unless one has already begged the question, because "structure" has to be understood as a set of relations maintained intentionally in order to distinguish subjective from merely organic states, and intentionality presupposes subjectivity. But then there is in any case a systematic difficulty in the definition of subjectivity, regarded not as an inscription in the symbolic (as Lacan would have it—though for him the subject is a consequence of structure rather than its ground) but as an originating intentionality at the level of the real: namely, that any definition puts the definiendum along with the definiens in the object position, and that is the one place where subjectivity cannot be put. But then it can't be explained either, the same argument holding mutatis mutandis for the explanandum and explanans.

These limitations do not mean that we are left with nothing to say. If we can't objectify the subject we can distinguish between its proximate and remote, its focal and marginal, objects, and much of phenomenology devotes itself to these distinctions. We can also as it were zero in on its place, show where in the world it is likely to manifest itself. This has in all so far known cases proved to be somewhere between the sensory input and the motor output of a living organism having a developed nervous system. It isn't that we should go looking for subjectivity there; rather, we have already found it: in our own case, to which the other known cases all prove analogous. Knowledge of the complexity of the structures subjectivity animates in our own case can suggest what precursors of subjectivity, such as reactivity, sensitivity, purposiveness, might be sustained by intermediate cases of organic complexity, and I have attempted on several occasions to sketch in this way an ascent from the inert to the fully subjective.

In the end, though, subjectivity has to be lived. But this turns out to be tantamount to living objectivity, because of the pairing relationship already drawn attention to: no subject without object, be it merely intentional (and that, as I have maintained, is how subjectivity begins); no world without ego, no ego without world. The world subjectivity lives, the life-world, or in the case of a world borrowed from literature what I have called the megalekton, is as a known world the total noematic correlate of the subject's noetic activity. Is it the real world, though; is it a public world? These are questions that can only be raised and answered within it, and they will be answered in terms of whatever theory of the real, or the public, the subject who raises them has at its disposal. I do not say "at his or her disposal," not because I want to evade the issue of sexism but because I think that gender, along with every other determination of the subject, is part of its object domain—but if this is the case then, for example, my being a child or an adult, a plumber or a philosopher, is part of my object domain and does not characterize my subjectivity essentially.

We are back to the nonspecific drive: the subject pole is empty, at all events of any of the sorts of objectivity it is capable of intending, its intentionality is presuppositionless, and it remains what it is constantly and persistently over the term of its embodiment, from prenatal emergence to final extinction in death, allowing only (as remarked earlier) for periodic suspension in dreamless sleep. Of course we could say, as Heraclitus did of the sun, that the subject is new every day, though it is hard to see what would be gained by this, since while we were at it we could say it was new every moment. What we might draw from this line of argument is the conclusion that subjectivity reliably emerges, at the appropriate level of wakefulness, whenever the conditions are right for it, which would make its initiation in utero unsurprising but would also mean that if just those conditions were ever realized in an artificial device it would acquire subjectivity too.

While I am confident of the correctness of the foregoing account, it does not, without further argument at least, settle the issues raised above, about realism, about the public nature of perceptual space, and so on, and while I think that there are better and worse positions on these questions, and that what I have been saying about subjectivity is relevant to them, I cannot pursue them further here. In any case, in discussing them, the whole contents of the participating subjects' object worlds, gender and all, acquired over their lifetimes—generalized versions of what Sartre would call their facticities—would come into play, and the idiosyncratic variety of these explains why philosophical arguments flourish and are not easily settled.

It was in fact because of this occlusion of argument by prejudice, inherited or acquired, that Husserl introduced the epoche and the technique of bracketing, and there is a sense in which all I may have done in this essay is to reinvent the transcendental ego. But I think that the features of drive and in particular of persistence that I have attributed to subjectivity take us at least marginally further than the transcendental ego, which I take to be momentary and positional. However, this "momentary" raises a problem about temporality that must be addressed if only glancingly. The persistence of the subject could just as well be interpreted as its timelessness, since the passage of time is one of the things it has to be able to intend. And this would explain the sense that many of us have of being "the same person" as we were when younger, even as a child. I once tried to catch the essence of this timelessness aphoristically: "Until the moment of death, everyone is immortal." I admit to having found this a comforting thought, however paradoxical.

In closing let me indicate a line of philosophical argumentation that would in my view be settled, in the sense of being closed off, if the foregoing considerations were to prevail, namely, the very attempt to deal with subjectivity, or intentionality, from an objective point of view, or indeed from any point of view other than that of the subject in question. Failure to see the futility of this is, I think, what vitiates a great
deal of the work I referred to at the beginning. The history of this futility goes back to Hegel, whose habit of forgetting that he was an existing individual so much amused Kierkegaard. But somehow Kierkegaard's point still manages to be overlooked. Of course there is a sense in which any discourse at all about subjectivity necessarily partakes of the objective, in that the language itself has this status; it belongs to the objective domain—to speak of "the relation between subject and object" is already to invoke a lekton in which the subject plays an apparently objective role, and it would seem suspiciously ad hoc to rule that just those expressions involving it failed to pick out propositional content.

This difficulty however is not insurmountable: in self-reference I don't need an independent lekton to know that I am the referent. There can't be a case in which what is referred to (namely the subject) is absent, which would activate the distinction between sense and reference. By courtesy I can speak of your subjectivity, or of subjectivity in general, but that doesn't endow what is thus spoken of with objective status, in my world or in anyone's. If something other than unqualified subjectivity is said to be the carrier of intentionality, the relation spoken of may be complex and interesting but it won't manage to be intentionality in the sense in which I have been using the term. So when Chisholm speaks of intentionality as a property of mental phenomena,[15] or Searle speaks of it as a property of mental states,[16] they may succeed in evoking some relationship between different classes of phenomena or between mental states and other features of the world but they don't grasp what I mean by intentionality thereby. (Mental states don't intend—subjects intend; as to mental phenomena, the concept puzzles me, since I would have thought nothing could appear except to a mind—but that would take too long a digression.) Similarly, when Dennett looks at systems like us from an intentional stance he gets interesting explanatory results too but comes no closer than the others to what intentionality is for us as subjects.[17]

Am I being impossibly exigent here? I don't think so. It isn't that subjects with genuine intentionality wouldn't exhibit just the relationships that Chisholm and Searle and Dennett specify; it's that they could exhibit all that and still not have genuine intentionality. This is a problem as old as Descartes and his robot animals, and I know no way of dealing with it except in the first person singular. Of course that only works for me, but I can invite you to engage in a parallel activity, and although I'll never know for sure whether it worked for you, you will. My concluding metaphilosophical remark, then, is that I find it odd that so many philosophers simply decline to work in the first person singular, even when the problems they are confronted with clearly require it.


Perhaps that is because they misconstrue the problems. When Quine says,

One may accept the Brentano thesis [of the irreducibility of intentional idioms] either as showing the indispensability of intentional idioms and the importance of an autonomous science of intention, or as showing the baselessness of intentional idioms and the emptiness of a science of intention,[18]

and opts for the second alternative, I think he misconstrues the problem on two counts. One is that in spite of Brentano's couching of the problem in their terms, it isn't a matter of idioms but of insight; the other is that the whole question is begged ahead of time if what is envisaged is a science of intention. There can't be a science of intention because science presupposes intention. All I'm asking here is that this fact about intentionality be recognized. As I said earlier, it leaves most of the rest of philosophy intact, but puts it under a modality that can, I think, lead to new and exciting possibilities, and has indeed been doing so for some decades, for a century perhaps—but not yet enough.


Yorick's World, or the Universe in a Single Skull

In what follows I shall explore a set of ideas formerly important in the history of philosophy—ideas that seem on the face of it quite implausible, as so many philosophical ideas do—to see whether in the light of recent developments in science they may not contain significant truths. The central idea, briefly put, is this: that when we look at the world it is not the case, as physicists are thought to claim, that light strikes the seen objects and is reflected into our eyes; on the contrary the seen objects are themselves the products, not the causes, of perception; they are in fact objects in a kind of private and extremely detailed 3-D movie that is playing inside our heads—quite literally inside our skulls. Hence the appeal to Yorick.

I need Yorick only for exemplary purposes, because his is the most obvious skull in the commonly available literature. While I am at it I should confess that there is another common point of reference that echoes in my title, namely Thomas Eakins's Max Schmitt in a Single Scull, one of his Schuylkill River paintings and an old favorite of mine. The connection started out merely as a bad pun, but like all good works of art this one lends itself to interpretation, and I will draw at least this much from it: that it represents an individual entrusting himself to an elegant bit of machinery. (For reasons that may become clear later on I want to block the Freudian reading that would have him afloat on the unconscious, or anything of that sort.)

Yorick appears in act 5, scene 1 of Hamlet—it is the gravedigger scene, and that too may seem appropriate, since as I go on I shall be unearthing philosophical skeletons some might think best left in peace.


Hamlet is with Horatio, the Stoic, who lacks philosophical imagination—everyone remembers the remark in act 1, scene 5: "There are more things in heaven and earth, Horatio, than are dreamt of in your philosophy." But here Hamlet himself is in a thoroughly materialistic mood. It is not just that death is a social leveler, but that the dust it reduces us to persists, serving perhaps humbler functions in the end: Hamlet speculates that the noble dust of Alexander might be found, in the imagination, stopping a bung-hole. Alexander, like Yorick, was a material object, his skull was full of real physical stuff. What made him Alexander was not the stuff but its arrangement as Alexander. The stuff of which we are made is in fact just about as old as the universe (and some of it probably formed part of Alexander on its way from the Big Bang to the present day).

Two questions, then: how does the stuff we are made of have to be arranged in order for us to have the life we do in the world we know, to perceive, to act, to feel and so on? And when it returns to dust, how much of that world perishes with it? What dies with Yorick? This question can be put in another way: what required Yorick for its existence when he was alive? Or more precisely still, what depended for its being on Yorick's knowing it? To start with, everything private to him—the secrets, as we might say, that "died with him"; his own desires, his memories, his consciousness, his subjectivity. But then also his perspective on the world, his way of seeing, his associations—and his associations in another sense, the social entities of which he formed a part. Suppose he had had a passion for one of the scullery maids—the clown had been his rival for her affection; that is why he poured the Rhenish—suppose even that they were secretly married. Then the couple, the marriage, depended on him and came to an end with his death. Suppose he had been the last surviving member of the Danish Jesters' Association: then the Association died with him.

In fact it would not be unreasonable to say that what dies with Yorick is Yorick's world. This assumes a general distinction between the world as an objective totality, on the one hand, and particular people's worlds—yours, mine, Yorick's—as the total contents of our individual experiences, on the other. When I die my world will come to an end, but the world will, I take it, survive, or at any rate persist. I believe that it was there before I was born and that it will outlast me. Certainly the castle of Elsinore was still there after Yorick's death, and so was the state of Denmark, rot and all. But now let us raise the stakes a bit. Suppose all the Danskers, as Shakespeare calls them, had died? The castle would have survived them too, but how about the state of Denmark?

The direction of this bit of argument will by now be clear enough, so


I will skip several steps and propose a provisional conclusion: the social world, I shall say, is mind-dependent. There are two levels on which this proposition can be sustained, one of them involving a stronger claim than the other. It is possible to make a rough division of things in the world into objects of the social sciences and objects of the natural sciences by asking of each object whether or not human intentions were involved in its production. The weak claim is that social objects are mind-dependent in this sense, as having been brought about through human action, in which we can suppose that human intentions were at work (although it is not necessary to claim that any particular human intended any particular outcome, such as the arms race or inflation, since as Engels once remarked, "What each individual wills is obstructed by everyone else, and what emerges is something that no one willed").[1] The stronger claim is that social objects are not only produced but also sustained by human intentions, and are therefore mind-dependent in the sense that if everyone's mind stopped sustaining them they would, as social objects, cease to exist.

It follows from this stronger claim that idealism is the appropriate philosophy for the social sciences, and it is about philosophical idealism that I now want to speak. The idealist everyone knows about is George Berkeley, Bishop of Cloyne, who thought that the category of "material substance" in philosophy was just a mistake. John Locke had recently developed an empiricist philosophy that said that the qualities—shape, color, hardness, warmth, and so on—which make up things we know (and they do make them up, in the sense that having all the qualities of an object together can't be distinguished by us from having the object) inhere in material substance. When asked, however, what substance was, he quoted the story of the Indian who was asked what the earth rested on: it rested, he said, on the back of a great elephant, which rested on the back of a great tortoise, which rested on "something, he knew not what."[2] Substance, said Locke, was something, but he knew not what. Berkeley got a good laugh out of this and brought some very sophisticated arguments to bear, one of which I want to spend a little time on.

Berkeley puts the argument in the mouth of Philonous, the mind-lover, who is trying to convince Hylas, the materialist, that his position is untenable. Philonous keeps pointing out that in order for us to perceive something it has to be perceptible, and he claims that we can't know that something is perceptible unless we actually perceive it. In order to perceive it we have to be there. I am obliged because of its notoriety to refer here to the lone tree that has so often fallen in the deserted forest, and to the noise it does or doesn't make. Berkeley would have thought that a silly question: if the tree is there, of course


it makes a noise—but it isn't, unless someone is watching it or listening. Berkeley thought, as he believed (correctly, it seems to me) good Christians were obliged to think, that God automatically looked after falling trees as well as falling sparrows; and if God omnipotently holds qualities together in objects we don't need to call on material substance for that service. Two of the three really good philosophical limericks I know—by Ronald Knox and "Anon," respectively—together sum up this side of Berkeley's views (we will get to another side later):

There once was a man who said "God
Must find it exceedingly odd
When he sees that this tree
Continues to be
When there's no one about in the quad."

"Dear Sir, Your astonishment's odd;
I am always about in the quad—
And that's why the tree
Continues to be
Since observed by, yours faithfully, God."

I hope I may be forgiven all this familiar stuff—we are on the way to more serious matters.

During the course of the argument between Hylas and Philonous, Philonous makes an offer and Hylas thinks he has given the whole game away. Philonous says, in effect, "If you can conceive of a tree existing without [i.e., outside] the mind, I'll give up." Hylas says gleefully, "Nothing easier—I do now conceive a tree existing after that manner." "Hold!" says Philonous, "Haven't you forgotten your own mind?" "Oh bother," says Hylas, "what a silly mistake—I thought of this tree all by itself but of course it was I who was thinking about it the whole time," and so on.[3] Now a brief consideration of Philonous's formulation, "conceive of a tree existing without the mind," shows that Hylas gives in too easily. Construe this as "conceive of a tree existing / without the mind," and we have to admit that one can't conceive of anything, existing or not, without a mind; but construe it as "conceive of / a tree existing without the mind," and it is clear that some of the things one can conceive of with the mind can themselves, as so conceived, quite well be "without" any mind at all.

If we can conceive of a tree in a mindless world idealism fails, at least where trees are concerned. And I think we can conceive of trees, and oceans, and planets, and Big Bangs, and all the apparatus of physical existence, as existing without minds. We can, in other words, conceive of a world without us. But could we conceive of a joke in a


mindless world? or a purchase? or an argument? or a friendship? Or even a book (as something read) or a meal (as something enjoyed) or a war (as something suffered) or a nation (as something governed or defended)? The implied answer to these questions, at least as far as the social world is concerned, is: All these things depend on our knowing them; if there were no one to know them, they wouldn't exist.

Now I might leave the matter there, but the perversity of the profession drives me on. I said just now that we could conceive of a tree in a mindless world, could conceive of a world in which we didn't exist. But what would that tree and that world be like? I am tempted to say, what would they look like? but the mistake in that would be too obvious. If we replace "conceive" by "perceive" in our text no tricks of segmentation will help; we can't conceive of anything's being perceived in a mindless world, so the tree we conceive there can't be a perceived tree. But all the trees we've actually been acquainted with have been perceived ones. How shall we proceed with the argument? Well, it might help to look at what some philosophers have thought about perception, and particularly about the paradigm case of perception that we call vision.

The standard scientific account of vision is that light is reflected from surfaces, strikes the eye, is refracted and focused, activates the rods and cones of the retina, and produces nervous impulses that somehow translate into the experience of sight. It is important for my present purposes to realize that most of that account is very recent. A few hundred years ago nobody understood even reflection in any detail at all, and as to rods and cones and nervous impulses, they were as yet unthought of in anything like their present form. Still, people had theories of vision, often quite elaborate ones. Given the available evidence, some of these theories amounted to pure genius, and we have to admit that their proponents were at least as intelligent as the best of us and a good deal more imaginative. It is often instructive to try to think oneself into the frame of mind of some earlier philosopher, and sometimes the enlightenment that comes from that exercise is genuine.

The perception-as-movie notion in fact goes back at least as far as Plato, so I will begin this part of the argument with something familiar again, his doctrine of the cave (fig. 2). In Plato this is just an image, although he really does believe that the reality of things belongs to a world other than the world of appearance in which we live. He describes this cinematic space in Book VII of the Republic in these terms:

Behold! human beings living in an underground den . . . here they have been from their childhood, and have their legs and necks chained so that they cannot move, and can only see before them, being prevented by the



Figure 2

chains from turning round their heads. Above and behind them a fire is blazing at a distance, and between the fire and the prisoners there is a raised way; and you will see, if you look, a low wall built along the way, like the screen which the marionette players have in front of them, over which they show the puppets.

I see.

And do you see, I said, men passing along the wall carrying all sorts of vessels, and statues and figures of animals made of wood and stone and various materials, which appear over the wall? . . . And of the objects which are being carried . . . they [the prisoners] would see only the shadows? . . . And if they were able to converse with one another, would they not suppose that they were naming what was actually before them?

Very true.

And suppose further that the prison had an echo which came from the other side, would they not be sure to fancy when one of the passers-by spoke that the voice which they heard came from the passing shadow?

No question, he replied.

To them, I said, the truth would be literally nothing but the shadows of the images.[4]



Figure 3

If in doubt about my description of the cave as a cinema, look carefully for a moment at the relations between the fire, the prisoners, the original objects, and their shadows in figure 2, and compare them with the same relations in figure 3.

Now all this is external to the prisoners, and there are many of them in the same cave. Let me now invoke the third-century Neoplatonist Plotinus, who changes the situation a bit and makes it into a serious theory of perception, not just an allegory. Plotinus is about as far from a causal theory, in which existing things cause us to see them, as it is possible to get; in his view, on the contrary, we help to cause them to exist. His world is as it were turned inside out: we think that what we see is the outer surface of material things; he thinks it is the inner surface of a world of light. You will get some idea of this inversion if you think metaphorically of turning on the light in a room, as we normally conceive of it and as Plotinus might. We think the room is already there and that the light just bounces off the walls. Plotinus might say that the light creates the room, blowing it up, as it were, from the lamp as an infinitely thin balloon that assumes exactly the room's shape, bulging inwards where the furniture is, perhaps poking out into the hall if the door is open. The difficulty with this metaphor is how we get into the room. Plotinus's answer is to turn us into the lamp: instead of having the projector make images and the prisoners look at them, the projector projects through the eyes of the prisoners. Figure 4 is an attempt to picture Plotinus's system.



Figure 4

In Plotinus Plato's Idea of the Good has become a single self-generating and sustaining entity, which he calls The One. The One is perfect, so full of perfection indeed that it overflows, emanating Being in all directions. I cannot do better than cite from The Enneads of Plotinus a series of brief quotations that give the essentials of the theory:

Seeking nothing, possessing nothing, lacking nothing, the One is perfect and, in our metaphor, has overflowed, and its exuberance has produced the new; this product has turned again to its begetter and been filled and has become its contemplator and so an Intellectual-Principle. Soul arises as the idea and act of the motionless Intellectual Principles. . . .[5]

Ever illuminated, receiving light unfailing, the All-Soul imparts it to the entire series of later Being which by this light is sustained and fostered and endowed with the fullest measure of light that each can absorb. It may be compared with a central fire warming every receptive body within range. . . .[6]


. . . the All-Soul [has] produced a Cosmos, while the particular souls simply administer some one part of it. . . .[7]

. . . so far as the universe extends, there soul is. . . .[8]

. . . the Soul's operation is not similarly motionless; its image is generated from its movement. It takes fullness by looking to its source; but it generates its image by adopting another, a downward, movement. This image of Soul is Sense and Nature.[9]

Matter, for Plotinus, is as good as nonexistent. In the words of his translator, Stephen McKenna:

Matter is the last, lowest, and least emanation of the creative power of the All-Soul, or rather it is a little lower than that even: it is, to speak roughly, the point at which the creative or generative power comes to a halt; it is the Ultimate Possible, it is almost Non-Being; it would be Non-Being except that Absolute Non-Being is non-existent, impossible in a world emanating from the bounty of Being: often no doubt it is called Non-Being but this is not in strict definition but as a convenient expression of its utter, all-but-infinite remoteness from the Authentic-Existence to which, in the long line of descent, it owes its origin.[10]

Particular bits of matter, for instance, are under the administration, whenever attention is being paid to them, of the particular souls who see them, who participate in their emanation from the One via the Intellectual-Principle and the All-Soul; in this particular case, us. Nature and everything in it is, as we have just learned, "the image of the soul." The words of Plotinus rendered by this summary formula would be done better justice, I think, if instead of reading, "Nature is the image of the soul," we were to read, "Nature is an appearance belonging to the soul." The soul to whom my appearances belong, or in whose charge they are, is of course my own—this is an individual and not a collective matter, even if we all draw our emanations in the end from the same source.

In what follows, I want to hang on as well as I can to this Plotinian view. It has a kind of crazy plausibility; the attempt to see the surfaces of things as the screens on which our own projections terminate is quite feasible, as it turns out, and a fine challenge to the intellectual imagination. Why did Plotinus need such a bizarre doctrine? Why couldn't he have accepted the notion that light is reflected from objects, etc.? But do we realize, I wonder, just how bizarre that doctrine, in its turn, really is? What physics asks us to believe is that space is full of trillions upon trillions of photons, shooting this way and that at the


speed of light (which is their speed), in such prodigious numbers that wherever I put the pupil of my eye, however dim the illumination may be, enough of them are coming from every direction to that very point for me to see what is there if I turn my eye in that direction, all this happening so fast that statistical fluctuations are flattened out and I see things steadily. Such an account would no doubt have seemed to Plotinus to require a simply inconceivable prodigality of fuss and bother, and wildly implausible quantities of things. He does in fact entertain the idea that it may be something coming from the object, through the air, that enables us to see it, but he has an answer to that:

For the most convincing proof that vision does not depend upon the transmission of impressions of any kind made upon the air, we have only to consider that in the darkness of night we can see a fire and the stars and their very shapes.

No one will pretend that these forms are reproduced upon the darkness and come to us in linked progression; if the fire thus rayed out its own form, there would be an end to the darkness. In the blackest night, when the very stars are hidden and show no gleam of their light, we can see the fire of the beacon-stations and of maritime signal-towers.[11]

We would say, of course, that even from the distant signal-tower photons are streaming by the billions, second after second, in every direction of space. I wish to underline this point—the astonishingly large number of things that the scientific account requires—because I shall need it later on.
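The scale of that prodigality can be made vivid with a back-of-envelope calculation. Taking, purely for illustration, a beacon fire radiating 100 watts of visible light at an average wavelength of 550 nanometres, seen from ten kilometres away by a dark-adapted pupil eight millimetres across (all of these figures are assumptions for the sake of the arithmetic):

```python
# Back-of-envelope photon count for a distant beacon fire.
# All parameters are illustrative assumptions, not measurements.
import math

h = 6.626e-34          # Planck's constant, joule-seconds
c = 3.0e8              # speed of light, metres per second
wavelength = 550e-9    # assumed average wavelength, metres
power = 100.0          # assumed visible output of the fire, watts
distance = 10_000.0    # assumed distance to the observer, metres
pupil_diameter = 8e-3  # assumed dark-adapted pupil, metres

photon_energy = h * c / wavelength           # energy per photon, joules
emitted_per_second = power / photon_energy   # photons leaving the fire each second
sphere_area = 4 * math.pi * distance**2      # area over which they are spread
pupil_area = math.pi * (pupil_diameter / 2)**2
into_eye = emitted_per_second * pupil_area / sphere_area

print(f"{emitted_per_second:.2e} photons emitted per second")
print(f"{into_eye:.2e} photons entering the pupil per second")
```

On these assumptions the fire sheds nearly 3 × 10²⁰ photons every second, and some ten million of them still enter the pupil from ten kilometres away—while by most estimates the dark-adapted eye needs only a handful within a tenth of a second to register light at all.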

I now jump fifteen centuries, back to Berkeley's time, with first however a quick quotation from Leibniz to show that Plotinus's view was not just a Neoplatonic aberration. Leibniz was puzzled by the fact that each of us lives in his or her own world, and had a lot of difficulty in seeing how we could manage to have a world in common; he concluded that everyone was as it were shut in, but that God arranged for each enclosed world to agree with every other in mirroring the whole. God for Leibniz is, as can be seen in these quotations (from the Monadology), rather in the position of the One in Plotinus.

Thus God alone is the primitive unity or the original simple substance; of which all created or derived monads are the products, and are generated, so to speak, by continual fulgurations of the Divinity. . . .[12]

Now this connection, or this adaptation, of all created things to each and of each to all, brings it about that each simple substance has relations which express all the others, and that, consequently, it is a perpetual living mirror of the universe.

And as the same city looked at from different sides appears entirely different, and is as if multiplied perspectively; so also it happens that, as


a result of the infinite multitude of simple substances, there are as it were so many different universes, which are nevertheless only the perspectives of a single one, according to the different points of view of each monad.[13]

Still there remains something a bit odd about this. I return to Berkeley, in his New Theory of Vision, to give the argument a more modern twist.

Berkeley is much preoccupied, as we can understand an idealist's being, with the question of why we think that the perceived world is outside us rather than inside. He concludes that it is because of the fact that part of our experience of that world is what we might call depth of field, a sense of distance or "outness," as he puts it. Once again his own words will convey the ideas best:

From what hath been premised it is a manifest consequence that a man born blind, being made to see, would at first have no idea of distance by sight; the sun and stars, the remotest objects as well as the nearer, would all seem to be in his eye, or rather in his mind. The objects intromitted by sight would seem to him (as in truth they are) no other than a new set of thoughts or sensations, each whereof is as near to him as the perceptions of pain or pleasure, or the most inward passions of his soul. . . .[14]

Upon the whole, I think we may fairly conclude that the proper objects of vision constitute an universal language of the Author of nature, whereby we are instructed how to regulate our actions in order to attain those things that are necessary to the preservation and well-being of our bodies, as also to avoid whatever may be hurtful and destructive of them. It is by their information that we are primarily guided in all the transactions and concerns of life. And the manner wherein they signify and mark unto us the objects which are at a distance is the same with that of languages and signs of human appointment, which do not suggest the things signified by any likeness or identity of nature, but only by an habitual connexion that experience has made us to observe between them.[15]

With this last point we are really on the contemporary scene, since language is one of the dominant philosophical preoccupations of our century. It is to be noticed that we don't share language, or inhabit the same linguistic space; language is something we each severally have. And we each have a whole language—it isn't that I have some and you have some more, but insofar as we are able to communicate, my language duplicates yours; I carry mine around with me and you do the same, so that when we meet we can speak and be understood. The same thing is true with respect to social institutions. Each of us in academic life has, for example, a whole University, and it is our carrying it around with us that makes the University—the institution, not the buildings—exist.


The idea that emerges from Leibniz and Berkeley is that we each have a perceived world; we don't live in the same one. It was popular in the seventeenth century to speak of the perceived or sense world as a "sensorium," a space full of things sensed, as an auditorium is a space full of things heard. Newton used to say that space as a whole was the sensorium of God. The point to which the argument has so far brought us is that we can imagine, if Berkeley is correct, that each of us has a private sensorium and that its contents bear no necessary resemblance to what there actually is in the world, nor to what is in other sensoria. Is there any reason to think that he is correct? We might balk at the bit about the Author of nature, and not be willing to follow Berkeley in saying, as he does, that what there actually is in the world is a lot of ideas in the mind of God, but is there some other way of interpreting the position he takes?

Suppose that instead of Berkeley's God or Plotinus's One we postulate simply "the world without us," however it may turn out to be, and suppose that instead of emanations passing through the soul or messages coming from the Author of nature we postulate the physical effect that world has on us, however that goes. But suppose we keep, from Plotinus, the notion that what we then perceive is something that proceeds from us, and from Berkeley the notion that its contents indicate to us, but do not reproduce or represent, what there is in the world without us. Suppose, in other words, we hold that something in us generates (under suitable stimulation) a sensorium and its contents, corresponding sometimes partially at least, and in some way yet to be specified, to the world there is but not necessarily being in any obvious way like it. Is this a conceivable view?

What might make it difficult to accept, or even inconceivable? One of the most implausible things about it is that it would require something in us, rather than something in the external world, to provide (in response, to be sure, to detailed instructions from without) the visible detail of the perceptual field, and that field is so rich, so nuanced, so finely grained, so charged on examination with minute and unexpected curiosities, that it seems silly to think of us as having anything to do with its production. But before jumping to conclusions let us remember all those photons, and how inconceivable it would have seemed to anyone more than a few hundred years ago—if indeed it does not still seem inconceivable—that they are really all there, rushing invisibly about; and let us also remember Yorick and his skull. When Yorick was alive, what did his skull materially contain?

The answer we can now give, although Shakespeare could not have given it, is: thirty billion neurons. A few months ago I, like everyone else, would have said ten billion, but recent neurological research has


given us a bonus.[16] At all events we begin with thirty billion, though they start dying before we are born and no new ones are produced, so it is downhill all the way. However, at birth their interconnections are pretty primitive, and the epigenetic development of the brain produces networks of unbelievable complexity. The information carried in the visual field, however minutely detailed, can be handled with a tiny fraction of the available computational resources. As far as that goes, the input that triggers the generation of that field has to be handled by a mere ten million rods and cones. The fact that the number is finite means that the field ought to be grainy, and it would be if it were not for the fact that the projecting mechanism smooths that over, a spatial effect not unlike that of the smoothing-over in time that we automatically perform on the flickering images in ordinary movies.
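The disproportion between the input channels and the machinery that processes them can be read straight off the chapter's own figures—thirty billion neurons against a mere ten million rods and cones:

```python
# Proportions using the chapter's own round figures: thirty billion
# neurons, ten million retinal rods and cones. Illustrative only.
neurons = 30e9
receptors = 10e6

ratio = neurons / receptors
fraction = receptors / neurons

print(f"about {ratio:,.0f} neurons per retinal receptor")
print(f"the retinal input channels amount to {fraction:.4%} of the neuron count")
```

Three thousand neurons stand behind every receptor; the entire optical input apparatus is a small fraction of a percent of the computational stock, which is what licenses the claim that the visual field can be generated with resources to spare.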

When we look at an object, a white pitcher, for example—something that gives us a feeling of quiet and simplicity—there are actually all sorts of busy transactions going on: photons rushing and bouncing, cells in the retina firing, impulses tearing along nerves and exploding packets of chemical transmitters at synapses—but the visual field projects for us something quite different: a stable, continuous, firm entity. We might be meditating on perfection in total tranquillity, and all that frenzied activity would still be churning inside the skull. There seems to be pretty good evidence that the brain puts together the sensorium we experience from sequences of inputs that it stores and processes. We have the steady sense of being in a more or less peaceful enclosed space, relatively large and enveloping, but our eyes are darting here and there all the time, picking up bits of information and feeding them into the neural machinery, as studies of saccadic eye movement have shown.

A further reinforcement of the thesis comes from some early structuralist or protostructuralist work by Ernst Cassirer, who proposed that just as in linguistics, where we infer grammatical constants from groups of utterances or even groups of languages, we infer perceptual constants from groups of experiences.[17] My final point is from an eccentric English cybernetician called Oliver D. Wells, who in his book, HOW COULD YOU be so naive!, borrows from Gibson the idea of an integration of overlapping contents as part of the mechanism of generating the sensorium, although he does not use that term. Gibson imagines someone sitting in a chair and looking around a room, taking in one part of it and then another and thus assembling a representation of the whole room. Wells takes the process to a more fundamental level and says simply, in effect, The brain computes the world.[18]
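The Gibson–Wells idea of integrating overlapping contents can be caricatured in a few lines of code: treat each glance as a short window onto a scene, and let the assembler align every new glance on its overlap with what it already holds. The following is a toy sketch, with all names and data invented for illustration; it is not a claim about what Wells actually implemented:

```python
# Toy sketch: assembling a "room" from overlapping glances.
# Each glance is a short window onto a longer scene; the assembler
# extends its stored scene by matching a new glance's leading
# elements against the tail of what it already holds.
def integrate(stored, glance, min_overlap=2):
    """Append a glance to the stored scene at the largest overlap."""
    max_k = min(len(stored), len(glance))
    for k in range(max_k, min_overlap - 1, -1):
        if stored[-k:] == glance[:k]:   # overlap of length k found
            return stored + glance[k:]
    return stored + glance              # no overlap: simply extend

scene = ["door", "window", "desk", "chair", "lamp", "shelf"]
glances = [scene[i:i + 3] for i in range(4)]  # four overlapping views

room = glances[0]
for g in glances[1:]:
    room = integrate(room, g)

print(room)  # the whole scene, recovered from partial glances
```

The point of the caricature is Wells's slogan itself: no single glance contains the room, yet a simple computation over the overlapping fragments yields a stable whole—the brain, on this view, computes the world rather than receiving it.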

Where does all this leave us with respect to our original questions? What liberated us from Plotinus's theory of vision, we might say, was


the development of a physical account of the propagation of light, the realization that the burning of the signal fire, most of whose products are wasted as heat, really does generate sufficient energy to fill the universe, locally and temporarily, with sufficiently generous numbers of photons to activate any eye within reach. What restores the theory to us, in revised form, is the development of a physical account of the operation of the brain, the realization that the complexity of the interrelations of the neurons really is great enough to provide each of us, in his or her private bony screening room, with a complete picture of the world. It is an incidental virtue of this view that it makes dreams, hallucinations, intoxication, and so on, not to mention imagination and indeed thought itself, perfectly and immediately intelligible as the functioning of the projection mechanism under other than perceptual stimuli. It remains to answer the final question—how much of the world there really is can we really know?—and to fit our possession of this apparatus into an account of ourselves: who we are, where we come from, how if enclosed in our own sensoria we can make contact with one another.

Perceptual consciousness does not always convince us of the existence of an outer world. Consider a room in which one of the walls is a mirror: we can't see the place in space, away from the edges at any rate, where the real room merges into the mirrored one, but we still don't believe there are two rooms. If we have to operate in the mirror world—as dentists sometimes do, or people backing up their cars—that doesn't pose any serious problem after a bit of practice. Experiments have been done with the total inversion of the visual field by the use of special prismatic glasses bolted to the head, and after a bit of stumbling the field rights itself (only to reverse again, with more stumbling, when the glasses are taken off). There isn't time to pursue this point, but it lends weight to Berkeley's remark that by vision "we are instructed how to regulate our actions in order to attain those things that are necessary to the preservation and well-being of our bodies, as also to avoid whatever may be hurtful and destructive of them." Perception, in other words, helps us to locate the icebox and not to bump into the furniture. (I mean of course the real icebox and the real furniture, not just their perceptual counterparts.)

As such it clearly has survival value, and while we cannot say that the perceptual apparatus was developed in order to be used in this way, we can say that to some degree at least the remarkable progress human beings have made in understanding what happens in the world is due to their ability to project perceptual models of it. The final question has to do with what knowledge of the "world without us" the evidence of perception allows us to claim. This falls into two problems again: other people, and the physical world. We have reason to think (on the basis of perceptual evidence) that other people live in worlds similar to our own; nothing is lost, indeed everything is to be gained up to a point, by living practically as if we had a sensorium in common, even though we couldn't, given the physical and neurological facts, possibly do so. It might be that if your neurological impulses could be fed into my brain I would experience them as a horror movie, or it might be that I would feel at home. I think in fact the former is more likely to be the case, since I have spent fifty idiosyncratic years getting used to my world, and my strategies for being comfortable in it are very likely, in their details at least, to be very different from yours. But the latter is a more friendly thought. I could at all events expect to know the language, as it were, since the chances are that our brains were programmed in roughly similar ways, although even that is not by any means certain. Thirty billion neurons, loaded since well before birth with nonstop inputs from all five senses, will have evolved some of their own programs, and it may well be that you store the instructions for saying (or recognizing) "blue" where I store the instructions for saying or recognizing "salami." This might have interesting though uncomfortable consequences should we ever get our wires crossed.

As to the physical world, that is a different story. Everything we have learned about science suggests that away from the normal macroscopic center of things we can't form a perceptual model of it at all. We have grown up in what I call the "flat region"—a metaphor I take from the fact that the earth seems flat where we live and we need to go off into space, or make geographical inferences of one sort or another, to conclude that it is round. So in the direction of the very large, the very fast, the very distant, the very small, we can only have mathematical models of how it really is. Science began in the familiar world with the mathematical formulation of perceptual relations, and for a while we could imagine that the extensions of perception by means of instruments—microscopes, telescopes—gave us access to its remoter parts. But we know now that what is really going on, not "beneath" perception but in the world of which perception gives us only an approximate and sturdy model, suitable for our macroscopic purposes, consists of events we can't in principle perceive, among them the very events that make perception possible.

It follows from this—and I will conclude with perhaps the most preposterous claims of this essay and the ones that will seem philosophically the most uncomfortable—that the microevents to which I allude aren't happening in the world we vividly know at all. There aren't any photons in the perceptual room, and no perceptual event causes another, just as David Hume realized. So where are the photons and the microevents in general? Why, in the real world. And what is the status of our knowledge of it? This is where we have to hold on to our seats, because nothing protects us from the conclusion that we have no direct knowledge of it whatever, that all we can ever allege about it is purely and massively hypothetical. Berkeley might be right: God might be doing it all by the word of his power, and there be no such thing as material substance. There is no reason whatever to think that this is actually the case, and anybody who claimed it was wouldn't have any better evidence than we do, in fact would have great difficulty in even tackling a lot of questions that within our hypothesis we have adequate answers for. But only within the hypothesis. There is in fact no worked-out alternative, which is why we have come to have such confidence in science. But Hume's skepticism remains unrefuted.

There is one respect, however, in which there may be a way out of our ignorance, and that is through mathematics. It seems likely that certain structural relations, such as "between," "greater than," etc., must have formal counterparts in the real world, and that we might therefore learn to speak its language—to speak at any rate with its grammar, although we would still have to use our own vocabulary. The supplementary point that needs to be made here is that to say the real is hypothetical or mathematical does not mean, as some fanciful popularizers of science have suggested, that it evaporates into formulae, is merely an idea in our heads, etc. We attribute hypothetically to the real just the kind of materiality it needs in order to sustain us, just the relations among its real parts whose mathematical expression we are able to divine. The fact that we don't know it any better doesn't mean that it doesn't exist; our knowledge or lack of it is a matter of complete indifference to it; it is, and has no need of us. Our proper attitude to it, it seems to me, should be one of gratitude for sustaining us as perceiving and feeling beings.

The real we hypothesize can have devastatingly nonhypothetical effects, of which some of the most notorious and most troubling occur in the domain of nuclear physics. The apparatus of our sensorium is adapted to flat-region phenomena, and our imagination is limited to plausible extensions of those phenomena. We can observe many chemical reactions, or their perceptual counterparts, and, horrendous as warfare is, it counted, until 1945, as a component of the familiar. One of the things that makes nuclear explosions horrific is that they constitute such a violation of the scale of possible imagined causal relations; they draw on forces we can't experience, even by courtesy, as it were. This fact may account for the automatic horror-movie feeling of their perceptual consequences. I am not sure that any good antinuclear argument could be constructed on this basis, because it looks as if it would rule out benign uses, radiation therapy and the like, but it might be worth working on. It seems silly to let the existence of these fragile sensoria—and they are fragile, requiring as they do the coincidence of thousands of small determinations for their perpetuation—be threatened by default. Save the Sensorium sounds like a pretty good motto. Even if I inhabit mine alone, as you do yours, we keep showing up in each other's, and it is too good a life to give up without a struggle.


A Case for the Human Sciences

The Natural Sciences, the Social Sciences, and the Humanities

In this final chapter I shall argue that research and teaching in some areas of the humanities and social sciences stand to benefit from the recognition of a coherent domain of inquiry that might properly be called the "human sciences." I say "in some areas" because I assume that there are and will continue to be empirical and quantitative features of the social sciences, and practical and creative features of the humanities, that will fall towards the natural sciences on the one hand, and towards private experience on the other, in such a way as to resist inclusion in this domain. At the same time it may also be true that, at least from the point of view of teaching, the natural sciences themselves, as human creations, might profit from the attitudes and methods of the human sciences.

The current classification of the disciplines into the three domains of the natural sciences (and mathematics), the social sciences, and the humanities (and fine arts) has a perfectly sound basis not only historically but also conceptually. In setting out the conceptual basis I shall insist on some distinctions that may appear to have been discredited, notably the methodological distinction between the natural and social sciences and the ontological distinction between fact and value. My own view is that in spite of their having lost currency these distinctions are still viable and important.

The difference between the natural and social sciences is best expressed, I think, in the contrast between things and processes among whose causal antecedents are to be found human intentions, and those among whose causal antecedents human intentions are not to be found. The former belong in the domain of the social, and the latter in the domain of the natural, sciences. Many things (such as tools, buildings, works of art, etc.) will be found in both domains, according as their meanings and uses on the one hand, or their material embodiments and properties on the other, come into question. For example, human intentions determine that a given painting should be taken as representative or abstract, should change hands at a certain price, should be attributed to a particular epoch, etc., but no such intentions determine that it will submit the wire on which it is hung to such and such a tension, that it will tear if pulled or cut with a certain force, or that if it burns it will do so at a particular temperature and with given products and residues of combustion. It should be noted that the causally-determining intentions need not be those of the maker of the object in question, nor need the event into whose causal antecedents they enter be the production of any thing, nor need what emerges be the realization of any particular intention (there may be unintentional consequences of intentional actions—but they will only be apprehended as such from the point of view of some intentionality).

The distinction between the sciences (including the social sciences) and the humanities rests on considerations of a different order. The objects of the sciences can be observed and described as they are, and the scientific theories that apply to them will be confirmed or refuted according to the content of those observations and descriptions. From the point of view of a social science inquiry, say, into the distribution of lexemes in a given text, it makes little difference whether the text is high literature or Harlequin romance. But from the point of view of the humanities it is just what distinguishes these cases, in the mode of critical judgment, that selects the one for scholarly attention and not the other. And the work of the humanities on what is thus selected consists as much in unfolding and valorizing the basis for that judgment in the particular case as in confirming and refuting any theory.

This is not, of course, to play down the role of theory in humanistic studies; it is just to insist that objects in the domain of the humanities are included there not only because they are the products of human intentions but also because they are taken to be embodiments of human value. I take value to reside not in factual presence or even structure but in a power on the part of the object, experienced and attributed as such, to evoke and hold interest and concern, even passion, in the reader, viewer, listener, etc. I say "on the part of the object"—that is how we experience it, but in fact the phenomenon of value owes as much to the preparation of the consumer (through personal experience, acculturation, scholarly training, etc.) as to the properties of the object.

Values are facts, or are embodied in facts, to which imperatives are attached.[1] In contrast to the sciences, whose facts once established can be left in peace and whose experiments (except in the mode of rechecking or the fulfillment of predictions) need not be repeated, the "facts" of the humanities keep always a future-referential aspect—they are to be understood, enjoyed, and preserved, and it is to deepening our future understanding, enriching our future enjoyment, and justifying the future preservation of their objects that the main energies of the humanities are directed.

Sciences and Disciplines

There is another possible contrast here that is worth drawing and will be of use in the sequel. Sciences, as their name suggests, aim at forms of knowledge, systematized and made coherent in theories. To the extent that the humanities do this they have something in common with the sciences. But an ancient opposition sets praxis over against theoria, as a matter of interacting with the world rather than internalizing a representation of it, something learned by example and by doing rather than by instruction and by thinking. This suggests a distinction between the sciences and what might properly be called "disciplines." The difference between a science and a discipline is fairly obvious—in a science the ultimate object is knowledge, about the world or about society, and what practice there is follows from the knowledge (or serves it, e.g., in experimentation), whereas in a discipline the object is an activity, carried out, of course, in a suitably disciplined way.

Literary criticism, comparative literature, and most of philosophy count as critical disciplines, whereas the practice of literature goes along with art, music, and the rest into the creative disciplines. This distinction is obviously not proposed here as a change in usage, since "discipline" in its ordinary acceptation means just what I used it to mean at the beginning of the chapter, namely, each of the sciences and humanities in their institutional setting. But what it suggests is a contrast between activity and product, and a different balance between them in the different domains. If the natural sciences have their experimental or investigative disciplines the humanistic disciplines may have their unifying and clarifying sciences. It remains to be seen what form these should take.


The Human Sciences

The term "human sciences," however, is not intended to be limited to the theoretical components of the traditional humanities only; it extends to those parts of the social sciences which (to use a distinction due to Kenneth Pike) attend to the "emic" rather than the "etic" features of their objects. Pike borrowed the suffixes from the contrast between "phonetic" and "phonemic" in linguistics and made them into freestanding terms. Phonetics deals with the way an utterance objectively is, the shape of its sounds according to a standard representation, phonemics with the way the segments of the utterance contain and convey its meaning. So "the etic viewpoint studies behavior as from the outside of a particular system, and as an essential approach to an alien system. The emic viewpoint results from studying behavior as from inside the system."[2] Inside the system is where the meanings are; and meanings have an essential component of value.

Nor is the term "human sciences" by any means new, though it has not been generally accepted in English usage. That is no doubt because of the history of the word "science" in English. Curiously enough, though, we owe the name of the human sciences indirectly to an English original, namely, the "moral sciences" to which John Stuart Mill devoted book 6 of his System of Logic; it was through the translation of that expression by Dilthey as Geisteswissenschaften and the rendering of this into French as sciences humaines that "human sciences" suggested itself in English. But in order for this to happen, "science" itself had to undergo a certain modification of meaning, and it may still be that this modification is not yet sufficiently general for the new terminology to be accepted without misunderstanding. (Lending the name to a seminar may perhaps in some small way help the process along.[3])

The English word "science" came to have its modern meaning, as a systematic body of objectively confirmed propositions about a well-defined domain of inquiry, in the eighteenth century; the term "scientist" did not emerge for another century (it is first reported in 1840), having been coined by William Whewell on the analogy of "artist."[4] Earlier usage made what we would call a natural scientist a natural philosopher. There were also moral philosophers, who dealt not merely with ethics but also with psychology and the forerunners of what we would now call the social sciences. Mill brought over into the domain of science, under the name "moral sciences," the whole of moral philosophy—except for Morality itself, along with "Prudence or Policy, and Aesthetics," these three forming "a body of doctrine, which is properly the Art of Life, in its three departments . . . the Right, the Expedient, and the Beautiful or Noble, in human conduct and works."[5] Art, for Mill, was nevertheless dependent on the truths of science, and science was everything organizable according to the principles of logic.

Insofar as Mill's moral sciences were to be scientific at all, they were to be so in the same mode as the natural sciences. "The backward state of the Moral Sciences can only be remedied by applying to them the methods of Physical Science, duly extended and generalized."[6] In the English-speaking world this principle remained unchallenged until quite recently, and the standard view in the philosophy of science was that the social sciences were just less exact natural sciences in which concessions had to be made to statistics. That view was reinforced by two twentieth-century developments (quite in line with their seventeenth- and eighteenth-century antecedents, the mathematical physics of Newton and the materialism of the Enlightenment, both of which took root more firmly in British than in Continental science): the erasure of the boundary separating logic and mathematics, and the introduction of behaviorist psychology. All science thus became quantitative and "etic."

Not that Continental thought was innocent in this regard—on the contrary: one of the thinkers who had the most influence on Mill was Auguste Comte, while the advent of behaviorism was soon reinforced, first indirectly (through interpreters like A. J. Ayer) and then in person (on the part of refugees like Carnap), by the second wave of positivism, from Vienna. But a more generous interpretation of "science," or at least of its German and French equivalents, was preserved on the continent because of the original way in which Dilthey interpreted Mill. For to Mill's essentially external treatment of the moral sciences, and particularly the science of history, Dilthey added an internal component, which he identified as Verstehen or "understanding," in effect an emic component. (Though I find Pike's terminology appealing, because compact, his innovation is only a reinvention of Dilthey.)

Verstehen adds an indispensable element of interpretation to the "facts" of history; the validation of a historical claim about given events requires that the judgment of a participant in those events, or of someone who is in a position to know what it must have been like to be a participant in them, be consulted; but that judgment must be made, or have been made, from a point of view, and again the meaning of the events as judged from that point of view will involve value essentially and cannot be a merely factual matter.

A view something like this has informed the Geisteswissenschaften and the sciences humaines ever since. In this way some continuity between personal experience and the structure of the world was guaranteed, since these were mediated by a domain in which the latter could be seen only in the light of the former. One might in fact imagine a continuum of "sciences," from one extreme at which there is no room for a subjective and personal component at all, to the other at which the last trace of the objective and impersonal has at last vanished: at this point, but only at this point, the continuum shades off from science into something else.

In the English-speaking world, however, with its "two cultures" (a pernicious over-simplification on the part of Lord Snow, the Allan Bloom of his day, who congratulated himself in public on having been spared, by his superior but of course personally merited good fortune, from the common lot), this mediation was missing. Its absence was perceptible, and this led to a revolt against scientism that turned, mistakenly in my view, against science. Some symptoms of the attempt to plug or bridge the gap, all well-intentioned no doubt but unfortunate in their consequences and implications, have been Popper's theory of objective knowledge, Kuhn's theory of paradigm shifts, and Rorty's theory of cultural conversation. Another move, partly from the same and partly from different motives, was the importation into the English-speaking debate of a historicism derived from Marx rather than Dilthey.

But Popper's invention of his World III to accommodate what he took to be the objective reality of "problems" merely echoed (unconsciously no doubt—he was certainly made angry enough by the suggestion) an earlier thought of Bachelard's; Kuhn's revolutions had similarly been anticipated by Canguilhem; as to Rorty, his mixture of the disciplines into a general broth of culture was the last in a long series of similar programs that could be traced from Hegel's Encyclopedia through to Hermann Hesse's Glass Bead Game.

Historicism and the Linguistic Turn

In general it may be said that all these attempts to build up the low ground between scientific certainty on the one hand and existential certitude on the other suffered from an embarrassment and a lack. The embarrassment was historicism: it seemed generally to be believed that something Hegel had concocted and Marx had swallowed, that had survived the dialectical inversion from idealism to materialism, must carry the weight of plausibility if not of truth itself. Everyone therefore became politically and historically conscious and relativist—and this just as there was coming to be available the notion of a science that might be genuinely cumulative and thus in itself ahistorical, that might be fed by every culture and thus transcend cultural relativism. (It is to be noted that the opposite of "relative" in this context is not "absolute" but "neutral" or "indifferent.") The lack was of an ontology that could steer between the Scylla of immutable givenness and the Charybdis of momentary animation.

The paradigm case of the sort of thing for which such an ontology was required was seen quite early on, by a few pioneers, to be language, or, in general, systems of signs or of significance. There had of course been studies of language before, grammatical or philological or comparative, but the question what signs in fact were, or language was, seems not to have been posed as a serious ontological question much before Ferdinand de Saussure (this at all events was his opinion). To be sure there had been theories of the sign, in the Stoics and then again in Charles Sanders Peirce (to mention the earliest and latest and most sophisticated entries in a long historical series), but the Stoic doctrine of the lekton, which might have developed into the required ontology, remained embryonic, while Peirce, out of modesty or disinterest or both, avoided ontological questions when he could. (A typical disclaimer occurs in "How to Make Our Ideas Clear": "as metaphysics is a subject more curious than useful, the knowledge of which, like that of a sunken reef, serves chiefly to enable us to keep clear of it, I will not trouble the reader with any more Ontology at the moment.")[7]

Nevertheless in Peirce there are, as usual, brilliant anticipatory hints, and one of them comes in a remarkable passage at the end of his essay, "Some Consequences of Four Incapacities," where he says,

The word or sign which man uses is the man himself. . . . That is to say, the man and the external sign are identical, in the same sense in which the words homo and man are identical. Thus my language is the sum total of myself: for the man is the thought.

It is hard for man to understand this, because he persists in identifying himself with his will, his power over the animal organism, with brute force. Now the organism is only an instrument of thought. But the identity of man consists in the consistency of what he does and thinks, and consistency is the intellectual character of a thing, that is, is its expressing something.

Finally, as what anything really is, is what it may finally come to be known to be in the ideal state of complete information, so that reality depends on the ultimate decision of the community; so thought is what it is only by virtue of its addressing a future thought which is in its value as thought identical with it, though more developed. In this way, the existence of thought now depends on what is to be hereafter; so that it has only a potential existence, dependent on the future thought of the community.

The individual man, since his separate existence is manifested only by ignorance and error, so far as he is anything apart from his fellows, and from what he and they are to be, is only a negation.[8]


This citation seems to me to head exactly in the right direction, only to veer off in the end. "The organism is only an instrument of thought," "reality depends on . . . the future thought of the community," "the individual . . . is only a negation": that "only" spoils a promising emphasis, and the subsequent stress on the community as against the individual compounds the error. The organism is indeed the instrument of thought, and individuals constitute the community. Yet words or signs surely are human reality, and any science of that reality will surely have to be, among other things, a science of language.

Saussure, like Peirce, seems to have been an original (each of them reinvented a term for the science of signs on the basis of the Greek semeion, Peirce choosing "semiotic" and Saussure "semiology"). But Saussure raised the ontological question directly, deciding that there must exist something called langue, a system of rules that in his words "resides in the collectivity." (He was roundly denounced for this by C. K. Ogden and I. A. Richards, who wrote an influential book called The Meaning of Meaning, and I do not doubt that this set back the acceptance of Saussure's work in the English-speaking world.) I think this is the wrong answer, as wrong in Saussure's case as it was in Peirce's—but it was the right question, and the subsequent development of Saussurean linguistics and the structuralism that emerged from it was not in fact hampered by the wrongness of the answer (any more than Edison's development of electric light was hampered by the wildly erroneous theories of electricity generally held at the time).


The mistake is to give ontological priority to the community or collectivity, when it is the individual and the individual alone who embodies langue (even though its acquisition by and usefulness to any individual depends absolutely on its also being and having been embodied, individually, in other individuals). And yet the structure of langue, or of any other social system, as embodied in (or as I would wish to say "instructed into") each individual, is in fact just what it would be if it could, per impossibile, be embodied in the collectivity directly, so that it can be dealt with, up to a point, as if it were so embodied. What a structuralism of objectified collective properties misses is everything that hinges on idiosyncrasy and variation as between individuals, the always discrete processes of diffusion and instruction, and these things can become important. To a first approximation it will do, because it is useful to be able to speak of languages or cultures in general. But in detail it won't do, because no two people have exactly the same language, nor do they share exactly any social object, even in the elementary case of the freely-formed couple (the clearest illustration of the dependence of the collective on individuals, since it comes into being when, but only when, each partner says so, and ceases to exist—except residually in the intentional domains of third parties—as soon as either partner says so).

The truly central contribution of structuralism, which Saussure articulated avant la lettre as well as anyone has done since, is its insistence on the differential and relational character of the objects with which it deals, as opposed to the self-identical and substantial character of the objects of the natural sciences. The opposition between the two cases can now be put in yet another way: the entities with which the natural sciences deal can be thought of as preexisting and independent of the relations into which they enter; higher-level entities (consisting of lower-level entities in relation) still preserve this thing-like independence with respect to each other and the relations into which they in turn enter.[9] The entities with which the social sciences deal, however, are constituted out of relations, apprehended by some consciousness or other, produced by some intentionality, presumably human (since human intentionalities are the only ones we know anything about).

Structures are sets of relations, and structuralism just is the view that the objects of the social (or as we may now say human) sciences are relational rather than substantial. In other words, the natural is there whether or not it is thought about, the social is not there unless it is thought about. Or again, more succinctly yet: the appropriate metaphysics for the natural sciences is realism, the appropriate metaphysics for the social or human sciences is idealism.

The flowering (and fading) of structuralism need not be recounted here, but a couple of its eccentricities do need to be noticed, since they contributed, I believe, to a general failure to perceive that it did in fact offer the human sciences the theoretical foundation they were lacking. Partly because of Saussure's location of langue in the collectivity and partly, no doubt, because of an exaggerated respect for Marxism, it came to be generally thought that the structures with which structuralism dealt were objective in the old natural-scientific way, and even that they were somehow not merely intelligible but intelligent, capable of independent agency.

That they do have some objectivity at the neurophysiological level there is little reason to doubt, but such structures are not available for explanatory purposes within the human sciences—human beings use them (in some sense of "use," though certainly not the normal purposive one) to think the intentional structures they do think, but they themselves remain objects for a natural science just as any useful device


would, and are as little capable of agency or purpose as any other complex physical arrangement. This however was not the objectivity that Lévi-Strauss and Foucault had in mind when they represented language and myth and power and sexuality as cunning forces in the world whose unwitting pawns we are, when they insisted on saying not "we think the world through myths" but rather "myths think themselves through us," not "I write" but rather "I am written," and other such dark formulae.

I call this aberration "misplaced agency"; its effect is to deny subjective agency on our part or even, with sublime inconsistency (since its proponents are themselves the subjects of their own utterances and intentions), subjectivity itself. The structuralists talked a lot about the elimination of the subject in favor of an interplay of structures, thus missing the important point that subjectivity is in fact the animation of structure and the only thing that makes structures function as they do.

Of course I don't animate the whole English language, say, all by myself; it transcends me, but only in the persons of other individuals with their own subjectivity, not as an "objective" structure that can dispense with such embodiment. That there are very many individuals involved (all the English-speakers throughout history) explains on the one hand why the English language seems so historically entrenched, to the point of taking on a kind of objectivity, but on the other hand and at the same time why a theory that insists on the individual embodiment of structure is adequate to carry the ontological weight of the social: precisely because the burden is so widely shared.

This last example may be taken as paradigmatic, but it leaves out one important consideration, namely, the role of what Sartre called the "practico-inert" as a carrier of structure, or rather as a template or generator by means of which structure is perpetually re-created for subjects. This must be an element in any developed human science. Such sciences need to be constructed with all the slow care that goes into the construction of any science, and can be, in the confidence that their objects have, in their own way, the sort of permanence that makes successive approximations and repeated confirmations possible. Because of the dominance of relativism and historicism, however, there has come to be an assumption that major theoretical positions will succeed one another with some regularity and frequency, and that there is something stagnant about a field that does not undergo the appropriate revolution every decade or so.

In the natural sciences there have been, it is true, revolutionary theoretical changes in the last century; but in spite of metatheoretical claims to the contrary, the process of slow convergence has not really been compromised. Its apparent acceleration has been due to the very large numbers of workers whose inquiries have cross-fertilized one another and to the similarly accelerated changes in available technology due also to exponentially growing numbers in that domain. The world, however, remains as it was, submitting inertly (I am tempted to say patiently, but the world is no more patient than it is agent) to experimental probes and reacting consistently again and again to the same actions. It is this perpetual availability of nature that makes the natural sciences as happily cumulative as they are, even when current work is at conceptually distant frontiers.

It might seem that everything must be different with the human sciences, that their objects are perpetually different rather than similar, that the winds of change and fashion make any convergence impossible. Certainly the breathless succession of postures, since modernism and structuralism came on the scene, seems to be informed by some such conviction (one critic has spoken of "the new maelstrom" of poststructuralist modernity[10]). But if the human sciences can only agree on the kind of thing they are about—and that is the task I have been trying to advance in this chapter—they may find themselves able to consolidate in a way that transcends, without betraying, the individuals who create and sustain those objects and inform them with history (rather than being dragged along by it; a plausible case could be made for the view that only individuals have histories, or even history, but that must await another occasion).

Certainly structuralism in some of its modes seems to approach this condition, being as it is as much at home with classical mythology as with seventeenth-century drama or contemporary sexuality; with its (Saussurean) doctrine of the synchronic it may be on its way to the conquest of time that the natural sciences have naturally and unconsciously achieved. Humans, it is true, cannot be considered as enduring as nature. But if we conservatively estimate their endurance on this earth so far (which is not to be confused with their history) at, say, a quarter of a million years, and optimistically grant them the expectation of an equal run into the future, that should give the human sciences time to establish themselves. The natural sciences, after all, have in their mature form managed it in a few centuries.

A New Understanding of Science

A final word as to what that mature form is. One of the misunderstandings of natural science (to which natural philosophers and their scientist successors since the eighteenth century have themselves been prone) is that it is a total system, a coherence that reaches over the universe and everything in it, a potentially complete account of the state and causal determination of everything at some appropriate level: physical, chemical, or whatever. It is this "mirror of nature" view to which some recent criticism has been addressed, and that some of the social or human sciences have unwisely tried to imitate. But of course science is no such thing, though in the heady days of the triumphant Newtonian world-view it was easy to think that that was what it might become. Laplace is notorious for having formulated this culmination of progress with his vision of an intelligence for whom "nothing would be uncertain . . . the future, like the past, would be directly present to his observation."[11] Whether God or demon, this intelligence is no longer conceivable, and not only because of Heisenberg, Gödel, and the rest. It is not so much that a Newtonian paradigm has been displaced by an Einsteinian one (Kuhn has much to answer for in his careless adaptation of that word)—indeed, Newton hasn't been displaced, except at the remote fringes of conceptual possibility—it's rather that Newton never covered even his own domain in the way Laplace thought. Newton could give a complete account of how two massive bodies would interact in an otherwise empty universe, and the whole success of Newtonian science has consisted in pretending that real events can be represented as aggregates of independent pairwise interactions, as up to a pragmatically satisfactory limit they can. But Newton couldn't, and science still can't, give a complete account of how three massive bodies would interact, even in an otherwise empty universe.

If science can't even solve the three-body problem in mechanics, its most elementary branch, how can anyone ever have thought that it could mirror the whole of nature? We can appreciate and learn from the natural sciences without making that mistake—indeed, learning from them depends on our not making that mistake. The assumption that total discursive adequacy was what science claimed (rather than being what some immoderate scientists and their admirers claimed) has obscured the genuine lesson that science has to teach.
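The point about aggregates of pairwise interactions can be made concrete. Newtonian gravitation treats any configuration, however complicated, as a sum of independent two-body attractions; for three or more bodies there is no closed-form solution, and the sum must simply be advanced numerically, step by step. A minimal sketch, in which the function names, the units (G = 1), and the crude Euler step are all illustrative choices, not anything from the text:

```python
# Newtonian gravity as a sum of independent pairwise interactions:
# each body's acceleration is the superposition of inverse-square
# pulls from every other body. Units are chosen so that G = 1.
G = 1.0

def accelerations(positions, masses):
    """Pairwise inverse-square accelerations on each body (2-D)."""
    n = len(positions)
    acc = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = positions[j][0] - positions[i][0]
            dy = positions[j][1] - positions[i][1]
            r3 = (dx * dx + dy * dy) ** 1.5
            acc[i][0] += G * masses[j] * dx / r3
            acc[i][1] += G * masses[j] * dy / r3
    return acc

def euler_step(positions, velocities, masses, dt):
    """Advance all bodies by one explicit Euler step of size dt."""
    acc = accelerations(positions, masses)
    new_v = [[v[0] + a[0] * dt, v[1] + a[1] * dt]
             for v, a in zip(velocities, acc)]
    new_p = [[p[0] + v[0] * dt, p[1] + v[1] * dt]
             for p, v in zip(positions, new_v)]
    return new_p, new_v
```

Because each pairwise term has an equal and opposite partner (Newton's third law), the total momentum change sums to zero at every step; but no finite number of such steps yields the complete account that a closed-form two-body solution provides.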

A good paradigm case (using the metaphor exactly rather than wildly) of the scientific treatment of an event is to be found at the beginning of modern science with Galileo's equation for the motion of a ball on a smooth inclined plane. A bit of the world, experimentally delimited, is matched by a bit of discourse, formally constructed; the matching is exact and reproducible. Might we find another bit of the world, somewhere else, and another discursive match? We might indeed—many of them—and this is how science grew. Might these bits join up, so that the experimental domains could be seen to be connected and the discursive domains unite in a larger science? This too happened, most dramatically in Newton's truly prodigious merger of Kepler's celestial with Galileo's terrestrial mechanics, through his inspired conjecture that the moon might be falling towards the earth, as indeed it perpetually is. Will this process go on until the whole world, in all its detail, is matched by a single science?
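Galileo's "bit of discourse" can be written out explicitly. In modern notation (not Galileo's own), the motion down the plane is uniformly accelerated:

```latex
s(t) = \tfrac{1}{2}\,a\,t^{2}, \qquad a = g\sin\theta \ \text{(idealized frictionless case)},
```

so that the distances covered in successive equal intervals of time stand in the ratios 1 : 3 : 5 : 7, the odd-number rule Galileo verified experimentally—an exact and reproducible match between a delimited bit of world and a formally constructed bit of discourse.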

To this question eighteenth-century science gave an excited and prematurely affirmative answer, but on reflection it is obvious that the very idea involves a paradox, like the map, somewhere in Borges I think, that in the end was as big and as real as the country itself. Even if we overlook the small matter of the three-body problem, it is clear that when the science comes to match what is in the scientist's head some difficulty will arise, since after all that is where the science itself is and must be. Science is not in the scientist's world but about it, and we are the only beings we know capable of sustaining the relation "about."

How far, then, can the coherent domain of science extend? Well, as far as it does extend—and, up to now, no further. There is no Platonic science waiting to be realized, only an Aristotelian one achieved and articulated as best we have been able to do these things, which under the circumstances is pretty well, though the resulting science is and will always remain partial. We can be convinced that the world is one and causally connected (up to the creative moment we inhabit) without thinking that we must strive for one scientific representation of it. This thought is in fact an echo of Aristotle:

An infinity of things befall . . . one man, some of which it is impossible to reduce to unity; and in like manner there are many actions of one man which cannot be made to form one action. One sees, therefore, the mistake of all the poets who have written a Heracleid, a Theseid, or similar poems; they suppose that, because Heracles was one man, the story also of Heracles must be one story.[12]

The lesson here, for the natural sciences, is one of restraint as to what they can pretend, but for the human sciences it is one of encouragement and liberation. For, in the sense I have been insisting on, they can be as scientific as any discipline in finding discursive matches for their objects and articulating these into as ramified a structure as the traffic will bear. The "coherent domain of inquiry" referred to in my very first paragraph will be a limited domain, but by now that goes without saying. No doubt part of its activity will be its own perpetual deconstruction. But if responsibly carried out, this activity will leave some things in place, and it is in the cumulative articulation of what thus remains in place that the human sciences are established. For a science is possible wherever there is some constancy of object and some stability in discourse.

As to constancy of object, Latin verse is still with us after two thousand years, and Baroque music after two hundred; I see no reason why, along with Cubist painting and other more recent arrivals, they should not still be occupying the attention of the human sciences thousands of years from now. The essential difference between this and the natural-scientific case is that these things will endure only if the means of reading them out of the practico-inert continue to be instructed into generation after generation. This is a chain that can be broken and probably will be, in the eventual dissolution or transcendence of our present environment, but it may well have a run some orders of magnitude longer than cultural history so far. It might be broken also by despair, brought on by irresponsible and unstable metatheory.

But the metatheory, I think, only seems unstable. The late twentieth century, in fact, seems to have something in common with the late eighteenth—then everyone was getting excited about the world's hanging together; now people are getting excited about its falling apart. But the real work of the sciences, whether natural or human, goes on at a different level, not under some fashionable "paradigm" but in the confrontation of their objects and the imaginative structuring of their discourse. The aim in both cases is the same: to understand the world and to articulate its representations, testing the limits of possibility. Among the objects to be understood are the sciences themselves. In the case of the human sciences the construction of such an understanding is a reflexive activity, a thought which I now turn self-referentially back as a means of closure.




1. Edmund Husserl, The Crisis of European Sciences and Transcendental Phenomenology: An Introduction to Phenomenological Philosophy, trans. David Carr (Evanston, Ill.: Northwestern University Press, 1970), 130.

From Physics to the Human Sciences—The Itinerary of an Attitude

1. Peter Caws, review of R. Harré, The Principles of Scientific Thinking, in Synthese 25, nos. 1/2 (Nov./Dec. 1972): 253.

2. Sir Isaac Newton, The Mathematical Principles of Natural Philosophy, trans. Andrew Motte, 3 vols. (London: Sherwood, Neely, and Jones; and Davis and Dickson), 2:160-162.

3. Pierre-Simon Laplace, Essai philosophique sur les probabilités (Paris: Gauthier-Villars, 1921), 3.

4. See Alfred North Whitehead, Science and the Modern World (New York: New American Library [Mentor Books], 1948 [originally published in 1925]), 56.

5. Donald Davidson, "On the Very Idea of a Conceptual Scheme," Proceedings and Addresses of the American Philosophical Association, 48, 1973-74, 5-20.

6. Peter Caws, The Philosophy of Science: A Systematic Account (Princeton: Van Nostrand, 1965).

7. Samuel Taylor Coleridge, The Friend 1:iv (1865), 118.

8. Peter Caws, Science and the Theory of Value (New York: Random House, 1967).

9. Aristotle, Nicomachean Ethics 1094 b 20-26.

10. See Peter Caws, Sartre (London: Routledge and Kegan Paul [series "Arguments of the Philosophers"], 1979).

11. Edmund Husserl, The Crisis of European Sciences and Transcendental Phenomenology: An Introduction to Phenomenological Philosophy, trans. David Carr (Evanston, Ill.: Northwestern University Press, 1970), 48-49, 103-189.

12. Sir Karl Popper, Objective Knowledge: An Evolutionary Approach (Oxford: Clarendon Press, 1972), 106.

13. Gaston Bachelard, L'activité rationaliste de la physique contemporaine (Paris: Presses Universitaires de France, 1951), 6.

1— Aspects of Hempel's Philosophy of Science

1. Carl G. Hempel, Philosophy of Natural Science (Englewood Cliffs, N.J.: Prentice-Hall, 1966).

2. Leiden: Sijthoff, 1936.

3. P. W. Bridgman, The Logic of Modern Physics (New York: The Macmillan Co., 1927).

4. Rudolf Carnap, Der Logische Aufbau der Welt (Berlin-Schlachtensee: Weltkreis-Verlag, 1928).

5. Karl R. Popper, Logik der Forschung (Vienna: Springer Verlag, 1934).

6. Morris R. Cohen and Ernest Nagel, An Introduction to Logic and Scientific Method (New York: Harcourt, Brace and Co., 1934).

7. In the paper "Studies in the Logic of Explanation" with Paul Oppenheim, first published in Philosophy of Science 15 (1948): 135-175, and now reprinted in Aspects of Scientific Explanation (see note 11 below).

8. P. K. Feyerabend, "How to Be a Good Empiricist," in Philosophy of Science, The Delaware Seminar , ed. Bernard Baumrin, 2 vols. (New York: Interscience Publishers, 1963), 2:9.

9. Sir Isaac Newton, Opticks: Or, a Treatise of the Reflections, Refractions, Inflections and Colours of Light , 3d ed. (London: William and John Innys, 1721), 256.

10. Fundamentals of Concept Formation in Empirical Science , vol. 2, no. 7 of International Encyclopedia of Unified Science (Chicago: University of Chicago Press, 1952).

11. Aspects of Scientific Explanation and Other Essays in the Philosophy of Science (New York: The Free Press, 1965).

12. Stephen Toulmin, in Scientific American (February 1966): 129-133.

13. P. K. Feyerabend, "Explanation, Reduction, and Empiricism," in Minnesota Studies in the Philosophy of Science , ed. H. Feigl and G. Maxwell (Minneapolis: University of Minnesota Press, 1962), 3:28.

14. P. K. Feyerabend, "How to Be a Good Empiricist," 37.

15. Peter Achinstein, "The Problem of Theoretical Terms," in American Philosophical Quarterly 2, no. 3 (July 1965): 193-203.

16. T. S. Kuhn, The Structure of Scientific Revolutions (Chicago: University of Chicago Press, 1962).

2— Science and System: On the Unity and Diversity of Scientific Theory

1. A. N. Whitehead, Adventures of Ideas (Cambridge: Cambridge University Press, 1933), 203.

2. Galileo Galilei, Dialogues Concerning Two New Sciences , trans. H. Crew and A. De Salvio (New York: Macmillan, 1914), 160.

3. Rudolf Carnap, "Logical Foundations of the Unity of Science," in Otto Neurath, Rudolf Carnap, and Charles Morris, Foundations of the Unity of Science: Toward an International Encyclopedia of Unified Science , 2 vols. (Chicago: University of Chicago Press, 1955-1970), 1:55.

4. Paul Oppenheim and Hilary Putnam, "Unity of Science as a Working Hypothesis," in Minnesota Studies in the Philosophy of Science , ed. Feigl, Scriven, and Maxwell (Minneapolis: University of Minnesota Press, 1958), 2: 3.

5. Carnap, "Logical Foundations," 61.

6. George Perrigo Conger, Synoptic Naturalism (Minneapolis: University of Minnesota Library, 1960), vi.

7. Francis Bacon, Novum Organum , bk. 1, aph. 45.

8. Herbert Spencer, The Genesis of Science (New York: Humboldt Publishing Co., 1887), 14-15.

9. Otto Neurath, "Unified Science as Encyclopedic Integration," in Otto Neurath, Rudolf Carnap, and Charles Morris, Foundations of the Unity of Science: Toward an International Encyclopedia of Unified Science , 2 vols. (Chicago: University of Chicago Press, 1955-1970), 1:20.

10. R. G. Collingwood, Speculum Mentis (Oxford: Clarendon Press, 1924), 191.

11. Herbert Simon, "The Architecture of Complexity," General Systems 10 (1965): 69.

12. Kenneth Boulding, "General Systems Theory—The Skeleton of Science," General Systems 1 (1956): 11.

13. Herbert Spencer, The Genesis of Science (New York: Humboldt Publishing Co., 1887), 34.

14. Henri Poincaré, "Relations entre la physique experimentale et la physique mathématique," in Ch.-Ed. Guillaume and L. Poincaré, Rapports présentés au Congrès International de Physique réuni à Paris en 1900 (Paris: Gauthier-Villars, 4 vols., 1900), 1:24.

3— Gosse's Omphalos Theory and the Eccentricity of Belief

1. Edmund Gosse, Father and Son (New York: Charles Scribner's Sons, 1907), 328.

2. Edmund Gosse, The Life of Philip Henry Gosse, F.R.S., by His Son (London: Kegan Paul, Trench, Trübner and Co., 1890), 72.

3. Edmund Gosse, Father and Son , 6.

4. Titian Peale, a painter of animals, is the only brother he mentions; Rubens and Rembrandt, who earlier had made important contributions to American natural history, were by this time considerably older than Gosse. The father of these three (and of eight other children also named after artists) was Charles Willson Peale, the famous portrait painter.

5. Philip Gosse, Letters from Alabama (U.S.) Chiefly Relating to Natural History (London: Morgan and Chase, 1859). Letter 12 deals with manners in the south, especially with slavery.

6. Edmund Gosse, The Life of Philip Henry Gosse, F.R.S. , 157.

7. Philip Gosse, The Canadian Naturalist. A Series of Conversations on the Natural History of Lower Canada (London: John Van Voorst, 1840), 2.

8. Edmund Gosse, Father and Son , 113.

9. P. H. Gosse (assisted by Richard Hill, Esq., of Spanish-Town), The Birds of Jamaica (London: John Van Voorst, 1847).

10. Philip Gosse, The Romance of Natural History (London: James Nisbet and Co., 1860), 270.

11. Edmund Gosse, The Life of Philip Henry Gosse, F.R.S. , 152.

12. Ibid., 70.

13. Ibid., 72.

14. Edmund Gosse, Father and Son , 97.

15. Ibid., 71.

16. Ibid., 99.

17. Ibid., 84.

18. P. H. Gosse, The Ocean (Philadelphia: Parry and Macmillan, 1856), 101. (The title page bears the inscription, "from the last London edition.")

19. Philip Henry Gosse, A Naturalist's Rambles on the Devonshire Coast (London: John Van Voorst, 1853), 354-357. This was not, after all, quite the discovery Gosse thought it. Johnstonella was not a new genus, but a subgenus of Tomopteris , which had been named in 1825 by Eschscholtz. The species catharina is still recognized by some workers, although Gosse's drawing and description are too vague to provide clear identification, and the name helgolandica attached to a later and more accurate description by Greeff is more usual. As a result, what Gosse hoped would be called Johnstonella catharina is in fact called Tomopteris helgolandica —a disappointing sequel to so magnanimous a gesture. (I am indebted for the foregoing information to Mr. Frederick M. Bayer, Acting Curator of the Division of Marine Invertebrates, Smithsonian Institution.)

20. Edmund Gosse, The Life of Philip Henry Gosse, F.R.S. , 376.

21. Edmund Gosse, Father and Son , 346.

22. Edmund Gosse, The Life of Philip Henry Gosse, F.R.S. , 367.

23. Philip Henry Gosse, Omphalos: An Attempt to Untie the Geological Knot (London: John Van Voorst, 1857), 5.

24. A Greek word meaning "navel." The epigraph to Omphalos is from Aristotle's Historia Animalium, book 7.8, and in D'Arcy Wentworth Thompson's translation it reads: "All animals, or all such as have a navel, grow by the navel." The idea is, clearly, to make an analogy between Adam as the microcosm, whose navel pointed to a birth which never took place, and the earth as the macrocosm, whose fossils similarly are signs of an unreal past; but this comparison is not taken up seriously in the book, there being only two casual references to the navel at pp. 289 and 334. One might therefore look for a deeper significance in the title, in keeping with various secondary uses of the Greek term, such as its application to the stone at Delphi which was supposed to represent the center of the earth. But Gosse's epigraphs, like his scriptural quotations, are often disappointingly irrelevant, and on the whole it seems unlikely that there is any more to the title than the obvious meaning referred to above.

25. Philip Gosse, Omphalos , vii-viii.

26. Edmund Gosse, Father and Son , 116.

27. Philip Gosse, Omphalos , 103-104.

28. Ibid., 110.

29. Ibid., 122.

30. Ibid., 123.

31. John Donne, Essays in Divinity , ed. E. M. Simpson (Oxford: Clarendon Press, 1952), 18.

32. Philip Gosse, Omphalos , 124-125.

33. Ibid., 126.

34. Ibid., vi.

35. Ibid., 372.

36. Ibid.

37. Ibid., 369.

38. Edmund Gosse, The Life of Philip Henry Gosse, F.R.S. , 280.

39. Ibid., 280-281.

40. Ibid., 281.

41. John Henry Cardinal Newman, Apologia pro Vita Sua: Being a Reply to a Pamphlet Entitled "What, Then, Does Dr. Newman Mean?" (London: Longman, Green, Longman, Roberts, and Green, 1864), 120.

42. Edmund Gosse, Father and Son , 118.

43. Edmund Gosse, The Life of Philip Henry Gosse, F.R.S. , 281.

44. Philip Henry Gosse, The Romance of Natural History, Second Series (London: James Nisbet and Co., 1861), 89.

45. Edmund Gosse, The Life of Philip Henry Gosse, F.R.S. , 349.

5— The Paradox of Induction and the Inductive Wager

1. J. M. Keynes, A Treatise on Probability (London: Macmillan, 1921), 272.

2. David Hume, An Inquiry Concerning Human Understanding (New York: The Liberal Arts Press, 1955), 48.

3. Ibid., 51-52.

4. Ibid., 48.

5. Ibid., 52.

6. R. F. Harrod, Foundations of Inductive Logic (London: Macmillan, 1956), passim.

7. Donald Williams, The Ground of Induction (Cambridge, Mass.: Harvard University Press, 1947), 21.

8. Ibid., chap. 1, passim.

9. Blaise Pascal, Pensées , trans. W. F. Trotter (New York: Random House [Modern Library], 1941), p. 84, no. 235.

10. Ibid., no. 234.

11. Williams, The Ground of Induction , 62.

12. Hans Reichenbach, Experience and Prediction (Chicago: University of Chicago Press, 1938), 350.

13. J. O. Wisdom, Foundations of Inference in Natural Science (London: Methuen, 1952), 266.

14. Reichenbach, Experience and Prediction , 348.

15. Wisdom, Foundations of Inference , chap. 24.

16. Hume, Human Understanding , sect. 4, pt. 2.

17. R. G. Collingwood, An Essay on Metaphysics (Oxford: Clarendon Press, 1940), passim.

18. Reichenbach, Experience and Prediction , 363.

19. Wisdom, Foundations of Inference , 229.

20. Pascal, Pensées , p. 79, no. 230.

21. Hans Reichenbach, "The Logical Foundations of the Concept of Probability," trans. Maria Reichenbach, in Readings in the Philosophy of Science , ed. Herbert Feigl and May Brodbeck (New York: Appleton-Century-Crofts, 1953), 466.

22. Wisdom, Foundations of Inference , 226.

This article is dedicated to the memory of Norwood Russell Hanson, vice-president of AAAS section L in 1961-1962 and for many years secretary of the section.

6— The Structure of Discovery

1. Karl Popper, The Logic of Scientific Discovery , new ed. (New York: Harper, 1965 [original German ed., 1934]), 31.

2. Peter Caws, "Three Logics, or the Possibility of the Improbable," Philosophy and Phenomenological Research 25 (1965): 522. (Appears as chapter 8 in this work.)

3. Popper, The Logic of Scientific Discovery , 32.

4. A. Koestler, The Act of Creation (London: Hutchinson, 1964), pt. 2.

5. Charles Darwin, in The Life and Letters of Charles Darwin , ed. F. Darwin, new ed. (Basic Books: New York, 1959 [original ed., 1888]), 83.

6. See for example G. Polya, Patterns of Plausible Inference , vol. 2 of Mathematics and Plausible Reasoning (Princeton, N.J.: Princeton University Press, 1954).

7. G. Frege, "Begriffsschrift," in From Frege to Gödel , ed. J. Van Heijenoort (Cambridge, Mass.: Harvard University Press, 1950), 5.

8. N. R. Hanson, Patterns of Discovery (Cambridge, England: Cambridge University Press, 1958), 70.

9. P. B. Medawar, The Art of the Soluble (London: Methuen, 1967).

10. F. Bacon, The New Organon , new ed. (New York: Liberal Arts Press, 1960 [original ed., 1620]).

11. Charles Darwin, The Origin of Species , new ed. (New York: Modern Library, n.d. [original ed., 1859]).

12. Darwin, in Life and Letters , 68.

13. E. Jones, "The Nature of Genius," Scientific Monthly 84 (1957): 75.

14. Darwin, in Life and Letters , 82.

15. Sir Isaac Newton, letter to Robert Hooke (1676).

16. R. K. Merton, "The Role of Genius in Scientific Advance," New Scientist 12 (1961): 306.

17. P. B. Medawar, The Art of the Soluble (London: Methuen, 1967).

18. Charles S. Peirce, Values in a Universe of Chance , ed. Philip P. Wiener (Garden City, N.Y.: Doubleday, 1958), 255.

19. D. E. Berlyne, "Curiosity and Exploration," Science 153 (1966): 25.

20. Claude Lévi-Strauss, The Savage Mind , new ed. (Chicago: University of Chicago Press, 1966 [original French ed., 1962]), 9.

21. M. Bunge, The Search for System , vol. 1 of Scientific Research (New York: Springer, 1967), 345.

7— Induction and the Kindness of Nature

1. Grover Maxwell, "Induction and Empiricism," in Induction, Probability, and Confirmation , ed. Grover Maxwell and Robert M. Anderson, Jr., Minnesota Studies in the Philosophy of Science, 6 (Minneapolis: University of Minnesota Press, 1975), 106-165.

2. Ibid., 107.

3. Ibid., 106.

4. Peter Caws, "The Paradox of Induction and the Inductive Wager," Philosophy and Phenomenological Research 22, no. 4 (June 1962): 512-520. (Appears as chapter 5 of this work.)

5. Maxwell, "Induction and Empiricism," 125.

6. Ibid., 129-130.

7. Ibid., 134.

8. Ibid., 136.

9. Ibid., 150.

10. Ibid.

11. Charles S. Peirce, Values in a Universe of Chance , ed. Philip P. Wiener (Garden City, N.Y.: Doubleday, 1958), 370-371.

12. Ibid., 372-373.

13. David Hume, An Inquiry Concerning Human Understanding (New York: The Liberal Arts Press, 1955), 52.

14. Ibid., 60.

15. Ibid., 67-68.

16. Maxwell, "Induction and Empiricism," 112.

17. Ibid., 157.

18. Peter Caws, "Mach's Principle and the Laws of Logic," in Induction, Probability, and Confirmation , ed. Grover Maxwell and Robert M. Anderson, Jr., Minnesota Studies in the Philosophy of Science, 6 (Minneapolis: University of Minnesota Press, 1975), 487-495. (Appears as chapter 9 of this work.)

19. Ibid., 491-492.

20. Cf. Peter Caws, "The Structure of Discovery," Science 166 (Dec. 12, 1969): 1375-1380. (Appears as chapter 6 of this work.)

21. Ibid.

8— Three Logics, or the Possibility of the Improbable

1. Charles Sanders Peirce, Collected Papers of Charles Sanders Peirce , ed. Charles Hartshorne and Paul Weiss (Cambridge, Mass.: Harvard University Press, 1931-1935), 1:306.

2. G. H. Hardy, A Mathematician's Apology (Cambridge, 1941).

3. William and Martha Kneale, The Development of Logic (Oxford: Clarendon Press, 1962), 742.

4. Pierre-Simon Laplace, Essai philosophique sur les probabilités (Paris: Gauthier-Villars, 1921), 3.

5. C. G. Hempel and Paul Oppenheim, "The Logic of Explanation," in Readings in the Philosophy of Science , ed. Herbert Feigl and May Brodbeck (New York: Appleton-Century-Crofts, 1953).

6. Peirce, Collected Papers 7:131.

7. Ibid., 6:86.

8. Ibid., 6:324.

9. F. Hoyle, The Nature of the Universe (New York: New American Library, 1950), chap. 7, passim.

10. Peirce, Collected Papers 1:148.

11. M. Heidegger, An Introduction to Metaphysics, trans. Ralph Manheim (New Haven: Yale University Press, 1959), 1.

12. Peirce, Collected Papers 1:148.

13. Ibid., 174.

14. Jean-Paul Sartre, Being and Nothingness, trans. Hazel Barnes (New York: Philosophical Library, 1956), 21ff.

9— Mach's Principle and the Laws of Logic

1. Peter Caws, "'. . . Quine/Is Just Fine,'" Partisan Review 34, no. 2 (Spring 1967): 302.

2. See in this connection Charles Hartshorne, "Some Empty Though Important Truths," in American Philosophers at Work, ed. Sidney Hook (New York: Criterion Books, 1959), 225ff.

10— A Quantum Theory of Causality

1. Michael Scriven, "The Concept of Cause," Abstracts of Contributed Papers (Stanford, Calif.: International Congress for Logic, Methodology, and Philosophy of Science, 1960).

2. Bertrand Russell, Human Knowledge, Its Scope and Limits (London: George Allen and Unwin, 1948), 333.

3. Pierre-Simon Laplace, Essai philosophique sur les probabilités (Paris: Gauthier-Villars, 1921), 3.

4. Russell, Human Knowledge, 334.

5. Laplace, Essai philosophique, 3.

6. David Hume, An Inquiry Concerning Human Understanding (New York: The Liberal Arts Press, 1955), 48.

7. Alfred Landé, "Non-Quantal Foundations of Quantum Theory," Philosophy of Science 24 (1957): 309.

8. Immanuel Kant, Critique of Pure Reason, trans. N. Kemp Smith (London: Macmillan, 1956), 50.

9. W. Ross Ashby, An Introduction to Cybernetics (New York: John Wiley, 1958), 28.

12— Science, Computers, and the Complexity of Nature

1. Isaac Newton, Mathematical Principles of Natural Philosophy, trans. Andrew Motte and Florian Cajori (Berkeley: University of California Press, 1947), 398.

2. Pierre-Louis Moreau de Maupertuis, "Essai de Cosmologie," Oeuvres de Maupertuis, 4 vols. (Lyon: Jean-Marie Bruyset, 1768), 1:42-43.

3. W. Ross Ashby, An Introduction to Cybernetics (New York: John Wiley, 1958), 5.

4. See for example Karl Popper, The Logic of Scientific Discovery (New York: Basic Books, 1959), chap. 7; J. O. Wisdom, Foundations of Inference in Natural Science (London: Methuen, 1952), chap. 7; Sir Harold Jeffreys, Scientific Inference (Cambridge: Cambridge University Press, 1957), sect. 2.7; Nelson Goodman, "Axiomatic Measurement of Simplicity," Journal of Philosophy 52, no. 24, etc.

5. W. Ross Ashby, "General Systems Theory as a New Discipline," General Systems, Yearbook of the Society for General Systems Research 3 (Ann Arbor, Michigan, 1958): 5.

13— Praxis and Techne

1. Aristotle, Parts of Animals 645a20.

2. Plato, Gorgias 465a.

3. The fact that, as Carl Mitcham has pointed out, Aristotle uses technologia to mean "grammar"—the techne of the logos—does not invalidate this argument, which rests on current usage in English. Etymological analyses are helpful because they show how terms are articulated and sometimes how they have changed, not because classical usage supports our own.

4. Aristotle, Nicomachean Ethics, 1098a28.

5. Friedrich Karl Forberg, Manuel d'érotologie classique, trans. Alcide Bonneau (Paris: Au Cercle du Livre Précieux, 1959), 1:6. Forberg contrasts the treatise Dodecatechnon with the courtesan (Cyrene) known as "Dodecamechanos," because the former talks about the twelve positions while the latter knew how to practice them—a further reinforcement of the distinction between techne and praxis and an illustration as well of the classic forerunner of the notion of the machine. Whether (as some commentators maintain—see V. de Magalhaes-Vilhena, Essor scientifique et technique et obstacles sociaux à la fin de l'antiquité [Paris, n.d.]) this use of mechanos meant that the courtesans practiced the art of love in a "mechanical" way, or whether, which seems more likely, it reflects a conception of the human body as a kind of living machine, Forberg does not say.

6. Karl Marx, Capital: A Critique of Political Economy, trans. Ben Fowkes (New York: Vintage Books, 1977), 247ff.

7. Mao Tse-tung, "On Practice," in Four Essays on Philosophy (Peking: Foreign Languages Press, 1966), 14.

8. Benedict de Spinoza, De intellectus emendatione, trans. A. Boyle, published with the Ethica (London: J. M. Dent and Sons, 1910), 236.

9. Alfred