6—
The Structure of Discovery

This article is dedicated to the memory of Norwood Russell Hanson, vice-president of AAAS section L in 1961–1962 and for many years secretary of the section.

It has been widely held that, while logical analysis is appropriate to the justification of claims to scientific knowledge, such knowledge being expressed in hypotheses having empirical consequences, it is not appropriate to an inquiry into the way in which such claims originate. Questions about origins are said to belong to the "context of discovery" rather than to the "context of justification," and to require a different kind of logic. The devising of hypotheses is ascribed to genius, intuition, imagination, chance, or any number of other extralogical processes; it comes to be regarded as a paradigm case of science in its authentic natural state, inaccessible to logical reconstruction by philosophers who do not really know what it is like to be a scientist.

One of the tactics most often used by proponents of the mystique of genius, who are always bandying about terms like creativity, insight, ripeness, and so on, is the recounting of tales about moments of enlightenment in the lives of the great scientists. Everybody has heard of Kekulé's dream about the snakes biting one another's tails, and of Poincaré's long bout with the Fuchsian functions on his geological bus trip through Normandy. Such stories no doubt give an accurate account of what "really happened"; they are suitably sensitive to the "actual development" of scientific theories. But to draw attention to them at all in connection with an analysis of the process of discovery seems to me a radical mistake. The mistake involved shows up clearly in a passage from Popper's The Logic of Scientific Discovery, where he says, "The initial stage, the act of conceiving or inventing a theory, seems to me neither to call for logical analysis nor to be susceptible of it. The question how it happens that a new idea occurs to a man—whether it is a musical theme, a dramatic conflict, or a scientific theory—may be of great interest to empirical psychology; but it is irrelevant to the logical analysis of scientific knowledge."[1]

Popper thus dismisses the possibility of a logical analysis of the conception or invention of a theory because he thinks of these things in terms of "how it happens." But in the case of deductive argument nobody would think of asking how it happens; it would be the structure of the process, not its particular embodiment in a particular individual, that would be seen by everybody to be the crucial issue. In fact, in demonstrative argument just as in the process of discovery, there would be nothing strange in its not happening at all—the actual movement from the premises to a conclusion is just as intuitive, creative, and so on as the actual having of a new idea, and very stupid or very stubborn people, like the tortoise in Lewis Carroll's fable, may quite well decline, or be unable, to make it—but the fact that it failed to happen would not alter in any way the logical structure of the relationship between premises and conclusion. Even if one wished to maintain that, in the case of discovery, there are not any identifiable premises (or even any premises at all—a strategy I have explored elsewhere[2]), one could still choose to regard the process as in principle intelligible rather than unintelligible; what is disturbing about the passage from Popper is that he seems to opt for the latter. In fact he says explicitly, "My view may be expressed by saying that every discovery contains 'an irrational element,' or a 'creative intuition,' in Bergson's sense."[3]

My point is that if this is to be said of the process of discovery it may just as well be said of the process of strict logical deduction, so we might add to the canon exciting tales about that activity too. I hope I may be forgiven an autobiographical example to try out this parallel. I remember very clearly the moment when, as a schoolboy, I first understood the principle of linear simultaneous equations. The circumstances are engraved in my memory just as clearly as Poincaré's foot on the step of the bus became engraved in his; it was in the yard of my school, and I remember the red brick wall, the bicycle racks, and so on, in proper Proustian fashion. I saw, in a flash of intuition, why two equations were needed for two unknowns, and how the substitution from one equation into the other proceeded. Now, as I need hardly say, there was no question of originality here; I had had all the information for a number of weeks, during which my mathematics teacher had been trying to pound the principle into my head. As far as that goes, it wasn't that I couldn't do simultaneous equations—I could follow all the rules and get the right answer; it was just that I hadn't seen the underlying rationality of the process. When I finally saw it I got the "Eureka feeling," of which Koestler speaks,[4] just as surely as if I had invented simultaneous equations myself, but I didn't suppose that that had anything to do with the logic of the situation.

The trouble with "Eureka!" is that the temptation to shout it is a very poor index of success in the enterprise at hand. Such a feeling can only be a by-product of the process—a not unimportant one, perhaps, from some evolutionary point of view, but certainly a dispensable one. A discovery would still be a discovery if it were made in cold blood without any such affective concomitant, and if it turned out to be mistaken it would still be mistaken even though the heavens had opened upon the lucky discoverer at the moment of enlightenment. It is perhaps conceivable that somebody might become addicted to the Eureka feeling and, in order to have it as often as possible, try very hard to make many discoveries, some of which might be valid. But scientists have to learn to be wary of emotional commitments to their hypotheses. Darwin says, "I have steadily endeavored to keep my mind free so as to give up any hypothesis, however much beloved (and I cannot resist forming one on every subject) as soon as facts are seen to be opposed to it. Indeed, I have had no choice but to act in this manner, for with the exception of the Coral Reefs, I cannot remember a single first-formed hypothesis which had not after a time to be given up or greatly modified." And he continues, "This has naturally led me to distrust greatly deductive reasoning in the mixed sciences."[5]

Another distinction frequently drawn between the logic of justification and the logic of discovery is that in the former case rules can be given. This is only apparently true; on the one hand, although in principle all deductions can be carried out by a rule-following technique, in practice good logicians and mathematicians are constantly making wild leaps only later justified by rules, if at all, while on the other hand certain workers—notably Polya[6]—have made significant steps in the direction of formulating rules for "plausible inference." Frege was among the first to try to carry out logical deductions strictly according to rule, and he found it extraordinarily difficult, as he testifies in the preface to the Begriffsschrift.[7] If there were no rules of plausible inference, nobody could learn techniques of research, nor could the agencies responsible for funding it have any confidence whatever that the tasks undertaken by researchers would bear fruit. Yet people do learn, and suitably financed campaigns of research (like the Manhattan Project) do regularly produce results. The task is then to find out what is going on, not to dismiss it all as ineffable or mysterious.
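Polya's fundamental pattern of plausible inference gives the flavor of such rules. Set beside its deductive cousin, what changes is the force of the conclusion, not the presence of structure:

$$
\frac{A \rightarrow B \qquad B \text{ true}}{A \text{ more credible}}
\qquad\text{as against}\qquad
\frac{A \rightarrow B \qquad A \text{ true}}{B \text{ true}}
$$

The left-hand schema, faithfully followed, can of course lend credibility to a false hypothesis; it is a rule of the fallible kind discussed below.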

Scientists, as Norwood Russell Hanson points out, "do not start from hypotheses; they start from data."[8] The question, then, is what happens between the data and the hypotheses, taken in that order—not whether a deductive rule can be written to get from the former to the latter, but whether some intelligible structure can be discerned in the transition. I take "intelligible" in this context to be equivalent to "logical"—a procedure which certainly has etymological sanction, even if it means abandoning the narrower sense of "logical," which requires the specification of rules. In fact it need not mean this, if we remember that the use of "logic" in the expression "inductive logic" is a perfectly orthodox one, and that it sanctions a use of "rule" in the expression "inductive rule" which differs considerably in its connotations from the corresponding use in the deductive case. We have come to think of deductive rules as effective procedures, leading with certainty to the right result. In the inductive case, however, we have to get accustomed to rules which lead, with finite probability, to the wrong result. When people say "there could be no rule for making discoveries," they generally have the first sense of the term in mind: there could be no way of being sure of making discoveries. But there might still be sets of rules which, if faithfully followed, would increase the chances of making them. These, as inductive logicians have begun to realize, may include rules of acceptance as well as rules of inference. The manner of their implementation (their relation to rules of practice) needs further study, but it is not my purpose to pursue the question further here.
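The point can even be put computationally. The sketch below is a toy of my own devising—no reconstruction of any actual research procedure—in which the "discovery" is the identification of a hidden linear law, and the single inductive rule in play is the weeding out of conjectures refuted by an observation. The rule raises the chance of success roughly tenfold, and it never guarantees it.

```python
import random

# A toy model, not a reconstruction of real inquiry: the "discovery"
# is guessing a hidden linear law y = a*x + b, with a and b drawn
# from a known range of integers. The fallible inductive rule is:
# discard any conjecture the observed datum refutes, then guess
# among the survivors.

random.seed(0)
COEFFS = range(-5, 6)  # admissible integer coefficients

def trial(use_rule):
    a, b = random.choice(COEFFS), random.choice(COEFFS)  # the hidden law
    x0 = random.randrange(1, 10)                         # one observation
    y0 = a * x0 + b
    candidates = [(p, q) for p in COEFFS for q in COEFFS]
    if use_rule:
        # the rule: keep only conjectures consistent with the datum
        candidates = [(p, q) for (p, q) in candidates if p * x0 + q == y0]
    return random.choice(candidates) == (a, b)  # a guess either way

for use_rule in (False, True):
    hits = sum(trial(use_rule) for _ in range(20_000))
    print(f"rule applied: {use_rule}  success rate: {hits / 20_000:.1%}")
```

Followed to the letter, the rule still crowns a wrong conjecture most of the time; what it does is shorten the odds, which is all that was claimed for rules of this kind.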

A Model for Discovery

How do hypotheses arise? The answer I wish to suggest is that, strictly speaking, they arise naturally; hypotheses are to be accounted for in the same manner as the events they seek to explain—indeed the hypothesis that this is so has arisen in this way. The evidence for this hypothesis is of course far from conclusive; while I think it preferable to any alternative which calls upon nonnatural occurrences, it would admittedly be difficult to show that no such occurrences were involved in the process (just as it would be difficult to show this for deductive arguments). But if a model can be constructed within which the emergence of hypotheses follows obviously from other properties of the model, the nonnatural element will be shown to be dispensable, just as it might be shown to be dispensable in deductive arguments by remarking that anybody can follow the rules.

Such a model can, I think, be put together from a number of disparate sources. It shows that, given certain facts about human beings and human cultures, there is nothing odd about the emergence of science or about the rate of its development, or about the fact that some of those who have contributed to this development have been geniuses. The model, it is true, gives the main part of its account in collective rather than in individual terms—but that has now become commonplace, since the analysis of individual discoveries has shown that, in practically every case, the individual acted as the catalyst for a complex process in which many other individuals played a role. This need not be taken to mean that no credit is due the individual for having advanced a particular science in a particular way at a particular time, but it does mean that (probably) no individual has been indispensable to the advance of science in general. "Very simple-minded people think that if Newton had died prematurely we would still be at our wits' end to account for the fall of apples," says Medawar.[9] We must be able to find a way of reconciling our admiration for Newton with the avoidance of this mistake.

I make no apology for beginning my exposition of this theory of discovery with Bacon, whose method has, I believe, been misunderstood in important respects. The feature of the method which has always struck me most forcibly occurs in book 2 of the Novum Organum,[10] where, after the construction of the inductive Tables, Bacon says (aphorism xx),

And yet since truth will sooner come from error than from confusion I think it expedient that the understanding should have permission, after the three Tables of First Presentation (such as I have exhibited) have been made and weighed, to make an essay of the Interpretation of Nature in the affirmative way; on the strength both of the instances given in the Tables, and of any others it may meet with elsewhere. Which kind of essay I call the Indulgence of the Understanding or the Commencement of Interpretation or the First Vintage.

This is strikingly similar to Darwin's remark in the introduction to The Origin of Species, where he says, "It occurred to me, in 1837, that something might perhaps be made out on this question by patiently accumulating and reflecting on all sorts of facts which could possibly have any bearing on it. After five years' work I allowed myself to speculate on the subject."[11] He remarks elsewhere[12] that he worked on "true Baconian principles," a claim which is denied by a number of commentators who have not read Bacon as closely as Darwin himself evidently did. There is a hint of the same kind of thing in Frege's concern not to jump to conclusions in the course of his logical work.

The truth to which I think these and other citations point is that the practical problem is often one not so much of finding hypotheses as of holding them in check. Bacon's use of a word like "indulgence," and Darwin's of the phrase "I allowed myself," suggest that, once the evidence is in, there is simply no need of a rule for getting the hypothesis—it has long since formed and is only waiting to be recognized. (Remember Darwin's comment: "I cannot resist forming one on every subject.") But two questions immediately present themselves: By what mechanism of thought did the hypothesis come into being? And, if it is a natural process, why isn't everybody a genius? (It was Bacon's failure to recognize that everybody is not a genius which constituted the chief weakness in his program for making the methods of science available to the population at large.)

As for everybody's not being a genius, the answer may be that everybody above a certain level of natural intelligence in principle is, until inhibiting factors supervene—which almost always happens. It may be worth making a more general point here about a habit of thought into which philosophers of science sometimes fall—a habit due largely, I suspect, to the influence of Hume's analysis of causality. We think of events as in general being made to happen (and ask what antecedent events produced them), rather than as just happening (in which case the relevant question would be what antecedent events, by failing to happen, failed to prevent them). It is noticeable however that, when scientists perform delicate experiments, they expend their energy not on making sure that the desired outcome occurs but on trying to make sure that some undesirable outcome does not occur; they take experimental precautions against Nature, rather than giving experimental encouragement to Nature. Similarly, when engaged in logical argument we don't really need a rule to tell us how to proceed; what we chiefly need is a kind of single-minded concentration that keeps out irrelevant thoughts, and a facility for spotting wrong moves. The motive power of the enterprise doesn't come from the rules—they just keep it on the rails. Rules, it is true, can play a leading rather than a guiding part when the motive power is comparatively unintelligent, as in computers, but the critical thing seems to be to let the machinery run. This view is fully in keeping with the fact, frequently remarked upon, that the process of discovery may be unconscious: the scientist wakes up the next morning—or, in stubborn cases like Poincaré's, a week or so later—with the required solution. Whether or not all the steps are conscious is irrelevant to the question of whether or not they are logical.

If we are to admit biographical evidence, the point about inhibiting factors (and, on the other side of the coin, stimulating ones) may be illustrated by the fact that many geniuses have been characterized by a strong resistance to authority (that is, resistance to having their conclusions drawn for them) and, at the same time, by an openness to random suggestion amounting almost to credulity. Ernest Jones[13] observes this with respect to Freud, and Darwin[14] observes it with respect to himself. Ordinary social experience, and especially education, work, of course, in precisely the opposite sense, imposing, even in the most well-meaning of democracies, an extraordinarily authoritarian view of the world and, at the same time, encouraging the belief that people should be selective about what they take in, and skeptical about all evidence from nonauthoritarian sources. These tendencies alone would be enough to account for the inhibition of discoveries in all but a handful of the population at any given time.

The hypothesis emerges naturally only when all the evidence is in—the conclusion follows only from a complete or almost complete set of premises. I add "almost complete" because there is a powerful Gestalt phenomenon to be observed here: closure is sometimes procured by the addition of a premise which is the obviously missing one, the only one which fits in with the rest of the pattern. Often, however, not even this much is required. All the premises for the hypothesis of the origin of species through natural selection were present both for Darwin and for Wallace, and, once they had them all (including the indispensable contribution from Malthus), they both got the point at once. Now there is of course no effective way of ever being sure that one has all the premises. But in this respect, also, the logic of discovery is in precisely the same boat as deductive logic: the rules there do not yield the premises either, they only yield the conclusion once the premises have been provided.

What are the premises which lead to a scientific discovery? Where do they come from? At this point, in the literature, the search for a logic of discovery frequently gets thrown off the scent by the insertion of a great deal of irrelevant talk about motivation, perplexity, or crisis; it is thought necessary to point out that discoveries do not happen if there is not some problem with the science we already have. This kind of thing is not only confusing but downright misleading. It suggests, again, a spurious difference between deductive logic and the logic of discovery. In fact, of course, nobody would carry out deductions either if there were not some reason to do so—and if that reason often amounts to nothing more than a passion for mathematics, having no direct relevance to the solution of any practical problem, a similar passion for investigation into nature has accounted for a great deal of inductive progress too.

The premises in question are of two principal kinds: on the one hand there are theories and observations made and confirmed by previous workers, and, on the other, observations not adequately covered by earlier theories, made by or communicated to the discoverer. The discovery consists, of course, in the provision of an adequate theory to cover these new observations. Premises of the former kind are part of the inheritance of the scientist, though finding them may involve a search of the literature. Those of the latter kind may come from plain observation or from experiment; they may come into the possession of the scientist quite by accident, in a disguised form, and so on. It is at this stage—in the provision of the premises, rather than in the structure of the argument—that the notorious uncertainty of the process of discovery arises, that serendipity plays a part, and so on.

By far the most important contribution, however, is made by what I have spoken of as the scientist's "inheritance," although it might be better to use the genetic term rather than the legal one and speak instead of "heredity." Newton's celebrated remark about "standing on the shoulders of giants"[15] reminds us that the development of science is a stepwise process; nobody starts from scratch, and nobody gets very far ahead of the rest. At any point in history there is a range of possible discovery; the trailing edge of the range is defined by everything known at the time (I overlook here the fact that people are constantly "discovering" what is already known, which blurs this edge somewhat), and the leading edge is a function of what is already known, together with variables representing available instrumentation, the capacity of human brains, and so on. But, within the range, not all movement is forward—quite the contrary. While the mind moves with a kind of subjective conviction and (as it persuades itself) unerringly to its inductive conclusion, that conclusion is not always the discovery it is thought to be. There may be several reasons for this: the "discovery," if it fits the facts, may have been made before; if it does not fit them, that may be because there are still, without the scientist's knowing it, some missing premises (some fact not known, some previously established theory not taken into account), or it may be just because someone has made a mistake. In order to get a clear picture of scientific discovery the account has to be broadened somewhat to take into consideration the population of scientific workers at the time, together with the nature of the development of science. The best analogy for this development is again a genetic one: Just as mutations arise naturally but are not all beneficial, so hypotheses emerge naturally but are not all correct. If progress is to occur, therefore, we require a superfluity of hypotheses and also a mechanism of selection. At any given epoch in the development of science—to deal with the first requirement first—hypotheses are in fact emerging at a much higher rate than one might suspect from reading subsequent historical accounts. We all know about Darwin and Wallace, for example; but how many of the hundreds of other well-meaning naturalists of the middle nineteenth century, all tackling the problem of the persistence or mutability of species, are now remembered?

It may be useful in this connection to draw attention to a well-known phenomenon which is more relevant to the development of science than most of us perceive it to be—namely, the phenomenon of the crackpot. We are accustomed to thinking of the advancement of science in terms of the half dozen great names in a given field; on reflection we may see that these half dozen are supplemented by a thousand or so working in more obscure laboratories. But we should also remember that there are myriads of people speculating, generally in a half-informed way, about the same topics from myriads of private vantage points; the occasional wild manifestos we all receive, showing how misguided Darwin and Einstein were, represent a mere fraction of their output. In every epoch something like this has gone on, and the unrecorded history of unsuccessful speculation would swamp completely the history of science as we know it if it could ever be added to the literature. Unsuccessful hypotheses are weeded out, of course, by their failure to square with the facts, or, if they can be made to do that, by their failure to be predictive. But in this connection certain social factors tend to interfere with the evolutionary pattern, just as they do in the biological case. Just as the children of rich families may, under a less than equitable social system, be comparatively better protected against the hostility of the environment than the children of poor ones, so some theories produced under powerful sponsorship may have a longer run than they deserve.

Despite the fact that parallels present themselves so readily, there are a couple of puzzling things about the development of science that make this evolutionary analogy suspect. First of all, there is the fantastic rate of its growth in the last three or four centuries, quite unlike the leisurely pace at which biological adaptation usually proceeds. Second, there is the remarkable fact, documented in the work of Robert Merton and others,[16] that virtually all valid discoveries (let alone incorrect hypotheses) have been made by more than one worker, sometimes by many, while some great scientists appear to have made far more than their fair share of such discoveries. Clearly a random-mutation, Mendelian evolutionary model will not do.

The Evolution of Science

At this point it would be convenient to introduce some statistical analysis (already hinted at by the reference to Merton's work on multiple discoveries) to show how a given frequency of theoretical interest in a population, presumed to yield a rather smaller frequency of correct conjectures—these to be selected by the hostility of the experimental environment towards false theories—would account for the development of science. Unfortunately the necessary statistical apparatus has not been worked out, since statisticians have concentrated their attention on Mendelian genetics, whereas the form of genetic theory required for this purpose is clearly Lamarckian. The accumulated empirical and theoretical knowledge passed on from one generation of scientists to another counts as an acquired characteristic, the fruit of direct adaptation rather than of mutation. To make matters worse, the pattern of reproduction is quite evidently not sexual. I can offer one or two further genetic analogies—for example, it is easy to find parts of theory behaving like dominant characteristics, in that they exclude or subsume alternative views, and others behaving like recessive ones, in that they are passed on with the rest of the inherited material but do not become important until they are conjoined with some other factor—but I have not been able to work out the details of the appropriate model.

Still I think the general evolutionary point holds. Discoveries represent a kind of adaptation which is almost bound to occur in a number of individuals if they are subjected to roughly similar environmental pressures, the environment in this case being an intellectual one. Medawar, in an exchange with Arthur Koestler about the latter's book, The Act of Creation, remarks,

Scientists on the same road may be expected to arrive at the same destination, often not far apart. Romantics like Koestler don't like to admit this, because it seems to them to derogate from the authority of genius. Thus of Newton and Leibniz, equal first with the differential calculus, Koestler says "the greatness of this accomplishment is hardly diminished by the fact that two among millions, instead of one among millions, had the exceptional genius to do it." But millions weren't trying for the calculus. If they had been, hundreds would have got it.[17]

That is as close to backing up the statistical point as I am likely to come for the moment. It is notoriously difficult to confirm counterfactuals of this sort, but there does seem to be a practical sense in what Medawar says, borne out by the tendency of various agencies to bombard scientists with research grants in an expectation of results at least comparable to that of geneticists bombarding Drosophila with gamma rays.
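A back-of-the-envelope version of Medawar's arithmetic can nonetheless be indicated, under crude assumptions of my own: suppose $N$ workers attack a problem independently, each with a small probability $p$ of solving it. The number of independent discoverers is then roughly Poisson-distributed with mean $\lambda = Np$, and the probability that a solved problem was solved more than once is

$$
P(\text{multiple} \mid \text{solved}) \;=\; \frac{1 - e^{-\lambda} - \lambda e^{-\lambda}}{1 - e^{-\lambda}},
$$

which is already about 0.69 for $\lambda = 2$ and climbs quickly thereafter. On such assumptions Merton's multiples are not an anomaly but the expected signature of any problem that enough suitably prepared workers find worth their time.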

I have now sketched the main outlines of a possible model for scientific discovery. But there are two important components still missing—namely, some explanation, on the one hand, of the tendency of the human mind to produce hypotheses at all and, on the other, of the tendency of some great minds to produce many correct ones. Given that hypotheses are in fact produced, in a sufficiently prodigal fashion to provide the grounds for natural selection and consequently for the origin of new theories, how are we to account for the phenomenon? It is not enlightening in this connection to talk about genius. To talk about imagination is a little better, although, as Peirce remarks in an essay on Kepler, "'Imagination' is an ocean-broad term, almost meaningless, so many and so diverse are its species."[18] I have already made reference to stresses from the intellectual environment, suggesting a theory of "necessity as the mother of invention," but that certainly cannot be carried through for a large—perhaps the greater—proportion of scientific discoveries.

Let me deal first with the special point about the disproportionate number of discoveries made by great scientists, and then go on to the more general, and concluding, point about the basic mechanism. Obviously no account which ignores "the distinctive role of scientific genius," as Merton calls it, can be considered satisfactory; but the term genius, meaning originally the spirit assigned at birth to guide a child's destiny, can now be admitted, if at all, only to describe people who have already proved themselves in the business of making discoveries, not to describe some potentiality they had before they started. There are clearly genetic determinants involved, having to do with brain capacity and other characteristics normally distributed in the population, with respect to which the genius will be found to lie under the right-hand shoulder of the bell-shaped curve, but none of them, nor any combination, can be equated with scientific genius, since a lot of similarly endowed people will be found living normal lives as stockbrokers, lawyers, and so on.

Once again, what makes people geniuses has nothing whatever to do with the logic they employ; and the point I wish to stress is that the discoverer needs no special logical endowment, no bag of creative tricks, in order to rise to the little eminence which, in the long historical view, he or she occupies for such a short time. I say "little eminence" not to minimize the respect we owe to genius—from close up, after all, we can properly refer to Einstein as a "towering genius"—but to reinforce the point made earlier about the comparatively narrow range within which at any time scientific discoveries can be made. The formation of a scientific genius, in fact, is comparable to the formation of an Olympic runner, or a tennis or chess champion. The chess analogy is a useful one; chess is, after all, a strictly deductive game, and all it takes to win every time is the ability to do a few billion calculations in the head within the period legally allowed for a move. Imagine a chess game in which there are some concealed pieces, moved by a third player, which influence the possible moves of the pieces on the board, and imagine that, instead of sixteen pieces to a side, there are several million, some governed by rules of play not yet known to the players. In such a game a player who, after a long apprenticeship with the experts, made three or four good moves during a lifetime career would have gained a place in history.

The kind of inference great scientists employ in their creative moments is comparable to the kind of inference the master at chess employs; it involves an ability to keep a lot of variables in mind at once, to be sensitive to feedback from tentative calculations (or experiments), to assess strategies for the deployment of time and resources, to perceive the relevance of one fact to another, or of a hypothesis to facts. The difference between their logic and ours is one of degree, not of kind; we employ precisely the same methods, but more clumsily and on more homely tasks. I wish to conclude by considering some crucial properties of the common logical mechanism with which we are all equipped, which explain, I think, the natural tendency for hypotheses to emerge, and in this connection to call on two diverse kinds of evidence, one from psychology and one from anthropology.

Psychology and Structuralism

On the psychological side, Berlyne has recently drawn attention to a form of behavior among higher animals which he calls "exploration." Under this heading, he says, may be grouped activities describable as "curiosity" and "play," or, in a human setting, as "recreation," "entertainment," "art," or even "science." This kind of activity is not indulged in because of its utilitarian value, although it sometimes has useful by-products. "An animal looking and sniffing around may stumble upon a clue to the whereabouts of food. A scientist's discovery may contribute to public amenity and his own enrichment or fame. Much of the time, however, organisms do nothing in particular about the stimulus patterns that they pursue with such avidity. They appear to seek them 'for their own sake.'"[19] Berlyne offers two lines of explanation for this exploratory activity. One of them is the conventional one of response to necessity, leading to "specific" exploration. The second, and more interesting, at least from the point of view of the problem of discovery, deals with what Berlyne calls "diversive" exploration.

It seems that the central nervous system of a higher animal is designed to cope with environments that produce a certain rate of influx of stimulation, information, and challenge to its capacities. It will naturally not perform at its best in an environment that overstresses or overloads it, but we also have evidence that prolonged subjection to an inordinately monotonous or unstimulating environment is detrimental to a variety of psychological functions. We can understand why organisms may seek out stimulation that taxes the nervous system to the right extent, when naturally occurring stimuli are either too easy or too difficult to assimilate.

It looks, therefore, as if a certain kind of nondirected exploratory behavior is to be expected, both when the exterior world is too exciting (the intellectual withdraws into the ivory tower) and when it is not exciting enough (the explorer sets off to conquer new territories).

Now science is manifestly not the only possible kind of human exploration, even on the intellectual level, and this I think has to be recognized if scientific discovery is to be put in its proper context. The notion that true hypotheses emerge from the welter of speculation by a process of natural selection (the condition of survival being agreement with empirical evidence) can be extended by analogy to the emergence of science itself from a welter of natural mental activity. The final component of my model owes its inspiration to the work of the structuralists, notably Claude Lévi-Strauss, although it is an extension rather than a simple invocation of their views.

Lévi-Strauss observes, from the anthropologist's point of view, a phenomenon exactly analogous to that observed by Berlyne from the psychologist's. Primitive people, along with their totems and their myths, turn out to have an extraordinarily rich lore of a kind that can only be called scientific, since it represents a body of hypotheses about the natural world linked in some primitively acceptable way to a body of observations. This "science of the concrete," as Lévi-Strauss calls it, is not, in his words, "of much practical effect." But then "its main purpose is not a practical one. It meets intellectual requirements rather than or instead of satisfying needs. The real question is not whether the touch of a woodpecker's beak does in fact cure toothache. It is rather whether there is a point of view from which a woodpecker's beak and a man's tooth can be seen as 'going together' . . . and whether some initial order can be introduced into the universe by means of these groupings."[20]

This line of work is one which I think is at the moment of great interest and promise. What emerges from it is a view of mind as a structuring agent, which puts together a world of thought comparable in its complexity to the world of experience, thus satisfying the optimum conditions of mental activity described by Berlyne. The chief agency of structure is, of course, language. Of the various constructions made possible by language, science counts as only one, and initially enjoys no special advantage over myth. But sometimes what it says turns out to be true (the herb really does cure the disease), and although it is a long step from the truth of a report of practice to a genuinely theoretical truth, this realization is the starting point of the process of scientific development. A story told for no other initial purpose than to keep mind in a kind of dynamic balance with the world, to assert it over against the world, turns out to hold the clue to control of the world. Other people continue to tell stories for other purposes, and the accumulation of specialized linguistic habits, specialized techniques, and so on, may soon persuade scientists that they are no longer like the others but engaged on a different quest with its own creative character. It is true that scientists, on the whole, care more than other people do that the stories they tell should be true; but then truth itself is a comparative latecomer on the linguistic scene, and it is certainly a mistake to suppose that language was invented for the purpose of telling it.

Scientific theories are no longer created ex nihilo; the stories scientists tell are not free inventions. If the creative process starts from a very large set of premises already demonstrated to be true, its conclusion has a greater chance of being true than it would have if the process had started, like the conjecture of the primitive, from a random assortment of propositions indifferently true and false. When the conclusion is shown to be true by comparison with the evidence, we call the invention a discovery. ("Formulas are invented," as Bunge puts it, "but laws are discovered."[21]) The major point I have wished to make can be summed up in this way: In the creative process, as in the process of demonstration, science has no special logic but shares the structure of human thought in general, and thought proceeds, in creation as in demonstration, according to perfectly intelligible principles. Formal logic, whose history as a rigorous system started with Frege and ended with Gödel, represents a refinement and specialization of the principles of everyday argument; the logic of scientific discovery, whose rigorous formulation is yet to be achieved (not that it holds out the hope of completeness once entertained by deductive logic), will similarly prove to be a refinement and specialization of the logic of everyday invention. The important thing to realize is that invention is, in its strictest sense, as familiar a process as argument, no more and no less mysterious. Once we get this into our heads, scientific creativity will have been won back from the mystery-mongers.

