
Seven—
Introduction

James J. Sheehan

There has always been a metaphorical connection between technology and nature. People have often looked to the natural world for images to help them understand social, political, and cultural developments, just as they have often imposed on nature images taken from human affairs. The character of natural and mechanical metaphors has changed over time. The boundary between nature and machine has sometimes been redrawn. But it is only recently that we have been able to imagine machines complex enough to be like humans and humans predictable enough to be like machines.

In the ancient and medieval worlds, what David Bolter calls the "defining technology" was not a mechanical device but rather some fairly simple form of manual production. When Plato looked for a metaphor to describe the way the universe functions, he spoke of a "spindle of necessity" with which the fates spin human destiny into the world. In the Timaeus, he compared the creator of the universe to a carpenter and a potter. Only in the seventeenth century, with the rise of a new cosmology, did the most important technological metaphors become mechanical rather than manual. The clock replaced the carpenter's lathe or the potter's wheel as a way of understanding how the universe operated. Like a clock, the universe was thought to be at once complex and comprehensible, a mechanism set in motion by a divine maker who, quite unlike the potter at his wheel, gave his creation the power to operate without his constant supervision.[1]

Just as the Greeks sometimes imagined that the gods had made living things from clay, so people in the mechanical age thought that some living things were no more than complex machines. This view was given its most influential formulation by Descartes, who stands at the intersection of humanity's changing relationship to animals and machines. As we saw in the introduction to Part I, Humans and Animals, Descartes believed that animals were merely automata made of flesh. Even human activities that were not guided by reason could be seen as mechanical: "the beating of the heart, the digestion of our food, nutrition, respiration when we are asleep, and even walking, singing, and similar acts when we are awake, if performed without the mind attending to them." Such opinions would not, Descartes continued, "seem strange to those who know how many different automata or moving machines can be made by the industry of man." But, of course, Descartes did not think that humans were merely mechanical. The possession of reason opened a fundamental gap between humanity and machines—as well as between humanity and animals.[2]

In the eighteenth century, at the same time that some thinkers had begun to wonder if the gap between humans and animals was as fundamental as Descartes insisted, others called into question the basic distinction between humans and machines. The classic expression of this position was L'Homme machine, published by Julien Offray de La Mettrie in 1747. As his title suggests, La Mettrie believed that humans were no less machinelike than animals. "From animals to men," he wrote, "the transition is not extraordinary." In blunt, provocative language, La Mettrie argued that mind and body were equally mechanical: "the brain has muscles to think, as the legs have to walk." But he did not have much to say about how mental mechanisms actually worked. His epistemology, which emphasized the power of imagination, had little connection with his notion of brain muscles. In effect, L'Homme machine was what one scholar has called "a picturable analogy" of the mind, a "modus cognoscendi, designed to promote scientific inquiry, rather than any ultimate knowledge about the nature of things."[3]

In the nineteenth century, a great deal was learned about the body's physiological mechanisms. Research on chemistry and biology enabled scientists to understand how the body functioned like a steam engine—the machine that had replaced the clock as the era's "defining technology." Researchers believed that this knowledge of physiological mechanisms was part of a single scientific enterprise that would embrace the study of all matter, living and nonliving. "Physiology," Wilhelm Wundt maintained, "thus appears as a branch of applied physics, its problems being a reduction of vital phenomena to general physical laws, and thus ultimately to the fundamental laws of mechanics." Such laws, some maintained, explained mental no less than physical phenomena. Edward Youmans, for example, an American disciple of Herbert Spencer, regarded the First Law of Thermodynamics as extending from the farthest galaxies into the hidden recesses of the mind:

Star and nerve-tissue are parts of the system—stellar and nervous forces are correlated. Nay more; sensation awakens thought and kindles emotion, so that this wondrous dynamic chain binds into living unity the realms of matter and mind through measureless amplitudes of space and time.

The chain of being, once a hierarchy of existence forged by divine decree, thus becomes a unified domain subject to the same physical laws.[4]

Within certain limits, the nineteenth century's mechanical metaphors were useful tools for understanding physiological phenomena like respiration and digestion. But these metaphors worked much less well when applied to the mind. To be sure, experimental psychologists learned to measure some kinds of perceptions and responses, and neurologists did important work on the structure of the brain; but it was difficult to imagine mechanisms complex and versatile enough to approximate mental activities. Nor could anyone come close to building a machine that could perform anything but the most rudimentary calculations. In the 1830s, for example, Charles Babbage failed in the attempt to manufacture an "analytical engine" that would be able to "think" mathematically. Because he did not have the technological means to perform the functions he had ingeniously contrived, Babbage remained "a brilliant aberration, a prophet of the electronic age in the heyday of the steam engine."[5]

Until well into the twentieth century, those who believed that the brain functioned mechanically had great difficulty describing just how these mental mechanisms worked. One product of these difficulties was behavioralism, perhaps the dominant school of empirical psychology in the early twentieth century, which denied that the nature of mental states was knowable and concentrated instead on studying behavioral responses to stimuli. In the 1920s, when a new controversy erupted over the concept of "l'homme machine," it was fought out on largely biological rather than psychological grounds. Joseph Needham, who took the La Mettrian position in this debate, acknowledged that this mechanistic view of human nature was no more than a "methodological fiction," even though he believed that "in science, man is a machine; or if he is not, then he is nothing at all." Given the sort of machines that Needham could imagine in 1928, it is not surprising that he could not find one that much resembled the mental world of human beings.[6]

All this changed with the coming of the computer. The theoretical and technological basis for the computer was laid in the 1930s by Alan Turing and others, but it was only after the Second World War that these machines moved from the realm of highly technical speculation to the center of both scientific research and popular culture. Computers, unlike the crude calculating machines of the past, seemed fast, complex, and supple enough to approximate real thought. The gap between mind and machine seemed to be narrowing. For example, in 1949, Norbert Wiener published his influential work on "Cybernetics," which he defined as "the entire field of control and communication theory, whether in the machine or in the animal." That same year, John von Neumann, who had built a rudimentary computer in Princeton, pointed out the similarities between the computer and the brain. Two years later, in an essay entitled "Computing Machinery and Intelligence," Turing predicted that within fifty years there would be machines that could perfectly imitate human intelligence.[7]

So swiftly did the use of computers spread that by the 1960s, they had become our "defining technology," as fundamental to the way we view ourselves and our worlds as Plato's crafts or Descartes's clockwork mechanisms had been to theirs. We now live in the era of "Turing's Man," whose relationship with machines Bolter summarizes in the following telling phrase: "By making a machine think as a man, man re-creates himself, defines himself as a machine."[8]

Computers played an essential role in the formulation of cognitive science, which Howard Gardner defines as the "empirically based effort to answer long-standing epistemological questions—particularly those concerned with the nature of knowledge, its components, its sources, its development, and its deployment." In the last two decades, this way of looking at the mind has tended to replace behavioralism as the most vigorous branch of empirical psychology. In the light of what computer scientists claimed about the way their machines worked, the behavioralists' self-imposed refusal to talk about mental states no longer seemed either necessary or desirable. As George Miller, who observed the shift away from behavioralism, recalled,

The engineers showed us how to build a machine that has memory, a machine that has purpose, a machine that plays chess, a machine that can detect signals in the presence of noise, and so on. If they can do that, then the kind of things they say about machines, a psychologist should be permitted to say about a human being.

Cognitive scientists insisted that they could open the so-called black box, which behavioralists believed concealed the mental connection between stimulus and response. Within this box, many were now convinced, they would find something very much like a computer.[9]

Research on what came to be known as "artificial intelligence" began during the exciting early days of the computer revolution, when the technology's potential seemed without limit. The following statement, from an article published in 1958 by Herbert Simon and Allen Newell, provides a good example of the style and tone with which the new field was announced:

It is not my aim to surprise or shock you. . . . But the simplest way I can summarize is to say that there are now in the world machines that think, that learn and that create. Moreover, their ability to do these things is going to increase rapidly until—in a visible future—the range of problems they can handle will be coextensive with the range to which the human mind has been applied.

La Mettrie had declared that humans were machines but could say little about how these machines worked. Nineteenth-century physiologists could imagine the human body as a "heat engine," which converted fuel into growth and activities. To the advocates of artificial intelligence, the computer finally provided a mechanism comparable to the mind: to them, "intelligent beings are semantic engines—in other words, automatic formal systems with interpretations under which they consistently make sense." Marvin Minsky put the matter much less elegantly when, in what would become an infamous phrase, he defined the mind as a "meat machine."[10]

Artificial intelligence draws on several disciplines and contains a variety of different elements. It is, in the words of one of its practitioners, "a field renowned for its lack of consensus on fundamental issues." For our purposes, two intradisciplinary divisions seem especially important. The first is the difference between "strong" and "weak" artificial intelligence. The former (vigorously exemplified by Newell's essay in this volume) argues for the potential identity of mind and machine; the latter is content with seeing computers as models or metaphors for certain kinds of mental activities. The second division is rather more complex since it involves two different ways of thinking about and building intelligent machines. The one (and again, Newell is a fine example) seeks to define a sequential set of rules through which the computer can approximate—or duplicate—the workings of the mind. This perspective, which draws its philosophical inspiration from Descartes's belief in the possibility of defining certain rules of thought, builds on concepts first developed by Turing and von Neumann. Sherry Turkle calls the other approach to artificial intelligence "emergent AI," which includes "connectionist" models as well as models based on the idea of a "society of mind." Modeled on assumptions about neural operations in the brain, this approach emphasizes parallel operations rather than sequences, the storing and manipulation of individual pieces of information rather than the formulation of general rules. While both branches of artificial intelligence developed at about the same time, the former became dominant in the 1970s and early 1980s, while the latter seems to have become increasingly influential after 1984.[11]

Like sociobiology, artificial intelligence has been attacked from several directions—as philosophically naive, methodologically careless, and politically dangerous. Terry Winograd's essay suggests some of these criticisms, to which we will return in the conclusion. Critics like Winograd argue that fulfilling the promises made during the early days of cognitive science has turned out to be much more difficult than Turing and his contemporaries had believed. Changing the scale and broadening the scope of what a computer can do have involved conceptual and technological problems few of the pioneers envisioned. Nevertheless, no one doubts that computer scientists have made great progress on a number of fronts. Moreover, given the unpredictable course of scientific progress, it is hard to say what is and what is not possible. After all, as two scientists have recently written, computer experts have "the whole future" in which to show that intelligent machines can be created. Doubts about the feasibility of artificial intelligence, therefore, can lead to skepticism, not categorical rejection.[12]

A categorical denial that it will ever be possible to build machines that can think like humans can rest on two foundations. The first requires a belief in some spiritual attribute that separates humans from machines—and in Western culture at least, from animals as well. The second is based not on our possession of a soul but rather of a body: according to this line of argument, which is suggested in Stuart Hampshire's concluding remarks to this section, being human is inseparable from the feelings and perceptions that come from our physical existence. Without that knowledge of our own birth and death, a knowledge central to our conceptions of past, present, and future, any intelligence, no matter how skilled at solving certain sorts of problems, must remain essentially and unalterably different from our own. Melvin Konner makes this point by comparing the machines of the future to the gods of ancient Greece: "incredibly powerful and even capable of many human emotions—but, because of their immortality, ineligible for admission into that warm circle of sympathy reserved exclusively for humans."[13]

