Preferred Citation: Sheehan, James J., and Morton Sosna, editors. The Boundaries of Humanity: Humans, Animals, Machines. Berkeley: University of California Press, c1991. http://ark.cdlib.org/ark:/13030/ft338nb20q/


 
Eleven— Romantic Reactions: Paradoxical Responses to the Computer Presence

The Romantic Machines of Emergent AI

This, of course, is where new directions in artificial intelligence become central to our story. Because emergent AI presents an image of the computer as fundamentally beyond information.

For years, AI was widely identified with the intellectual philosophy and methodology of information processing. Information-processing AI has roots in mathematician George Boole's intellectual world, in logic.[21] It relies on the manipulation of propositions to obtain new propositions and the combination of concepts to obtain new concepts. But artificial intelligence is not a unitary enterprise. It is a stuff out of which many theories can be fashioned. And beyond information processing, there is emergent AI.

Emergent AI is indissociable from parallel computation. In a traditional, serial computer, millions of units of information sit in memory doing nothing as they wait for the central processor to act on them, one at a time. Impatient with this limitation, emergent AI takes "pure" computation as its goal. The whole system is dynamic, with no distinction between processors and the information they process. In some versions of emergent AI, the processors are neuronlike entities connected in networks; in others, they are anthropomorphized societies of subminds. In all cases, they are in simultaneous interaction. The goal, no less mythic in its proportions than the creation of a strand of DNA, is the generation of a fragment of mind. From the perspective of emergent AI, a rule is not something you give a computer but a pattern you infer when you observe the machine's behavior, much as you would observe a person's.

The two AIs have fueled very different fantasies of how to build mind out of machine. If information-processing AI is captured by the image of the knowledge engineer, hungry for rules, debriefing the human expert to embody that expert's methods in algorithms and hardware, emergent AI is captured in the image of the computer scientist, "his young features rebelling, slipping into a grin not unlike that of a father watching his child's first performance on the violin," running his computer system overnight so that the agent within the machine will create intelligence.[22]

The popular discourse about emergent intelligence tends to stress that the AI scientists who work in this paradigm set up experiments in the computer and let them run, not knowing in advance what the interactions of agents within the system will produce. They stress the drama and the suspense.

To train NETalk to read aloud, Sejnowski had given his machine a thousand-word transcription of a child's conversation to practice on. The machine was reading the text over and over, experimenting with different ways of matching the written text to the sound of the spoken word. If it got a syllable right, NETalk would remember that. If it was wrong, NETalk would adjust the connections between its artificial neurons, trying new combinations to make a better fit. . . . NETalk rambles on, talking nonsense. Its voice is still incoherent, but now the rhythm is somewhat familiar: short and long bursts of vowels packed inside consonants. It's not English, but it sounds something like it, a crude version of the nonsense poem "The Jabberwocky." Sejnowski stops the tape. NETalk was a good student. Learning more and more with each pass through the training text, the voice evolved from a wailing banshee to a mechanical Lewis Carroll.[23]
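What such a training loop might look like can be suggested in a few lines of Python. This is a deliberately crude sketch, not Sejnowski's NETalk: the letter windows, toy phoneme labels, and one-layer update rule below are invented stand-ins for a far larger network and corpus.

    import random

    LETTERS = "abcdefghijklmnopqrstuvwxyz_"    # "_" marks word boundaries
    PHONEMES = ["AE", "B", "K", "T", "-"]      # toy phoneme inventory

    # Toy "transcription": (three-letter window, phoneme for the middle letter).
    TRAINING_TEXT = [("_ca", "K"), ("cat", "AE"), ("at_", "T"),
                     ("_ba", "B"), ("bat", "AE")]

    def encode(window):
        # One-hot encode a three-letter window into a flat input vector.
        vec = [0.0] * (3 * len(LETTERS))
        for i, ch in enumerate(window):
            vec[i * len(LETTERS) + LETTERS.index(ch)] = 1.0
        return vec

    # One weight per connection between input units and phoneme units.
    weights = [[random.uniform(-0.1, 0.1) for _ in range(3 * len(LETTERS))]
               for _ in PHONEMES]

    def guess(window):
        x = encode(window)
        scores = [sum(w * xi for w, xi in zip(ws, x)) for ws in weights]
        return scores.index(max(scores))

    # "Reading the text over and over": keep what works, adjust what doesn't.
    for epoch in range(50):
        for window, phoneme in TRAINING_TEXT:
            target = PHONEMES.index(phoneme)
            got = guess(window)
            if got != target:                       # wrong: adjust connections
                for j, xj in enumerate(encode(window)):
                    weights[target][j] += 0.1 * xj  # strengthen right phoneme
                    weights[got][j] -= 0.1 * xj     # weaken the mistaken one

    print([PHONEMES[guess(w)] for w, _ in TRAINING_TEXT])
    # -> ['K', 'AE', 'T', 'B', 'AE'] once the adjustments have converged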

Such descriptions of neural net "experiments" reflect the AI scientists' excitement in doing what feels like "real" laboratory science rather than running simulations. But at the same time that they are excited by the idea of AI as an experimental science, they are drawn to the mystery and unpredictability of what is going on inside of the machine. It makes their material seem more lifelike.



We are far from the language of data and rules that was used to describe the large expert systems of the 1970s. The agents and actors of emergent AI programs are most easily described through anthropomorphization. Take, for example, a very simple case of emergent intelligence, the perceptron, a pattern-recognition machine designed in the late 1950s. In the perceptron, inner agents, each of whom has a very narrow decision rule and access to a very small amount of data, essentially "vote." The perceptron weights their voices according to each agent's past record of success. It is able to take advantage of signals saying whether it has guessed right or wrong to create a voting system where agents who have guessed right get more weight. Perceptrons are not programmed but learn from their own "experiences." In an information-processing system, behavior follows from fixed rules. The perceptron has none. What is important is not what an agent knows but who it knows, its place in a network, its interactions and connections. While information processing begins with formal symbols, perceptrons operate on a subsymbolic and subformal level.

In the brain, damage seldom leads to complete breakdown. It usually leads to a degradation of performance proportional to its extent. Perceptrons show the graceful degradation of performance that characterizes the brain. With disabled "voter-agents," the system still works, although not as well as before. This connection with the brain is decisive for the theorists of the most successful of the trends in emergent AI: connectionism or neural nets.
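The voting scheme, and its graceful degradation, can be illustrated with a small Python sketch. The "voter-agents" and the toy classification task below are hypothetical, chosen only to show the mechanism; this is not a reconstruction of any historical perceptron.

    import random

    random.seed(0)

    # Each "voter-agent" sees a single pixel of a ten-pixel pattern and has a
    # narrow decision rule: vote +1 or -1 on the basis of that pixel alone.
    N_PIXELS, N_AGENTS = 10, 200
    agents = [(random.randrange(N_PIXELS), random.choice([1, -1]))
              for _ in range(N_AGENTS)]
    weights = [0.0] * N_AGENTS   # each agent's voice, earned by past success

    def agent_vote(agent, pattern):
        pixel, sign = agent
        return sign if pattern[pixel] else -sign

    def classify(pattern, disabled=frozenset()):
        tally = sum(weights[i] * agent_vote(a, pattern)
                    for i, a in enumerate(agents) if i not in disabled)
        return 1 if tally >= 0 else -1

    # Toy task: a pattern is labeled +1 when its first half has more on-pixels.
    def make_example():
        p = [random.randint(0, 1) for _ in range(N_PIXELS)]
        return p, (1 if sum(p[:5]) > sum(p[5:]) else -1)

    # Right/wrong feedback: when the overall vote misfires, agents who agreed
    # with the true answer gain weight and agents who disagreed lose it.
    for _ in range(4000):
        pattern, label = make_example()
        if classify(pattern) != label:
            for i, a in enumerate(agents):
                weights[i] += 0.01 * label * agent_vote(a, pattern)

    # Graceful degradation: disable a quarter of the voters and test again.
    test = [make_example() for _ in range(500)]
    for disabled in (frozenset(), frozenset(range(50))):
        ok = sum(classify(p, disabled) == y for p, y in test)
        print(len(disabled), "agents disabled:",
              round(100 * ok / len(test)), "% correct")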

In the early 1960s, the atmosphere in AI laboratories was heady. Researchers were thinking about the ultimate nature of intelligence. The goal ahead was almost mythic: mind creating mind. Perceptrons and perceptronlike systems had their successes and their adherents, as did information-processing approaches with their attempts to specify the rules behind intelligence, or in Boole's language, the "laws of thought."

But for almost a quarter of a century, the pendulum swung away from one computational aesthetic and toward another, toward rules and away from emergence. In its influence on psychology, AI became almost synonymous with information processing. Allen Newell and Herbert Simon posited that the human brain and the digital computer shared a level of common functional description. "At this level, both the human brain and the appropriately programmed digital computer could be seen as two different instantiations of a single species of device—a device that generated intelligent behavior by manipulating symbols by means of formal rules."[24] Newell and Simon developed rule-based systems in their purest form, systems that simulated the behavior of people working on a variety of logical problems. The method and its promise were spelled out in the Newell and Simon physical symbol system hypothesis:

A physical symbol system has the necessary and sufficient means for general intelligent action. By necessary we mean that any system that exhibits general intelligence will prove upon analysis to be a physical symbol system. By sufficient we mean that any physical symbol system of sufficient size can be organized further to exhibit general intelligence.[25]

Thus, simulations of what came to be called "toy problems" promised more: that mind could be built out of rules. But the ideas of information processing were most successful in an area where they fell far short of building mind. This was in the domain of expert systems. With the worldly success of expert systems in the 1970s, the emphasis was taken off what had been most mythic about the AI of the 1950s and early 1960s and placed on what computer scientists had learned how to do with craftsman's confidence—gather rules from experts and code them in computer programs.
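A toy example may make the craft concrete. The Python fragment below sketches the general shape of a rule-based system of this kind: a handful of invented if-then rules and a naive forward-chaining loop. The rules and the medical-sounding domain are placeholders, not any actual 1970s expert system.

    # Hypothetical rules "debriefed" from an expert and coded as if-then pairs.
    RULES = [
        ({"fever", "cough"}, "flu_suspected"),
        ({"flu_suspected", "short_of_breath"}, "refer_to_physician"),
        ({"rash"}, "allergy_suspected"),
    ]

    def forward_chain(facts):
        # Naive forward chaining: fire every rule whose conditions are all
        # present, add its conclusion, and repeat until nothing new follows.
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in RULES:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    print(forward_chain({"fever", "cough", "short_of_breath"}))
    # The fact base now includes "flu_suspected" and "refer_to_physician".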

However, I have noted that in the late 1970s and early 1980s, the pendulum swung again. There was new, powerful, parallel hardware and new ideas about how to program it. The metaphors behind programming languages shifted. They were no longer about lists and variables but about actors and objects. You could think about traditional programming by analogies to the step-by-step instruction of a recipe in a cookbook. To think about the new object-oriented programming, the analogy had to be more dynamic: actors on a stage. With these changes came a rebirth of interest in the concept of neural nets, reborn with a new capturing mnemonic, connectionism.
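The shift in metaphor can be seen in miniature in the two Python fragments below, which compute the same average first as a step-by-step recipe and then as a cast of interacting objects. Both fragments are invented for illustration; neither is drawn from the languages of the period.

    # Recipe style: one central procedure works through steps, one at a time.
    def average_recipe(readings):
        total = 0.0
        for r in readings:
            total += r
        return total / len(readings)

    # Actor/object style: each sensor is an object holding its own state and
    # answering messages; the result emerges from their exchange.
    class Sensor:
        def __init__(self, reading):
            self.reading = reading

        def report(self):
            return self.reading

    class Averager:
        def __init__(self, sensors):
            self.sensors = sensors

        def poll(self):
            reports = [s.report() for s in self.sensors]
            return sum(reports) / len(reports)

    readings = [18.0, 21.5, 19.2]
    print(average_recipe(readings))                        # 19.566...
    print(Averager([Sensor(r) for r in readings]).poll())  # same answer,
                                                           # different metaphor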

More than anyone else, Douglas Hofstadter captured the aesthetic of the new movement when he spoke about computation "waking up from the Boolean dream."[26] For connectionists, that dream had been more like a nightmare. Like the romantics, connectionists seek to liberate themselves from a constraining rationalism of program and rules. They take pride in the idea that the artificial minds they are trying to build have an aspect that, if not mystical, is at the very least presented as mysterious. From the point of view of the connectionists, a certain amount of mystery fits the facts of the case.

We cannot teach an information-processing computer the rules for most aspects of human intelligence that people take for granted because we simply do not know them. There is no algorithm for recognizing a face in a crowd. The connectionists approach this state of affairs with a strategy made possible by the new availability of massively parallel computing: build a computer that at least in some way looks like a brain and make it learn by itself. Unlike symbolic information processing, which looked to programs and specified locations for information storage, the connectionists do not see information as being stored "anywhere" in particular. Rather, it is stored everywhere. Information is better thought of as "evoked" than "found."[27] The computer is treated as a black box that houses emergent processes.
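What "stored everywhere" can mean is easiest to see in code. The Python sketch below uses a Hopfield-style associative memory, one classic connectionist model, chosen here as an assumed stand-in: every stored pattern leaves its trace in every weight, and a noisy cue evokes the memory rather than finding it at an address.

    # Hopfield-style associative memory: +1/-1 patterns are stored by summing
    # their outer products into a single weight matrix. No cell "contains" a
    # memory; each pattern is smeared across all the connections.
    N = 8
    patterns = [
        [1, 1, 1, 1, -1, -1, -1, -1],
        [1, -1, 1, -1, 1, -1, 1, -1],
    ]

    W = [[0.0] * N for _ in range(N)]
    for p in patterns:
        for i in range(N):
            for j in range(N):
                if i != j:
                    W[i][j] += p[i] * p[j] / N

    def recall(cue, steps=5):
        # Evoke a memory: let each unit repeatedly follow its weighted inputs.
        state = list(cue)
        for _ in range(steps):
            for i in range(N):
                net = sum(W[i][j] * state[j] for j in range(N))
                state[i] = 1 if net >= 0 else -1
        return state

    noisy = [1, 1, -1, 1, -1, -1, -1, -1]  # first pattern, one unit flipped
    print(recall(noisy))                   # settles back onto the stored pattern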

There is an irony here. The computer presence was an important influence toward ending the behaviorist hegemony in American psychology in the late 1970s. Behaviorism forbade the discussion of inner states or entities. One could not talk about memory, only the behavior of "remembering." But the fact that computers had memory and inner states provided legitimation for discussing people as having them as well. Behaviorism presented mind as a black box. Information processing opened the box and filled it with rules, trying to ally itself as closely as possible with commonsense understandings. But this became a vulnerability once nonprofessionals were exposed to these understandings: they seemed too commonsensical. People had to be more than information and rules. Now connectionism closes the box again. What is inside these more opaque systems can once again be thought of as mysterious and indeterminate.

Philosopher John Searle exploited the vulnerability of information-processing models when he pursued a thought experiment that took as its starting point the question of what might be going on in a computer that could "speak Chinese." Searle, who assures us that he does not know the Chinese language, asks us to imagine that he is locked in a room with stacks and stacks of paper, say, index cards. He is given a story written in Chinese and then is passed slips of paper on which are written questions about the story, also in Chinese. Of course, he does not know he has a story, and he does not know that the slips of paper contain questions about the story. What he does know is that "clever programmers" have given him a set of rules for what to do with the little pieces of paper he is passed. The rules tell him how to match them up with other little pieces of paper that have Chinese characters on them, which he passes out of the room. The rules say such things as "The squiggle-squiggle sign is to be followed by the squoggle-squoggle sign."[28] He becomes extraordinarily skillful at following these rules, at manipulating the cards in his collection. We are to suppose that his instructions are sufficiently complete to enable him to "output" Chinese characters that are in fact the correct answers to the questions about the story.
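The formal character of these rules is easy to render in code. The toy Python fragment below, with invented placeholder "rules," makes Searle's point mechanical: the lookup proceeds without any grasp of what the symbols mean.

    # A toy "Chinese Room": purely formal rules map incoming symbol strings to
    # outgoing ones. The rulebook entries are invented placeholders.
    RULEBOOK = {
        "squiggle-squiggle": "squoggle-squoggle",
        "squoggle-squiggle": "squiggle-squiggle",
    }

    def room(symbols_in):
        # Follow the rulebook exactly; attach no meaning to any symbol.
        return RULEBOOK.get(symbols_in, "no rule applies")

    print(room("squiggle-squiggle"))    # -> squoggle-squoggle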

All of this is set up for the sake of argument in order to ask one rhetorical question in plain English: Does the fact that he sends out the correct answers prove that he understands Chinese? For Searle, it is clear that the answer is no.



. . . I can pass the Turing test for understanding Chinese. But all the same I still don't understand a word of Chinese and neither does any other digital computer because all the computer has is what I have: a formal program that attaches no meaning, interpretation, or content to any of the symbols.[29]

In the end, for Searle, the system is only a "paper shuffler." He described the innards of the machine in terms so deeply alien to the ways most people experience the inner workings of their minds that they felt a shock of "nonrecognition." For many people, it fueled a sense that Searle had captured what always seemed wrong with AI. The disparity between the description of the paper shuffler and one's sense of self supported the view that such a system could not possibly understand the meaning of Chinese in the same sense that a person does.

Connectionism is less vulnerable to the Searlean argument. Its models postulate emergence of thought from "fuzzy" process, so opening up the box does not reveal a crisply defined mechanism that a critic can isolate and make to seem psychologically implausible. Connectionists admit just enough of a view onto the inside of their system to create a general feeling for its shape. And for many people, that shape feels right—in the way that Searle made rule-driven systems feel wrong. That shape is resonant with brainlike processes: associations and networks. Perhaps for these grown-up AI scientists, neural nets have something of the feel that the four-tube radio had for Sandy the child. The theoretical objects "feel right," and the theory does not require that they be pinned down to a high degree of specificity. And similar to Sandy, these scientists' "charges," their sense of excitement and achievement, depends not only on instrumental success but on the sense of being in touch with fundamental truths. In this, the connectionists, like Sandy, are romantics-in-practice.

Winograd worked as a young scientist in a very different intellectual culture, the culture of symbolic AI. This is the culture of which Dreyfus could say, "any domain must be formalizable." And the way to do AI is to "find the context-free elements and principles and to base a formal, symbolic representation on this theoretical analysis."[30] In other words, the goal is a limpid science, modeled on the physical sciences, or as Winograd put it, "We are concerned with developing a formalism, or 'representation,' with which to describe . . . knowledge. We seek the 'atoms' and 'particles' of which it is built, and the 'forces' that act on it."[31] From the point of view of this intellectual aesthetic of transparency, the connectionist black box is alien and unacceptable. In other terms, the classic confronts the romantic.

Indeed, referring to the connectionists' opaque systems, Winograd has said that people are drawn to connectionism because it has a high percentage of "wishful thinking."[32] Perhaps one could go further. For connectionists, wishful thinking is a point of method. They assert that progress does not depend on the ability to specify process. In other words, for connectionists, computers can have the same "opacity" as the brain. Like the brain, they are boxes that can remain closed; this does not interfere with their functioning as models of mind.

