Chapter Two—
Computation, Intentionality, and the Vindication of Intentional Psychology
The Computational Theory of Mind has received a great deal of attention in recent years, both in philosophy and in the empirical disciplines whose focus is cognition. On the one hand, the computer paradigm has inspired an enormous volume of theoretical work in psychology, as well as related fields such as linguistics and ethology. On the other hand, philosophers such as Fodor have claimed that CTM provides a solution to certain long-lived philosophical problems as well. The primary focus of this book is upon CTM's claims to solve philosophical problems. Two of these are of primary importance. The first is the claim that CTM provides a philosophical account of the intentionality and semantics of intentional states —in particular, that it does so in a fashion that provides thought with the same generative and compositional properties possessed by natural languages. The second is the claim that CTM "vindicates" intentional psychology by providing a philosophical basis for an intentional psychology capable of satisfying several contemporary concerns—in particular, concerns for (1) the compatibility of intentional psychology with materialistic monism, (2) the compatibility of intentional psychology with the generality of physics, and (3) the ability to construe intentional explanations as causal explanations based on lawlike regularities. Together, these claims imply that viewing the mind as a computer allows us to "naturalize" the mind by bringing both individual thoughts and mental processes within an entirely physicalistic world view.
It is important to note that the status of these distinctively philosophical claims is largely independent of the claim that the computer paradigm
has been empirically fruitful in inspiring important theoretical work in psychology and other disciplines. On the one hand, the theory might ultimately prove to be philosophically interesting but empirically fallow. Such was arguably the case, for example, with representational theories of mind before CTM, and could turn out to be the case for computationalism as well if, in the long run, it goes the way of so many unsuccessful research programmes that initially showed such bright promise. On the other hand, it is possible to interpret psychological research inspired by the computer paradigm—"computational psychology" for short—in a fashion that is weaker than CTM. Fodor acknowledges this when he writes:
There are two, quite different, applications of the "computer metaphor" in cognitive theory: two quite different ways of understanding what the computer metaphor is. One is the idea of Turing reducibility of intelligent processes; the other (and, in my view, far more important) is the idea of mental processes as formal operations on symbols. (Fodor 1981: 23-24)
The first and weaker view here is a machine functionalism that treats the mind as a functionally describable system without explaining intentional states by appeal to representations. On this view,
Psychological theories in canonical form would then look rather like machine tables, but they would provide no answer to such questions as "Which of these machine states is (or corresponds to or simulates) the state of believing that P?" (ibid., 25)
The second and stronger application of the computer metaphor is Fodor's CTM, which adds the philosophically pregnant notion of mental representation to what is supplied by machine functionalism. As we shall see in the course of this chapter, Fodor's arguments for preferring CTM to functionalism turn largely upon its ability to "vindicate" intentional psychology and not merely upon factors internal to empirical research in psychology. And hence the strengths and weaknesses of the philosophical claims made on behalf of CTM are largely independent of the viability of computational psychology as an empirical research strategy.
2.1—
CTM's Account of Intentionality
The first philosophical claim made on behalf of CTM is that it provides an account of the intentionality of mental states. The basic form of this account was already introduced in chapter 1: namely, that mental states involve relationships to symbolic representations from which the states "inherit their semantic properties" (Fodor 1981: 26) and intentionality.
Or, in Fodor's words again, "Intentional properties of propositional attitudes are viewed as inherited from semantic properties of mental representations" (Fodor 1980b: 431). This claim that intentional states "inherit" their semantic properties, moreover, is intended to provide an explanation of the intentionality and semantics of intentional states. Beliefs and desires are about objects and states of affairs because they involve representations that are about those objects and states of affairs; intentional states are meaningful and referential because they involve representations that are meaningful and referential. In this chapter we will look at this account in greater detail, with particular attention towards (a) locating it within the more general philosophical discussion of intentionality and (b) highlighting what might be thought to be its strengths.
2.2—
Intentionality
Since the publication of Franz Brentano's Psychologie vom empirischen Standpunkt in 1874, intentionality has come to be a topic of increasing importance in philosophy of mind and philosophy of language. While Brentano's own views on intentionality have not proven to be of enduring interest in their own right, his reintroduction of the Scholastic notion of intentionality into philosophy has had far-reaching ramifications. Brentano's pupil Edmund Husserl ([1900] 1970, [1913] 1931, [1950] 1960, [1954] 1970) made intentionality the central theme of his transcendental phenomenology, and the work of subsequent European philosophers such as Martin Heidegger, Jean-Paul Sartre, Jacques Derrida, and Michel Foucault has been articulated in large measure against Husserl's views about the intentionality of mind and language. In the English-speaking world, problems about intentionality have been introduced into analytic philosophy by Roderick Chisholm (1957, 1968, 1983, 1984a, 1984b), who translated and commented upon much of Brentano's work, and Wilfrid Sellars (1956), who studied under Husserl's pupil Marvin Farber.[1]
Several of the principal aspects of Brentano's problematic have been preserved in subsequent discussions of intentionality. Brentano's characterization of the directedness and content of some mental states has been adopted wholesale by later writers, as has his recognition that such states form a natural domain for psychological investigation and need to be distinguished both from qualia and from brute objects.[2] Recently, moreover, there has been a strong resurgence of interest in the relationship between what Brentano called "descriptive" (i.e., intentional) and
"genetic" (i.e., causal, nomological) psychology. Brentano had originally thought that genetic psychology would eventually subsume and explain descriptive psychology, but subsequently concluded that intentionality was in fact an irreducible property of the mental and could not be accounted for in nonintentional and nonmental terms. This position is sometimes described as "Brentano's thesis." This discussion in Brentano is thus a direct forebear of current discussions of the possibility of naturalizing intentionality, with Brentano's mature position represented by writers such as Searle (1983, 1993).
On the other hand, later discussions have placed an increasing emphasis on several aspects of intentionality that are either given inadequate treatment in Brentano's account or missing from it altogether. Notable among these are a concern for relating intuitions about the intentional nature of mental states to other philosophical difficulties, such as psychophysical causation and the mind-body problem, and a conviction that intentionality is a property of language as well as of thought, accompanied by a corresponding interest in the relationship between the intentionality of language and the intentionality of mental states. This interest in the "intentionality of language" has taken two forms. On the one hand, writers such as Husserl (1900) and Searle (1983) have taken interest in how utterances and inscriptions come to be about things by virtue of being expressions of intentional states. On the other hand, Chisholm (1957) has coined a usage of the word 'intentional' that applies to linguistic tokens employed in ascriptions of intentional states.[3] This widespread conviction that language as well as thought is in some sense intentional has been paralleled by a similar conviction that some mental states can be evaluated in the same semantic terms as some expressions in natural and technical languages. Notably, it is widely assumed that notions such as meaning, reference, and truth value can be applied both (a) to occurrent states such as explicit judgments and (b) to tacit states such as beliefs that are not consciously entertained, in much the fashion that these semantic notions are applied to linguistic entities such as words, sentences, assertions, and propositions. Providing some sort of account of the intentionality and semantics of mental states is thus widely viewed to be an important component of any purported "theory of mind."
2.3—
CTM, Intentionality, and Semantics
The motivation of CTM's account of intentionality found in Fodor (1981, 1987, 1990) plays upon several themes in the philosophical
discussion of intentionality. In particular, it is an attempt to exploit the relationship between the semantics of thought and language in a fashion that provides a thoroughly naturalistic account of the intentionality of mental states—in other words, an account that is compatible with token physicalism and with treating beliefs and desires as things that can take part in causal relations. Fodor writes,
It does seem relatively clear what we want from a philosophical account of the propositional attitudes. At a minimum, we want to explain how it is that propositional attitudes have semantic properties, and we want an explanation of the opacity of propositional attitudes; all this within a framework sufficiently Realistic to tolerate the ascription of causal roles to beliefs and desires. (Fodor 1981: 18)
Fodor begins his quest for such an account by making a case that intentional states are not unique in having semantic properties—symbols have them as well.
Mental states like believing and desiring aren't . . . the only things that represent. The other obvious candidates are symbols. So, I write (or utter): 'Greycat is prowling in the kitchen,' thereby producing a 'discursive symbol'; a token of a linguistic expression. What I've written (or uttered) represents the world as being a certain way—as being such that Greycat is prowling in the kitchen—just as my thought does when the thought that Greycat is prowling in the kitchen occurs to me. (Fodor 1987: xi)
It is worth noting that Fodor assumes here that words such as 'represent' can be predicated univocally of intentional states and symbols. But his example also involves an even stronger claim: namely, that symbolic representations such as written inscriptions "represent the world as being a certain way . . . just as [my] thought does." Here the implication would clearly seem to be that there is just one sort of "representation" present in the two cases—an assumption that will be shown to have significant consequences later in this book.
The succeeding paragraph in Psychosemantics begins to reveal what Fodor takes to be common to what initially appear to be separate cases (i.e., mental states and symbolic representation):
To a first approximation, symbols and mental states both have representational content. And nothing else does that belongs to the causal order: not rocks, or worms or trees or spiral nebulae. (Fodor 1987: xi)
It also reveals where his reasoning is headed:
It would, therefore, be no great surprise if the theory of mind and the theory of symbols were some day to converge. (ibid., emphasis added)
There are, however, at least two directions that a convergence of the philosophy of mind and semiotics might take. On the one hand, philosophers like Husserl (1900) and Searle (1983) have argued that the intentional and semantic properties of symbols are to be explained in terms of the intentional and semantic properties of mental states. As we have already seen, however, Fodor's view is quite the reverse: namely, that it is the semantic and intentional properties of mental states which are to be explained, and they are to be explained in terms of the intentional and semantic properties of symbols—specifically, the symbols that serve as the objects of the propositional attitudes. While Fodor does acknowledge that written and spoken symbols get their semantic properties from the states that they express, he nonetheless holds that
it is mental representations that have semantic properties in, one might say, the first instance; the semantic properties of propositional attitudes are inherited from those of mental representations and, presumably, the semantic properties of the formulae of natural languages are inherited from those of the propositional attitudes that they are used to express. (Fodor 1981: 31)
The resulting account of intentional states reduces the claim that a particular token intentional state has semantic or intentional properties to a conjunction of two claims to the effect that (a) a mental symbol token has semantic or intentional properties, and (b) an organism stands in a particular kind of functional relationship to that symbol token. As Fodor expresses it in the passage already cited from Psychosemantics,
Claim 1 (the nature of propositional attitudes):
For any organism O, and any attitude A toward the proposition P, there is a ('computational'-'functional') relation R and a mental representation MP such that
MP means that P, and
O has A iff O bears R to MP. (Fodor 1987: 17)
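Fodor's schema can be made concrete with a toy sketch. Everything below is illustrative rather than Fodor's own formalism: the "belief box" and "desire box" are a common expository device for the functional relation R, and set membership stands in (crudely) for an organism's bearing R to a representation token that means that P.

```python
# Illustrative sketch of Claim 1: a propositional attitude A is modeled as
# a functional relation R between an organism O and a token of a mental
# representation MP. Distinct attitudes correspond to distinct relations,
# represented here as distinct stores ("boxes") of symbol tokens.

class Organism:
    def __init__(self):
        self.belief_box = set()   # tokens the organism bears the belief-relation to
        self.desire_box = set()   # tokens the organism bears the desire-relation to

    def believes(self, p):
        # O believes that P iff O bears the belief-relation to a token MP
        # meaning that P (token identity crudely stands in for meaning).
        return p in self.belief_box

    def desires(self, p):
        return p in self.desire_box

o = Organism()
o.belief_box.add("Greycat is prowling in the kitchen")
assert o.believes("Greycat is prowling in the kitchen")
assert not o.desires("Greycat is prowling in the kitchen")
```

On this picture, the semantic question (what MP means) and the functional question (which relation O bears to MP) come apart, which is exactly the division of labor the two conjuncts of Claim 1 express.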
It seems clear that questions about the meaningfulness and (putative) reference of intentional states are to be construed as questions about the symbolic representations involved. The same may be said for truth value in those cases where the concept applies, though the applicability of truth-functional evaluation to a given intentional state would seem to depend upon the attitude involved, since most kinds of cognitive attitudes (e.g., desire, dread, etc.) are not subject to truth-functional evaluation.
2.4—
The Virtues of the Account
There are several features of this account that render it attractive. First, the account locates the ultimate bearers of semantic properties in symbol tokens, and symbol tokens are among the sorts of things that everyone agrees can be physical objects. To the many who want intentionality and want materialism too, this is a substantial advance over previous theories that attributed intentionality either directly to minds (whose compatibility with materialism is in doubt) or directly to brain states (which are problematic as logical subjects of semantic predicates). The account also lends some clarity to the familiar analysis of intentional states in terms of intentional attitudes (such as belief and desire) and content. The attitude-content distinction is itself only a distinction of analysis. CTM fleshes this distinction out in a way that no previous theory had done. Attitudes are interpreted as functional relations between an organism and its representations, and content in terms of the semantic properties of the representations. CTM thus both retains and clarifies a central feature of the standard analysis of intentional states.
The account of intentionality and semantics offered by CTM also provides a way of understanding both narrow and broad notions of propositional content. According to CTM, what is necessary for an intentional state to have a particular content in the narrow sense—that is, what is necessary for it to be "about-X" construed opaquely, or in such a fashion as not to imply that there exists an X for the state to be about—is for it to involve a relationship between an organism and a symbol token of a particular formally delimited type. Whether the state is also contentful in the broad sense (i.e., "about X" under a transparent construal—one that does imply that there is an X that the state is about) will depend upon how that symbol token is related to extramental reality: for example, whether it stands in the proper sort of causal relationships with X. While CTM does not provide an account of what relationships to extramental reality are relevant to the broad notion of content, the representational account of narrow content allows CTM to avoid several traditional pitfalls associated with the "hard cases" presented by illusions, hallucinations, false beliefs, and other deviant cases of perception and cognition. Notably, CTM escapes the Meinongian tendency to postulate nonexistent entities and the opposite inclination to identify the contents of intentional states with the extramental objects towards which they are directed.
Two features of CTM's account of intentionality, however, seem to
be of utmost importance: its relation to CTM's account of cognitive processes and its ability to endow thought with a compositional semantics. It is perhaps an understatement to say that CTM's representational account of intentionality would be of little interest outside of narrowly philosophical circles if it were not coupled with a causal theory of cognitive processes. Locating the arcane property of intentionality in the equally mysterious meanings of hypothetical mental representations would cut little ice were it not for the fact that treating thoughts as relations to symbols provides a way of explaining mental processes as computations. Indeed, as writers like Haugeland (1978, 1981) have noted, it is the discovery of machine computation that has revitalized representational theories of the mind.
The other signal virtue of viewing thoughts as relations to symbolic representations is that this allows us to endow the mind with the same generative and creative powers possessed by natural languages. We do not simply think isolated thoughts—"dog!" or "red!" Rather, we form judgments and desires that are directed towards states of affairs and represented in propositional form. And our ability to think "The dog knocked over the vase" is in part a consequence of our ability to think "dog" in isolation. We are, furthermore, able to think new thoughts and to combine the ideas we have in novel ways. If I can think "The dog knocked over the vase" and I can think "cat," I can also think "The cat knocked over the vase." There is thus more to be desired from a theory of intentional states than an account of the meanings of individual ideas: such a theory must also explain the fact that thought seems to be generative and systematic.
Viewing the mind as employing representations in a language of thought gives us this for free. For we already have a way of answering the corresponding questions in linguistics by employing the principle of compositionality. If a language is compositional, then the semantic values of complex expressions are a function of (a) the semantic values of the lexical (or morphemic) atoms and (b) the syntactic structure of the expression. The generative and systematic qualities of languages are explained by the use of iterative syntactic structures and the substitution of known lexical items into the slots of known syntactic structures. So if the semantic properties of our thoughts are directly inherited from those of the symbols they involve, and the symbols involved are part of a language employing compositional principles, then these explanations from linguistics can be incorporated wholesale into our psychology. The mind has generative and systematic qualities because it thinks in a language that has a compositional semantics.
This is an important result because it is virtually impossible to make sense of reasoning by way of a representational theory except on the assumption that complex thoughts, such as "The cat knocked over the vase," are composed out of simpler parts, corresponding to "cat" and "vase." For when one has a thought of a cat knocking over a vase, this thought is immediately linked to all kinds of other knowledge about cats and vases and causality. One may infer, for example, that an animal knocked over the vase, that something knocked over an artifact, or that the vase is no longer upright. If mental representations were all semantic primitives, the ability to make such inferences on the basis of completely novel representations would probably be inexplicable. The simplest explanation for our ability to combine our knowledge about cats with a representation meaning "The cat knocked over the vase" is that the representation has a discrete component meaning "cat," and that the overall meaning of the representation is determined by how the component representations are combined. This, however, points to the need for a representational system in which syntax and semantics are closely connected. For the only known way of endowing a system of representations with this kind of compositionality is by way of supplying the representational system with syntactic rules that govern how to form semantically complex representations out of semantic primitives. CTM provides for this compositionality, and it is not clear that any account not based on an underlying system of languagelike representations would be able to say the same.
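The compositional picture sketched in these paragraphs can be illustrated with a toy fragment. The lexicon, the single syntactic rule, and the uppercase "semantic values" below are all invented for illustration; the point is only that, once the atoms and the combination rule are fixed, the meanings of indefinitely many novel complex expressions come for free.

```python
# A minimal sketch of a compositional semantics. An expression is either a
# lexical atom (a string) or a structured tuple (RELATION, subject, object).
# The semantic value of a complex expression is a function of nothing but
# (a) the values of its atoms and (b) its syntactic structure.

LEXICON = {"dog": "DOG", "cat": "CAT", "vase": "VASE"}

def meaning(expr):
    if isinstance(expr, str):
        return LEXICON[expr]          # lexical atoms: look up the value
    relation, subj, obj = expr        # complexes: combine values by structure
    return (relation, meaning(subj), meaning(obj))

# Systematicity for free: having the atom "cat" and the structure used for
# "The dog knocked over the vase" suffices for the novel thought
# "The cat knocked over the vase" -- no new semantic machinery is needed.
assert meaning(("KNOCKED_OVER", "dog", "vase")) == ("KNOCKED_OVER", "DOG", "VASE")
assert meaning(("KNOCKED_OVER", "cat", "vase")) == ("KNOCKED_OVER", "CAT", "VASE")
```

Note that the inferential point in the preceding paragraph falls out as well: because the value of the complex contains a discrete component for "cat," whatever is known about cats can be brought to bear on any novel complex in which that component occurs.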
2.5—
CTM as the Basis for an Intentional Psychology
The first important claim made on behalf of CTM is thus that it provides an account of the semantic and intentional properties of mental states. The second important claim made on behalf of CTM is that it provides a philosophical basis for intentional psychology. CTM's proponents believe that it provides a framework for psychological explanation that allows intentional state ascriptions to figure in such explanations, while also accommodating several contemporary concerns in philosophy of science. Three such concerns are of preeminent importance: (1) concerns that psychological explanations be causal explanations based on nomological regularities, (2) concerns that psychological explanations be compatible with the generality of physics (i.e., with the ability of an ideally completed physics to supply explanations for every token event), and (3)
concerns that the ontology implicit in psychology be compatible with materialistic monism. Proponents of CTM thus view their project as one of "vindicating commonsense psychology" or "showing how you could have . . . a respectable science whose ontology explicitly acknowledges states that exhibit the sorts of properties that common sense attributes to [propositional] attitudes" (Fodor 1987: 10).
The perceived need for such a "vindication" was occasioned by the disrepute into which intentional psychology—and indeed mentalism in general—had fallen in the first half of the twentieth century. By the time that the notion of computation was available as a paradigm for psychology, many philosophers and psychologists believed that there could not be a scientific psychology cast in mentalistic or intentional terms. The roots of this suspicion of mentalism and intentional psychology may be traced to the views about the nature of science in general, and psychology in particular, associated with two movements: methodological behaviorism in psychology and Vienna Circle positivism in philosophy. In order to understand fully the significant emphasis placed upon "vindicating intentional psychology" in articulations of CTM (particularly early articulations), it is necessary briefly to survey these other movements which were so influential in the earlier parts of this century.
2.6—
The Disrepute of Mentalism—a Brief History
The legitimacy of intentional psychology was seriously impugned in the first half of the twentieth century by ideas emerging from methodological behaviorists in psychology and from logical positivists in philosophy. Methodological behaviorism, as articulated by Watson (1913a, 1913b, 1914, 1924) and Skinner (1938, 1953), raised methodological concerns about explanations that referred to objects (mental states) that were not publicly observable and were not necessary (they argued) for the prediction and control of behavior.
Early logical positivism, as typified by Carnap's Aufbau (1928), adopted a "logical behaviorism" which Putnam describes as "the doctrine that, just as numbers are (allegedly) logical constructions out of sets, so mental events are logical constructions out of actual and possible behavior events" (Putnam [1961] 1980: 25). This interpretation of mental events is based upon a positivist account of the meanings of words, sometimes called the "verification theory of meaning." The criteria for verification of psychological attributes, the logical behaviorists argued,
consist in observations of (a) the subject's overt behavior (gestures made, sounds emitted spontaneously or in response to questions) and (b) the subject's physical states (blood pressure, central nervous system processes, etc.). Since motions and emissions of sounds are straightforwardly physical events, they argued, claims about psychological processes are reducible to statements in physical language.[4] The conclusion, in Hempel's words, is that "all psychological statements which are meaningful, that is to say, which are in principle verifiable, are translatable into statements which do not involve psychological concepts, but only the concepts of physics. . . . Psychology is an integral part of physics" (Hempel [1949] 1980: 18).[5]
Vienna Circle positivism was characterized by a tension between epistemological concerns (with a concomitant tendency towards phenomenalism) and a commitment to materialism. Logical behaviorism emerged in the context of the epistemological concerns and radically empiricist (and even phenomenalistic) assumptions of early Vienna Circle positivism. As a consequence, it involved the assumption that "observational terms refer to subjective impressions, sensations, and perceptions of some sentient being" (Feyerabend 1958: 35). Carnap's Aufbau was the most significant work advocating this kind of logical reduction, though the influence of phenomenalism may be seen clearly in the early works of Russell and in the nineteenth-century German positivism of Mach.
Yet Carnap soon rejected the Aufbau account of the relationship between physical and psychological terms and adopted a new understanding of science, emphasizing the materialist theme in positivism instead of the epistemological-phenomenalist theme. According to this view, observation sentences do not refer to the sense impressions involved in the actual observations, but to the (putative) objects observed, described in an intersubjective "thing-language."[6] Thus in 1936 Carnap writes, "What we have called observable predicates are predicates of the thing-language (they have to be clearly distinguished from what we have called perception terms . . . whether these are now interpreted subjectivistically, or behavioristically)" (Carnap [1936-1937] 1953: 69). And similarly Popper writes that "every basic statement must either be itself a statement about relative positions of physical bodies . . . or it must be equivalent to some basic statement of this 'mechanistic' kind" (Popper 1959: 103).
Oppenheim and Putnam's "Unity of Science as a Working Hypothesis" (1958) has become a locus classicus for this newer view, commonly called reductive physicalism—the view that every mental type has a corresponding physical type and all psychological laws are thus translatable into laws in the vocabulary of physics.[7] The ideal of science articulated by Oppenheim and Putnam shares with logical behaviorism and Skinnerian operationalism a commitment to a "reduction" of mentalistic terms, including intentional state ascriptions, but the "reductions" employed in the three projects differ both in nature and in motivation.[8]
Now while these three scientific metatheories differ with respect to their motivations and their chief concerns, each contributed to a growing suspicion of intentional psychology. By the time the digital computer was available as a model for cognition, it was widely believed that one could not have a scientific psychology that employed intentional state ascriptions. This skepticism about intentional psychology reflected four principal concerns: (1) a concern about the nature of evidence for a scientific theory—particularly a concern that the evidence for psychological theories be publicly or intersubjectively observable; (2) a concern about the nature of scientific explanation—in particular, a concern that scientific explanations be causal and nomological; (3) an ontological concern about the problems inherent in dualism, and particularly a commitment to materialistic monism;[9] and (4) a commitment to the generality of physics—that is, the availability of a physical explanation for every token event.
2.7—
Vindicating Intentional Psychology (1): Machine Functionalism
The proponents of CTM believe that it has supplied a way of preserving the integrity of explanations cast in the intentional idiom while also accommodating the concerns that had contributed to the ascendancy of reductive approaches to mind in the first half of the century. Historically, the attempt to vindicate intentional psychology involved two distinct elements: (1) the introduction of machine functionalism as a rigorous alternative to behaviorism of various sorts and to reductive physicalism, and (2) CTM's combination of machine functionalism with the additional notions of computation and representation.
In his 1936 description of computation, Alan Turing introduced the notion of a computing machine. The machine, which has come to be called a "Turing machine," has a tape running through it, divided into squares, each capable of bearing a "symbol."[10] At any given time, the machine is in some particular internal condition, called its "m-configuration." The overall state of the Turing machine at a particular time is described by
"the number of the scanned square, the complete sequence of all symbols on the tape and the m-configuration" (Turing 1936: 232). A Turing machine is functionally specifiable: that is, the operations that it will perform and the state changes it will undergo can be captured by a "machine table" specifying, for each complete configuration of the machine, what operations it will then perform and the resulting m-configuration.
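The notion of a machine table can be made concrete with a minimal sketch. The interpreter and the one-entry table below are invented for illustration; they follow Turing's scheme of squares, a scanned symbol, an m-configuration, and a table mapping each complete configuration to an action.

```python
# A minimal deterministic Turing machine. The machine table maps
# (m-configuration, scanned symbol) to (symbol to write, head move, new
# m-configuration). The complete configuration at any step is the scanned
# square, the tape contents, and the m-configuration (Turing 1936: 232).

def run(table, tape, m_config, blank=" ", max_steps=1000):
    tape = dict(enumerate(tape))      # sparse tape: square number -> symbol
    pos = 0                           # number of the scanned square
    for _ in range(max_steps):
        scanned = tape.get(pos, blank)
        if (m_config, scanned) not in table:
            break                     # no table entry: the machine halts
        write, move, m_config = table[(m_config, scanned)]
        tape[pos] = write
        pos += {"R": 1, "L": -1, "N": 0}[move]
    return "".join(tape[i] for i in sorted(tape)).rstrip(blank)

# An invented one-entry table: in m-configuration "a", rewrite each scanned
# "1" as "0" and move right; halt on the first blank square.
TABLE = {("a", "1"): ("0", "R", "a")}

assert run(TABLE, "111", "a") == "000"
```

The table itself is the functional specification: any device whose state transitions satisfy these entries counts as an instance of this machine, whatever it is made of.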
Machine functionalism is the thesis that intentional states and processes are likewise functionally specifiable—that is, that they may be characterized by something on the order of a machine table.[11] The thesis requires some generalizations from the computing machine described by Turing. In Putnam's 1967 articulation, for example, the tape of the machine is replaced by "sensory inputs" and "motor outputs," and a corresponding adjustment is made to the notion of a machine table to accommodate these inputs and outputs. Putnam also generalizes from Turing's deterministic case, in which state transitions are completely determined by the complete configuration of the machine, to a more permissive notion of a "Probabilistic Automaton," in which "the transitions between 'states' are allowed to be with various probabilities rather than being 'deterministic'" (Putnam [1967] 1980: 226). Since a single physical system can simultaneously be the instantiation of any number of deterministic automata, Putnam also introduces "the notion of a Description of a system." Of this he writes,
A Description of S where S is a system, is any true statement to the effect that S possesses distinct states S1, S2, . . . , Sn which are related to one another and to the motor outputs and sensory inputs by the transition probabilities given in such-and-such a Machine Table. The Machine Table mentioned in the Description will then be called the Functional Organization of S relative to that Description, and the Si such that S is in state Si at a given time will be called the Total State of S (at that time) relative to that Description. (ibid., 226)
This provides a way of specifying conditions for the type identity of psychological states in functional terms. As Block and Fodor articulate it, "For any organism that satisfies psychological predicates at all, there exists a unique best description such that each psychological state of the organism is identical with one of its machine states relative to that description" (Block and Fodor [1972] 1980: 240).
A psychology cast in functional terms possesses the perceived merits of behaviorist and reductive physicalist accounts while avoiding some of their excesses. First, a functional psychology founded on the machine analogy seems to provide the right sorts of explanations for a rigorous
psychology. The machine table of a computer expresses relationships between types of complete configurations that are both regular and causal. If cognition is likewise functionally describable by something on the order of a machine table, psychology can make use of causal, nomological explanations.
Machine functionalism is also compatible with commitments to ontological materialism and to the generality of physics. A computing machine, after all, is unproblematically a physical object, all of its parts are physical objects, and all of its operations have explanations cast wholly in physical terms. If functional description is what is relevant to the individuation of psychological states and processes, the resulting functional psychology could be quite compatible with the assumptions that (a) all of the (token) objects in the domain of psychology are physical objects, and that (b) all of the token events explained in functional terms by psychology are susceptible to explanation in wholly physical terms as well.
While machine functionalism is compatible with materialism and token physicalism, it is incompatible with reductive or type physicalism, since functionally defined categories in a computer (e.g., AND-gates) are susceptible to indefinitely many physical implementations that are of distinct physical types. It is for this reason that much of the early computationalist literature focuses on comparing the merits of functionalism with those of reductive physicalism. For example, Fodor offers a general sketch of the case against reductive physicalism:
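The AND-gate example can be made vivid with a schematic sketch. The two "implementations" below (purely illustrative; the relay and transistor labels are our own stand-ins for physically distinct realizations) have nothing mechanically in common, yet they are indistinguishable under functional description, which is why the functional kind AND-gate does not correspond to any single physical kind:

```python
# Multiple realizability, schematically: two realizations of an AND-gate of
# distinct "physical" types that satisfy one and the same functional description.

def relay_and(a, b):
    # mechanical realization: current flows only if both relays are closed
    return 1 if (a, b) == (1, 1) else 0

def transistor_and(a, b):
    # electronic realization: output voltage as a product of input voltages
    return a * b

# A single machine-table row covers both: identical input-output function.
assert all(relay_and(a, b) == transistor_and(a, b)
           for a in (0, 1) for b in (0, 1))
```

Any generalization stated over AND-gates thus subsumes events whose physical descriptions differ arbitrarily, which is just Fodor's point in the passage that follows.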
The reason it is unlikely that every kind corresponds to a physical kind is just that (a) interesting generalizations . . . can often be made about events whose physical descriptions have nothing in common; (b) it is often the case that whether the physical descriptions of the events subsumed by such generalizations have anything in common is, in an obvious sense, entirely irrelevant to the truth of the generalizations, or to their interestingness, or to their degree of confirmation, or, indeed, to any of their epistemologically important properties; and (c) the special sciences are very much in the business of formulating generalizations of this kind. (Fodor 1974: 15)
Additional arguments for the benefits of functionalism over reductionism were marshaled on the basis of Lashley's thesis of equipotentiality, the convergence of morphological and behavioral features across phylogenetic boundaries, and the possibility of applying psychological predicates to aliens and artifacts (see Block and Fodor [1972] 1980). Advocates of functionalism thus see it as capturing the important insights of reductionists (compatibility with materialism and the generality of physics) while avoiding the problems of reductionism.
Advocates of machine functionalism view it as capturing the better side of behaviorism in similar fashion. Functional definition of psychological terms avoids appeals to introspection and private evidence, thereby satisfying one of the concerns of methodological behaviorists like Watson and Skinner. Any ontological suspicion of "the mental" is also avoided by machine functionalism, since computers are plainly physically instantiated objects. Functionalism also permits the use of black-box models of psychological processes, much like behaviorism; and like the behaviorisms of Tolman and Hull (but unlike those of Watson and Skinner) it permits the models to include interactions between mental states and does not restrict itself to characterizations of states and processes in dispositional terms, thereby accounting for the intuition that psychological states can interact causally.
Machine functionalism is thus seen by its advocates as uniting the best features of behaviorism with those of physicalism. This, writes Fodor, allowed for the solution of
a nasty dilemma facing the materialist program in the philosophy of mind: What central state physicalists seemed to have got right—contra behaviorists—was the ontological autonomy of mental particulars and, of a piece with this, the causal character of mind-body interactions. Whereas, what the behaviorists seemed to have got right—contra the identity theory—was the relational character of mental properties. Functionalism, grounded in the machine analogy, seemed to be able to get both right at once. (Fodor 1981: 9, emphasis added)
2.8—
Vindicating Intentional Psychology (2): Symbols and Computation
Despite its significant virtues, machine functionalism alone is not sufficient for vindicating intentional psychology. What machine functionalism establishes is that there can be systems which are characterized by causal regularities not reducible to physical laws. What it does not establish is that physical objects picked out by a functional description of a physical system can also be mental states or that functionally describable processes can also be rational mental processes. First, there is an ontological problem: functionalism alone does not show that the physical objects picked out by functional descriptions can be the very same things as the mental tokens picked out in the intentional idiom. As a consequence, explanations in intentional terms are still ontologically suspect, even if there can be some functionally delimited kinds which are
unproblematic ontologically. The second problem is methodological: unless the kinds picked out by a psychology, even functional psychology, are the sorts of things susceptible to semantic relationships, the explanations given in that psychology do not have the characteristics that explanations in intentional psychology have.[12]
CTM seeks to rescue intentional psychology from this impasse by uniting functional and intentional psychologies through the notion of symbol employed in the computer paradigm. Computers, according to the standard account, are not merely functionally describable physical objects—they are functionally describable symbol manipulators. Symbols, however, are among the sorts of things that can have semantic properties, and computer operations can involve transformations of symbol structures that preserve semantic relationships. This provides a strategy for uniting the functional-causal nature of symbols with their semantic nature, and suggests that a similar strategy might be possible for mental states. Thus Fodor writes,
Computation shows us how to connect semantical with causal properties for symbols. So, if having a propositional attitude involves tokening a symbol, then we can get some leverage on connecting semantical properties with causal ones for thoughts. (Fodor 1987: 18)
This, however, requires the postulation of mental symbols:
In computer design, causal role is brought into phase with content by exploiting parallelisms between the syntax of a symbol and its semantics. But that idea won't do the theory of mind any good unless there are mental symbols: mental particulars possessed of both semantical and syntactic properties. There must be mental symbols because, in a nutshell, only symbols have syntax, and our best available theory of mental processes—indeed, the only available theory of mental processes that isn't known to be false—needs the picture of the mind as a syntax-driven machine. (ibid., 19-20)
It is this addition of the notion of symbol that makes CTM stronger than machine functionalism. And it is in virtue of this feature that CTM can lay some claim to solving problems that functionalism was unable to solve. First, it can lay claim to solving the ontological problem. The ontological problem was that functionalism provided no warrant for believing that the functionally individuated (physical) objects forming the domain of a functional psychology could also be mental states—in particular, it seemed doubtful that they could have semantic properties. But if some of those functionally delimited objects are physically instantiated symbols, the computationalist argues, this difficulty is solved. Symbols
can both be physical particulars and have semantic values. So if intentional states are relationships to physically instantiated symbol tokens, and the semantic and intentional properties of the symbol tokens account for the semantic and intentional properties of the mental states, then it would seem to be the case that mentalism is compatible with materialism.
The second problem for machine functionalism was that it was unclear how functionally delimited causal etiologies of physical events could also amount to rational explanations. But the computer paradigm also seems to provide an answer to this question. If we assume that (1) intentional states involve symbol tokens with semantic and syntactic properties, that (2) cognitive processes are functionally describable in a way that depends upon the syntactic but not the semantic properties of the symbols over which they are defined, and that (3) this functional description preserves semantic relationships, then (4) functional descriptions can pick out cognitive processes which are also typified by semantic relationships. Functional descriptions of computer systems are based in causal regularities, and so intentional explanations can pick out causal etiologies. And since the state changes picked out by the functional description are caused by the physical properties of the constituent parts of the system, intentional explanation is compatible with the generality of physics.
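How a process defined over syntax alone can nonetheless preserve semantic relationships can be illustrated with a toy example (ours, not Fodor's): a procedure that applies modus ponens by matching the shapes of strings, never consulting their meanings. If the premises are true, everything the procedure derives is true, so the purely syntactic transformation tracks a semantic relation:

```python
# A "syntax-driven" process that preserves semantic relationships
# (a toy sketch of the idea, not an implementation from the text):
# modus ponens applied by pattern matching on string shape alone.

def derive(premises):
    """Close a set of formula-strings under modus ponens, syntactically.

    A conditional is any string containing '->'; from 'p' and 'p->q'
    the procedure adds 'q', looking only at the symbols themselves.
    """
    derived = set(premises)
    changed = True
    while changed:
        changed = False
        for s in list(derived):
            if "->" in s:
                antecedent, consequent = s.split("->", 1)
                if antecedent in derived and consequent not in derived:
                    derived.add(consequent)
                    changed = True
    return derived

# The rule never asks what 'rain' means, yet truth is preserved end to end.
print(sorted(derive({"rain", "rain->wet", "wet->slippery"})))
```

The machine-table operations correspond to the pattern-matching steps; the semantic claim is only that, because the syntactic rule is sound, causal transitions between symbol tokens mirror inferential relations among their contents.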
CTM thus purports to have accomplished a major tour de force. It claims to have vindicated intentional psychology by providing a model in which mentalism is compatible with materialism, and in which explanation in the intentional idiom picks out causal etiologies and is compatible with the generality of physics. The appeal of this achievement, moreover, has outlived the popularity of the movements in philosophy of psychology that originally motivated the desire for a "vindication" of intentional psychology. For while there are relatively few strict behaviorists or reductionists left on the scene in philosophy of science, it is still widely believed that a scientific psychology should employ causal-nomological explanations and be compatible with materialism and with the generality of physics. It is perhaps ironic that these desiderata emerged as consequences of particular short-lived theories in epistemology, philosophy of language, and the logic of science. The theories from which they emerged—the verification theory of meaning and the thesis that there are reductive translations between the languages of the various sciences—have largely been abandoned, but the suspicion of the mental they engendered has outlived them. And thus the "vindication" of intentional
psychology will likely continue to be perceived as a virtue so long as this suspicion remains.
2.9—
Summary
This chapter has examined two major claims made on behalf of CTM: that it offers an account of the intentionality and semantics of intentional states, and that it provides a vindication of intentional psychology. These results are largely independent of one another, but both depend heavily upon computationalists' largely uncritical use of the notion of symbol. Each of these two results is highly significant in its own right, and if CTM can make good on either claim, it will have made a significant contribution to philosophy of mind and psychology. The next chapter will discuss some problems that have been raised about CTM's account of intentionality and semantics, and will argue that a proper evaluation of the account will require an examination of the notions of symbol and symbolic representation.