7.12.1 Representation Plus Causation
The first possibility is what John Searle (1980) has called "the Robot Reply" to his arguments against computational theories of mind. According to the Robot Reply, computation over symbols does not, indeed, provide a sufficient condition for the ascription of cognitive states, meaning, or intentionality. But if the computer were, additionally, connected to the external world in the right ways by means of transducers, then it would provide a model for understanding cognition. On this account, the semiotic-semantic properties of mental representations would not be sufficient to account for the intentionality and semantics of cognitive states, because part of what is involved in a belief's being about Lincoln is that it be part of a causal chain involving Lincoln. But if one were to provide an account of cognitive states that appealed both to the meaningfulness of mental representations and to the causal chains involved in the formation of beliefs (and other cognitive states), this problem could be remedied.
Now it might well be possible to formulate a useful theory along these lines. As Searle has pointed out, however, this is no longer the same theory that was originally offered as part of CTM. The original claim was that "the objects of propositional attitudes are symbols (specifically, mental representations) and that this fact accounts for their intensionality and semanticity" (Fodor 1981: 24). But if one must, additionally, appeal to causal factors to explain the "intensionality and semanticity" of cognitive states, then one cannot account for these properties merely by saying that the objects of the attitudes are symbols. If an account of the intentionality and semantics of cognitive states needs to appeal to mental representations and needs, additionally, to appeal to causality, then CTM's account of the intentionality and semantics of cognitive states is not viable as it stands.