Preferred Citation: Horst, Steven W. Symbols, Computation, and Intentionality: A Critique of the Computational Theory of Mind. Berkeley: University of California Press, c1996. http://ark.cdlib.org/ark:/13030/ft509nb368/


 
Chapter Seven— Semiotic-Semantic Properties, Intentionality, Vindication

7.12.2— Causality Explains Semantics

Now while some writers certainly endorse the Robot Reply, it is not clear that this is Fodor's strategy when he appeals to causality in explaining semantics. In Psychosemantics, for example, Fodor invokes causality at the level of explaining the semantic properties of mental representations. In so doing, he appears to be taking up a project at the point where he left it off at the end of the introduction to RePresentations. In that introduction, Fodor gives what is perhaps his best articulation of CTM and how it emerged. He also gives a clear indication of what it is intended to accomplish: "It does seem clear what we want from a philosophical account of the propositional attitudes. At a minimum, we want to explain how it is that propositional attitudes have semantic properties" (Fodor 1981: 18, emphasis added). Yet if CTM is supposed to provide an explanation of "how it is that propositional attitudes have semantic properties," it is curious that Fodor writes on the last page of that introduction, "What we need now is a semantic theory for mental representations; a theory of how mental representations represent. Such a theory I do not have" (ibid., 31). Now one way of reading this passage would be as an admission that CTM has thus far failed miserably at meeting Fodor's own standards for a theory of cognitive states. Such, however, is hardly the tone of the chapter in which it occurs. A better way of making sense of this passage, and of Fodor's subsequent treatment of the semantics of representations in Psychosemantics, would be as follows: Fodor believes that CTM's representational account of the semantic and intentional properties of cognitive states is successful. Saying that cognitive states involve meaningful representations is enough to explain the meaningfulness of cognitive states: for example, saying that Jones is in a particular functional relation to a mental representation that means "Lo! a horse!" is all that needs to be said to provide an explanation of why Jones believes that there is a horse before him. But this still leaves an additional problem: how do we account for the semantic and intentional properties of the representations?
Why does the mental representation mean "Lo! a horse!"? And it is here that Fodor wishes to give a causal answer: an answer to the question of why mental representations that mean "horse" do, in fact, refer to horses. Fodor's initial, "crude" formulation of such a theory is that "a plausible sufficient condition for 'A's to express A is that it's nomologically necessary that (1) every instance of A causes a token of 'A'; and (2) only instances of A cause tokens of 'A'" (Fodor 1987: 126).

So it sounds as though Fodor wishes to make two separate claims: the first is just the representational account of the semantics and intentionality of cognitive states: namely, that cognitive states "inherit" their semantic and intentional properties from the representations they involve. The second claim is a causal theory of the semantic properties of mental representations. (Fodor gives only a sketch of such a theory, and repeatedly voices doubts that a full-fledged semantic theory can be developed.)

In order to assess these claims, it is absolutely crucial at this point to determine (1) just what Fodor means when he uses words like 'intentional' and 'meaningful' of mental representations, and (2) how the way Fodor picks out semantic properties is related to his causal account of semantics. The first and most obvious possibility is that Fodor is applying semantic terms to symbols in the ordinary way: that is, using them to attribute semiotic-semantic properties. This should, I think, be the default reading of expressions like 'meaningful symbol'. After all, if someone says he is bringing you "healthy food" and produces a live fish in a bowl, you might well think that he is using language in a peculiar manner—a reaction that will not be changed if he explains, "Well, he is food, after all, and you've never seen a fish that was in better health!" Similarly, if someone says that cognitive states are meaningful (referential, intentional, etc.) because they involve "meaningful symbols," you may reasonably expect that he is using 'meaningful' in the way it is usually used when it modifies 'symbol'—and that, if he is not using it in that way, he should specify just how he is using it. Fodor and other advocates of CTM give no warning that they are using semantic terminology in nonstandard ways, so it is reasonable to begin by assuming that the standard (i.e., semiotic) usage is in force.

If the standard usage is in force, however, CTM's representational account of semantics and intentionality for cognitive states fails, for reasons described earlier in this chapter. And if the causal account of the semantics of mental representations is supposed to be independent from the representational account of the semantics of cognitive states, it can do nothing to bolster it. If the semiotic-semantic properties of representations cannot explain the mental-semantic properties of cognitive states, it does not matter, for purposes of an account of the intentionality of cognitive states, how the representations get their semiotic-semantic properties. Whatever the answer to that question might be, it does the representational account of the intentionality of cognitive states no good.

