Chapter Eight—
Causal and Stipulative Definitions of Semantic Terms
In the last chapter we began a project of assessing CTM's claims (1) that the intentionality of mental states can be explained in terms of the semantic properties of mental representations, and (2) that this will also provide a vindication of intentional psychology. The basic claim about the intentionality and semantics of mental states that we set out to examine was this:
Mental state M has semantic property P because
(1) M involves a relationship to a mental representation MR, and
(2) MR has semantic property P.
In light of the distinction between mental- and semiotic-semantic properties, however, it was necessary to revise this schema for explaining intentionality in the following fashion:
Mental state M has mental-semantic property P because
(1) M involves a relationship to a mental representation MR, and
(2) MR has _______-semantic property X.
The lacuna in clause (2) is to be filled by some more specific kind of "semantic property." What was shown in the last chapter is that filling the
lacuna in a way that offers an account in terms of mental-semantic properties or semiotic-semantic properties will not provide an explanation of the intentionality of the mental. And indeed, the problems arise not only from the conventionality of semiotic meaning, but also from the conventionality of syntax and even of mere marker-hood. If 'symbol' means marker, then it will not do to speak of the mind as a manipulator of symbols, since that would again involve us in a regress of conventions.
However, we saw in chapter 5 that it is possible to develop the notion of a machine-counter in a fashion that seems to provide everything CTM should require when it speaks of "symbols" and "syntax," yet in a way that avoids commitments to conventions or intentions. It is therefore necessary to consider whether CTM might provide a viable account of the mind if we interpret the talk about "symbols" not as talk about markers and counters, but as talk about machine-counters. In order to do this, however, we will require more than the notion of a machine-counter. That notion might be sufficient for an articulation of the kind of "syntactic theory of mind" advocated by Stich (1983), but an interpretation of CTM will also require an interpretation of talk about the "semantic properties of the symbols" that supplements the notion of a machine-counter with a nonconventional notion of semantics. In this chapter, therefore, I shall present a way of interpreting CTM that avoids problems of the conventionality of symbols and syntax by interpreting CTM as dealing with machine-counters. Additionally, I shall explore two ways of interpreting CTM's use of semantic vocabulary as expressing some set of properties distinct from semiotic-semantic properties. First, I shall explore the possibility of treating Fodor's causal covariation theory of content as a stipulative definition of his use of semantic terms as applied to mental representations. Then, I shall explore the possibility of treating the semantic vocabulary in CTM as a truly theoretical vocabulary, whose meaning is determined by its use in the theory.
8.1—
The Vocabulary of Computation in CTM
In order to reinterpret CTM's claims so as to avoid the taint of convention and intention, we must find alternative interpretations for its talk about "symbols," "syntax," and "semantics." Chapter 5 already gives us a plausible alternative construal of talk about "symbols" and "syntax." For there we saw that some writers in computer science, like Newell and Simon (1975), seemed implicitly to use the word 'symbol' to denote
not the convention-based semiotic typing, but a typing tied directly to the functional analysis of the machine. There we suggested the technical notion of a machine-counter in an effort to make this usage more precise. The notion of a machine-counter was developed as follows:
A tokening of a machine-counter of type T may be said to exist in C at time t iff
(1) C is a digital component of a functionally describable system F,
(2) C has a finite number of determinable states S: {s1, . . . , sn} such that C's causal contribution to the functioning of F is determined by which member of S digital component C is in,
(3) machine-counter type T is constituted by C's being in state si, where si ∈ S, and
(4) C is in state si at t.
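Since the definition is wholly functional, it can be made concrete in a few lines of code. The sketch below is only illustrative (the class, the function, and the two-state example are my own inventions, not part of the formal apparatus), but it shows how a machine-counter type is fixed by a component's state-set and current state alone, with no appeal to convention or interpretation:

```python
# Illustrative sketch of clauses (1)-(4) of the machine-counter definition.

class DigitalComponent:
    """A digital component C of a functionally describable system F."""

    def __init__(self, states, initial):
        # Clause (2): C has a finite set of determinable states
        # S = {s1, ..., sn}, and its causal contribution to F is
        # determined by which member of S it is in.
        assert initial in states
        self.states = frozenset(states)
        self.state = initial  # the state C is in at time t

def counter_type(c):
    # Clause (3): a machine-counter type T is constituted by C's being
    # in a particular state si drawn from S.
    return (c.states, c.state)

# Clause (4): a token of type T exists at t iff C is in state si at t.
flip_flop = DigitalComponent({"s1", "s2"}, initial="s1")
print(counter_type(flip_flop))  # (frozenset({'s1', 's2'}), 's1')
```

Note that nothing in the sketch says what the states are for or what they signify; the typing is settled entirely by the functional description, which is just the point of the definition.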
I argued in that chapter that this functional typing is quite distinct from the semiotic typing and can serve neither as an analysis nor as an explanation of it. But at the same time, this kind of functional typing may provide just what CTM needs to escape from the conventionality of markers and counters. It is thus only natural to try to reconstruct CTM in a way that substitutes an unobjectionable notion like that of a machine-counter for the problematic convention-laden notions of "symbol" and "syntax." Intuitively, the idea is that the mind has a functional analysis in terms of a machine table, and there are things in the mind or brain that (a) appear as machine-counters in such an analysis and (b) covary with content. We are thus ready to reconstruct CTM in a way that avoids the problems of conventionality explored in earlier chapters.
8.2—
A Bowdlerized Version of CTM
In Victorian England, there was a practice of producing editions of books that had been expurgated of all objectionable material (references to ankles and other such scandalous license). Such books were said to have been "bowdlerized," the word deriving from the name of one of the notable practitioners of such editing. What I propose to do here is to describe a bowdlerized version of CTM—BCTM—which avoids objectionable suggestions that MR-semantic properties are mental- or semiotic-semantic properties by characterizing MR-semantics in terms of
the work that the semantic vocabulary seems to do in CTM. Note that it is CTM in particular that is under discussion, and not cognitive theories generally: the operative meaning of semantic terminology might turn out quite differently if one were discussing other philosophical theories (e.g., those of Dennett, Searle, or Dretske) or if one were discussing particular empirical work (say, that of Colby, Newell and Simon, Marr, or Grossberg).
So, without troublesome references to symbols and semantics, it seems to me that what CTM wishes to claim is the following:
Bowdlerized Computational Theory of Mind (BCTM)
(B1) The mind's cognitive aspects are functionally describable in the form of something like a machine table.
(B2) This functional description is such that
(a) attitudes are described by functions, and
(b) contents are associated with local machine states. Call these cognitive counters.
(B3) These cognitive counters are physically instantiable.
(B4) Intentional states are realized through relationships between the cognizer and cognitive counters. In particular, for every attitude A and every content C of an organism O, there is a functional relation R and a cognitive counter type T such that O takes attitude A[C] just in case O is in relation R to a tokening of T.
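To make the structure of (B1) through (B4) vivid, here is a toy rendering in code. Everything specific in it (the states, the input, the contents) is invented for illustration; the point is only to show attitudes described by functions while contents attach to local machine states:

```python
# Toy sketch of BCTM; all particular states, inputs, and contents are
# invented placeholders.

# (B2b) Contents are associated with local machine states: the
# "cognitive counters." Here a counter type is just a labeled state.
# (B1) Cognition is functionally describable by something like a
# machine table: a mapping from (total state, input) to total state.
machine_table = {
    (frozenset({"DOG-counter"}), "sees-leash"):
        frozenset({"DOG-counter", "WALK-counter"}),
}

# (B2a)/(B4): an attitude is described by a function; O takes attitude
# A[C] just in case O stands in functional relation R to a tokening of
# the counter type T associated with C. Here R is simply "a token of T
# occurs in the current total state."
def believes(total_state, counter):
    return counter in total_state

state = frozenset({"DOG-counter"})
print(believes(state, "DOG-counter"))    # True
state = machine_table[(state, "sees-leash")]
print(believes(state, "WALK-counter"))   # True
```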
BCTM may be regarded as a special form of machine functionalism. It is stronger than mere machine functionalism in several respects. Condition (B1) asserts that machine functionalism is applicable to minds. Condition (B2) goes beyond this to make special claims about how the attitude-content distinction will be cashed out in functional terms. Machine functionalism, in and of itself, does not make such a claim and indeed does not even ensure that the attitude-content distinction will be reflected in a psychological machine table. Nor does machine functionalism claim, as (B4) does, that things that are picked out by functional description will also play a role in determining content.
If we interpret computational psychology in the way suggested by BCTM, the notion of rule-governed symbol manipulation becomes more of a guiding metaphor for psychology than the literal sense of the theory. Cognitive counters are not "symbols" in the ordinary semiotic sense,
but machine-counters—specifically, they are the things that occupy the slots of machine-counters in the functional analysis of thought, as opposed to other functionally describable systems. On this view, the mind shares with computing machines the fact that the salient description of their causal regularities is math-functional in character, but differs in that what is described by the function table is not a set of entities with conventional semiotic interpretations but—well, something else whose true nature is not yet known. If the theory is right, we presently know cognitive counters and their MR-semantic properties only through the role they play in contributing to something we know more immediately: namely, intentional states and mental processes.
I should stress that I view this as a reconstruction of CTM and not as an attempt to guess at what its advocates had in mind. It seems clear that Fodor and others have generally assumed the univocity of the semantic vocabulary, and likewise assumed that there was a perfectly ordinary usage of terms like 'semantics' and 'meaning' that could be extended to mental representations. In light of the problems that have already been shown to exist for that assumption, I am now trying to see whether there is an alternative interpretation of computational psychology that can avoid the problems already raised. (I am trying to pull CTM's chestnuts out of the fire, if you will.) In the end, I think there are two very different questions here: one about the viability of computational psychology as an empirical research programme, and another about the distinctively philosophical claims CTM's advocates have made about explaining intentionality and vindicating intentional psychology. In the remainder of this chapter, I shall try to argue that BCTM does not allow the computationalist to make good on these philosophical claims. In the final section of the book I shall explore an alternative approach to computational psychology that liberates its empirical research agenda from unnecessary philosophical baggage.
8.3—
The Problem of Semantics
If successful, the analysis of semantic properties in chapters 4 through 6 has shown several important things about the task of explaining the intentionality of mental states. First, what we call "meaning" and "intentionality" with respect to mental states are not exactly the same properties we ascribe to symbols when we use those words. Second, the properties we ascribe to symbols are conceptually dependent upon those we ascribe to mental states. And hence, as shown in chapter 7, we cannot use semiotic-semantics to explain mental-semantics. Most articulations of CTM have seemed to assume, on the contrary, that the semantic vocabulary can be predicated univocally of mental states, overt symbols, and mental representations, and that the semantic properties of representations could be used to explain those of mental states through "property inheritance" because they are, after all, the very same properties and need only be passed up the explanatory chain.
In light of the previous chapters, this direct explanation of intentionality by way of "property inheritance" seems to be closed off. If the "semantic properties" of mental representations are semiotic-semantic properties, they cannot explain mental semantics. And if they are not semiotic-semantic properties, it remains to be seen what kind of properties they are supposed to be. However, it is possible that waiting in the wings there is a way to finesse this problem, just as we were able to finesse problems of syntax and symbolhood by way of the notion of a machine-counter. That is, perhaps the semantic vocabulary expresses some distinct kind of property when applied to mental representations, and this kind of property gives us what we need to explain the intentionality of mental states. Of course, we do not have a theory until we spell out what these properties are supposed to be. But we may in the meantime mark the fact that they are supposed to be distinct from mental-semantic properties and semiotic-semantic properties by calling them "MR-semantic" properties. (That is, the kind of properties expressed by the semantic vocabulary when applied to mental representations.)
Presumably, what is common to mental-, semiotic-, and MR-semantic properties is that in each case there is a relationship between the typing of the theory (i.e., types of intentional content, types of symbol, types of representation) and a set of objects or states of affairs. Indeed, presumably the mathematically reduced abstractions of the three sets of properties are in very close correspondence: since words are expressions of thoughts, word-to-world mappings will closely track thought-to-world mappings. And if there are indeed mental representations, presumably representation-to-world mappings will closely parallel thought-to-world mappings. (In the ideal case, they will be isomorphic. But it is likely that the relationship falls short of isomorphism due to factors such as two words expressing the same concept or one word ambiguously expressing multiple concepts.) As we have seen, this does not add up to a "common" notion of semantics, because the nature of the relations expressed by the mappings is different in each case. (For example, in the semiotic
case it is essentially conventional, while in the case of intentional states it is not.)
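A trivial example brings out the point about close but imperfect correspondence. In the sketch below (the vocabulary is invented for illustration), the word-to-concept mapping fails to be an isomorphism in both of the ways just mentioned:

```python
# Invented mini-lexicon: word-to-world mappings track thought-to-world
# mappings without being isomorphic to them.

word_to_concepts = {
    "dog":   {"DOG"},
    "hound": {"DOG"},                       # two words, one concept
    "bank":  {"RIVER-BANK", "MONEY-BANK"},  # one ambiguous word, two concepts
}

# Not injective: distinct words express the same concept.
print(word_to_concepts["dog"] == word_to_concepts["hound"])  # True
# Not single-valued: 'bank' expresses more than one concept.
print(len(word_to_concepts["bank"]))  # 2
```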
The problem for a computational-representational semantics is to articulate a theory of MR-semantics that can meet the following desiderata: (1) the MR-semantic typing of representations must correspond to their machine-counter typing; (2) the relation that establishes a mapping between representation types and their MR-meanings must be capable of explaining the presence of the mental-semantic properties of mental states; and (3) the mapping so established for representations must have a proper degree of correspondence to the semantics of mental states.
In this chapter I shall explore two possible ways of developing such a semantics for mental representations. First I shall examine the possibility of using Fodor's Causal Covariation Theory of Intentionality (CCTI) as a stipulative definition of the properties expressed by the semantic vocabulary when it is applied to representations. Later, I shall turn to the possibility that the semantic vocabulary, as applied to representations, is a true theoretical vocabulary, where the meanings of the terms are determined by the explanatory role they play in the theories in which they are introduced.
8.4—
A Stipulative Reconstruction of the Semantic Vocabulary
If, then, the semantic vocabulary is being used in some novel way when applied to mental representations, how is it being used? One reasonable hypothesis would be to suppose that it is being used to supply precisely the properties that Fodor ascribes to representations in his own theory of representational semantics—the so-called "causal covariation account." To repeat, I do not think that Fodor was in fact offering his semantic theory as a stipulative definition of the semantic vocabulary. But if the theory works, and the semantic vocabulary is in need of definition for mental representations, it seems a viable candidate. And if, as a stipulative definition, it is incapable of meeting the desiderata listed above, it will fail as a nondefinitional account as well, and so time spent critiquing it will not be ill spent.
Consider, then, what Fodor has to say about the nature of the "semantic properties" of mental representations. What Fodor provides by way of an "account of semantic properties for mental representations"
is what he calls a "causal theory of content." The motivation for this project Fodor explains as follows: "We would have largely solved the naturalization problem for propositional-attitude psychology if we were able to say, in nonintentional and nonsemantic idiom, what it is for a primitive symbol of Mentalese to have a certain interpretation in a certain context" (Fodor 1987: 98). This theory of "what it is for a primitive symbol of Mentalese to have a certain interpretation" has become progressively less vague in Fodor's work from 1981 to 1990, and Fodor describes the 1990 theory as providing an account of content having "the form of a physicalist, atomistic, and putatively sufficient condition for a predicate to express a property" (Fodor 1990: 52). The 1990 version of this account reads as follows:
I claim that "X" means X if:
1. 'Xs cause "X"s' is a law.
2. Some "X"s are actually caused by Xs.
3. For all Y ≠ X, if Ys qua Ys actually cause "X"s, then Ys causing "X"s is asymmetrically dependent on Xs causing "X"s. (ibid., 121)
It is clear from the context that this account is supposed to apply only to mental representations—that is, to be restricted to the cases where "X" indicates a mental representation—so we would seem to be on the right track in looking for an explication of 'means' as it is used of mental representations.
Let us, then, assume that this account of MR-semantic properties can serve as a stipulative definition of the semantic vocabulary as applied to mental representations. We may now substitute this account of MR-semantic properties into CTM's basic schema for explaining the intentionality of the mental, obtaining a Causal Covariation Theory of Intentionality (CCTI):
Causal Covariation Theory of Intentionality (CCTI)
Mental state M mental-means P because
(1) M involves a relationship to a mental representation MR of type R,
(2a) 'Ps cause Rs' is a law,
(2b) some Rs are actually caused by Ps, and
(2c) for all Q ≠ P, if Qs qua Qs actually cause Rs, then Qs causing Rs is asymmetrically dependent upon Ps causing Rs.
We shall now examine the prospects of CCTI. CCTI is primarily intended as an examination of the consequences of using causal covariation as a stipulative definition of the semantic vocabulary. But of course CCTI could serve as a statement of Fodor's account of mental semantics generally, whether clauses (2a) through (2c) are supplied by definition of semantic terms or merely provide necessary and sufficient conditions. The assessment that follows, therefore, is of interest as a critique of causal covariation accounts, whether they involve stipulative definition or not.
In what follows, I shall argue that this approach to saving CTM has several serious problems. First, even if CCTI provides a consistent theory that avoids the problems of interpretational semantics, it does not inherit much of the persuasive force originally marshaled for CTM, because much of that persuasive force turned upon the intuition that the same "semantic properties" could be attributed univocally to mental states, discursive symbols, and mental representations. With this assumption already undercut, it is incumbent upon CTM's advocates to make clear the connection between MR-semantics and mental-semantics in such a fashion that the former can account for the latter—that is, to show how causal covariation is even a potential explainer of mental-semantics. This leads to a more fundamental problem about the causal covariation account. What this account seems to attempt to provide is a demarcation account for meaning assignments, not an explanation of meaningfulness. That is, it seems to correlate particular mental-meanings (i.e., meaning-X as opposed to meaning-Y) with certain naturalistic conditions, on the assumption that there is some meaning there in the first place. What it does not do is explain why mental states are meaningful (rather than meaningless) in the first place, or how causal covariation is supposed to underwrite this fact. In this regard CCTI compares unfavorably to some other naturalistic accounts, but there is also reason to doubt that any naturalistic account could provide an adequate account of the meaningfulness of mental states. Finally, at best, CCTI would provide an account of the semantic primitives of mentalese, leaving the semantic values of complex expressions to be generated through compositional rules. But as we have seen in the last chapter, the only way we know of to provide syntactically based compositionality is through conventions. So even if CCTI succeeds in escaping the problems of conventionality at the level of semantic primitives, those problems will still reassert themselves as soon as one is concerned with expressions whose semantic properties are due to compositionality.
8.4.1—
What Is Gained and Lost in Causal Definition
Before making a direct frontal assault upon the Causal Covariation Theory of Intentionality, it will be useful first to become clear about what is gained and what is lost in adopting the strategy of defining semantic terminology for mental representations in causal terms. There seem to be three immediate benefits. First, we have clarified the semantic terminology to a point where we seem in little danger of running afoul of the ambiguities in the semantic vocabulary. Second, we are no longer in the embarrassing position of not being able to say what kinds of properties it is that are supposed to explain the intentionality of mental states. Third, we have done so in a fashion that manages to avoid all of the awful problems about conventions and intentions that plagued the semiotic-semantic account. If causal covariation is not free from the taint of the conventional, it is hard to imagine what would be.
On the other hand, it is important to see that a truly vast amount of the persuasive strength of the case for CTM is lost in the transition. The case for CTM, after all, traded in large measure upon the intuition that thoughts and symbols have some important things in common: namely, both are meaningful, both represent, both have semantic properties. This is a point to which Fodor repeatedly returns. To take a few sample quotes:
Propositional attitudes inherit their semantic properties from those of the mental representations that function as their objects. (Fodor 1981: 26)
Mental states like believing and desiring aren't . . . the only things that represent. The other obvious candidates are symbols . (Fodor 1987: xi)
Symbols and mental states both have representational content . And nothing else does that belongs to the causal order: not rocks, or worms or trees or spiral nebulae. (Fodor 1987: xi)
The reasoning that is supposed to follow from such claims seems quite clear: computational explanation in cognitive psychology makes it seem necessary to suppose that there are mental symbols over which the computations are performed. Perhaps these have semantic properties as well, and it is the semantic properties of the symbols that account for the semantic properties of the intentional states in which they are involved. That is, one is inclined to argue as follows:
(1) Mental states have semantic properties.
(2) Symbols have semantic properties.
∴ (3) There is a class of properties—semantic properties—shared by symbols and mental states.
so, (4) It seems reasonable to try to reduce the meaningfulness of mental states to that of the representations they involve.
Of course, in light of the distinctions made in chapters 4 and 5, the argument from (1) and (2) to (3) is exposed as a paralogism, since 'have semantic properties' must mean something different in the two contexts (mental- and semiotic-semantic properties, respectively). And without (3), there is much less reason to be inclined towards (4). It is one thing to claim
(A) Mental state M has property P because M involves MR, and MR has P.
It is quite another to claim
(B) Mental state M has property P because M involves MR, and MR has X, and X ≠ P.
(B) requires a kind of argumentation beyond what is required for (A), because (A) proceeds on the assumption that property P is in the picture to begin with, and just has to explain how M gets it. (B), on the other hand, has to do something more: it has to explain how P (in this case, mental-intentionality) comes into the picture at all.
As for the quotes cited above, their interpretation becomes quite problematic once they are read in light of the distinctions between different kinds of "semantic properties." If words like 'semantic', 'represent', and 'content' are defined in causal terms for mental representations, claims such as these are irrelevant at best. At worst they are logical howlers. To say, for example, that "mental states and (discursive) symbols both represent" is perilously misleading. As we have seen in chapter 4, there is no one property called "representing" that is shared by mental states and discursive symbols. Instead, 'represent', like other semantic terms, means different things when applied to symbols and to mental states. So the sentence, "mental states and symbols both represent" involves faulty parallelism that disguises a more basic conceptual error.
The same kind of problem occurs if we just define 'refers to' or 'means' in causal terms for mental representations. Suppose "mental representation MP refers to P" just means "mental representation MP was caused by P in fashion F." What, then, would we make of such assertions as "propositional attitudes inherit their semantic properties from those of the representations that serve as their objects"? This assertion, like the claim that mental states and symbols "both represent," is perilously misleading. For the claim implies that there is some set of properties called "semantic properties" that are ascribed both to mental states and to mental representations. If the "semantic properties" ascribed to mental representations are defined in causal terms, however, the semantic properties ascribed to mental states must be defined in causal terms as well, if they are to be the same properties. But surely this is not so. When we say that Jones is thinking about Lincoln, what we mean is surely not precisely that he stands in a particular causal relation to Lincoln. We certainly mean nothing of this kind when we say that Jones is thinking about unicorns or numbers. So if we define semantic terms applied to mental representations in causal terms, it is misleading to speak of the "inheritance" of semantic properties: such properties as might be conferred upon mental states by representations are not the same properties that are possessed by the mental representations themselves. And such arguments for CTM as depend upon a genuine inheritance of the same "semantic properties" turn out to be fallacious.
A similar problem arises for CTM's attempt to vindicate intentional psychology. The strategy for the vindication was to show, on the basis of the computer paradigm, that the postulation of mental representations could provide a way of coordinating the semantic properties of mental states with the causal roles they play in thought processes. Such an argument might be formulated as follows:
Argument V1
(1) Mental states are relations to mental representations.
(2) Mental representations have syntactic and semantic properties.
(3) The syntactic properties of mental representations determine their causal powers.
(4) All semantic distinctions between representations are preserved syntactically.
∴ (5) The semantic properties of representations are coordinated with causal powers (3,4).
(6) The semantic properties of mental states are inherited from the representations they involve.
∴ (7) The semantic properties of mental states are coordinated with causal powers (5,6).
Now consider just steps (5) through (7). If we were to interpret the expression 'semantic properties' univocally, we could recast (5) through (7) as follows:
Argument V2
(5′) There is a strict correspondence between a representation's semantic properties and its causal powers.
(6′) A mental state M has semantic property P if and only if it involves a representation MR that has semantic property P.
∴ (7′) There is a strict correspondence between a mental state's semantic properties and its causal powers.
On this construal we appear to have a reasonable and valid argument. But consider this second construal, which is forced upon us by the recognition of the homonymy of semantic terms:
Argument V3
(5*) There is a strict correspondence between a representation's MR-semantic properties and its causal powers.
(6*) A mental state M has mental-semantic property P if and only if it involves a representation MR that has MR-semantic property X.
∴ (7*) There is a strict correspondence between a mental state's mental-semantic properties and its causal powers.
The plausibility of the deduction to (7*) depends in large measure upon the plausibility of (6*). The plausibility of (6*), in turn, will depend upon what MR-semantic properties turn out to be. But whatever they may turn out to be, (6*) lacks some of the immediate prima facie appeal of (6) and (6′), since it depends upon a (contingent) correlation of different kinds of properties, whereas (6) and (6′) involve ascriptions of the same properties to two different objects. This kind of contingent correlation is itself in need of explanation.
The upshot of these observations is this: if the "semantic properties"
of mental representations are defined in causal terms, the proponent of CTM owes us something that he did not owe us on the assumption that the semantic properties of mental states were the very properties possessed by mental representations: namely, he owes us a plausible account of why having a representation MR with certain MR-semantic properties (say, certain causal connections with objects in the environment) should be a sufficient condition for having a mental state with certain mental-semantic properties (say, a belief about dogs). This is significant because the arguments given in favor of CTM seem to assume that the same kinds of "semantic properties" can be ascribed indifferently to symbols, mental representations, and mental states. But if one defines the semantic terminology that is applied to representations in causal terms, most of what Fodor says to commend CTM to the reader is patently fallacious.
In summary, then, we may say that defining MR-semantic properties in terms of causal covariations allows us to avoid the major pitfalls presented for earlier readings of CTM, but the case for CTM now seems much weaker than it once did. The reason for this is that originally the road from representations to mental states was a road from semantics to semantics, and the road from semantics to semantics seemed relatively short and straight. If the "semantic properties" of mental states and representations were the same properties, there would be no question but that the latter are the sort of things that could account for the presence of the former, but only a question about whether such "inheritance" indeed takes place. On the current interpretation, however, the road from representations to mental states is a road from causal covariation to mental-semantics. That road is surely much longer, and there is no small question about whether the roads shall meet at all. It may be that they are like Down East roads: "Ya can't get there from here!"
8.4.2—
Covariation and Mental-Semantics
The vital question, then, is whether causal covariation is the right sort of notion to provide an explanation of the semantic properties of mental states. I believe that it is not. But in order to see why it is not, it may prove useful to see what it is suited to doing and how that falls short of explaining mental-semantics. In order to do this, it will be helpful to make two sorts of distinctions. First, we may distinguish between two sorts of accounts: those that provide explanations of what it is to be an X, and those that merely provide criteria for the demarcation of X's from non-X's.
Second, we may distinguish accounts of meaning assignments (i.e., of the distribution of meanings) from accounts of meaningfulness. The former differentiate things that mean A from those that mean B, on the assumption that the items in question mean something; the latter explain why items mean something rather than nothing. I shall argue that CCTI is suited at best to providing a demarcation criterion for meaning assignments, whereas an account of mental-semantics requires something stronger: an explanation of meaningfulness.
8.4.2.1—
Explanation and Demarcation
To begin with, let us distinguish accounts that give an explanation of why something is an X from accounts that merely provide a criterion for the demarcation of X's from non-X's. Aristotle's characterization of humans as featherless bipeds is an attempt at a demarcation criterion. It happens to be a poor attempt, since apes, tyrannosaurs, and plucked chickens are also featherless bipeds. But even if humans were, in point of fact, the only featherless bipeds, the featherless-biped criterion would at most give us a litmus test for distinguishing humans from other species. If what we wanted was an explanation of what makes Plato a human being, the fact that he is a featherless biped is clearly a non-starter. The problem is not that demarcation criteria can be wildly contingent, for in fact they need not be—some demarcation criteria can be metaphysically necessary. Even demarcation criteria that are metaphysically necessary, however, can fail to be explanatory. For example, if you want to know what makes a figure a triangle, the answer had better be something like "the fact that it has three sides." But there are descriptions that distinguish triangles from everything else that do not provide this information: for example, "simplest two-dimensional polygon," "shape of the faces on a regular octahedron," and (worst of all) "Horst's favorite geometric example." (This last, of course, is not metaphysically necessary.) If you want to know what makes a figure a triangle, the fact that it has the same shape as one of the faces of an octahedron just will not do as an explanation, though it is necessary and sufficient.
There are relationships between demarcation criteria and explanations. Significantly, things that can serve as explanations are a proper subset of things that can serve as demarcation criteria. On the one hand, an account that explains what it is to be an X must also be able, at least in principle, to serve as a demarcation criterion for distinguishing X's from non-X's. On the other hand, the opposite is not true: we have already seen examples of demarcation criteria that lack explanatory power. A corollary of this is that one way of showing that something is not an explanation of what it is to be an X is to show that it does not even distinguish X's from non-X's.
8.4.2.2—
Meaning Assignment and Meaningfulness
Let us further distinguish between two aspects of accounting for a token T's meaning-X. On the one hand, one might want to account for why T means X as opposed to meaning something else, treating it as a background assumption that T can mean something. When we explain the role of particular morphemes in determining the meanings of polymorphemic words, for example, we take it as a given that words can mean something and confine ourselves to asking, say, how various sorts of affixes interact with the meanings of root morphemes. This provides an account of why words have the particular meanings they have without touching upon the question of how language gets to be meaningful in the first place. But one might ask this second question as well, and it is here that, say, Ruth Millikan's account of truth and meaning for languages is at odds with accounts based on convention or speaker meaning. Such accounts are accounts of meaningfulness rather than of meaning assignment.[1] Presumably one may offer an account of meaning assignments without thereby offering an account of meaningfulness, and vice versa.
8.4.2.3—
Why We Need an Explanation of Meaningfulness
Now what kind of "account of meaning" is required for mental-semantic properties of mental states? Well, if one wants to know how it is that things in the mind get to be about things in the world, one presumably wants to know both how thoughts get to be about particular things and how they get to be about anything at all —that is, one wants accounts of meaning assignment and of meaningfulness. Now suppose further that we are interested (as CTM's most notable advocates clearly are interested) in a naturalistic account—one that explains mental-semantic properties on the basis of some naturalistic properties ("N-properties"). Here the problem of meaning assignment becomes one of associating particular mental-semantic properties (e.g., meaning "horse") with particular N-properties (e.g., causal covariations with horses). And if all we are interested in is a naturalistic demarcation criterion for particular mental-meanings, all the "association" need amount to is strict correlation—some set of N-properties that all and only horse-thoughts (as opposed to cow-thoughts, unicorn-thoughts, etc.) possess. But if we are interested
not merely in a demarcation criterion, but in an explanation of what it is to mental-mean "horse," our naturalistic account of meaning assignments needs to be augmented with a naturalistic account of meaningfulness as well. Unless N-properties are sufficient to explain mental-meaningfulness, particular N-properties cannot explain particular mental-meanings either.
If CCTI is to provide an adequate account of intentionality and mental-semantics, then, it must provide an explanation of mental-meaningfulness. I shall now argue, however, that CCTI cannot plausibly be supposed to do this. All it can plausibly be supposed to do is provide a demarcation criterion for meaning-assignments. I shall first argue that CCTI attempts to provide a demarcation criterion for meaning assignments, and then argue that it fails to do more than this.
8.4.3—
CCTI As a Demarcation Criterion for Meaning Assignments
There are three main reasons to see CCTI as a demarcation criterion for meaning assignments. First, there is a strong tendency in the literature to see the task of "fixing meanings of representations" as a matter of imposing a suitable interpretation scheme—namely, one that assigns the right meanings. Second, CCTI seems naturally suited to providing a demarcation criterion of the desired sort. Third, the bulk of the discussion of the causal covariation version of CTM has centered on CCTI's success or failure at providing such a demarcation criterion.
8.4.3.1—
Demarcation, Interpretation, and Meaning Fixation
A reader of the cognitive science literature will have noticed that there is a strong tendency to view the problem of accounting for the content of representations as one of imposing a coherent representational scheme. Pylyshyn writes, for example, that the computational approach to the mind involves the assumption that
there is a natural and reasonably well-defined domain of questions that can be answered solely by examining (1) a canonical description of an algorithm (or a program in some suitable language—where the latter remains to be specified), and (2) a system of formal symbols (data structures, expressions), together with what Haugeland (1978) calls a "regular scheme of interpretation" for interpreting these symbols as expressing the representational content of mental states (i.e., as expressing what the beliefs, goals, thoughts, and the like are about, or what they represent). . . . Notice . . . that we have not said anything about the scheme for interpreting the symbols—for example,
whether there is any indeterminacy in the choice of such a scheme or whether it can be uniquely constrained by empirical considerations (such as those arising from the necessity of causally relating representations to the environment through transducers). (Pylyshyn 1980: 116, emphasis added)
Notice two things about this quote. First, semantic properties are discussed in terms of a "scheme of interpretation." Second, the question about this scheme that seems foremost in Pylyshyn's mind is whether the meaning assignments of a given scheme can be constrained so as to be unique. Similar issues arise in Haugeland (1981: intro.; 1985: chap. 3). It seems clear that these writers view the issue of finding a semantics for mental representations as one of finding a way to constrain the specification of an interpretation scheme for representations so that it is unique and so that it gets the causal relationships right—that is, their concern is for providing an adequate demarcation criterion for meaning assignments.
8.4.3.2—
The Suitability of CCTI for Demarcation
CCTI also seems well suited to providing a demarcation criterion for meaning assignments. (Or, to be more precise, it seems suited to providing a candidate for such a criterion, since there is one question about what it sets out to do and another about whether it accomplishes it.) It is quite easy to see that, whatever else CCTI might be used to do, it at very least purports to be a demarcation criterion for meaning assignments. For it is set up to give sufficient conditions, in naturalistic terms, for particular mental-meanings: the mental states that mental-mean P are the ones that have mental representations that are in a relation of causal covariation with the class of objects or states of affairs designated by P. This account may or may not be true, but if it is true, it provides a way of separating mental states that mean P from those that mean Q: the former involve representations characteristically caused by P's and the latter involve representations that are characteristically caused by Q's.
8.4.3.3—
The Problem of Misrepresentation
Now there has been a substantial amount of discussion of CCTI in the literature, assessing the merits of causal covariation as a way of explaining mental-semantics. What this discussion seems to center on, however, are the prospects for causal covariation as a way of providing a demarcation criterion for meaning assignments. This provides some evidence that this is the role the theory is commonly regarded as playing.
The focus of this discussion has been upon CCTI's ability to account for the possibility of misrepresentation. According to CCTI, those thoughts are about P's that involve representations of a type caused by P's. But it is surely possible to have thoughts about P's that are not caused by P's and, worse yet, to have thoughts that are about P's that are caused by something other than P's—Q's, for example. So, for example, someone visiting Australia might see a dingo and say to himself, "Oh, there's a doggie out back in the outback!" (Dingos are not dogs, etymologically speaking.) This person's thought has the content "dog," but is caused by a nondog, a dingo. And it is even possible for this error to be systematic: someone might always mistake dingos for dogs, wrens for sparrows, gnus for cattle, and so on. The problem is that, according to CCTI, thoughts are supposed to be about whatever it is that is the characteristic cause of their representations. But if dingos systematically cause a tokening of the same kind of representation that dogs cause, it would seem to follow that what this kind of representation MR-means is the disjunctive class dog-or-dingo. This has several unwelcome results. First, my dog-thoughts turn out to mean not "dog," but "dog or dingo." (And this quite unbeknownst to me and contrary to what I have assumed all along.) Second, it would seem to be impossible to misrepresent a Q as a P, since the fact that Q's cause the same representations as P's under certain conditions will occasion a change in the "meaning" to be assigned to such representations. (And it just seems wrong to say, for example, that someone who mistakes holograms of unicorns for real unicorns has thoughts that mean "hologram" and not "unicorn.") There are related problems arising from the fact that thoughts about dogs can be caused by things other than distal stimuli entirely—for example, I can think about dogs in dreams or in free fancy. It is hard to see just how a strict causal theory should treat these cases.
This problem, which Fodor likes to call the "disjunction problem," was apparently a significant incentive in his development of the causal covariation account of intentionality from the form in which he articulated it in 1987 to the form it took in 1990. What is new in the more recent account is the addition of a notion of "asymmetric dependence," which is introduced to handle the disjunction problem. Recall the form of the account in Fodor (1990), which we have used here to develop CCTI:
I claim that "X" means X if:
1. 'Xs cause "X"s' is a law.
2. Some "X"s are actually caused by Xs.
3. For all Y ≠ X, if Ys qua Ys actually cause "X"s, then Ys causing "X"s is asymmetrically dependent on Xs causing "X"s. (Fodor 1990: 121)
The first and second clauses are already implicit in the older formulation. The notion of asymmetric dependence appears in clause (3). The idea is as follows: a thought involving a given representation R can mean "dog" and not "dingo" or "dog-or-dingo," even if it is regularly caused by both dogs and dingos, if it is the case that the causal connection between dingos and R-tokenings is asymmetrically dependent upon the causal connection between dogs and R-tokenings. And the nature of this "dependence" is cashed out in purely modal terms: what it means is that if dogs did not cause R-tokenings, dingos would not either, but not the reverse. (In other words, dingos might fail to cause R-tokenings without dogs failing to do so as well.)
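Because the dependence is cashed out in purely modal terms, it can be checked mechanically against a toy space of possible worlds. The sketch below is a first approximation only: it treats a "world" as a bare set of causal links and ignores any ordering of worlds by nearness, which a serious counterfactual semantics would require:

```python
# Toy possible-worlds check of asymmetric dependence. Each "world" is
# just the set of causal links that hold there; nearness of worlds is
# ignored (a simplifying assumption of this sketch).

worlds = [
    {("dog", "R"), ("dingo", "R")},  # the actual pattern of causes
    {("dog", "R")},                  # dingos fail; dogs still cause R
    set(),                           # dogs fail; dingos fail with them
]

def asym_depends(y, x, worlds, effect="R"):
    # y's causing R depends on x's causing R: wherever x fails, y fails...
    forward = all((y, effect) not in w
                  for w in worlds if (x, effect) not in w)
    # ...but not conversely: in some world y fails while x succeeds.
    converse_fails = any((x, effect) in w
                         for w in worlds if (y, effect) not in w)
    return forward and converse_fails

print(asym_depends("dingo", "dog", worlds))  # True
print(asym_depends("dog", "dingo", worlds))  # False
```

On this toy model the representation R comes out meaning "dog" rather than "dog-or-dingo," which is exactly the verdict the asymmetric-dependence clause is designed to secure.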
Now I have no interest in contributing here to the already good-sized literature debating the success or failure of this move. What I wish to do is merely to point to what it is a debate about. And what it is a debate about is whether CCTI provides meaning assignments in the ways we should wish a semantic theory for the mind to do so. It is about such questions as whether such a theory would yield counterintuitive meaning assignments (such as "dog-or-dingo") and whether it can accommodate such patent facts as misidentification, in which one has a thought the content of which does not match the thing one is trying to identify. It may be that the fancy footwork provided by the notion of asymmetric dependence can finesse a way through these problems, but it is these problems that it seems intended to finesse.
8.4.4—
What CCTI Does Not Do
What CCTI notably does not seem to do is provide more than a demarcation account of meaning assignments. It is not clear that it is even an attempt to provide an account of meaningfulness for mental states; and if it is so intended, the account it provides is woefully inadequate. I shall attempt to argue this in two different ways. First, I shall argue that CCTI does not provide so much as a demarcation criterion for meaningfulness (as opposed to meaning assignments), and hence cannot provide an explanation of meaningfulness, since an account that explains will also provide a demarcation criterion. Second, I shall argue that CCTI lacks the right sort of explanatory character to explain the intentionality of the mental.
8.4.4.1—
Failure to Demarcate the Meaningful
While causal covariation may or may not provide a demarcation criterion for meaning assignments, it does not provide a demarcation criterion for meaningfulness—that is, for separating things that mean something from those that mean nothing. For the notion of causal covariation is cashed out in terms of regular causation, and regular causation is a feature not just of mental states and processes, but of objects and events generally. The overall project here is to explain the mental-semantic properties of mental states in terms of some set N of naturalistic properties, and the proposal at hand is that N-properties are causal covariation relations. But this set of properties has a domain far broader than that of mental representations: any number of objects and events not implicated in thoughts have characteristic causes, and hence have N-properties. Cow-thoughts are not the only things reliably caused by cows: so are mooing noises, stampedes, and cowpies, to name a few. The CCTI cannot be a viable demarcation criterion of meaningfulness, because it does not distinguish thoughts about cows from stampedes and cowpies. And this is surely a demarcation we should expect a theory that accounted for meaningfulness to entail. So either we must impute mental-semantic properties to all kinds of objects and events, endowing much of nature with content, or we must allow that something more than N-properties is required to explain mental-semantics.
The obvious strategy for sidestepping this objection is to point out that, while representations may share N-properties with many other sorts of objects, it is only mental representations that take part in the relations characteristic of intentional states. There may appear to be a threat of endowing the world with content—namely, with MR-semantic properties. But remember that the word 'semantic' in "MR-semantic" is not doing much work, since we have defined the expression 'MR-semantic properties' in terms of causal covariation. Thus in allowing most of nature to have MR-semantic properties, we have not endowed it with anything counterintuitive, even though the word 'semantic' might suggest as much. Moreover, CCTI, as we have formulated it, involves more than causal covariation: it involves an explicit requirement that the items that have MR-semantic properties also be part of an intentional state. It is this additional fact that differentiates them from objects in nature generally. To use some terminology that has not yet been used here, we might say that indication or natural meaning plays a role in the production of mental-meaning only when the indicator is present in an organism in one of the functional relations characteristic of intentional attitudes. Or, to put it slightly differently,
the domain over which the CCTI is quantified is not all objects, but all objects that are representations involved in intentional states.
There is something appealing about this strategy, but it is important to note that it violates one of the fundamental canons of CTM: namely, that the semantic properties of mental states be "inherited" from the "semantic properties" of representations. According to the formulation in the previous paragraph, however, this is not the case: mental-semantic properties are not explicable solely in terms of MR-semantic properties of representations, but in terms of MR-semantic properties of representations plus something else . Worse yet, this "something else" seems to consist precisely in the fact that the representations are elements of an intentional state! But if we must allude to the fact that representations are part of an intentional state to make CCTI proof against the semantification of nature, we have failed to provide a naturalistic explanation of mental-meaning, since part of our account still presumes the intentional rather than explaining it. It is, of course, possible to begin by assuming intentionality, and then asking the question of what kinds of natural properties are involved in the realization of intentional states; and if we do this, we need not worry about the fact that part of what differentiates mental representations from other things that participate in causal covariation is that they also play a role in intentional states. But if we do this, we are no longer seeking an account that provides supervenience or explanatory insight. And this, it would seem, is less than CTM's advocates generally desire by way of an "account of intentionality" (even if it is, in my view, a far more sensible strategy).
The upshot of this is that CCTI does not succeed in providing a criterion for the demarcation of the meaningful from the meaningless. It is not really clear that it was intended to provide such a criterion, but it fails to do so regardless. It follows from this a fortiori that it does not provide an explanation of meaningfulness, since an explanation would also provide a demarcation criterion.
8.4.4.2—
Why CCTI Does Not Explain Meaningfulness
It is also possible to tackle the issue of the explanation of meaningfulness by way of a frontal assault. And it seems prudent to do this, since someone might be inclined to try to rescue CCTI as a potential demarcation criterion for meaningfulness by way of some clever patchwork, much as Fodor has tried to rescue it as a criterion for meaning assignment by way of the notion of asymmetric dependence. To do so, however, would be to miss a much more serious point. The deep problem with CCTI is not that
I have some clever counterexamples that it has failed to catch in its net, and that might be brought into line with the insertion of an additional clause or two. The deep problem, rather, is that causal covariation is just not suited to explaining why some X is capable of meaning something rather than nothing. Causality is just too bland a notion for that task, and fancy patchwork would only serve to reveal this problem rather than to remedy it.
Now the way I should like to be able to proceed here would be to provide a really tight and compelling analysis of explanation and then give a knock-down argument to the effect that CCTI does not fit that analysis if the explanandum is meaningfulness. Explanation, however, is a notion that is notoriously difficult to analyze, and I shall have to content myself with a slightly more roundabout course for getting to the same conclusion: I shall attempt to establish one of the crucial "marks" of successful explanations, and then attempt to argue that the account of intentionality offered by CTM lacks this mark.
One characteristic of successful explanation is the kind of reaction it produces: the "Aha!" reaction that comes with new insight. Suppose I have some familiarity with some phenomenon P, with a set S of notable features. Now suppose that I try to explain P by means of an explanation E, cast in terms of some set of entities and relations X. Now E succeeds as an explanation to the extent that understanding X gives me insight into S—that is, to the extent that upon understanding X I become inclined to say, "Ah, now I see why things in S are as they are." Indeed, in the ideal case, understanding of X should be sufficient for me to infer S, even if I have no prior knowledge of S. Someone with an adequate knowledge of the behavior of physical particles, for example, would be able to derive the notion of "valence" and the laws of thermodynamics, and hence particle theories provide first-rate explanations for these other phenomena. Of course, in practice the process of explanation progresses in the other direction, but an ideal grasp of the explaining phenomena could be sufficient to allow for the derivation of the explained phenomena. This idea that an ideal explanation should allow the derivation of one phenomenon from another (e.g., a more complex one from a simpler one) is part and parcel of the Galilean method of resolution and composition that has informed much of modern science and modern philosophy of science, and is found notably in recent philosophy of science in both reductionist and supervenience accounts.
8.4.4.3—
Instantiation and Realization
I think that the weakest sort of explanation meeting this strong requirement is what Robert Cummins (1983) calls an "instantiation analysis."
(There are stronger sorts of explanation meeting it as well, of course, such as reductions.) Cummins proposes the notion of an "instantiation analysis" as a way of understanding theories that identify instantiations of a property P in a system S by specifying organizations of components of S that would count as instantiations. An instantiation analysis of a property P in a system S has the following form:
Anything having components C1 . . . Cn organized in manner O—i.e., having analysis [C1 . . . Cn, O]—has property P.
Instantiation analyses are distinguished from reductions (ibid., 22-26) by the fact that a single property can have multiple instantiations in different systems, whereas the reduction of a property requires a unique specification of conditions under which it is present. But the instantiating property is intended to explain the presence of the instantiated property. Indeed, Cummins writes that one should be able to derive a proposition of the form (6i) from a description of the properties of the components of the system, and that when we can do this we can "understand how P is instantiated in S" (ibid., 18, emphasis added). That is, from a specification of the properties of the components of the system in the form
(6a) The properties of C1 . . . Cn are <whatever>, respectively,
we should be able to derive
(6i) Anything having components C1 . . . Cn organized in manner O—i.e., having analysis [C1 . . . Cn , O]—has property P.
Thus, with an instantiation analysis, supplying a description of the interrelations of the components of a system S should be enough to show that a property P is instantiated in S , because one can derive the conclusion that S has P from a statement such as (6i), and one can, in turn, derive (6i) just from a description of the components of S —that is, from a statement such as (6a). And since one can derive the conclusion that P is instantiated in S in this way, providing such an analysis should be sufficient to allay doubts that P can be instantiated in S: given a proper description of the components of S , one can, quite simply, infer the instantiation of P in S .
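The derivational character of an instantiation analysis can be made concrete with a minimal sketch of my own (not Cummins's), using a half-adder circuit as the system S. The gate descriptions play the role of (6a), and the system-level input-output property is derived by exhausting them; whether that property counts as "counting" or "addition" is a further, interpretive question, taken up in example (3) below.

```python
# A minimal sketch (mine, not Cummins's) of what an instantiation analysis
# licenses: deriving a system-level property P from descriptions of the
# components C1 . . . Cn and their organization O alone.

def xor_gate(a: int, b: int) -> int:
    # Component C1: its behavior is fully specified by this truth function.
    return a ^ b

def and_gate(a: int, b: int) -> int:
    # Component C2: likewise fully specified.
    return a & b

def half_adder(a: int, b: int):
    # Organization O: how the two components are wired together.
    return xor_gate(a, b), and_gate(a, b)

# Property P: for every input pair, the two outputs jointly track the
# number of 1s among the inputs. P is derivable by exhausting the
# component behaviors; no appeal beyond (6a) and O is needed.
for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        assert 2 * c + s == a + b
# Whether this functional regularity counts as "addition" is, of course,
# a further question about interpretation (see example (3) below).
```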
We may also distinguish the notion of an instantiation analysis from that of a weaker sort of account, which I shall call a realization account . A realization account provides a specification of how a property P is realized in a system S through the satisfaction of some set of conditions C1 , . . . , Cn —but without any implication that the satisfaction of C1 , . . . , Cn provides a metaphysically sufficient condition for the presence of P . I shall give several examples:
(1) There are individual objects that have a particular status, such as the Victoria Crown kept in the Tower of London or the Mona Lisa. One could, in principle, give a complete physical description of the matter through which the Mona Lisa is realized. But meeting that description does not provide a sufficient condition for being the Mona Lisa. Additional objects meeting that description would not be additional Mona Lisas, but perfect forgeries. Likewise, there are object-kinds such as "dollar bill" that must be realized through objects with a particular physical description. But once again, meeting that description alone does not make something a genuine dollar bill. If you or I make one, it is a forgery. Dollar bills are realized through particular material configurations, but no instantiation analysis of dollar bills is possible.
(2) Some kinds of human attributes are realized through a person's behavior without the behavior itself providing a sufficient criterion for the presence of the attribute. For example, Jones and Smith may both give a substantial portion of their resources to persons in need, yet in a very different spirit. It may be that Jones does so because he is generous, while Smith does so only because he believes that it is the sole way of saving himself from the flames of hell. Jones's behavior is a realization of generosity, while Smith's is not, even if the behaviors themselves are indistinguishable.
(3) We have seen that there are certain senses in which a computer may be said to perform such operations as adding two numbers. Such operations may be said to be realized through the processes that take place in the computer's components. But a specification of the processes that take place in the computer's components does not provide a sufficient condition for the computer's overall behavior counting as addition, because it only counts as addition by virtue of meaning-bestowing intentions or conventions of designers, programmers, or users, and these are not mentioned in specifications of the interactions of the components through which the adding process is realized in the machine.
Now there is an important methodological and theoretical difference
between instantiation analyses and realization accounts. Realization accounts proceed on the assumption that one may sensibly talk about the property P being realized in some system S . They do nothing, and can do nothing, to show that the organization of components of S would result in the presence of P . Indeed, it need not result in it—a particular set of behaviors might be a realization of generosity or a realization of a fear of perdition, and a certain configuration of matter only counts as the Victoria Crown or a dollar bill in the context of particular institutional facts and historical acts. Realization accounts do not require even supervenience.
As a consequence, a realization account could not do anything to allay doubts about P 's being susceptible to realization in S: it proceeds on the assumption that P can be realized in S , and hence cannot justify that assumption. In the case of instantiation analysis, by contrast, one can infer the conditions for the ascription of P from a description of the components of S . As a result, providing an instantiation analysis of P in S also serves to vindicate the claim that P can be instantiated in a system like S . It vindicates the claim because it shows that it can be so. A realization account, on the other hand, does not in any comparable sense show that a property P can be instantiated in a system S . If someone is inclined to doubt that Jones is capable of generosity, for example, pointing to Jones's sizable donations to various charities will not prove the doubt to be mistaken. The donations might, of course, be realizations of generosity in Jones, but it might alternatively be the case that Jones really is incapable of generosity, and is merely giving of his wealth because he is trying to buy his way into heaven. Showing how a property is realized in a system gives us insight into the property and the system in which it is realized, but the resulting description cannot be used to demonstrate that the property is realized in the system or even that it can be .
8.4.4.4—
Instantiation and the Explanation of Meaningfulness
Now I think it should be clear that in order to explain meaningfulness in naturalistic terms, it would be necessary to provide something on the order of an instantiation analysis for meaningfulness—that is, to provide an account such that an adequate understanding of the explaining properties would be sufficient to ground inferential knowledge of the properties explained as well. It also seems clear that, as an explaining property, causal covariation does not come within a country mile of meeting this condition. Causal covariation might very well provide what is needed for
seeing why some thoughts are about one thing and other thoughts are about something else. (Then again it might not—I have no interest in taking sides here.) What it does not do is provide understanding of why causal regularities might contribute to meanings in the case of mental states while failing to do so in all of the other cases of causal covariation occurring in nature . And it is precisely here that the problem of meaningfulness lies.
Nor will any minor patchwork help in the slightest. Asymmetric dependence, for example, is of no assistance here. It can, at best, explain why my thought does not mean "dingo" or "dog-or-dingo." Why my thought means "dog"—or, more to the point, why it means something while other things caused by dogs do not (let the reader's imagination run wild)—is in no wise clarified by the notion of causal covariation.
Robert Cummins has suggested to me an alternative way of making this point: theoretical identifications, such as the identification of heat with a kind of motion, are of interest only insofar as they help us to understand something about the phenomena that are being explained. Descartes (Le Monde, chap. 2), for example, rejects the Scholastic view that "fire" or "heat" names a kind of substance in favor of the view that fire involves a kind of change of state in the matter of the combustible material, and that heat consists in the increased level of agitation of the matter. Other theorists were impressed by such factors as the ability to convert mechanical force into heat (as when a nail gets hot when it is driven by a hammer) and back again (as in the case of a steam engine). Viewing heat in terms of the motion of matter (and ultimately in terms of kinetic energy) allows us to understand why iron glows when heated and why nails get hot when pounded with a hammer. Now if CCTI is to be of interest as an explanation of intentionality, one would at the very least expect there to be something about intentional states that we are able to understand better once we view them through the lens provided by the theory. But in fact there seems to be nothing of the sort. There was perhaps once hope of such a result when causal theorists were more inclined to identify content with information, and hence to view the causal chains involved in their accounts as being chains of information transmission. But the incompatibility of strict information accounts with misrepresentation has led causal theories such as CCTI to abandon this identification. Information at least looked like an intuitively plausible candidate for explaining "aboutness" in a way that causation does not. If there is anything about intentional states that is explained by CCTI, its nature needs to be more clearly shown. In short, it does not
seem that CCTI explains the nature of intentionality; and indeed, it is not clear that there is anything of interest about intentionality that it does explain.
In summary, then, CCTI seems at best to supply a demarcation criterion for meaning assignments; it provides neither an explanation of those assignments nor any sort of account of meaningfulness (see fig. 10).
8.4.5—
Some Telling Comparisons
The issue might be put into further perspective by contrasting the explanatory power of CCTI with that of some other "accounts of intentionality." There are a number of writers who address the issue of intentionality, either in general or in specific contexts such as visual perception, whose accounts seem to me at least to provide a certain degree of explanatory insight that CCTI fails to provide. The accounts that come most quickly to mind for me in this regard are Ruth Millikan's (1984) explanations of features of mind and language in terms of reproductively established categories with a selectional history, Kenneth Sayre's (1986) and Fred Dretske's (1981, 1988) information-theoretic accounts of intentionality in perception, and David Marr's (1982) account of vision. Each of these accounts is in some sense an attempt to reduce some kind of intentionality to some set of states and processes and relationships that can be specified naturalistically. (Or, if information is not a natural but a formal category, each tries to give a nonintentional specification of intentionality.)[2] And in each of their accounts causality
plays some explanatory role (in contrast, for example, with Searle's [1983] account, which is largely an ontologically neutral analysis of intentionality). But in each of these accounts, causality fits into the picture only within the framework of a much richer story about the mechanisms through which perception and cognition are accomplished.
Now each of these accounts is extremely complex and strongly resists presentation by way of a thumbnail sketch. I shall thus assume that the reader may refer back to the original sources for any details beyond the following brief sketches. Sayre (1986) tells a story of how information (in the technical sense of Shannon and Weaver [1949]) is conveyed, in a well-defined series of stages, from an object perceived to a stage of cognitive processing that might be rich enough to merit the name "intentionality." The account is an attempt to build "information," in the semantically pregnant sense of the term, out of "information" in the technical sense of "reduced uncertainty" or "negentropy," and assumptions about the functions of perceptual systems as describable as processors of information in the technical sense. Dretske employs a somewhat looser sense of "information" to similar ends. Both have stories about what it is for a thought to be about an object, stories that involve answers to questions about, for example, fidelity of perception and about what it is that connects object to intentional state and is common to both.[3] Millikan's account of belief also makes use of causal connections between the intentional state and its object, but these are embedded in a larger story about the function of belief and how it has been selected for within our species. To understand intentional states, on Millikan's view, is to understand a relationship between an organism and its environment that is the product of a history of adaptation and selection within the species. Marr presents an elaborate and detailed account of how the mind transforms sensory input into a three-dimensional visual representation through the application of a series of computational algorithms involving several distinct levels of representation of visual information.
Now these accounts do several things, in varying measures, that could contribute something towards legitimate insight into the phenomena they set themselves to discussing. (Of course, what they provide merits the description of insight only insofar as it turns out to be correct in the long run, but at least these accounts, if correct, yield new insights.) First, they subsume the phenomena to be explained (e.g., intentionality) under more general categories, and thereby provide a characterization, in nonintentional terms, of what kind of phenomenon it is. Millikan uses the notions of a
reproductively established kind and selection history to do this for intentionality generally. Sayre treats perception and perceptual intentionality as a very rapid kind of adaptation to environmental features (much as learning and evolution are much slower sorts of adaptation), further characterized by a state of high mutual information. Second, these accounts give some insight into what kinds of mechanisms are necessary to the realization of particular kinds of mental states, whether the formal properties of these mechanisms be characterized in terms of algorithms from computer science (Marr) or in terms of the Mathematical Theory of Communication (Sayre). There is, to be sure, a purely empirical component in this latter enterprise, but there is also a component that one might describe as "transcendental." Talk of things such as intentionality of perception is primarily motivated by our own case, and it therefore makes sense to ask what must be true of creatures who perceive as we do, much as it made sense for Kant to ask what must be true of beings whose only contact with an external world is through sensuous intuitions. Insofar as we take the phenomena going on in our own mental lives as given and try to provide an account of them, we gain substantial insight from accounts that succeed in telling us what sorts of processes must go on for such phenomena to take place.[4]
Now I do not think that any of these accounts goes so far as to provide an instantiation analysis for intentionality or any particular variety thereof. I shall present my reasons for this conclusion in the next chapter. There are, however, ways of providing more or less insight—and hence of coming closer to providing an adequate explanation—short of an instantiation analysis. My intent here has been to indicate that, in comparison with these other accounts, CCTI fares poorly in explanatory merit. For while the accounts offered by Millikan, Sayre, or Marr may not provide an instantiation analysis for intentionality, they do (if successful) provide at least the two kinds of insight already mentioned. If, for example, the things Millikan says are essentially correct, and I take the time to master her theory, I will have gained substantial insight into the nature of intentionality. As far as I can see, the same cannot be said for causal covariation accounts. It may well be that an adequate account of intentionality would have to involve a causal component, but when I entertain this proposition, I do not have a sense that any fundamental secrets about intentionality have thereby been revealed, or that I have achieved a grasp of even one principal aspect of the nature of intentionality. My own sense is that, if it is a fact about intentional states that they (characteristically) involve representations standing
in a relationship of causal covariation with the intentional objects of those states, this fact stands with respect to intentionality in a relationship analogous to that in which being the shape of a face of an octahedron stands to triangularity, or perhaps that in which being a featherless biped stands to being human (that is, if we are talking about intentional states generally, and not about specific kinds of intentional states, such as perceptual judgments, in which causal connections do seem to be essential). Causal covariation might provide some kind of demarcation criterion, but it seems to me that it provides no insight into meaningfulness, and indeed can be invoked only with the prior assumption of meaningfulness. It does not provide an explanation of mental-meaning or intentionality. (I have grave doubts about causal covariation even as a demarcation criterion for meaning assignments. These will be a special case of the arguments against "strong naturalization" in the next chapter.)[5]
8.4.6—
The Tension between Generality and Explanatory Force
Now the consideration of accounts such as those offered by Millikan, Sayre, Dretske, and Marr brings up an additional issue worth addressing. On the basis of the sample presented by these accounts, it would seem that accounts of intentionality become more plausible as explanations of what it is to be about something or to mean something as they become more detailed in their descriptions of how a system is related to its environment. But as they become more detailed, they become correspondingly more specific and less general . This has the consequence that as they become more explanatory, they stray further from being general accounts of intentionality, and look more like accounts of, say, the realization of intentionality in the visual perceptual apparatus of human beings . What would seem to be required for a general account of intentionality or mental-semantics, however, would be a characterization that applied equally well to different kinds of cognizers (human, Martian, angelic, silicon-based) and that was indifferent to the intentional modality (perception, judgment, will, etc.). This kind of generality, moreover, is absolutely essential if we want to view cognition as computation over meaningful representations of the sort that Fodor postulates, because the MR-semantic properties of the representations must be independent of what kind of propositional attitude they are involved in. (Indeed, even if one is not committed to computationalism, this would
seem to be implicit in the familiar attitude-content analysis of intentional states.)
To take an illustrative example, consider the account of the intentionality of visual perception in Sayre (1986). Sayre's account is compelling insofar as it makes a case for how some features of perceptual intentionality could be accounted for by viewing certain environmental conditions and features of the perceptual apparatus in information-theoretic terms. While Sayre's account does not supply logically sufficient conditions for getting semantics out of "information in the technical sense," it is a compelling attempt to show how the realization of perceptual intentionality is accomplished. But the details that make Sayre's account compelling also render it too local to be a general account of intentionality. First, Sayre's account is concerned with mechanisms involved in perception, and hence is oriented towards successful cases of perception and towards transparent construals of ascriptions of intentionality. Familiar philosophical problem cases such as brains in vats and Cartesian demons lie far afield of Sayre's paradigm cases, and it is not clear how his model could address the problems they present for giving an account of intentionality that accommodates intuitions about opaque construals of intentional verbs. Second, Sayre's account of perceptual intentionality treats the intentionality involved in perception as directed towards an object rather than a proposition or proposition-like psychological state. It is quite possible that perception differs from other intentional modalities in this regard, however, and so the extension of Sayre's account to higher cognitive functions may well require a significantly different sort of story from his account of perceptual intentionality. Third, while Sayre's account is sufficiently abstract to avoid being specific to a species, it does seem to be based upon a construal of the abstract nature of the processes that beings such as ourselves undergo in perception. It is conceivable that other beings might reach a similar goal (perceptual intentionality) by a different path, one not describable by Sayre's story.
Millikan's story about intentionality has features that make it arguably even more local: to explain intentionality you have to tell a story about adaptive role and selection history. And selection history is dependent upon lineage. Indeed, according to Millikan, if a being were suddenly to emerge into existence that was identical with one of us in structure, in input-output conditions, and in subjective experiential states, this being would nonetheless have no beliefs or desires, because, according to Millikan, what it is to be a belief or a desire involves being the product of a certain kind of selection history. This would seem to have the
consequence that we would have to tell separate stories about intentionality in species where the relevant functions did not develop in a common evolutionary history. (Perhaps even if the histories were completely parallel to one another.) This might not mean that we would have to tell separate stories for humans and chimps (since the relevant selection process may have taken place before the species diverged), but we would have to tell separate stories for humans and Martians, or even humans and Twin-Earthers. (How we would tell such a story about beings without an evolutionary history—such as God, angels, and intelligent artifacts—is quite beyond me.)
Now it is not fully clear what moral one ought to draw from this. One distinct possibility is that what we have here is evidence that, contrary to commonsense assumptions, there is no one phenomenon called "intentionality," but several different phenomena which require rather different sorts of accounts. A slightly more modest moral would be that we have evidence here that the direction of inquiry ought to be to begin with more local phenomena that sometimes receive the label "intentionality"—for example, "intentionality" as it appears in visual perception—and proceed to an attempt at a general theory only when we have a good understanding of specific kinds of intentionality already in hand.[6]
There is, however, a very different possibility, which will be developed more fully in the next chapter: namely, that the problem may lie not with the notion of intentionality, but with attempts to provide a "naturalization" of it. In particular, it may be that all a naturalistic theory can hope to do with respect to the mental is to spell out how mentalistic properties are realized in particular kinds of physical systems, in which case it comes as little surprise (a ) that what is common to different cases is not captured by the naturalistic theory, or (b ) that different kinds of accounts may be required for different kinds of beings having the same intentional properties, since the same mentalistic properties might need to be realized through different means in different kinds of beings.
8.4.7—
Compositionality Revisited
Even if CCTI were to succeed as an account of the semantics of the primitive elements in the hypothesized language of thought, CTM would not thereby be immune to criticism. For in addition to telling a story about the semantic properties of the primitives, CTM attempts to tell a compositional story about the semantics of the complex representations. Unfortunately, the only way we know of telling a story about compositionality is to tell a story about symbols whose semantic properties, in conjunction with syntactically based rules, generate meanings for symbolic expressions. Now on the one hand it is not clear that there is any real force left to speaking of representations as symbols if one is no longer endowing them with symbolic meaning (i.e., semiotic-meaning). On the other hand, we still have no nonconventional way of generating meanings for complex expressions (i.e., complex machine-counters) out of concatenations of simple expressions, even if we take the meanings of the simple expressions for granted. At best, the account leaves the fact that there are such compositional functions an unexplained brute fact. What we need, in addition, is some rule that makes it the case that, for example, things of the form x-&-y will mean "X and Y ." In overt languages, this is accomplished through convention. It is not clear that it could be accomplished in any other way. For it is not clear that there is any other pathway that will yield the kind of specificity of interpretation that we are able to get by dint of arbitrary conventions in a natural language. At the very least, even if advocates of CCTI could make their analysis of semantic primitives stick, they would further need to provide a naturalistic account of compositionality before their account could be regarded as viable. The notion of syntax that yields compositionality is conventional to the core, as argued in chapter 6, and no theory of compositionality has been developed for machine-counters.
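As an illustration of how conventional compositional rules do this work in an overt symbol system, consider the following minimal sketch (my own construction; the lexicon and the rule for '&' are stipulated purely for the example). Both the primitive meanings and the composition rule are fixed by stipulation, which is precisely the resource unavailable for machine-counters.

```python
# A minimal sketch of conventional compositionality: meanings for complex
# expressions of the form x-&-y are generated from stipulated meanings of
# the primitives plus a stipulated rule for '&'. Both layers are conventional.

primitive_meaning = {"p": "dogs bark", "q": "cats meow"}  # stipulated lexicon

def meaning(expr):
    """Return the conventional interpretation of a simple or complex expression."""
    if isinstance(expr, str):            # a primitive marker
        return primitive_meaning[expr]
    op, left, right = expr               # a complex expression (op, x, y)
    if op == "&":                        # the stipulated rule: x-&-y means "X and Y"
        return meaning(left) + " and " + meaning(right)
    raise ValueError("no convention covers " + repr(op))

print(meaning(("&", "p", "q")))          # -> "dogs bark and cats meow"
```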
8.5—
A Second Strategy: Theoretical Definition
If this stipulative definition of the semantic vocabulary will not save CTM's account of intentionality, it behooves us to examine a second possible reinterpretation as well: namely, that the semantic vocabulary employed in CTM is to be understood as a theoretical vocabulary whose interpretation is fixed by the work it does in the theories in which it is employed. The very brief answer, I shall argue, is no: if the semantic vocabulary of CTM is defined theoretically, then we do not have an explanation of intentionality (and hence no vindication of intentional psychology) until the underlying nature of these properties that are initially specified theoretically is spelled out. Until then, the so-called "explanation" of intentionality by appeal to "semantic properties of representations" really amounts to an appeal to dormative virtues.
Now what do we mean by "theoretical definition"? Sometimes terms employed in scientific theories mean precisely what they meant all along in ordinary language. In other cases, however, scientific theories appropriate ordinary-language terms and use them in new ways. Terms like 'matter' and 'particle' probably at one time had as part of their meaning all of the notions bound up in the Cartesian notion of "extension," such as size, shape, and definite location. Modern physics, however, countenances the use of these terms even for objects that lack one or more of these properties. Whatever the ordinary connotations of 'work', it has a very specific technical definition in physics. And naturally the property of "charm" attributed to quarks has nothing to do with good breeding and etiquette. Of course, science also countenances the introduction of new terms as part of theories. And sometimes these also have their semantic values fixed by the theories in which they play a part. The word 'gene' in biology, for example, was at one time defined only by the theory in which it played a role: a gene was, by definition, the kind of thing, whatever it would turn out to be, that accounted for phenotypes of living things. When Watson and Crick discovered that the locus of this genetic encoding was the DNA molecule, the term perhaps underwent a change in meaning; but before that time it was a purely theoretical term —that is, a term whose meaning was fixed solely by the role it played in a theory.
The suggestion I wish to explore is that when CTM speaks of "semantic properties of representations," the words 'semantic properties' express properties that are theoretically defined in much the same fashion. These properties, which we have called "MR-semantic properties," might thus be defined as follows:
MR-semantic properties = df Those properties of mental representations, whatever they turn out to be, that explain the mental-semantic properties of mental states.
The actual nature of these properties is thus left unspecified at the outset, though presumably it may be determined in the course of further research. This reconstruction of the semantic vocabulary employed in CTM provides a new way of interpreting that theory that avoids the problems involving conventions and intentions.
8.5.1—
Does Theoretical Definition Explain Intentionality?
Let us then look at the claim that the kind of theoretical definition of semantic terms employed in BCTM provides us with an account of the intentionality of mental states. Earlier, we proposed a schematic version of CTM's account of intentionality:
Schematic Account
Mental state M has mental-semantic property P because
(1) M involves a relationship to a mental representation MR , and
(2) MR has MR-semantic property X .
Having specified that MR-semantic properties are defined in theoretical terms, we can substitute our theoretical definition into our schematic account. But there are two different ways of substituting into our definition, which we may think of as the de dicto and de re substitutions. The de dicto substitution simply replaces the expression 'MR-semantic property X ' with its theoretical definition as follows:
De Dicto Interpretation
Mental state M has mental-semantic property P because
(1) M involves a relationship to a mental representation MR , and
(2) MR has that property of MR , whatever it is, that accounts for mental-semantic property P .
The de dicto interpretation yields a pseudo-explanation of a well-known type. On this reading, MR-semantic properties fail to explain for precisely the same reason that we cannot explain the soporific powers of a medicine by appeal to its "dormative virtues." If saying "mental states inherit their semantic properties from mental representations" amounts to nothing more than saying "mental states get their semantic properties from something that has the property of giving them semantic properties," we do not have a legitimate explanation of semantics or intentionality.
However, it is also possible to substitute our theoretical definition into the schematic account in another way that does not share this problem: namely, by substituting a de re reading of the theoretical definition as follows:
De Re Interpretation
Mental state M has mental-semantic property P because
(1) M involves a relationship to a mental representation MR ,
(2) MR has some property X ,
(3) the fact that MR has X explains the fact that M has P , and
(4) X is called an "MR-semantic property" because
(a ) it is a property of a mental representation, and
(b ) it is the property that explains the fact that M has P .
On this interpretation, there are no dormative virtues lurking in the wings. Unfortunately, as the account stands, there is no explanation of intentionality either until we know (1) what the all-important property X might be, and (2) how we can derive the intentionality of mental states from the fact that cognitive counters have this wonderful property (the way we can, say, derive thermodynamic laws from statistical mechanics). BCTM does not supply us with this information; therefore BCTM does not supply an account of intentionality. BCTM no more explains intentionality than nineteenth-century genetics explained phenotype. With regard to intentionality, on a best-case scenario (that is, on the assumption that BCTM is on the right track with respect to the functional shape of the mind and the ultimate possibility of explaining intentionality by appeal to the properties of localized states), BCTM is in the position genetics was in before Watson and Crick: it is a functional-descriptive theory in search of an underlying explanation. (Of course, in the worst-case scenario, mental representations and their MR-semantic properties go the way of heavenly spheres and Piltdown man.)
In short, it seems to me that BCTM makes no progress at all on the semantic front. It does not so much provide an explanation of intentionality as make evident the absence of such an explanation. This fact has generally been obscured by confusions that result from assuming that the semantic vocabulary can be applied univocally to mental states, symbols, and representations. If we say, "Mental states inherit their meanings from mental representations," it looks as though there is progress on the semantic front, because we have reduced the problem of mental meaning to a problem about the meanings of symbols in the brain. Meaning, at any rate, looks like the right sort of thing to be a potential explainer of meaning, because we do not have to explain how meaning came upon the scene in the first place in order to explain mental-semantics. However, if it turns out that the semantic vocabulary applied to representations is a truly theoretical vocabulary, the appearance of progress begins to look like smoke and mirrors. As we noted earlier in the chapter, it is one thing to claim
(1) Mental state M has property P because M involves MR , and MR has P .
But it is quite another to claim
(2) Mental state M has property P because M involves MR , and MR has X , and X ≠ P .
Claim (1) proceeds on the assumption that property P is in the picture to begin with, and just has to explain how M gets it, while claim (2) has to do something more: namely, to explain how P (in this case, mental intentionality) comes into the picture at all . CTM simply does not do this, and to describe CTM as "explaining intentionality" is a gross distortion of what it actually accomplishes.
8.6—
MR-Semantics and the Vindication of Intentional Psychology
The reader will recall that the explanation of intentionality was the first of two philosophical treasures that CTM was supposed to have unearthed, the second being a vindication of intentional psychology. Let us now return to the problem of vindication. Recall how the attempted vindication was inspired by the computer model. In a computer, the semiotic-semantic properties of the symbols are coordinated with the causal role symbol tokenings can play in the system. It is a useful contrivance to speak of the relationship between symbols and causality as being mediated by syntax, but speaking of the "syntactic properties" of the symbols—indeed, talking about computer states as symbols—is largely a matter of convenience. The symbolic and syntactic character of the symbols is conventional in origin and etiologically inert. What matters is that the semiotic interpretations of symbols are coordinated with the functional-causal role they can play. Now the hope CTM presented was that the mind was a computer, and hence it might be that the mental-semantic properties of mental states could be coordinated with the causal roles they play in inference, thus showing that (contrary to appearances) intentional explanation is grounded in lawlike causal regularities.
Notice that purging CTM of dependence upon symbols and syntax has thus far done nothing to weaken the case for the vindication of intentional psychology. For in point of fact, the notions of symbol and syntax
played less of a role in the case of computers than was commonly believed. But notice also that there is an important difference between coordinating the semiotic-semantic properties of symbols in computers with their functional-causal roles, and coordinating the mental-semantic properties of mental states with their functional-causal roles: the former is done directly, the latter is done (according to CTM) by an intermediate step: namely, coordinating the MR-semantic properties of representations with their causal roles. The difference is represented graphically in figure 11.
This illustration reveals several respects in which the computer paradigm itself falls short of providing a vindication of intentional psychology. These are not reasons that one cannot vindicate intentional psychology in the manner suggested, but they do show what more one needs if such a vindication is to proceed as planned.
(1) The computer paradigm shows that semiotic-semantic properties can be coordinated with functional-causal properties. What one needs for CTM, however, is a demonstration that some other kinds of "semantic" properties (immediately, the MR-semantic properties of mental representations) can be coordinated with functional-causal properties. The computer paradigm by no means assures that this can be done. (After all, there might be something special about semiotic-semantics.)
(2) The computer paradigm only shows how two sets of properties of one sort of object can be coordinated. CTM needs something more: it needs to show that, by coordinating the MR-semantic properties of representations with their causal roles, it can thereby coordinate the mental-semantic properties of mental states with their causal roles as well. This would seem to place some additional constraints upon the "vindication" beyond what is involved in saying the mind is a computer.
In what follows, I should like to build a case that each of these problems is potentially very serious. First, there is good reason to hesitate in concluding that other types of "semantic" properties can be coordinated with causal role in the fashion that semiotic-semantic properties are so coordinated in computers. Second, in order for BCTM to license a vindication of intentional psychology, it would have to be able to show that the coordination of MR-semantic properties with causal role would thereby secure the coordination of mental-semantic properties of mental states with causal role as well; and in order to do this, it would have to supply an instantiation analysis of mental-semantics in terms of MR-semantics—a realization account is not enough for vindication.
8.6.1—
The Special Case of Semiotic-Semantic Properties
The computer paradigm shows that a symbol's semiotic-semantic properties can be correlated with the causal role the symbol can play, so long as all semiotic-semantic distinctions between symbols are reflected in syntactic distinctions. What links the semiotic-semantic properties to the marker type, however, are the conventions and intentions of symbol users. So if an adding circuit has the binary pattern 0001 tokened in one register and 0011 in a second and produces a tokening of 0100 in a third as a result, the tokening of the third is accounted for by the functional architecture of the machine and the specific patterns present in the registers, but the overall process is said to be an instance of adding one and three and obtaining a sum of four only because of the interpretive conventions that are being applied.
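The adding-circuit example can be made concrete with a small sketch (my own; the pattern-level operation and the interpretation function are illustrative, not drawn from any particular machine). The machine's functional architecture traffics only in bit patterns; a separate, conventional scheme interprets the result as the number four.

```python
# A minimal sketch of the adding circuit: the machine manipulates raw 4-bit
# patterns; only a conventional interpretation scheme makes the process
# count as "adding one and three to obtain four."

def machine_add(r1: str, r2: str) -> str:
    """Ripple-carry combination of two 4-bit patterns (pure pattern manipulation)."""
    carry, out = 0, ""
    for x, y in zip(reversed(r1), reversed(r2)):
        total = int(x) + int(y) + carry
        out = str(total % 2) + out
        carry = total // 2
    return out                      # any final carry is dropped (4-bit registers)

def interpret(pattern: str) -> int:
    """The interpretive convention: read a pattern as a binary numeral."""
    return int(pattern, 2)

r3 = machine_add("0001", "0011")
print(r3)                           # '0100' -- what the machine actually produces
print(interpret(r3))                # 4 -- what the convention says it means
```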
Now what, in this paradigm, accounts for the "coordination" of syntax with semantics? On the one hand, the functional properties of the system provide necessary conditions for the reflection of semantic distinctions in the syntax. On the other hand, it is the conventions of symbol users that actually establish (a ) the marker types employed, (b ) the syntactic types by virtue of which markers can be counters, and (c ) the semantic interpretation schemes by virtue of which the markers may be said to have semantic properties. The "coordination" of syntax and semantics depends upon the relationship between semantic and syntactic conventions, and so is highly convention-dependent.
I should like to suggest that this convention-dependence is precisely
what gives the "coordination" of syntax with semiotic-semantics in computers one of its more useful features, and that we should not expect syntax—or, more exactly, functional role and syntactic interpretability-in-principle—to be "coordinated" with non-semiotic-semantic properties in the same sort of way. For one thing that interpretive conventions (or intentions) can do is pick out a unique interpretation for each marker that is to serve as a counter. This is significant because (notoriously) any symbol system is subject to more than one consistent interpretation. (Notably, there will always be an interpretation entirely within the domain of number theory.) It is the conventions and intentions of symbol users that account for the fact that a token in a given symbol game means (for example) dog and not the set of prime numbers . And it is these conventions and intentions that determine which semantic properties are coordinated with which syntactic properties.
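The point that a symbol system admits more than one consistent interpretation can be illustrated with a further sketch (again my own construction; the schemes are chosen for arithmetical convenience). Modeling the pattern-level operation as addition mod 16, any scheme that reads a pattern p as (k · p) mod 16, for odd k, also construes the very same machine operation as addition, while assigning different numbers to the individual patterns; nothing in the machine itself favors one such scheme over another.

```python
# A minimal sketch: the same pattern-level operation counts as "addition"
# under many distinct interpretation schemes. Patterns are modeled as the
# ints 0-15; the machine operation is addition mod 16 (what a 4-bit adder
# with dropped carry computes).

def machine_op(x: int, y: int) -> int:
    return (x + y) % 16                 # the pattern-level operation

def phi(x: int, k: int) -> int:
    return (k * x) % 16                 # interpretation scheme phi_k (k odd, so bijective)

for k in (1, 3, 5, 7):                  # k = 1 is the standard binary reading
    for x in range(16):
        for y in range(16):
            # Under every scheme, the machine operation comes out as
            # addition (mod 16) of the interpreted values...
            assert phi(machine_op(x, y), k) == (phi(x, k) + phi(y, k)) % 16

# ...yet the schemes disagree about what each pattern denotes:
print(phi(1, 1), phi(1, 3))             # 1 vs. 3 for the pattern '0001'
```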
Now there is really something at once unique and mundane about the coordination between semiotic-semantic and syntactic properties of symbols. If someone asks why a given counter type is associated with (i.e., is interpretable as bearing) a particular interpretation, the answer is not at all mysterious: it is associated with that interpretation because there is a convention to that effect among a particular group of symbol users. And if someone asks why it is not associated with (i.e., is interpretable as bearing) another interpretation, the answer is that there is no convention linking it to that interpretation. It may indeed be surprising that symbol games as large as geometry and significant portions of arithmetic can be formalized, and it may be surprising that formalizable systems can be automated in the form of a digital computer, but the basis of the connection between counter types and semiotic-semantic interpretation is not at all arcane.
What would seem to be unique about this kind of association between semantic values and marker types is that the relationship between semantic value and marker type is determined by stipulation —and it is this that allows for the association of marker types with unique interpretations. Now it might be the case that there are other factors that could determine how syntactic features of mental representations are to be connected to particular (nonsemiotic) semantic properties and not to others. But it is not at all clear that we ought to expect it to be the case. For one might well think that it is only the stipulative character of semiotic conventions and meaning-bestowing acts that can provide the kind of unique correlation of semantic value with counter type that one finds in symbolic representations in a computer. I know of no convincing argument that would absolutely rule out the possibility that some other factor could provide such a unique correlation, but I must say that it seems a bit mysterious just what other kind of factors could provide a unique association between the syntactic properties of any mental representations there might be and their MR-semantic properties. It must not be a matter of stipulation, because that would lead to the kind of semantic regress discussed in the previous chapter. But without stipulation, it is unclear how one could get uniqueness of interpretation. The prospects of applying the computer paradigm analogously are thus rendered doubtful, though not precluded entirely.
8.6.2—
Instantiation, Realization, Vindication
Now even if it is possible to coordinate MR-semantic properties with causal role, this is not enough for the vindication of intentional psychology. For that one also needs it to be the case that coordinating the MR-semantic properties of representations with their causal roles secures the further coordination of the mental-semantic properties of mental states with their causal roles. When the case was presented in its original form, on the assumption that the "semantic" properties of mental states were the very same properties as those of their representations, securing this further coordination seemed almost trivial. The argument for it was presented earlier in the chapter:
Argument V2
(1) Mental states are relations to mental representations.
(2) Mental representations have syntactic and semantic properties.
(3) The syntactic properties of mental representations determine their causal powers.
(4) All semantic distinctions between representations are preserved syntactically.
(5′) There is a strict correspondence between a representation's semantic properties and its causal powers.
(6′) A mental state M has semantic property P if and only if it involves a representation MR that has semantic property P .
∴ (7′) There is a strict correspondence between a mental state's semantic properties and its causal powers.
But of course once one has distinguished different kinds of semantic properties, the argument has to be adapted as follows:
Argument V3
(1) Mental states are relations to mental representations.
(2) Mental representations have syntactic and MR-semantic properties.
(3) The syntactic properties of mental representations determine their causal powers.
(4) All MR-semantic distinctions between representations are preserved syntactically.
(5* ) There is a strict correspondence between a representation's MR-semantic properties and its causal powers.
(6* ) A mental state M has mental-semantic property P if and only if it involves a representation MR that has MR-semantic property X .
∴ (7* ) There is a strict correspondence between a mental state's mental-semantic properties and its causal powers.
The issue here turns upon (6* ), the claim that mental-semantic properties of mental states can be coordinated with MR-semantic properties of representations, and the inference to (7* ), the claim that mental-semantic properties of mental states would thereby be coordinated with causal powers. In order for (6* ) to be true, the mental-semantic properties of mental states would have to be at least correlated with the MR-semantic properties of representations. But in order for this argument to provide a vindication of intentional psychology, something more is required: one must be able to show that the MR-semantic properties of representations determine the mental-semantic properties of mental states. For in order to vindicate something, one must show that it could be the case. To vindicate intentional psychology, one would have to show that the mental-semantic properties of mental states can be coordinated with causal roles, and not merely show what benefits would be derived if they were so coordinated. Given that we can show that MR-semantic properties of representations can be coordinated with causal roles, we would still have to show that, as a consequence, mental-semantic properties of mental states would be coordinated with causal role as well.
Now what sort of account of mental-semantic properties would be
needed to achieve this end? What is required is an instantiation analysis of mental-semantics in terms of MR-semantics—a realization account is not enough. For recall a key difference between instantiation and realization: since an instantiation account provides conditions from which one can infer the instantiated property, it provides a vindication of existence claims for that property, given that the instantiating properties are satisfied. But with a realization account, no such benefit accrues: since the realizing properties are not a sufficient condition for the realized property, they do not provide proof for someone who doubts that such a property can be realized. Now we are seeking an account that vindicates the claim that the mental-semantic properties of mental states can be coordinated with their causal powers. An account of how mental-semantic properties are instantiated through the MR-semantic properties of representations could provide such a proof, because one would be able to infer the mental-semantic properties of the mental states from the MR-semantic properties of the representations. A realization account, on the other hand, merely presupposes that there is some special relationship between the properties picked out in the intentional idiom and those picked out by the functional-causal account, without either specifying the nature of the relationship or showing why it obtains. Such a presupposition may have great advantages if you are doing empirical psychology, because you can do your research without waiting for definitive results of debates about dualism, reduction, supervenience, or psychophysical causation. But for this version of the vindication of intentional psychology to work, we must not assume such a special connection, because the possibility of such a connection is precisely what has been called into doubt . If someone doubts that the semantic and intentional properties of mental states can be coordinated with naturalistic properties, and one gives a realization account for the intentional and semantic properties of mental states that just assumes that they are specially connected to some naturalistic properties, one has not assuaged the doubt so much as begged the question.
8.7—
Summary
The general conclusion of these past two chapters is that CTM does not, in fact, provide an account of intentionality. It provides the illusion of such an account by saying that the semantic properties of mental states are inherited from those of mental representations. But on closer inspection, we have not found any properties of "mental representations"
(i.e., our hypothetical cognitive counters) that could serve to explain mental-semantic properties of mental states. Semiotic-semantic properties, as we saw in the last chapter, fail on a number of grounds, including the fact that they render the explanation circular and regressive.
One focus of this chapter was upon the possibility that the kind of causal covariation account of semantics championed by Fodor might actually be able to serve as a stipulative definition of semantic terms as applied to representations. I have serious doubts that this was Fodor's intention. But if one were to make such a move, it would seriously undercut the persuasive force of Fodor's apologia for CTM, since that involved explicit and implicit arguments that turn out to be blatantly fallacious if notions such as meaning and intentionality are defined in causal terms for mental representations. Moreover, causal covariation stories do not go very far towards providing an account of what it is for a mental state to be mental-meaningful or mental-intentional—they don't provide an explanation . First, the causal covariation story just seems like the wrong kind of "account": it appears to give a demarcation criterion that does not explain, and it seems to distinguish states that have different meanings instead of distinguishing the meaningful from the meaningless. That is, it seems to assume that it is dealing with meaningful entities, and then asks, "How can we distinguish the ones that mean X from the ones that mean Y ?" In addition, I have tried to make a case that, if causal covariation is too bland a notion to provide an explanation of intentionality or meaningfulness, this blandness seems to be the price one must pay for generality: naturalistic accounts become more explanatory as they become more detailed, but in the process they lose the generality one would want from an "account of intentionality." Finally, I have argued that even if CCTI were to succeed as an account of semantics for the primitive representations, it would need to be supplemented by a naturalistic account of compositionality as well, and it is hard even to imagine how such an account might proceed. The upshot of this is that causal covariation does not provide us with a notion of representational meaning that can explain mental-meaning or vindicate intentional psychology.
The theoretical definition of the semantic vocabulary for representations fares no better. On one construal (the de dicto construal), it provides a fallacious pseudo-explanation that appeals to dormative virtues. On another (the de re construal) it provides no explanation at all. This, I think, is as far as CTM can be made to stretch: it is a theory of the form of mental processes that stands glaringly in need of an account of semantics to supplement it. We saw as well that we cannot "vindicate" intentional psychology in the way envisioned by CTM's advocates unless we have such an account—and indeed a naturalistic account—of semantics and intentionality in hand. In the next chapter, we shall explore the prospects for such a "naturalistic theory of content." In the final section of the book, we shall explore an alternative way of looking at the computer paradigm in psychology that renders unnecessary both the naturalization of the mental and its vindication.