PART III—
THE CRITIQUE OF CTM
Chapter Seven—
Semiotic-Semantic Properties, Intentionality, Vindication
The preceding chapters have brought us to a point from which it is possible to return to the issues that were raised in the discussion of Searle's and Sayre's objections to CTM in chapter 3. There it was suggested that, if it were to turn out to be the case that words used in the attribution of intentionality and semantic properties are systematically homonymous, this might pose problems for CTM's account of the intentionality and semantic properties of mental states. The reason for this concern was straightforward: CTM attempts to account for the semantic and intentional properties of mental states by saying that these are "inherited" from those of the mental representations they contain. The general schema for explaining the semantic properties of a mental state M would appear to be something like this:
Mental state M has semantic property P because
(1) M involves a relationship to a mental representation MR , and
(2) MR has semantic property P .
But if it should turn out to be the case that the semantic properties predicated of mental states are not the same properties as those predicated of symbols, then this schema is at best in need of refinement and at worst betrays a deep confusion about semantic properties, because the expression "semantic property P " cannot be said univocally of symbols and of mental states, and hence one cannot sensibly speak of "inheritance."
Now the results of chapters 4 and 5 have borne out the suspicion that the terms used in attributions of semantic properties are systematically homonymous. The kinds of "semantic properties" attributed to symbols are both different from and conceptually dependent upon the kinds of "semantic properties" attributed to mental states. It was suggested that we can mark this distinction by adding prefixes to words such as 'semantic' and 'intentional' so as to disambiguate these crucial terms. The kinds of semantic properties attributed to mental states we may designate mental-semantic properties, and similarly the intentionality of the mental we may designate mental intentionality . In contrast, we may refer to the kinds of semantic properties attributed to symbols as semiotic-semantic properties, and the kind of intentionality attributed to symbols as semiotic intentionality .
In order to determine whether this analysis will have any consequences for CTM, it is necessary first to revise CTM's schema for explaining intentionality and semantic properties in light of these new distinctions. It seems clear that the kinds of properties of cognitive states that are to be explained by CTM are their mental-semantic properties. What is less clear is just what kinds of "semantic" properties mental representations are supposed to possess, in virtue of which they can provide the basis for an account of the mental-semantic properties of mental states. There seem to be three basic possibilities: (1) they are mental-semantic properties, (2) they are semiotic-semantic properties, or (3) they are neither mental-semantic properties nor semiotic-semantic properties but some other sort of properties that have not yet been clearly identified or distinguished from mental- and semiotic-semantic properties. This third possibility must be considered, since it could be that references to the "semantic properties of mental representations" are best construed as attributions of some kinds of properties that are particular to mental representations. It is not clear what these properties are supposed to be, but if someone were to advance the claim that there are such properties, the properties might be distinguished from mental- and semiotic-semantic properties by calling them MR-semantic properties, where "MR" is short for "mental representation."
I should like to separate the task of exegesis of texts by Fodor and other proponents of CTM from the task of analyzing variations on the account of intentionality. I do not wish to place too much emphasis upon the exegetical task. That task may well be pointless: it does not look as though Fodor recognizes the ambiguity of the semantic vocabulary, and if this is so, there is no point in asking which leg of the ambiguity he intended.
In spite of this, however, it makes perfect sense to ask what various construals of CTM amount to, what their prospects are, and what advocates of the theory might need to provide in order to lend further support to their account. So here are three variations upon CTM's account of the intentionality of mental states:
Mental-Semantic Version
Mental state M has mental-semantic property P because
(1) M involves a relationship to a mental representation MR , and
(2) MR has mental-semantic property P .
Semiotic-Semantic Version
Mental state M has mental-semantic property P because
(1) M involves a relationship to a mental representation MR , and
(2) MR has semiotic-semantic property X .
MR-Semantic Version
Mental state M has mental-semantic property P because
(1) M involves a relationship to a mental representation MR , and
(2) MR has MR-semantic property Y .
7.1—
A Brief Discussion of the Three Versions
These three versions of CTM's account of semantics and intentionality are not all of equal interest. The first version, the mental-semantic version, seems plainly to be of little merit. While it may be that this version best reflects the fact that CTM's advocates fail to distinguish between different kinds of "semantic properties," it is also quite hard to see what it would mean to attribute mental-semantic properties to mental symbols—or indeed to anything other than a mental state. To embrace the mental-semantic version would be to say that mental representations are symbols that do not "have semantic properties" in the normal sense in which symbols are said to have semantic properties, but instead (and unlike other symbols) have the very same kind of semantic properties that one attributes to mental states. I am hard pressed to see what such a claim could really mean, and am fairly confident that none of CTM's advocates would wish to offer it as a clarification of his theory.
The semiotic-semantic version of the account would clearly seem to be the best candidate for an interpretation of Fodor's account of intentionality. After all, Fodor repeatedly characterizes mental representations as "meaningful symbols," and one would certainly be justified in assuming that these representations are supposed to be symbols that are "meaningful" in the sense that symbols (as opposed to, say, mental states or discussions with one's therapist) are said to be "meaningful." If Fodor meant something else by 'meaningful' ('semantic', etc.), one would certainly expect that he would have said so. For this reason alone, the semiotic-semantic version should count as the default reading of CTM's account of semantics. In addition, CTM is supposed to be an application of the paradigm of machine computation; and in the case of symbols in computers, when we speak of their "semantic properties" it is their semiotic-semantic properties with which we are concerned. Therefore it seems reasonable to assume that it is semiotic-semantic properties that are attributed to the mental representations over which mental computation is supposed to take place. Moreover, it seems to be the only option that is really on the table. The only senses of semantic terms that we have become acquainted with are those that denote semiotic-semantic properties and those that denote mental-semantic properties, and it clearly will not do to attribute mental-semantic properties to mental representations. It may be that someone could develop another usage of words such as 'semantic' and 'intentionality' that could be used in denoting some other class of properties, perhaps properties particular to such mental representations as there might be; but to the best of my knowledge no one has clearly stated such an alternative usage, nor made clear what it might be used to denote. The problem with analyzing the MR-semantic version of the account of intentionality is not so much that an explanation of intentionality based on such a peculiar usage of semantic terminology would be impossible or unfruitful. The problem is, rather, that it is difficult to criticize an account that has not yet been articulated. There is, however, a trend towards causal explanations of the semantic properties of mental representations that could, in principle, be taken as pointing towards a usage of semantic terminology that would be peculiar to mental representations, and this is worthy of some investigation.
What I propose to do, therefore, is to examine in this chapter the prospects of CTM for explaining intentionality and vindicating intentional psychology, on the assumption that the "semantic properties" of mental representations are semiotic-semantic properties. In the two chapters that follow, I shall explore two ways of developing an account of "semantic
properties" for representations that diverges from semiotic-semantics.
7.2—
Semiotic-Semantic Properties and CTM'S Account of Intentionality
The first order of business, then, is to consider the prospects of CTM's account of intentionality on the assumption that the "semantic" properties imputed to mental representations by CTM are the same kinds of semantic properties normally imputed to symbols—that is, that they are what I have called semiotic-semantic properties. In order to proceed here, it might be helpful to return to Fodor's own characterization of cognitive states in Psychosemantics:
Claim 1 (the nature of propositional attitudes)
For any organism O , and any attitude A toward the proposition P , there is a ('computational'-'functional') relation R and a mental representation MP such that
MP means that P , and
O has A iff O bears R to MP . (Fodor 1987: 17)
On the current interpretation, the condition "MP means that P " may be interpreted as "MP semiotically means that P ." But this does not yet leave us at a point at which we can evaluate this claim, for the simple reason that claims about semiotic-meaning are ambiguous: they might be claims about interpretability, about intended interpretation, about actual interpretation, or about interpretability-in-principle. So even if we confine ourselves to semiotic-meaning of mental representations, there are really four distinct accounts of cognitive states that might be seen in Fodor's characterization:
Authoring Intention Version
For any organism O and any cognitive attitude A towards a proposition P , there is a relation R and a mental marker MP such that
MP was intended as signifying (that) P , and
O has A iff O bears R to MP .
Actual Interpretation Version
For any organism O and any cognitive attitude A towards a proposition P , there is a relation R and a mental marker MP such that
MP was actually interpreted as signifying (that) P , and
O has A iff O bears R to MP .
Interpretability Version
For any organism O and any cognitive attitude A towards a proposition P , there is a relation R and a mental marker MP such that
MP is interpretable under convention C as signifying (that) P , and
O has A iff O bears R to MP .
Interpretability-in-Principle Version
For any organism O and any cognitive attitude A towards a proposition P , there is a relation R and a mental marker MP such that
MP is interpretable-in-principle as signifying (that) P , and
O has A iff O bears R to MP .
Our task thus becomes one of examining each of these four versions of the account and determining whether any of them can succeed in providing an explanation of the intentionality and semantic properties of mental states.
In the following sections, I intend to address each of these versions of Fodor's representational theory of mental states and to argue that none of them can provide an account of the semantic and intentional properties of such states. The arguments against three of the versions of the theory—those based upon interpretability, authoring intentions, and actual interpretation—are roughly cognate with one another, and hence these three versions will be addressed together. The case against the version based on interpretability-in-principle is quite different, and will be addressed separately.
7.3—
Intentions, Conventions, and the Representational Account
The first three versions of Fodor's representational account of cognitive states share an important feature: all three involve (covert) appeals to intentions and conventions. The Authoring Intention Version involves the claim that cognitive states involve mental representations that are intended as signifiers. But the logical form of the locutional schema 'is intended as signifying (that) P ' requires a specification of some author of the marker token whose intention it was that the token signify (that) P . Likewise, the Actual Interpretation Version involves the claim that cognitive states involve mental representations that are interpreted as signifiers. But for something to be interpreted as a signifier, there must be
some symbol user who does the interpreting. The Interpretability Version involves the claim that cognitive states involve representations that are interpretable as signifiers. But for a marker to be interpretable as a signifier, there must be a convention licensing the interpretation; and for there to be such a convention, there must be a community of symbol users who share a common understanding that such an interpretation is licensed.
In each of these cases, the resulting account of intentional states itself contains further reference to intentional states. In the case of authoring intentions and actual interpretation, it will involve reference to the intentional states involved in intending the marker to bear an interpretation or in construing it as bearing an interpretation, respectively. In the case of interpretability under a convention, the situation is only slightly more complex: conventions themselves are not intentional states, but the presence of a shared set of beliefs about how marker types may be used is a necessary (if not quite sufficient) condition for the presence of a semantic convention.
It thus turns out that versions of CTM based on interpretability, authoring intention, and actual interpretation are infected with exactly the kind of covert reference to cognitive states that was discussed in the development of the Conceptual Dependence Objection in chapter 3: the logical forms of attributions of intentional and semantic properties to symbols contain references to cognitive states. What remains to be seen is whether this fact imperils these versions of the account. I wish to claim that such accounts face four serious problems. First, they are empirically implausible. Second, they do not provide an explanation of the intentional and semantic properties of cognitive states. Third, they undercut one of the fundamental tenets of representational accounts of mind: namely, the intuition that access to extramental reality is mediated by mental representations. Finally, they lead to circularity and regress.
7.4—
The Empirical Implausibility of the Account
The first problem with the versions of CTM based on convention and intention is that they are highly implausible as empirical theories. Indeed, they are so empirically implausible that it would be difficult to find any stranger theories in the history of science. For suppose that the semiotic-semantic version of CTM is true. If this is so, then whenever you have a belief that (for example) Lincoln was president, you have a mental representation MP that means (that) Lincoln was president. And if one of
the three versions of CTM based on conventions or intentions is correct, 'means (that)' has to be cashed out in terms of conventions or intentions. According to the Interpretability Version, you can only have a belief that Lincoln was president if there is some convention C that licenses the interpretation of MP as meaning that Lincoln was president. According to the Authoring Intention Version, you can only have such a belief if someone "authored" MP and intended it to mean that Lincoln was president. And according to the Actual Interpretation Version, you can only have such a belief if someone apprehends MP and takes it to mean that Lincoln was president.
All of these possibilities seem very unlikely, to say the least! Who is it , after all, whose intentions, interpretations and conventions are supposed to explain the meaningfulness of MP? One possibility would be that it is the thinker's own intentions, interpretations, or conventions. But there are two problems here, both of which may be familiar from criticisms of Hume offered by Thomas Reid and Edmund Husserl. First, there is certainly no experience of authoring or interpreting a symbol in ordinary cognition. (And it is not clear what it would mean to interpret or author a symbol one does not and cannot apprehend.) Second, in order to intend or interpret a symbol token as being about something else, one must have access both to the symbol and to the thing it is to represent. As we shall see below, this runs afoul of some basic motivations for representational theories of mind.
But perhaps the relevant conventions and intentions are not those of the organism itself, but of some other being(s). It is, perhaps, conceivable that there are some supernatural beings, or perhaps some very sophisticated Martian psychologists, who have subtle enough access to human brain states to view them as computers—for example, by constructing Turing machine descriptions for each human being. But it really seems quite unlikely. And according to these convention- and intention-based versions of CTM, humans could only be said to be in cognitive states if there were such beings. A theory that appeals to the unlikely to explain the matter-of-fact surely has to be regarded as highly suspect.
7.5—
The Irrelevance of Conventions and Intentions
In addition to being highly unlikely, the presence of beings who do in fact interpret human psychological states is quite irrelevant to our ascriptions
of intentional states to humans, and to ascriptions of semantic and intentional properties to those states. For suppose that there are two possible worlds that are indiscernible with respect to all features accessible to human observers. In one world—call it the Demon World—there are beings called demons, undetectable to humans, who have a kind of access to and understanding of human mental processes that is simply uncanny. Among other things, they can instantly see how a particular human being's mind is describable as a Turing machine, and can assign interpretations to the operations and the symbols picked out by this Turing machine description in such a fashion that the person has a mental state of type A with content P when, and only when (a ) the human, described as a Turing machine, has a tokening of a symbol of type MP in a particular functional relationship R with the rest of the "machine," and (b ) the demon's interpretation scheme associates MP -tokens with P and associates the propositional attitude A with functional relationship R . Let us assume, moreover, that these demons do "read off" humans' mental states, and that they can even effect tokenings of intentional states by causing tokenings of symbols in humans. In the Demon World, humans do have states for which there are conventional interpretations, there are acts of interpretation of these symbols, and there are authoring acts in which these symbols are intended to have particular meanings.
Consider now a second world. It is indiscernible from the Demon World in all aspects accessible to human observers. But this world—call it the Demon-Free World—contains no beings who have the peculiar kind of access to human psychological states and processes that the demons in the Demon World have. Humans in the Demon-Free World have exactly the same experiences as humans in the Demon World. And ideally completed empirical psychologies in the two worlds would come to precisely the same conclusions. The two worlds are, by stipulation, indiscernible with respect to all features accessible to human observers.
Now let us pose the following question: would there be any differences in what mental states we should ascribe to humans in the Demon World and humans in the Demon-Free World? Would they have different beliefs, desires, and hopes? I think that the answer is, clearly, no . If the two worlds are indiscernible both with respect to the experiences of individual human beings and with respect to everything an empirical psychology might discover, it is hard to see how there could be any grounds for attributing different intentional states in the two worlds. Moreover, it is impossible for us to know beyond Cartesian doubt which sort of
world we live in. It is epistemically possible that there are, in fact, such demons; it is similarly possible that there are not. But this realization does not (and should not) put us into any kind of doubt about whether we have particular beliefs or desires.
But if the intentionality of our mental states were a matter of our being in relationships with mental representations that were bound to meanings by conventions or intentions, then the existence of beings who employ such conventions or have such intentions would be a necessary condition for our being in intentional states. Since the existence of such beings is patently irrelevant to our attributions of intentional states, it follows that the intentionality of mental states is not dependent upon the association of symbols with meanings via conventions or the acts of symbol users.
Consider, in addition, the following concerns. Suppose that the demons in the Demon World suddenly decide to change their interpretive conventions, and they then start interpreting human psychological states in new ways. There is no change in what people experience when they are in particular psychological states, but the demons shuffle their assignments of interpretations to marker types. Should we say that there is a corresponding change in what intentional states we should assign to humans in the Demon World? Surely not. Questions about what intentional states people are in are surely not dependent upon anything so contingent as externally imposed interpretations. If this is the consequence of the versions of Fodor's account based upon conventions and intentions, those accounts fail to provide conditions that are relevant to proper ascription of cognitive states and of the kind of semantic and intentional properties normally ascribed to cognitive states.
7.6—
Conflicts in the Notion of Representation
In the previous section it was argued that external impositions of interpretations upon mental representations (through authoring intentions, interpretive acts, or conventions) would be irrelevant to the ascription of mental states to an organism, and to the ascription of semantic and intentional properties to an organism's mental states. Internal impositions of interpretations are likewise problematic, albeit for a different reason. The reason to be developed here is suggested by Thomas Reid (1983) and Edmund Husserl ([1913] 1931). Reid and Husserl both offer arguments against theories of mind that postulate representations as objects that mediate
the mind's access to extramental objects. Both philosophers' arguments are directed primarily against Hume, but their objection to representational theories can, as Keith Lehrer (1989) has recently suggested, be marshaled against contemporary theories as well, including CTM.
Reid and Husserl both claim that representational theories postulate "immanent objects" that mediate perception and cognition in order to account for the intentionality of perceptual and cognitive states. They also claim that, for a theory to be truly representational, this "immanent object" must be interpreted or taken as standing for the extramental object. But in order for such an act of interpretation to be possible, Reid and Husserl argue, the subject must have some kind of access both to the representation and to the thing represented: If Jones uses a symbol S to stand for some object X —if he judges "S stands for X " or decides "S shall stand for X ," he must be cognizant of S , and he must also be cognizant of X . And his access to X must be independent of his access to S , since his acquaintance with X must precede the interpretive act that associates S with X . But this, argue Reid and Husserl, has the consequence that mental representation is possible only where there can be independent access to extramental objects. And this consequence undercuts the whole motivation for the postulation of mental representations, since these were introduced to explain how access to extramental reality is possible.
The Reid-Husserl objection is straightforwardly applicable to the variations on Fodor's account of cognitive states that we are presently considering. As formulated, it can stand as an attack upon the Actual Interpretation Version, since it addresses theories in which someone must interpret a representation as standing for an extramental object. According to such a theory, an organism O can be in a cognitive state A about some object X only if (1) O is in a functional relation R to a mental representation MP , and (2) O interprets MP as being about X . As Reid and Husserl point out, the appeal of representational theories lies largely in what is to be gained by saying that access to extramental reality is mediated by mental representations. But if the representations are only "meaningful" in the sense of being interpreted as being about extramental objects, then this motivation for a representational theory is undercut. In order to interpret MP as being about X, O must have access to MP and have independent access to X . If O is to apprehend MP and decide, "Aha! This is about X ," O must have some kind of access to X that is not mediated by MP . His access to X may be very distant and dim, and might well be mediated by something other than his access to MP . (For example, X might be a number with a very long decimal expansion, and O
might know of X only under the aspect of being the limit of a particular series of rational numbers.) But in order for O to interpret a particular symbol MP as being about X , he must have some idea of X that is not mediated by his apprehension of that particular symbol. Otherwise interpreting MP as being about X amounts to forming the judgment "MP is about what MP is about."
Now the appeal of representational theories lies in large measure in what is to be gained by saying that access to extramental reality is mediated by representations. But if representation can take place only if someone actually interprets the representation as standing for a particular object, and this requires access to the object that is not mediated by the representation, then it turns out that representational accounts of intentionality are self-defeating. For if we postulate a representation MP in order to explain an organism O 's access to an object X , but the very definition of representation ensures that using MP to represent X presupposes having access to X that is not mediated by MP , then it is simply fruitless to explain access to objects by postulating mental representations. If there can be access to objects that is not mediated by representations, it is unnecessary to postulate such representations. If there cannot be access to objects that is not mediated by representations, there cannot be representations either, because one can only interpret a symbol as being about X if one has some independent idea of what X is.
Neither Reid nor Husserl develops the objection specifically against symbolic representations, and neither of them seems to realize that there are several senses in which an object can be said to be a symbolic representation in addition to the sense of actually being interpreted as referring to something else. But the objection can easily be adapted so as to be applicable to representational accounts based on conventions or authoring intentions as well. First, suppose that an organism O has a belief about Lincoln just in case (1) O is in a particular functional relationship R to a mental representation MP , and (2) O has a convention C whereby the representation MP is interpretable as being about Lincoln. To have such a convention, O must know about Lincoln in a way that is not mediated by MP . Similarly, suppose that O has a belief about Lincoln just in case (1) O is in a particular functional relationship R to MP , (2) O authored MP , and (3) O intended that MP be about Lincoln. In order for O to intend that MP be about Lincoln, O must know about Lincoln in a way that is not mediated by MP . In any of these cases, it is impossible to make sense of the notion of mental representation without supposing that the organism also has access to the thing represented
in a fashion that is not mediated by such a representation. But if this is the case, postulating that there is a mental representation through which O apprehends Lincoln is pointless. Thus any such representational account of intentionality is bound to be self-defeating.
7.7—
Circularity and Regress
Finally, an account of the intentionality of mental states based upon interpretability, authoring intentions, or actual interpretations of mental representations would be circular and regressive. Consider first what would be involved in a claim that O 's mental state A means (that) P because it involves a representation MP that is either (a ) intended (by some agent A ) to mean (that) P or (b ) interpreted (by some symbol user H ) as meaning (that) P . If either account were correct, O could only be said to be in a mental state A that means (that) P if some organism O* (possibly, but not necessarily, distinct from O) were in some particular intentional states—namely, those involved in (a ) intending that MP mean (that) P or in (b ) interpreting MP as meaning (that) P .
But if this is the case, the strategy for explaining the intentionality of mental states has serious problems. First, it is circular: the intentionality and meaningfulness of mental states is accounted for by appealing to the meaningfulness of symbols, while the meaningfulness of the symbols is accounted for by appealing to the mental states involved in bestowing meaning upon those symbols. Second, the account is regressive: each time we account for the intentionality of a mental state A of an organism O , we allude to the "meaningfulness" of a representation MP . But the kind of "meaningfulness" we invoke involves covert reference to the intentional states A* of some organism O* . But since we are looking for a general account of the intentionality of mental states—not just an account of O 's mental states—we must account for the intentionality of O * 's mental states as well. Presumably, to account for O* 's mental state A* , we would have to posit a meaningful representation MP * , whose meaningfulness would in turn have to be cashed out in terms of the interpretive acts of some organism O** , and so on. The resulting account would not explain the intentionality of mental states in nonintentional terms; it could account for the intentionality of a given mental state only in terms of another mental state.
A very similar argument can be given against accounts where the "meaningfulness" of mental representations is to be understood in terms of interpretability under a convention. For while linguistic conventions
are not themselves mental states, they only obtain by virtue of several beings having a shared understanding of how certain symbols may be used. (Or, if one wishes to refer to the meaning assignments of idiolects as conventions, these obtain because one being has an understanding of how certain symbols may be used, and this understanding could, in principle, be shared by other language users as well.) And it is surely a necessary (if not a sufficient) condition for this shared understanding that the beings who share in it be in mental states that are similar in relevant ways. This, I take it, would have to be a part of the analysis of what it is for a group of language users to share a linguistic convention. But if this is the case, then a convention-based account of meaningfulness of mental representations is no better than an intention-based account, since it too ultimately depends upon allusions to intentional states and hence ends in the same kind of circularity and regress.
7.8—
The Interpretability-in-principle Version
There is, however, a fourth modality under which marker tokens can be said to be signifiers: namely, interpretability-in-principle. The Interpretability-in-Principle Version of CTM explained the semantic and intentional properties of an organism O 's cognitive state A —say, its meaning (that) P —by positing a mental representation MP and a functional relation R such that (1) MP is interpretable-in-principle as meaning (that) P , and (2) O is in relation R to MP . Two coextensive definitions for semantic interpretability-in-principle were offered in chapter 4. One definition was framed in terms of counterfactuals about conventions, the other in terms of the availability of a mapping from marker types to interpretations. Since the former definition seems clearly to risk running afoul of the same problems about convention that have already been discussed, we may assume that the second definition holds more promise for CTM. This definition was formulated as follows:
(S4* ): An object X may be said to be interpretable-in-principle as signifying Y iff
(1) X is interpretable-in-principle as a token of some marker type T ,
(2) there is a mapping M from a set of marker types including T to a set of interpretations including Y , and
(3) M(T) = Y .
Now semantic interpretability-in-principle is a very permissive notion. Every object is interpretable-in-principle as a token of a marker type (i.e., can, in principle, be used as a marker if someone comes up with a suitable marker convention); and every marker type can be mapped onto whatever interpretation one likes. Therefore, for every object X and every interpretation P , X is interpretable-in-principle as meaning (that) P .
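The permissiveness of this notion can be made vivid with a small sketch in Python. The modeling choices here are mine and form no part of definition (S4*): marker types are represented as strings, interpretations as arbitrary values, and a mapping as a simple lookup table.

```python
# Illustration of the permissiveness of interpretability-in-principle (S4*).
# Assumptions (mine, not the text's): marker types are strings, interpretations
# are arbitrary Python values, and a "mapping" is just a dictionary.

def interpretable_in_principle(marker_type, interpretation):
    """True iff some mapping M takes marker_type to interpretation.
    Such a mapping can always be constructed, so the result is always True."""
    M = {marker_type: interpretation}
    return M[marker_type] == interpretation

# Any marker type can be paired with any interpretation whatever:
print(interpretable_in_principle("MP", "Lincoln was president"))  # True
print(interpretable_in_principle("MP", "the number two"))         # True
print(interpretable_in_principle("MP", "the Crimean War"))        # True
```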
One thing that should be noted about the notion of interpretability-in-principle is that the connection it makes between marker types and interpretations is not dependent either upon actually existing semantic conventions or upon acts of authoring or interpretation. And this has the significant consequence that the Interpretability-in-Principle Version of Fodor's account of cognitive states is immune to the criticisms raised against the Interpretability, Authoring Intention, and Actual Interpretation Versions. To put it differently, the logical form of attributions of semantic interpretability-in-principle does not involve references to semantic conventions or meaning-bestowing acts, with the consequence that the preceding arguments do not show that the Interpretability-in-Principle Version suffers from the pernicious kind of conceptual dependence that threatened the other versions.
I shall argue, however, that the Interpretability-in-Principle Version is also incapable of supplying a viable account of the semantic and intentional properties of cognitive states. In particular, there are four distinct problems. First, such an account would impute to mental states semantic and intentional properties which they clearly do not have. Second, it would impute the kinds of semantic and intentional properties that we ascribe to mental states to objects that clearly do not have them. Third, it would not provide an explanation of the intentionality and "semanticity" of mental states. And, finally, the definition of being interpretable-in-principle as a signifier token presupposes being interpretable-in-principle as a marker token—and that does involve conventions in a way that leads to circularity and regress, albeit not at the semantic level.
7.8.1—
Spurious Properties
The first problem with the Interpretability-in-Principle Version is that it would impute to mental states intentional and semantic properties that they clearly do not have. According to the Interpretability-in-Principle Version, for example, an organism O can have a belief about Lincoln just in case (1) O is in the right functional relationship R to a mental representation MP , and (2) MP is interpretable-in-principle as being about
Lincoln. Now if there are mental representations, it is surely the case that any mental representation MP is interpretable-in-principle as being about Lincoln—the definition of interpretability-in-principle is so permissive as to assure that. But by the same token, the definition is also so broad as to assure that MP is interpretable-in-principle as being about the number two, the Crimean War, or anything else. Indeed, for every interpretation P, MP is interpretable-in-principle as being about P .
Now suppose that (as the Interpretability-in-Principle Version suggests) O 's being in relation R to MP , in conjunction with MP 's being interpretable-in-principle as being about P , are conditions jointly sufficient for ascribing to O a belief about P . If this is the case, then O has beliefs about everything, since each marker token MP is interpretable-in-principle as being about everything. Indeed, each of O 's beliefs is about everything, since each belief involves a marker token that is interpretable-in-principle as being about everything.
Surely this consequence of the Interpretability-in-Principle Version is intolerable. There may sometimes be some unclarity, vagueness, and ambiguity as to just what our beliefs are about, but not to the extent that each of our beliefs is about everything! And as this is a consequence of the Interpretability-in-Principle Version, so much the worse for that account.
7.8.2—
Strange Cognizers
Depending upon how one takes the words 'organism' and 'functional relation' in Fodor's characterization of cognitive states, there may be a second problem for this version of the account as well. For one might well think that Fodor does not really mean to restrict his characterization of cognitive states to organisms . To do so in the context of a computational theory of mind would be very odd indeed! Perhaps the word 'system' could usefully replace the word 'organism'. And one might well think that the word 'functional' is used in the sense that it is used when one classifies digital machines according to their machine tables—that is, according to functional relationships between current states and succeeding states.
But if one does interpret Fodor in this way, it would seem that all kinds of things turn out to be cognizers. For, according to Fodor's account, it would seem that if (a ) two systems are functionally equivalent, and (b ) their symbols have the same semantic and intentional properties, and (c ) they are in equivalent functional relations to their symbols, then it should
be the case that they are in the same cognitive states. But consider the following problem. If a cognizer is describable in purely formal terms, it must be the case that there is an abstract formal system that is functionally equivalent to the cognizer. And if interpretability-in-principle is all that is needed to give a symbol system the kind of intentional and semantic properties that mental states enjoy, then it would seem to be the case that abstract symbol systems have intentional and semantic properties in just the same senses that mental states do. Presumably this would be enough to include such systems in the class of cognizers. But surely such a conclusion would be absurd.
7.8.3—
Lack of Explanatory Force
Even if we could avoid these problems, it is difficult to see how the interpretability-in-principle of a marker token could supply anything in the way of an explanation of the meaningfulness or intentionality of a mental state. Suppose that I wish to know why a particular state of Jones's is about Lincoln and someone tells me that it is because Jones is in a particular functional relationship to a mental representation, and that representation is about Lincoln. I then ask, "Why is that mental representation about Lincoln?" If the reply is merely, "Because there is a mapping from that representation's marker type to Lincoln," then I have not received an explanation. Even if I believe everything that I have been told, I still don't know why Jones's cognitive state is about Lincoln. Pointing to the availability of a mapping just does not supply the kind of information that would answer my question. (It is not clear just what would supply the right kind of information, but it is clear that this reply does not.)
7.8.4—
The Reappearance of Conventionality at the Marker Level
Finally, upon closer inspection, it turns out that the notion of semantic interpretability-in-principle is not so free of convention as at first it seemed. The connection between marker types and interpretations is, indeed, not conventional. But for an object to be interpretable-in-principle as a signifier, it must first be interpretable-in-principle as a marker, and the expression 'interpretable-in-principle as a marker' does have a conventional aspect. For remember how this notion was defined:
(M4): An object X is said to be interpretable-in-principle as a token of a marker type T iff
(1) a linguistic community could, in principle, employ conventions governing a marker type T such that any object having any pattern pi ∈ P: {p1, . . . , pn} would be suitable to count as a token of type T,
(2) X has a pattern pj, and
(3) pj ∈ P.
An object's being interpretable-in-principle as a marker is not just a matter of there being a mapping from one object to another, because marker types are necessarily conventional. The very notion of a marker is convention-dependent.
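A brief sketch may help to bring out the contrast with a bare mapping. The names and the pattern vocabulary below are illustrative assumptions rather than part of (M4); the point is only that whether an object counts as a token of a marker type is settled relative to a convention, and the convention is precisely the ingredient that a mere mapping does not supply.

```python
# A sketch contrasting (M4) with (S4*): being a token of a marker type is
# relative to a convention fixing which patterns are suitable. The marker
# type "T" and the pattern descriptions are hypothetical examples.

# A (hypothetical) marker convention: marker type -> set of suitable patterns
marker_convention = {
    "T": {"vertical stroke", "slanted stroke"},
}

def is_token_of(obj_pattern: str, marker_type: str, convention: dict) -> bool:
    """An object counts as a token of marker_type only relative to a convention
    that settles which patterns are suitable for tokens of that type."""
    return obj_pattern in convention.get(marker_type, set())

print(is_token_of("vertical stroke", "T", marker_convention))  # True
print(is_token_of("circle", "T", marker_convention))           # False
# Relative to a different convention, the verdict changes:
print(is_token_of("circle", "T", {"T": {"circle"}}))           # True
```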
This has the consequence that the Interpretability-in-Principle Version does involve conceptual dependence upon cognitive notions. For while attributions of semantic interpretability-in-principle do not involve tacit ascriptions of semantic conventions or intentions, they do involve tacit reference to marker conventions. Any explanation of marker conventions, like semantic conventions, would have to involve reference to a community of symbol users who share a certain understanding about marker types and tokening. And this shared understanding must surely consist in large measure in the members of the community being in relevantly similar mental states. But if this is so, the Interpretability-in-Principle Version is bound to end in the same kind of circularity and regress as the other versions.
7.9—
Applicability of These Criticisms
Now one might wish to pause at this point and consider how directly these criticisms affect CTM. For one might be tempted to think that in developing my terminology I have set up a straw man that my arguments are suited to knocking down. Fodor and other proponents of CTM acknowledge, after all, that there are differences between the fashions in which discursive symbols, mental states, and mental representations have semantic properties. In particular, they acknowledge that discursive symbols get their semantic properties from those of the mental states they are used to express. They simply deny that the same is true of those symbols that serve as mental representations. It might therefore seem that, in likening mental representations to discursive symbols, I am arguing
against a position that Fodor and others have explicitly rejected.
But this is not the case. What Fodor claims is that discursive symbols, mental states, and mental representations all have the same kind of semantic properties, but come by them in different ways. I have shown that mental states and discursive symbols do not have the same kind of semantic properties, and that it is not clear what sort of "semantic properties" mental representations are claimed to have. Here I have been concerned with examining what happens if you suppose that mental representations have the same kinds of semantic properties—namely, semiotic-semantic properties—that symbols may uncontroversially be said to have. All of the problems that have arisen here arise purely from saying that the kinds of semantic properties representations have are semiotic-semantic properties. The problems do not arise because of some additional feature having to do with how they came by those properties; the problems arise because of the kinds of properties that are being attributed, and what they are used to explain. The position may be easily knocked down, but it is not the one that Fodor clearly rejects, and is in fact the most plausible interpretation of the ambiguous characterization that he offers.
7.10—
Two Possible Responses
Now there are two kinds of objections that one might expect to hear at this point, each based on key differences between computers and paper or other passive media for the storage of symbols. First, computers do not just store individual symbols. The computer's sensitivity to the syntactic features of the symbols and its ability to generate new representations in accordance with formal rules allow the overall system to encode the semantic relationships between the symbols as well. If we ask how a symbol-manipulation process in a computer counts as, say, addition, we must not talk merely about the interpretations sanctioned by programmers and users, we must say something about the process that goes on in the computer as well. It looks as though the computer has its own contribution to make towards the symbols it stores having semantic values. If there is more to tell about the meaningfulness of symbols in computers than can be told in terms of the conventions and intentions of language users, the objections offered here may not undercut CTM's account of semantics and intentionality entirely.
Second, computers can be equipped with transducers that allow them to be sensitive to features of their environments. As a consequence, it is
possible for the tokening of symbols in a computer to covary in regular ways with the presence of particular kinds of objects and circumstances in their environments. If a computer is able to detect when a light has been turned on, and inscribes "The light has been turned on" whenever it detects the light being turned on, one might be inclined to think that such an inscription is about the light being turned on in a way that a random inscription of the same symbol string would not be about the light being turned on. One might well think that the computer paradigm suggests more than a semiotic explanation of the intentionality of mental states: if the mind is a computer, and computers can support causal covariations between objects in the environment and the tokening of symbols of particular types, this kind of causal covariation might well form an important part of the explanation of the intentionality of mental states as well.
In the following sections, I propose to argue that neither of these lines of argument can rescue CTM's account of intentionality. The first line of argument fails because one can talk about the systematicity of meaning relationships in a symbol system only if one can first talk about assignments of interpretations; systematicity contributes nothing to the assignment of interpretations. The second line of argument may present an interesting theory, but that theory is simply not CTM's representational account of the semantics and intentionality of cognitive states. Moreover, as we shall see shortly, there are additional problems for CTM that arise from the fact that syntax, as well as semantics, is conventional in character.
7.11—
Systematic Symbol Manipulation
Computers do not merely store isolated, inert symbols. Indeed, much of what seems special about the computer paradigm is to be found in the way things in a computer are interrelated in the right ways—the way semantic relationships are mirrored by syntactic relationships, for example, and the way that derivations of symbol structures are truth-preserving under the right interpretation. Moreover, the systematic nature of the computer places constraints on how the symbols may sensibly be interpreted: the larger and more complex the representational system, the fewer reasonable interpretations are available. Haugeland, for example, writes that "an interpretation that renders a system's theorems as truths is a rare and special discovery," and that there is a sense in which "random interpretation schemes don't really give interpretations at all" (Haugeland 1981: 27).
And this is, in large measure, correct: interpretation schemes do take on special interest when they have certain properties, notably (a ) when they map marker strings onto true propositions, (b ) when the interpretation of a marker coincides with something that is causally related to the tokening of that marker, and (c ) when the interpretation scheme "makes sense" of the overall performance of the system—that is, when it gives it an interpretation that makes it seem as though it is acting rationally.[1] It is important, however, to distinguish the question of how a symbol system is suitable for bearing a particular interpretation from the question of how the symbols may be said to bear any interpretation in the first place . In the case of a computer, the answer to the first question has two parts: (a ) a specification of how all of the semantic relationships necessary for a given interpretation scheme are reflected in syntactic relationships, and (b ) an account of how the formal rules that allow truth- and sense-preserving derivations are linked to causal regularities through the functional architecture of the machine. The answer to the second question—the question of how the symbols may be said to bear any interpretation at all—has little or nothing to do with computers per se. The question of how it is that symbols may be said to have meanings is a question about semiotics, and the answer would have to be given in terms of the interpretability of the symbols under conventions, the intentions and interpretations of the symbol users (programmers, designers, and users of computers), and the fact that symbols are interpretable-in-principle as bearing any interpretation whatever.
As we saw in chapter 5, the functional analysis of the computer and its semiotic interpretation are two distinct issues: getting them to coincide is a virtue of good programming and not a fundamental axiom of semiotics. What we saw in that chapter was that semiotic analysis and math-functional analysis were distinct enterprises. That result still holds good here. But one may also argue a stronger point: namely, that the functional organization of the computer as a symbol manipulator cannot uniquely determine a single privileged semiotic-semantic interpretation scheme for symbols in a computer, either—and hence even the combination of semiotic-semantics with functionally describable symbol manipulation cannot explain the unique mental-semantic properties of mental states.
Let us reconsider the example of a computer programmed to perform operations corresponding to addition. The computer has three storage locations—A, B, and C—each of which bears a string of binary digits representing an integer under some interpretation scheme I . The computer
proceeds by sampling the symbols present at A and B and causing the tokening of a symbol at C. The computer is so designed and programmed that (1) the syntactic patterns of the symbols present at A and B will determine what symbol is tokened at C, and (2) the symbol that is tokened at C will be mapped by interpretation scheme I onto the number that is the sum of the numbers represented (under I ) by the symbols stored at A and B. Now if we ask, "What makes this system such that one might sensibly refer to what it does as addition?" part of our answer will have to make reference to the features of the system as a system . We might express what is needed in algebraic terms: if we take the set B of binary strings that can be present at A, B, and C, and the function F that maps pairs of strings found at A and B onto the string that they would cause to be tokened at C, we may speak of a group G which is defined in terms of the elements of B and the function F .[2] Now for this system to be suitable for supporting "computer addition" of some subset of the integers, there must be some subset of the integers S such that the group consisting of the elements of S under the operation of addition is isomorphic to G . That is, there must be a one-to-one mapping M between binary strings in B and integers in S such that for any three binary strings b1, b2, and b3 in B, and any three integers i1, i2, and i3 in S, if M(b1) = i1, M(b2) = i2, and M(b3) = i3, then F(b1, b2) = b3 iff i1 + i2 = i3. Or, to put things quite informally, what is needed to render a computer system suitable for supporting an interpretation is that it have a functional description that has the right formal properties for supporting that interpretation.
Now when a computer has a particular functional description—say, the one captured by the group G described above—this renders it suitable for supporting any number of interpretations. If, for example, its operations are suitable to be interpreted as addition over the first n natural numbers, they are equally suitable to be interpreted as addition over the first n even natural numbers, or indeed as addition over any set of numbers generated by taking the first n natural numbers and multiplying each by the same real number r . And it is suitable for bearing any number of other interpretations as well—some purely mathematical, some referring to systems involving concrete objects.
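To make this concrete, here is a minimal Python sketch of the adder just described. The function and scheme names are illustrative labels of my own; the sketch shows only that one and the same functional description satisfies the isomorphism condition under several distinct interpretation schemes at once.

```python
# A toy version of the three-location adder. F gives the machine's functional
# behavior; the interpretation schemes are three of the indefinitely many
# mappings under which that behavior comes out as addition.

def F(a: str, b: str) -> str:
    """Maps the binary strings stored at A and B to the string tokened at C."""
    return format(int(a, 2) + int(b, 2), "b")

interpretation_schemes = {
    "naturals":         lambda s: int(s, 2),        # addition over the naturals
    "even naturals":    lambda s: 2 * int(s, 2),    # addition over the even naturals
    "naturals times r": lambda s: 0.5 * int(s, 2),  # r = 0.5, an arbitrary choice
}

a, b = "101", "11"  # the strings stored at A and B
for name, M in interpretation_schemes.items():
    # Under every scheme, M(F(a, b)) = M(a) + M(b): the isomorphism condition
    # holds, and nothing in the formal facts privileges one scheme over another.
    assert M(F(a, b)) == M(a) + M(b)
    print(name, ":", M(a), "+", M(b), "=", M(F(a, b)))
```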
But the suitability of a system as a whole for bearing an interpretation scheme that interprets both the symbols and the operations does not fully determine what the symbols or the operations may be said to mean. A system's formal properties render it interpretable-in-principle under a number of interpretation schemes, but confer pride of place upon none of them. This does CTM no good: if the formal properties of cognitive
processes render the overall system interpretable-in-principle under several different interpretation schemes, but do not uniquely pick out the right interpretations for each representation, then semantic interpretability-in-principle of mental representations, even when applied to a whole system of such representations, is not sufficient to account for the semantic properties of cognitive states. And it is definitely the case that the formal properties of cognitive processes would leave them interpretable-in-principle in more than one way, because it can be shown that any formal system has more than one consistent interpretation. In particular, each has an interpretation in number theory. If semantic interpretability-in-principle of mental representations were a sufficient condition for the meaningfulness of mental states, it would turn out that all of our thoughts are about numbers, since any system of computations over mental representations would have a consistent number-theoretic interpretation. But it is clearly not the case that all of our thoughts are about numbers; therefore there must be more involved in the meaningfulness and intentionality of cognitive states than the availability of a consistent systematic interpretation of mental representations. And hence the fact that computers manipulate symbols does not save CTM's account of intentionality if the "semantic" properties attributed to mental representations are semiotic-semantic properties.
7.12—
Causality and Computers
There is, however, a second avenue of response to the arguments offered in this chapter. This response starts from the observation that computers may be equipped with transducers in such a fashion as to render them sensitive to environmental stimuli. What it is for a computer to be "sensitive to environmental stimuli" is for it to be so configured that it will dependably produce particular symbol tokens when particular conditions are present or when particular events take place in its environment. That is, a computer is sensitive to environmental stimuli to the extent that there are regular, causal covariations between conditions or events in the computer's environment and tokenings of particular symbol types in the computer: for example, that it writes out "Hello, Professor Pembrooke" whenever Professor Pembrooke enters the room, or tokens "My, but it's dark in here!" whenever the lights go out.
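A toy sketch may help to fix the idea of such covariation. The class and method names below are illustrative only and describe no actual system; the point is simply that the tokening of a symbol type is made to depend reliably and causally on a condition in the environment.

```python
# A minimal sketch of "sensitivity to environmental stimuli" as regular causal
# covariation between an environmental condition and the tokening of a symbol
# type. All names here are hypothetical.

class TransducerEquippedComputer:
    def __init__(self):
        self.inscriptions = []  # symbol tokens produced so far

    def sense(self, lights_on: bool):
        """Transducer input: invoked whenever the environment is sampled."""
        if not lights_on:
            # This tokening covaries with, and is caused by, the lights going out.
            self.inscriptions.append("My, but it's dark in here!")

computer = TransducerEquippedComputer()
computer.sense(lights_on=True)   # no inscription is tokened
computer.sense(lights_on=False)  # a token is produced because the lights went out
print(computer.inscriptions)     # ["My, but it's dark in here!"]
```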
Now it is very tempting to assume that an inscription of "My, but it's dark in here!" that is produced whenever the lights go out and because the lights have gone out is about the lights going out, and in a fashion
that an inscription of the same symbol string that was not causally connected to the lights going out would not be about the lights going out. Regular, causal covariations with objects and conditions in the environment, moreover, are plausibly a factor relevant to the intentionality of mental states as well: it seems plausible that part of what it is for my thoughts to be about Lincoln is for there to be a causal chain stretching back to Lincoln and including my thoughts.
It is little wonder, then, that advocates of the computer metaphor in philosophy of mind have often gravitated towards an application of the computer paradigm that involves a causal component—in particular, towards accounts that explain the intentionality and semantics of mental representations in terms of regular, causal covariations between objects or conditions in the environment and the tokening of symbols of particular types. Fodor, for example, has placed increasing emphasis on causality. In The Language of Thought, published in 1975, his emphasis was completely upon the "internal code" of intrinsically meaningful representations in a language of thought. In "Methodological Solipsism," published in 1980, the emphasis was still upon meaningful representations, but Fodor hinted at the possibility of a naturalistic theory of reference (though he argues that this possibility is dubious, and says nothing about a naturalistic theory of meaning). Psychosemantics (1987) and A Theory of Content (1990) include the articulation of a sketch of a semantic theory that still accounts for the intentionality and semantics of cognitive states in terms of the intentionality and semantics of mental representations, but also tries to ground semantics and intentionality in causal relationships with objects in the environment.
But how is this supposed to rescue the account of intentionality? The answer, it seems, depends upon the relationship of the causal component of the theory to the representational component. I see four basic possibilities for such a relationship. First, causal regularity just adds an additional condition for intentionality over and above what is supplied by the representational account. Second, the semiotic-intentional properties of mental representations are still supposed to provide an adequate account of the mental-semantic properties of cognitive states, but causal regularities, in turn, are supposed to provide an account of the semiotic-semantic properties of mental representations. Third, the causal account is supposed to provide an alternative definition for semantic terms as applied to mental representations. Fourth, semantic terms are applied in some undisclosed way to mental representations, and these "semantic" properties are still supposed to explain the mental-semantic properties
of cognitive states, while causal regularities are supposed to explain these "semantic" properties. These four possibilities will now be examined in more detail.
7.12.1—
Representation Plus Causation
The first possibility is what John Searle (1980) has called "the Robot Reply" to his arguments against computational theories of mind. According to the Robot Reply, computation over symbols does not, indeed, provide a sufficient condition for the ascription of cognitive states, meaning, or intentionality. But if the computer were, additionally, connected to the external world in the right ways by means of transducers, then it would provide a model for understanding cognition. On this account, the semiotic-semantic properties of mental representations would not be sufficient to account for the intentionality and semantics of cognitive states, because part of what is involved in a belief being about Lincoln is that it be part of a causal chain involving Lincoln. But if one were to provide an account of cognitive states that alluded both to the meaningfulness of mental representations and to the causal chains involved in the formation of beliefs (and other cognitive states), this problem could be remedied.
Now it might well be possible to formulate a useful theory along these lines. As Searle has pointed out, however, this is no longer the same theory that was originally offered as part of CTM. The original claim was that "the objects of propositional attitudes are symbols (specifically, mental representations) and that this fact accounts for their intensionality and semanticity" (Fodor 1981: 24). But if one must, additionally, appeal to causal factors to explain the "intensionality and semanticity" of cognitive states, then one cannot account for it merely by saying that the objects of the attitudes are symbols. If an account of the intentionality and semantics of cognitive states needs to appeal to mental representations and needs, additionally, to appeal to causality, then CTM's account of the intentionality and semantics of cognitive states is not viable.
7.12.2—
Causality Explains Semantics
Now while some writers certainly endorse the Robot Reply, it is not clear that this is Fodor's strategy when he appeals to causality in explaining semantics. In Psychosemantics, for example, Fodor invokes causality at
the level of explaining the semantic properties of mental representations. In so doing, he appears to be taking up a project at the point where he left off at the end of the introduction to RePresentations . In that introduction, Fodor gives what is perhaps his best articulation of CTM and how it emerged. He also gives a clear indication of what it is intended to accomplish: "It does seem clear what we want from a philosophical account of the propositional attitudes. At a minimum, we want to explain how it is that propositional attitudes have semantic properties " (Fodor 1981: 18, emphasis added). Yet if CTM is supposed to provide an explanation of "how it is that propositional attitudes have semantic properties," it is curious that Fodor writes on the last page of that introduction, "What we need now is a semantic theory for mental representations; a theory of how mental representations represent. Such a theory I do not have" (ibid., 31). Now one way of reading this passage would be as an admission that CTM has thus far failed miserably at meeting Fodor's own standards for a theory of cognitive states. Such, however, is hardly the tone of the chapter in which it occurs. A better way of making sense of this passage, and of Fodor's subsequent treatment of the semantics of representations in Psychosemantics , would be as follows: Fodor believes that CTM's representational account of the semantic and intentional properties of cognitive states is successful. Saying that cognitive states involve meaningful representations is enough to explain the meaningfulness of cognitive states: for example, saying that Jones is in a particular functional relation to a mental representation that means "Lo! a horse!" is all that needs to be said to provide an explanation of why Jones believes that there is a horse before him. But this still leaves an additional problem: how do we account for the semantic and intentional properties of the representations? Why does the mental representation mean "Lo! a horse!"? And it is here that Fodor wishes to give a causal answer—to the question of why mental representations that mean "horse" do, in fact, refer to horses. Fodor's initial, "crude" formulation of such a theory is that "a plausible sufficient condition for 'A's to express A is that it's nomologically necessary that (1) every instance of A causes a token of 'A'; and (2) only instances of A cause tokens of 'A'" (Fodor 1987: 126).
So it sounds as though Fodor wishes to make two separate claims: the first is just the representational account of the semantics and intentionality of cognitive states: namely, that cognitive states "inherit" their semantic and intentional properties from the representations they involve. The second claim is a causal theory of the semantic properties of mental representations. (Fodor gives only a sketch of such a theory, and repeatedly voices doubts that a full-fledged semantic theory can be developed.)
In order to assess these claims, it is absolutely crucial at this point to determine (1) just what Fodor means when he uses words like 'intentional' and 'meaningful' of mental representations, and (2) how the way Fodor picks out semantic properties is related to his causal account of semantics. The first and most obvious possibility is that Fodor is applying semantic terms to symbols in the ordinary way: that is, using them to attribute semiotic-semantic properties. This should, I think, be the default reading of expressions like 'meaningful symbol'. After all, if someone says he is bringing you "healthy food" and produces a live fish in a bowl, you might well think that he is using language in a peculiar manner—a reaction that will not be changed if he explains, "Well, he is food, after all, and you've never seen a fish that was in better health!" Similarly, if someone says that cognitive states are meaningful (referential, intentional, etc.) because they involve "meaningful symbols," you may reasonably expect that he is using 'meaningful' in the way it is usually used when it modifies 'symbol'—and that, if he is not using it in that way, he should specify just how he is using it. Fodor and other advocates of CTM give no warning that they are using semantic terminology in nonstandard ways, so it is reasonable to begin by assuming that the standard (i.e., semiotic) usage is in force.
If the standard usage is in force, however, CTM's representational account of semantics and intentionality for cognitive states fails, for reasons described earlier in this chapter. And if the causal account of the semantics of mental representations is supposed to be independent from the representational account of the semantics of cognitive states, it can do nothing to bolster it. If the semiotic-semantic properties of representations cannot explain the mental-semantic properties of cognitive states, it does not matter, for purposes of an account of the intentionality of cognitive states, how the representations get their semiotic-semantic properties. Whatever the answer to that question might be, it does the representational account of the intentionality of cognitive states no good.
7.12.3—
Causal and Other Definitions of Semantic Terms
The final two candidates for the relationship between representation and causal covariation do not really fall within the purview of this chapter. One candidate was the view that the usages of terms such as 'meaningful' and 'semantic' should simply be defined in causal terms. The other was the view that the usages of semantic terms as applied to mental representations do not denote semiotic- or mental-semantic properties, and are not to be defined in causal terms, but that they denote properties that can be explained through causal covariations. In either case, the theory offered does not explain the mental-semantic properties of cognitive states in terms of the semiotic-semantic properties of representations. Whether such variations on CTM can provide any solace will be examined in the next two chapters.
7.13—
Compositionality and the Conventionality of Syntax
Thus far we have shown systematic disregard for one feature of CTM which is in some ways quite important—namely, that it is supposed to support semantic compositionality. The representations envisioned by Fodor and other advocates of CTM, after all, are not all lexical primitives; the vast majority of them are made up of a large number of primitives combined with the help of syntactically based compositional rules. The semantic properties of complex representations are explained by (a ) the semantic properties of their atomic constituents, in combination with (b ) the compositional rules by which those constituents are combined.
Now this feature of CTM leaves the theory no better off with respect to the objections already raised: if the "semantic properties" are of the conventional or intentional kind, the fact that compositionality is thrown in does not rescue the theory from circularity or regress. Any taint of semantic convention or intention is enough to scuttle the whole project. But the appeal to compositionality does introduce a further problem: according to the analysis of symbols in chapter 4, syntax, as well as semantics, is conventional in nature, and hence there is a second kind of conventionality involved in CTM for the complex representations, assuming that CTM's advocates mean by "syntax" what one normally means when speaking of the syntactic properties of symbols.
The problem might be looked at in the following way. In order for there to be compositionality, it is not enough to have assignments of interpretations to primitive elements and rules governing legal concatenations of symbols. That is, it is not enough to assign "Lincoln" to A and "Douglas" to B and say that there is a legal schema for expressions 'x-&-y ' into which A and B may be substituted. There must, additionally, be a rule that will further determine that 'A & B ' counts as meaning "Lincoln and Douglas" and not, say, "Lincoln, Douglas" (a list), "Lincoln is greater than Douglas," or "Lincoln or Douglas." Semantic compositionality requires a notion of syntax that consists in more than rules for legal concatenation—it requires a notion of syntax that delivers complex semantic values. (It is worth noting that most of the time when we speak of syntactic categories we speak of them in ways that have some semantic overtones: for example, "count noun," "dependent clause," "conjunction symbol," or even "Boolean operator.")
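The point can be illustrated with a toy fragment (the names and rules below are invented for illustration): the string 'A & B' is equally legal under three different compositional rules, each of which assigns it a different complex value, so mere rules of legal concatenation decide among none of them.

    primitives = {"A": "Lincoln", "B": "Douglas"}

    def well_formed(expr):
        # A bare concatenation rule: 'x & y' with known primitives is legal.
        parts = expr.split(" & ")
        return len(parts) == 2 and all(p in primitives for p in parts)

    # Three compositional rules, all compatible with the same concatenation syntax.
    as_conjunction = lambda x, y: "{} and {}".format(x, y)
    as_list        = lambda x, y: "{}, {}".format(x, y)
    as_disjunction = lambda x, y: "{} or {}".format(x, y)

    expr = "A & B"
    assert well_formed(expr)
    x, y = (primitives[p] for p in expr.split(" & "))
    for rule in (as_conjunction, as_list, as_disjunction):
        print(rule(x, y))
    # The string's legality is the same in every case; only a further, semantically
    # loaded rule settles which complex meaning it carries.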
But this is quite problematic if we try to move from natural languages (where conventions are a commonplace) to an inner language of thought, where conventions are an embarrassment. For the only way we have of generating complex symbolic meanings from atomic meanings is through syntactically based combinatorial rules, and the only such rules we have are conventional rules. But if the meanings of mental states are dependent upon syntactic convention, the old problems about semantic conventions reassert themselves at a different level: in brief, (1) the actual existence of such conventions is extremely dubious, (2) their existence is in fact irrelevant to the meanings of our mental states, and (3) positing such conventions would lead to a regress of mental states.
This problem with the conventionality of syntax, moreover, in some ways poses a problem for CTM more fundamental than that posed by the conventionality of semiotic-semantic properties. As we shall see in the next chapter, one might try to rescue CTM by developing a notion of "semantic properties" for representations that is not convention- or intention-dependent. Some would say we already have such notions. It is far less clear, however, that we do have or could have an account of compositionality that was not ultimately based upon conventions, and hence this objection will recur for the versions of CTM to be explored in chapters 8 and 9.
7.14—
Semiotic-Semantics and the Vindication of Intentional Psychology
Up to this point, this chapter has been directed towards showing that CTM cannot provide an account of the intentionality and semantics of mental states based upon the semiotic-semantic properties of mental representations. What about CTM's other claim—the claim to provide a vindication of intentional psychology? There is a fairly straightforward case that, because the semiotic-semantic version of Fodor's account cannot explain the mental-semantic properties of mental states, it proves
unable to vindicate intentional psychology as well. The reason this is so is that the vindication of intentional psychology turns out to be dependent upon an account of intentionality in ways that may thus far have been unforeseen. To see how this is so, consider how CTM was supposed to provide a vindication of intentional psychology.
What the computer paradigm was supposed to show was that the semantic properties of symbols in computers can be coordinated with their causal powers because semantics can be coordinated with syntax, and in a computer a symbol's syntactic type determines its causal role. If we assume that the mind is a computer, and that the semantic properties of mental states are inherited from the symbols which it uses in its computations, then explanations cast in intentional vocabulary can (in principle) pick out psychological categories in a fashion that gets the causal regularities right.
This line of reasoning, however, is compromised by the analysis of symbols and semantics in chapters 4 and 5. For what we need for a vindication of intentional psychology is an account of how the mental-semantic properties of mental states can be coordinated with causal properties, and the most that a computational theory of mind can give us, it seems, is an account of how the semiotic-semantic properties of mental representations can be coordinated with causal powers. Of course, if one could account for the mental-semantic properties of mental states in terms of the semiotic-semantic properties of mental representations, the vindication of intentional psychology could proceed intact. But what we have seen in this chapter is that one cannot account for mental-semantic properties in this fashion. So even if there are mental representations with semiotic-semantic properties, and even if the semiotic-semantic properties of these are coordinated with causal roles, this does intentional psychology little good, because it does not explain how the kind of semantic properties ascribed to beliefs and desires can link up with causal regularities.
7.15—
Summary
CTM claims that the mind is a computer that operates upon mental representations that are symbols having semantic properties. But we have seen that the expression 'semantic properties' is ambiguous. In order to see just what CTM might be claiming, and how this might or might not support the claims to explaining intentionality and to vindicating intentional psychology, it was necessary to substitute different senses of 'semantics' into CTM's account. Here we have seen that neither the account
of intentionality nor the vindication of intentional psychology can proceed upon the assumptions that the "semantic properties" ascribed to mental representations are mental-semantic or semiotic-semantic properties. (That is, they cannot proceed upon the assumption that they are the kinds of semantic properties ascribed to mental states or to symbols, respectively.) In the following chapter we shall examine whether substituting some other sense of the expression 'semantic property' might produce more hopeful results.
Chapter Eight—
Causal and Stipulative Definitions of Semantic Terms
In the last chapter we began a project of assessing CTM's claims (1) that the intentionality of mental states can be explained in terms of the semantic properties of mental representations, and (2) that this will also provide a vindication of intentional psychology. The basic claim about the intentionality and semantics of mental states that we set out to examine was this:
Mental state M has semantic property P because
(1) M involves a relationship to a mental representation MR , and
(2) MR has semantic property P .
In light of the distinction between mental- and semiotic-semantic properties, however, it was necessary to revise this schema for explaining intentionality in the following fashion:
Mental state M has mental-semantic property P because
(1) M involves a relationship to a mental representation MR , and
(2) MR has _______-semantic property X .
The lacuna in clause (2) is to be filled by some more specific kind of "semantic property." What was shown in the last chapter is that filling the
lacuna in a way that offers an account in terms of mental-semantic properties or semiotic-semantic properties will not provide an explanation of the intentionality of the mental. And indeed, the problems arise not only at the level of the conventionality of semiotic meaning, but also involve problems with the conventionality of syntax and even of mere marker-hood . If 'symbol' means marker, then it will not do to speak of the mind as a manipulator of symbols, since that would again involve us in a regress of conventions.
However, we saw in chapter 5 that it is possible to develop the notion of a machine-counter in a fashion that seems to provide everything CTM should require when it speaks of "symbols" and "syntax," yet in a way that avoids commitments to conventions or intentions. It is therefore necessary to consider whether CTM might provide a viable account of the mind if we interpret the talk about "symbols" not as talk about markers and counters, but as talk about machine-counters . In order to do this, however, we will require more than the notion of a machine-counter. That notion might be sufficient for an articulation of the kind of "syntactic theory of mind" advocated by Stich (1983), but an interpretation of CTM will also require an interpretation of talk about the "semantic properties of the symbols" that supplements the notion of a machine-counter with a nonconventional notion of semantics. In this chapter, therefore, I shall present a way of interpreting CTM that avoids problems of the conventionality of symbols and syntax by interpreting CTM as dealing with machine-counters. Additionally, I shall explore two ways of interpreting CTM's use of semantic vocabulary as expressing some set of properties distinct from semiotic-semantic properties. First, I shall explore the possibility of treating Fodor's causal covariance theory of content as a stipulative definition of his use of semantic terms as applied to mental representations. Then, I shall explore the possibility of treating the semantic vocabulary in CTM as a truly theoretical vocabulary, whose meaning is determined by its use in the theory.
8.1—
The Vocabulary of Computation in CTM
In order to reinterpret CTM's claims so as to avoid the taint of convention and intention, we must find alternative interpretations for its talk about "symbols," "syntax," and "semantics." Chapter 5 already gives us a plausible alternative construal of talk about "symbols" and "syntax." For there we saw that some writers in computer science, like Newell and Simon (1975), seemed implicitly to use the word 'symbol' to denote
not the convention-based semiotic typing, but a typing tied directly to the functional analysis of the machine. There we suggested the technical notion of a machine-counter in an effort to make this usage more precise. The notion of a machine-counter was developed as follows:
A tokening of a machine-counter of type T may be said to exist in C at time t iff
(1) C is a digital component of a functionally describable system F ,
(2) C has a finite number of determinable states S : {s1 , . . . , sn } such that C 's causal contribution to the functioning of F is determined by which member of S digital component C is in,
(3) machine-counter type T is constituted by C 's being in state si , where si ∈ S , and
(4) C is in state si at t .
I argued in that chapter that this functional typing is quite distinct from the semiotic typing and can serve neither as an analysis nor as an explanation of it. But at the same time, this kind of functional typing may provide just what CTM needs to escape from the conventionality of markers and counters. It is thus only natural to try to reconstruct CTM in a way that substitutes an unobjectionable notion like that of a machine-counter for the problematic convention-laden notions of "symbol" and "syntax." Intuitively, the idea is that the mind has a functional analysis in terms of a machine table, and there are things in the mind or brain that (a ) appear as machine-counters in such an analysis and (b ) covary with content. We are thus ready to reconstruct CTM in a way that avoids the problems of conventionality explored in earlier chapters.
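A minimal sketch may help to fix the idea (the two-state component and its toggle rule are invented for illustration): a tokening of a machine-counter type is simply a digital component's being in one of its finitely many determinable states, where which state it is in fixes its causal contribution to the system. No interpretation scheme is assigned to the states.

    class DigitalComponent:
        def __init__(self, states, initial):
            self.S = tuple(states)                 # the finite state set {s1, ..., sn}
            self.state = initial                   # the state C is in at t

        def tokened_type(self):
            # The machine-counter type currently tokened is just the index i such
            # that C is in state s_i.
            return self.S.index(self.state)

        def step(self, pulse):
            # C's causal contribution depends solely on which member of S it is in
            # (here, a toy rule: advance to the next state on each input pulse).
            if pulse:
                self.state = self.S[(self.S.index(self.state) + 1) % len(self.S)]
            return self.state

    c = DigitalComponent(states=("lo", "hi"), initial="lo")
    print(c.tokened_type())                        # a tokening of type 0
    c.step(pulse=True)
    print(c.tokened_type())                        # now a tokening of type 1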
8.2—
A Bowdlerized Version of CTM
In Victorian England, there was a practice of producing editions of books that had been expurgated of all objectionable material (references to ankles and other such scandalous license). Such books were said to have been "bowdlerized," the word deriving from the name of one of the notable practitioners of such editing. What I propose to do here is to describe a bowdlerized version of CTM—BCTM—which avoids objectionable suggestions that MR-semantic properties are mental- or semiotic-semantic properties by characterizing MR-semantics in terms of
the work that the semantic vocabulary seems to do in CTM. Note that it is CTM in particular that is under discussion, and not cognitive theories generally: the operative meaning of semantic terminology might turn out quite differently if one were discussing other philosophical theories (e.g., those of Dennett, Searle, or Dretske) or if one were discussing particular empirical work (say, that of Colby, Newell and Simon, Marr, or Grossberg).
So, without troublesome references to symbols and semantics, it seems to me that what CTM wishes to claim is the following:
Bowdlerized Computational Theory of Mind (BCTM)
(B1) The mind's cognitive aspects are functionally describable in the form of something like a machine table.
(B2) This functional description is such that
(a ) attitudes are described by functions, and
(b ) contents are associated with local machine states. Call these cognitive counters .
(B3) These cognitive counters are physically instantiable.
(B4) Intentional states are realized through relationships between the cognizer and cognitive counters. In particular, for every attitude A and every content C of an organism O , there is a functional relation R and a cognitive counter type T such that O takes attitude A [C ] just in case O is in relation R to a tokening of T .
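The shape of clause (B4) can be made a bit more vivid with a toy rendering (all attitudes, contents, relations, and counter types below are placeholders invented for illustration, not claims about actual psychology): each attitude-content pair is realized by the organism's standing in a particular functional relation to a tokening of a particular cognitive-counter type.

    # A fixed association from (attitude, content) pairs to (relation, counter type).
    realization = {
        ("believes", "it is raining"): ("R_belief", "T_17"),
        ("desires",  "it is raining"): ("R_desire", "T_17"),
    }

    def takes_attitude(organism, attitude, content):
        # O takes attitude A[C] just in case O is in relation R to a tokening of T.
        R, T = realization[(attitude, content)]
        return (R, T) in organism["relations_to_tokenings"]

    o = {"relations_to_tokenings": {("R_belief", "T_17")}}
    print(takes_attitude(o, "believes", "it is raining"))    # True
    print(takes_attitude(o, "desires",  "it is raining"))    # False

Note that the same counter type figures in both entries; on this toy rendering it is the functional relation that marks the difference of attitude, as (B2) requires.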
BCTM may be regarded as a special form of machine functionalism. It is stronger than mere machine functionalism in several respects. Condition (B1) asserts that machine functionalism is applicable to minds. Condition (B2) goes beyond this to make special claims about how the attitude-content distinction will be cashed out in functional terms. Machine functionalism, in and of itself, does not make such a claim and indeed does not even assure that the attitude-content distinction will be reflected in a psychological machine table. Nor does machine functionalism claim, as (B4) does, that things that are picked out by functional description will also play a role in determining content.
If we interpret computational psychology in the way suggested by BCTM, the notion of rule-governed symbol manipulation becomes more of a guiding metaphor for psychology than the literal sense of the theory. Cognitive counters are not "symbols" in the ordinary semiotic sense,
but machine-counters—specifically, they are the things that occupy the slots of machine-counters in the functional analysis of thought, as opposed to other functionally describable systems. On this view, the mind shares with computing machines the fact that the salient description of their causal regularities is math-functional in character, but differs in that what is described by the function table is not a set of entities with conventional semiotic interpretations but—well, something else whose true nature is not yet known. If the theory is right, we presently know cognitive counters and their MR-semantic properties only through the role they play in contributing to something we know more immediately: namely, intentional states and mental processes.
I should stress that I view this as a reconstruction of CTM and not as an attempt to guess at what its advocates had in mind. I think it seems clear that Fodor and others have generally assumed the univocity of the semantic vocabulary, and likewise assumed that there was a perfectly ordinary usage of terms like 'semantics' and 'meaning' that could be extended to mental representations. In light of the problems that have already been shown to exist for that assumption, I am now trying to see whether there is an alternative interpretation of computational psychology that can avoid the problems already raised. (I am trying to pull CTM's chestnuts out of the fire, if you will.) In the end, I think there are two very different questions here: one about the viability of computational psychology as an empirical research programme, and another about the distinctively philosophical claims CTM's advocates have made about explaining intentionality and vindicating intentional psychology. In the remainder of this chapter, I shall try to argue that BCTM does not allow the computationalist to make good on these philosophical claims. In the final section of the book I shall explore an alternative approach to computational psychology that liberates its empirical research agenda from unnecessary philosophical baggage.
8.3—
The Problem of Semantics
If successful, the analysis of semantic properties in chapters 4 through 6 has shown several important things about the task of explaining the intentionality of mental states. First, what we call "meaning" and "intentionality" with respect to mental states are not exactly the same properties we ascribe to symbols when we use those words. Second, the properties we ascribe to symbols are conceptually dependent upon those we ascribe to mental states. And hence, as shown in chapter 7, we cannot use semiotic-semantics to explain mental-semantics. Most articulations of CTM have seemed to assume, on the contrary, that the semantic vocabulary can be predicated univocally of mental states, overt symbols, and mental representations, and that the semantic properties of representations could be used to explain those of mental states through "property inheritance" because they are, after all, the very same properties and need only be passed up the explanatory chain.
In light of the previous chapters, this direct explanation of intentionality by way of "property-inheritance" seems to be closed off. If the "semantic properties" of mental representations are semiotic-semantic properties, they cannot explain mental semantics. And if they are not semiotic-semantic properties, it remains to be seen what kind of properties they are supposed to be. However, it is possible that waiting in the wings there is a way to finesse this problem the way we were able to finesse problems of syntax and symbolhood by way of the notion of a machine-counter. That is, perhaps the semantic vocabulary expresses some distinct kind of property when applied to mental representations, and this kind of property gives us what we need to explain the intentionality of mental states. Of course, we do not have a theory until we spell out what these properties are supposed to be. But we may for the meantime indicate the fact that they are supposed to be distinct from mental-semantic properties and semiotic-semantic properties by indicating them as "MR-semantic" properties. (That is, the kind of properties expressed by the semantic vocabulary when applied to mental representations.)
Presumably, what is common to mental-, semiotic-, and MR-semantic properties is that in each case there is a relationship between the typing of the theory (i.e., types of intentional content, types of symbol, types of representation) and a set of objects or states of affairs. Indeed, presumably the mathematically reduced abstractions of the three sets of properties are in very close correspondence: since words are expressions of thoughts, words-to-world mappings will closely track thought-to-world mappings. And if there are indeed mental representations, presumably representation-to-world mappings will closely parallel thought-to-world mappings. (In the ideal case, they will be isomorphic. But it is likely that the relationship falls short of isomorphism due to factors such as two words expressing the same concept or one word ambiguously expressing multiple concepts.) As we have seen, this does not add up to a "common" notion of semantics, because the nature of the relations expressed by the mappings is different in each case. (For example, in the semiotic
case it is essentially conventional, while in the case of intentional states it is not.)
The problem for a computational-representational semantics is to articulate a theory of MR-semantics that can meet the following desiderata: (1) the MR-semantic typing of representations must correspond to their machine-counter typing; (2) the relation that establishes a mapping between representation types and their MR-meanings must be such as to be able to explain the presence of the mental-semantic properties of mental states; and (3) the mapping so established for representations must have a proper degree of correspondence to that of the semantics of mental states .
In this chapter I shall explore two possible ways of developing such a semantics for mental representations. First I shall examine the possibility of using Fodor's Causal Covariation Theory of Intentionality (CCTI) as a stipulative definition of the properties expressed by the semantic vocabulary when it is applied to representations. Later, I shall turn to the possibility that the semantic vocabulary, as applied to representations, is a true theoretical vocabulary, where the meanings of the terms are determined by the explanatory role they play in the theories in which they are introduced.
8.4—
A Stipulative Reconstruction of the Semantic Vocabulary
If, then, the semantic vocabulary is being used in some novel way when applied to mental representations, how is it being used? One reasonable hypothesis would be to suppose that it is being used to supply precisely the properties that Fodor ascribes to representations in his own theory of representational semantics—the so-called "causal covariation account." To repeat, I do not think that Fodor was in fact offering his semantic theory as a stipulative definition of the semantic vocabulary. But if the theory works, and the semantic vocabulary is in need of definition for mental representations, it seems a viable candidate. And if, as a stipulative definition, it is incapable of meeting the desiderata listed above, it will fail as a nondefinitional account as well, and so time spent critiquing it will not be ill spent.
Consider, then, what Fodor has to say about the nature of the "semantic properties" of mental representations. What Fodor provides by way of an "account of semantic properties for mental representations"
is what he calls a "causal theory of content." The motivation for this project Fodor explains as follows: "We would have largely solved the naturalization problem for propositional-attitude psychology if we were able to say, in nonintentional and nonsemantic idiom, what it is for a primitive symbol of Mentalese to have a certain interpretation in a certain context" (Fodor 1987: 98). This theory of "what it is for a primitive symbol of Mentalese to have a certain interpretation" has become progressively less vague in Fodor's work from 1981 to 1990, and Fodor describes the 1990 theory as providing an account of content having "the form of a physicalist, atomistic, and putatively sufficient condition for a predicate to express a property" (Fodor 1990: 52). The 1990 version of this account reads as follows:
I claim that "X" means X if:
1. 'Xs cause "X"s' is a law.
2. Some "X"s are actually caused by Xs.
3. For all Y ≠ X, if Ys qua Ys actually cause "X"s, then Ys causing "X"s is asymmetrically dependent on Xs causing "X"s. (ibid., 121)
It is clear from the context that this account is supposed to apply only to mental representations—that is, to be restricted to the cases where "X" indicates a mental representation—so we would seem to be on the right track in looking for an explication of 'means' as it is used of mental representations.
Let us, then, assume that this account of MR-semantic properties can serve as a stipulative definition of the semantic vocabulary as applied to mental representations. We may now substitute this account of MR-semantic properties into CTM's basic schema for explaining the intentionality of the mental, obtaining a Causal Covariation Theory of Intentionality (CCTI):
Causal Covariation Theory of Intentionality (CCTI)
Mental state M mental-means P because
(1) M involves a relationship to a mental representation MR of type R ,
(2a ) 'Ps cause Rs' is a law,
(2b ) some Rs are actually caused by Ps, and
(2c ) for all Q ≠ P , if Qs qua Qs actually cause Rs, then Qs causing Rs is asymmetrically dependent upon Ps causing Rs.
We shall now examine the prospects of CCTI. CCTI is primarily intended as an examination of the consequences of using causal covariation as a stipulative definition of the semantic vocabulary. But of course CCTI could serve as a statement of Fodor's account of mental semantics generally, whether clauses (2a ) through (2c ) are supplied by definition of semantic terms or merely provide necessary and sufficient conditions. The assessment that follows, therefore, is of interest as a critique of causal covariation accounts, whether they involve stipulative definition or not.
In what follows, I shall argue that this approach to saving CTM has several serious problems. First, even if CCTI provides a consistent theory that avoids the problems of interpretational semantics, it does not inherit much of the persuasive force originally marshaled for CTM, because much of that persuasive force turned upon the intuition that the same "semantic properties" could be attributed univocally to mental states, discursive symbols, and mental representations. With this assumption already undercut, it is incumbent upon CTM's advocates to make clear the connection between MR-semantics and mental-semantics in such a fashion that the former can account for the latter—that is, to show how causal covariation is even a potential explainer of mental-semantics. This leads to a more fundamental problem about the causal covariation account. What this account seems to attempt to provide is a demarcation account for meaning assignments, not an explanation of meaningfulness. That is, it seems to correlate particular mental-meanings (i.e., meaning-X as opposed to meaning-Y ) with certain naturalistic conditions, on the assumption that there is some meaning there in the first place. What it does not do is explain why mental states are meaningful (rather than meaningless ) in the first place, or how causal covariation is supposed to underwrite this fact. In this regard CCTI compares unfavorably to some other naturalistic accounts, but there is also reason to doubt that any naturalistic account could provide an adequate account of the meaningfulness of mental states. Finally, at best, CCTI would provide an account of the semantic primitives of mentalese, leaving the semantic values of complex expressions to be generated through compositional rules. But as we have seen in the last chapter, the only way we know of to provide syntactically based compositionality is through conventions . So even if CCTI succeeds in escaping the problems of conventionality at the level of semantic primitives, those problems will still reassert themselves as soon as one is concerned with expressions whose semantic properties are due to compositionality.
8.4.1—
What Is Gained and Lost in Causal Definition
Before making a direct frontal assault upon the Causal Covariation Theory of Intentionality, it will be useful first to become clear about what is gained and what is lost in adopting the strategy of defining semantic terminology for mental representations in causal terms. There seem to be three immediate benefits. First, we have clarified the semantic terminology to a point where we seem in little danger of running afoul of the ambiguities in the semantic vocabulary. Second, we are no longer in the embarrassing position of not being able to say what kinds of properties it is that are supposed to explain the intentionality of mental states. Third, we have done so in a fashion that manages to avoid all of the awful problems about conventions and intentions that plagued the semiotic-semantic account. If causal covariation is not free from the taint of the conventional, it is hard to imagine what would be.
On the other hand, it is important to see that a truly vast amount of the persuasive strength of the case for CTM is lost in the transition. The case for CTM, after all, traded in large measure upon the intuition that thoughts and symbols have some important things in common: namely, both are meaningful, both represent, both have semantic properties. This is a point to which Fodor repeatedly returns. To take a few sample quotes:
Propositional attitudes inherit their semantic properties from those of the mental representations that function as their objects. (Fodor 1981: 26)
Mental states like believing and desiring aren't . . . the only things that represent. The other obvious candidates are symbols . (Fodor 1987: xi)
Symbols and mental states both have representational content . And nothing else does that belongs to the causal order: not rocks, or worms or trees or spiral nebulae. (Fodor 1987: xi)
The reasoning that is supposed to follow from such claims seems quite clear: computational explanation in cognitive psychology makes it seem necessary to suppose that there are mental symbols over which the computations are performed. Perhaps these have semantic properties as well, and it is the semantic properties of the symbols that account for the semantic properties of the intentional states in which they are involved. That is, one is inclined to argue as follows:
(1) Mental states have semantic properties.
(2) Symbols have semantic properties.
∴ (3) There is a class of properties—semantic properties—shared by symbols and mental states.
so, (4) It seems reasonable to try to reduce the meaningfulness of mental states to that of the representations they involve.
Of course, in light of the distinctions made in chapters 4 and 5, the argument from (1) and (2) to (3) is exposed as a paralogism, since 'have semantic properties' must mean something different in the two contexts (mental- and semiotic-semantic properties, respectively). And without (3), there is much less reason to be inclined towards (4). It is one thing to claim
(A) Mental state M has property P because M involves MR , and MR has P .
It is quite another to claim
(B) Mental state M has property P because M involves MR , and MR has X , and X ≠ P .
(B) requires a kind of argumentation beyond what is required for (A), because (A) proceeds on the assumption that property P is in the picture to begin with, and just has to explain how M gets it. (B), on the other hand, has to do something more: it has to explain how P (in this case, mental-intentionality) comes into the picture at all .
As for the quotes cited above, their interpretation becomes quite problematic once they are read in light of the distinctions between different kinds of "semantic properties." If words like 'semantic', 'represent', and 'content' are defined in causal terms for mental representations, claims such as these are irrelevant at best. At worst they are logical howlers. To say, for example, that "mental states and (discursive) symbols both represent" is perilously misleading. As we have seen in chapter 4, there is no one property called "representing" that is shared by mental states and discursive symbols. Instead, 'represent', like other semantic terms, means different things when applied to symbols and to mental states. So the sentence, "mental states and symbols both represent" involves faulty parallelism that disguises a more basic conceptual error.
The same kind of problem occurs if we just define 'refers to' or 'means' in causal terms for mental representations. Suppose "mental representation MP refers to P " just means "mental representation MP was caused by P in fashion F ." What, then, would we make of such assertions as "propositional attitudes inherit their semantic properties from those of the representations that serve as their objects"? This assertion, like the claim that mental states and symbols "both represent," is perilously misleading. For the claim implies that there is some set of properties called "semantic properties" that are ascribed both to mental states and to mental representations. If the "semantic properties" ascribed to mental representations are defined in causal terms, however, the semantic properties ascribed to mental states must be defined in causal terms as well, if they are to be the same properties. But surely this is not so. When we say that Jones is thinking about Lincoln, what we mean is surely not precisely that he stands in a particular causal relation to Lincoln. We certainly mean nothing of this kind when we say that Jones is thinking about unicorns or numbers . So if we define semantic terms applied to mental representations in causal terms, it is misleading to speak of the "inheritance" of semantic properties: such properties as might be conferred upon mental states by representations are not the same properties that are possessed by the mental representations themselves. And such arguments for CTM as depend upon a genuine inheritance of the same "semantic properties" turn out to be fallacious.
A similar problem arises for CTM's attempt to vindicate intentional psychology. The strategy for the vindication was to show, on the basis of the computer paradigm, that the postulation of mental representations could provide a way of coordinating the semantic properties of mental states with the causal roles they play in thought processes. Such an argument might be formulated as follows:
Argument V1
(1) Mental states are relations to mental representations.
(2) Mental representations have syntactic and semantic properties.
(3) The syntactic properties of mental representations determine their causal powers.
(4) All semantic distinctions between representations are preserved syntactically.
∴ (5) The semantic properties of representations are coordinated with causal powers (3,4).
(6) The semantic properties of mental states are inherited from the representations they involve.
∴ (7) The semantic properties of mental states are coordinated with causal powers (5,6).
Now consider just steps (5) through (7). If we were to interpret the expression 'semantic properties' univocally, we could recast (5) through (7) as follows:
Argument V2
(5′) There is a strict correspondence between a representation's semantic properties and its causal powers.
(6′) A mental state M has semantic property P if and only if it involves a representation MR that has semantic property P .
∴ (7′) There is a strict correspondence between a mental state's semantic properties and its causal powers.
On this construal we appear to have a reasonable and valid argument. But consider this second construal, which is forced upon us by the recognition of the homonymy of semantic terms:
Argument V3
(5*) There is a strict correspondence between a representation's MR-semantic properties and its causal powers.
(6*) A mental state M has mental-semantic property P if and only if it involves a representation MR that has MR-semantic property X .
∴ (7*) There is a strict correspondence between a mental state's mental-semantic properties and its causal powers.
The plausibility of the deduction to (7*) depends in large measure upon the plausibility of (6*). The plausibility of (6*), in turn, will depend upon what MR-semantic properties turn out to be. But whatever they may turn out to be, (6*) lacks some of the immediate prima facie appeal of (6) and (6′), since it depends upon a (contingent) correlation of different kinds of properties, whereas (6) and (6′) involve ascriptions of the same properties to two different objects. This kind of contingent correlation is itself in need of explanation.
The upshot of these observations is this: if the "semantic properties"
of mental representations are defined in causal terms, the proponent of CTM owes us something that he did not owe us on the assumption that the semantic properties of mental states were the very properties possessed by mental representations: namely, he owes us a plausible account of why having a representation MR with certain MR-semantic properties (say, certain causal connections with objects in the environment) should be a sufficient condition for having a mental state with certain mental-semantic properties (say, a belief about dogs). This is significant because the arguments given in favor of CTM seem to assume that the same kinds of "semantic properties" can be ascribed indifferently to symbols, mental representations, and mental states. But if one defines the semantic terminology that is applied to representations in causal terms, most of what Fodor says to commend CTM to the reader is patently fallacious.
In summary, then, we may say that defining MR-semantic properties in terms of causal covariations allows us to avoid the major pitfalls presented for earlier readings of CTM, but the case for CTM now seems much weaker than it once did. The reason for this is that originally the road from representations to mental states was a road from semantics to semantics, and the road from semantics to semantics seemed relatively short and straight. If the "semantic properties" of mental states and representations were the same properties, there would be no question but that the latter are the sort of things that could account for the presence of the former, but only a question about whether such "inheritance" indeed takes place. On the current interpretation, however, the road from representations to mental states is a road from causal covariation to mental-semantics. That road is surely much longer, and there is no small question about whether the roads shall meet at all. It may be that they are like Down East roads: "Ya can't get there from here!"
8.4.2—
Covariation and Mental-Semantics
The vital question, then, is whether causal covariation is the right sort of notion to provide an explanation of the semantic properties of mental states. I believe that it is not. But in order to see why it is not, it may prove useful to see what it is suited to doing and how that falls short of explaining mental-semantics. In order to do this, it will be helpful to make two sorts of distinctions. First, we may distinguish between two sorts of accounts: those that provide explanations of what it is to be an X , and those that merely provide criteria for the demarcation of X 's from non-X 's.
Second, we may distinguish accounts of meaning assignments (i.e., distributions of meanings) from accounts of meaningfulness . The former differentiate things that mean A from those that mean B , on the assumption that the items in question mean something; the latter explain why items mean something rather than nothing. I shall argue that CCTI is suited at best to providing a demarcation criterion for meaning assignments, whereas an account of mental-semantics requires something stronger: an explanation of meaningfulness.
8.4.2.1—
Explanation and Demarcation
To begin with, let us distinguish accounts that give an explanation of why something is an X from accounts that merely provide a criterion for the demarcation of X 's from non-X 's. Plato's characterization of humans as featherless bipeds is an attempt at a demarcation criterion. It happens to be a poor attempt, since apes, tyrannosaurs, and plucked chickens are also featherless bipeds. But even if humans were, in point of fact, the only featherless bipeds, the featherless-biped criterion would at most give us a litmus test for distinguishing humans from other species. If what we wanted was an explanation of what makes Plato a human being, the fact that he is a featherless biped is clearly a non-starter. The problem is not that demarcation criteria can be wildly contingent, for in fact they need not be—some demarcation criteria can be metaphysically necessary. Even demarcation criteria that are metaphysically necessary, however, can fail to be explanatory. For example, if you want to know what makes a figure a triangle, the answer had better be something like "the fact that it has three sides." But there are descriptions that distinguish triangles from everything else that do not provide this information: for example, "simplest two-dimensional polygon," "shape of the faces on a regular octahedron," and (worst of all) "Horst's favorite geometric example." (This last, of course, is not metaphysically necessary.) If you want to know what makes a figure a triangle, the fact that it has the same shape as one of the faces of an octahedron just will not do as an explanation, though it is necessary and sufficient.
There are relationships between demarcation criteria and explanations. Significantly, things that can serve as explanations are a proper subset of things that can serve as demarcation criteria. On the one hand, an account that explains what it is to be an X must also be able, at least in principle, to serve as a demarcation criterion for distinguishing X 's from non-X 's. On the other hand, the opposite is not true: we have already seen examples of demarcation criteria that lack explanatory power. A
corollary of this is that one way of showing that something is not an explanation of what it is to be an X is to show that it does not even distinguish X 's from non-X 's.
8.4.2.2—
Meaning Assignment and Meaningfulness
Let us further distinguish between two aspects of accounting for a token T 's meaning-X . On the one hand, one might want to account for why T means X as opposed to meaning something else, treating it as a background assumption that T can mean something . When we explain the role of particular morphemes in determining the meanings of polymorphemic words, for example, we take it as a given that words can mean something and confine ourselves to asking, say, how various sorts of affixes interact with the meanings of root morphemes. This provides an account of why words have the particular meanings they have without touching upon the question of how language gets to be meaningful in the first place. But one might ask this second question as well, and it is here that, say, Ruth Millikan's account of truth and meaning for languages is at odds with accounts based on convention or speaker meaning. Such accounts are accounts of meaningfulness rather than of meaning assignment .[1] Presumably one may offer an account of meaning assignments without thereby offering an account of meaningfulness, and vice versa.
8.4.2.3—
Why We Need an Explanation of Meaningfulness
Now what kind of "account of meaning" is required for mental-semantic properties of mental states? Well, if one wants to know how it is that things in the mind get to be about things in the world, one presumably wants to know both how thoughts get to be about particular things and how they get to be about anything at all —that is, one wants accounts of meaning assignment and of meaningfulness. Now suppose further that we are interested (as CTM's most notable advocates clearly are interested) in a naturalistic account—one that explains mental-semantic properties on the basis of some naturalistic properties ("N-properties"). Here the problem of meaning assignment becomes one of associating particular mental-semantic properties (e.g., meaning "horse") with particular N-properties (e.g., causal covariations with horses). And if all we are interested in is a naturalistic demarcation criterion for particular mental-meanings, all the "association" need amount to is strict correlation—some set of N-properties that all and only horse-thoughts (as opposed to cow-thoughts, unicorn-thoughts, etc.) possess. But if we are interested
not merely in a demarcation criterion, but in an explanation of what it is to mental-mean "horse," our naturalistic account of meaning assignments needs to be augmented with a naturalistic account of meaningfulness as well. Unless N-properties are sufficient to explain mental-meaningfulness, particular N-properties cannot explain particular mental-meanings either.
If CCTI is to provide an adequate account of intentionality and mental-semantics, then, it must provide an explanation of mental-meaningfulness. I shall now argue, however, that CCTI cannot plausibly be supposed to do this. All it can plausibly be supposed to do is provide a demarcation criterion for meaning-assignments. I shall first argue that CCTI attempts to provide a demarcation criterion for meaning assignments, and then argue that it fails to do more than this.
8.4.3—
CCTI As a Demarcation Criterion for Meaning Assignments
There are three main reasons to see CCTI as a demarcation criterion for meaning assignments. First, there is a strong tendency in the literature to see the task of "fixing meanings of representations" as a matter of imposing a suitable interpretation scheme—namely, one that assigns the right meanings. Second, CCTI seems naturally suited to providing a demarcation criterion of the desired sort. Third, the bulk of the discussion of the causal covariation version of CTM has centered on CCTI's success or failure at providing such a criterion.
8.4.3.1—
Demarcation, Interpretation, and Meaning Fixation
A reader of the cognitive science literature will have noticed that there is a strong tendency to view the problem of accounting for the content of representations as one of imposing a coherent representational scheme. Pylyshyn writes, for example, that the computational approach to the mind involves the assumption that
there is a natural and reasonably well-defined domain of questions that can be answered solely by examining (1) a canonical description of an algorithm (or a program in some suitable language—where the latter remains to be specified), and (2) a system of formal symbols (data structures, expressions), together with what Haugeland (1978) calls a "regular scheme of interpretation" for interpreting these symbols as expressing the representational content of mental states (i.e., as expressing what the beliefs, goals, thoughts, and the like are about, or what they represent). . . . Notice . . . that we have not said anything about the scheme for interpreting the symbols—for example,
whether there is any indeterminacy in the choice of such a scheme or whether it can be uniquely constrained by empirical considerations (such as those arising from the necessity of causally relating representations to the environment through transducers). (Pylyshyn 1980: 116, emphasis added)
Notice two things about this quote. First, semantic properties are discussed in terms of a "scheme of interpretation." Second, the question about this scheme that seems foremost in Pylyshyn's mind is whether the meaning assignments of a given scheme can be constrained so as to be unique. Similar issues arise in Haugeland (1981: intro.; 1985: chap. 3). It seems clear that these writers view the issue of finding a semantics for mental representations as one of finding a way to constrain the specification of an interpretation scheme for representations so that it is unique and so that it gets the causal relationships right—that is, their concern is for providing an adequate demarcation criterion for meaning assignments.
8.4.3.2—
The Suitability of CCTI for Demarcation
CCTI also seems well suited to providing a demarcation criterion for meaning assignments. (Or, to be more precise, it seems suited to providing a candidate for such a criterion, since there is one question about what it sets out to do and another about whether it accomplishes it.) It is quite easy to see that, whatever else CCTI might be used to do, it at very least purports to be a demarcation criterion for meaning assignments. For it is set up to give sufficient conditions, in naturalistic terms, for particular mental-meanings: the mental states that mental-mean P are the ones that have mental representations that are in a relation of causal covariation with the class of objects or states of affairs designated by P . This account may or may not be true, but if it is true, it provides a way of separating mental states that mean P from those that mean Q : the former involve representations characteristically caused by P 's and the latter involve representations that are characteristically caused by Q 's.
8.4.3.3—
The Problem of Misrepresentation
Now there has been a substantial amount of discussion of CCTI in the literature, assessing the merits of causal covariation as a way of explaining mental-semantics. What this discussion seems to center on, however, is the prospects for causal covariation as a way of providing a demarcation criterion for meaning assignments . This provides some evidence that this is the role the theory is commonly regarded as playing.
The focus of this discussion has been upon CCTI's ability to account for the possibility of misrepresentation. According to CCTI, those thoughts are about P 's that involve representations of a type caused by P 's. But it is surely possible to have thoughts about P 's that are not caused by P 's and, worse yet, to have thoughts that are about P 's that are caused by something other than P 's—Q 's, for example. So, for example, someone visiting Australia might see a dingo and say to himself, "Oh, there's a doggie out back in the outback!" (Dingos are not dogs, etymologically speaking.) This person's thought has the content "dog," but is caused by a nondog, a dingo. And it is even possible for this error to be systematic: someone might always mistake dingos for dogs, wrens for sparrows, gnus for cattle, and so on. The problem is that, according to CCTI, thoughts are supposed to be about whatever it is that is the characteristic cause of their representations. But if dingos systematically cause a tokening of the same kind of representation that dogs cause, it would seem to follow that what this kind of representation MR-means is the disjunctive class dog-or-dingo. This has several unwelcome results. First, my dog-thoughts turn out to mean not "dog," but "dog or dingo." (And this quite unbeknownst to me and contrary to what I have assumed all along.) Second, it would seem to be impossible to misrepresent a Q as a P , since the fact that Q 's cause the same representations as P 's under certain conditions will occasion a change in the "meaning" to be assigned to such representations. (And it just seems wrong to say, for example, that someone who mistakes holograms of unicorns for real unicorns has thoughts that mean "hologram" and not "unicorn.") There are related problems arising from the fact that thoughts about dogs can be caused by things other than distal stimuli entirely—for example, I can think about dogs in dreams or in free fancy. It is hard to see just how a strict causal theory should treat these cases.
This problem, which Fodor likes to call the "disjunction problem," was apparently a significant incentive in his development of the causal covariation account of intentionality from the form in which he articulated it in 1987 to the form it took in 1990. What is new in the more recent account is the addition of a notion of "asymmetric dependence," which is introduced to handle the disjunction problem. Recall the form of the account in Fodor (1990), which we have used here to develop CCTI:
I claim that "X" means X if:
1. 'Xs cause "X"s' is a law.
2. Some "X"s are actually caused by Xs.
3. For all Y ≠ X, if Ys qua Ys actually cause "X"s, then Ys causing "X"s is asymmetrically dependent on Xs causing "X"s. (Fodor 1990: 121)
The first and second clauses are already implicit in the older formulation. The notion of asymmetric dependence appears in clause (3). The idea is as follows: a thought involving a given representation R can mean "dog" and not "dingo" or "dog-or-dingo," even if it is regularly caused by both dogs and dingos, if it is the case that the causal connection between dingos and R -tokenings is asymmetrically dependent upon the causal connection between dogs and R -tokenings. And the nature of this "dependence" is cashed out in purely modal terms: what it means is that if dogs did not cause R -tokenings, dingos would not either, but not the reverse. (In other words, dingos might fail to cause R -tokenings without dogs failing to do so as well.)
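Since the three clauses just quoted are schematic, it may help to see them run on the dog/dingo case in miniature. The following sketch is mine, not Fodor's: the causal "laws" and the counterfactual-dependence facts are simply stipulated as data (nothing in the theory computes them), and the function names are invented for illustration. It shows how naive covariation yields the disjunctive content while clause (3) restores the intuitive assignment.

    # Toy model of the disjunction problem and of asymmetric dependence.
    causes_R = {"dog", "dingo"}   # kinds whose instances reliably cause R-tokenings

    # depends_on[(Y, X)] == True reads: "if Xs did not cause R-tokenings,
    # Ys would not cause them either."
    depends_on = {
        ("dingo", "dog"): True,   # break the dog-to-R law and the dingo-to-R law goes with it
        ("dog", "dingo"): False,  # but not conversely
    }

    def naive_covariation_content(causes):
        # Naive CCTI: the representation means whatever reliably causes it,
        # so two reliable causes yield a disjunctive content.
        return "-or-".join(sorted(causes))

    def asymmetric_dependence_content(causes, depends_on):
        # Clause (3): X is the content if every other cause's law is
        # asymmetrically dependent on the X-to-R law.
        for x in causes:
            if all(depends_on.get((y, x), False) and not depends_on.get((x, y), False)
                   for y in causes - {x}):
                return x
        return None  # no asymmetric base; content left undetermined

    print(naive_covariation_content(causes_R))                  # dingo-or-dog
    print(asymmetric_dependence_content(causes_R, depends_on))  # dog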
Now I have no interest in contributing here to the already good-sized literature debating the success or failure of this move. What I wish to do is merely to point to what it is a debate about . And what it is a debate about is whether CCTI provides meaning assignments in the way we should wish a semantic theory for the mind to do. It is about such questions as whether such a theory would yield counterintuitive meaning assignments (such as "dog-or-dingo") and whether it can accommodate such patent facts as misidentification, in which one has a thought the content of which does not match the thing one is trying to identify. It may be that the fancy footwork provided by the notion of asymmetric dependence can finesse a way through these problems, but it is these problems that it seems intended to finesse.
8.4.4—
What CCTI Does Not Do
What CCTI notably does not seem to do is provide more than a demarcation criterion for meaning assignments. It is not clear that it is even an attempt to provide an account of meaningfulness for mental states; and if it is so intended, the account it provides is woefully inadequate. I shall attempt to argue this in two different ways. First, I shall argue that CCTI does not provide so much as a demarcation criterion for meaningfulness (as opposed to meaning assignments ), and hence cannot provide an explanation of meaningfulness, since an account that explains will also provide a demarcation criterion. Second, I shall argue that CCTI lacks the right sort of explanatory character to explain the intentionality of the mental.
8.4.4.1—
Failure to Demarcate the Meaningful
While causal covariation may or may not provide a demarcation criterion for meaning assignments , it does not provide a demarcation criterion for meaningfulness —that is, for separating things that mean something from those that mean nothing . For the notion of causal covariation is cashed out in terms of regular causation, and regular causation is a feature not just of mental states and processes, but of objects and events generally. The overall project here is to explain the mental-semantic properties of mental states in terms of some set N of naturalistic properties, and the proposal at hand is that N-properties are causal covariation relations. But this set of properties has a domain far broader than that of mental representations: any number of objects and events not implicated in thoughts have characteristic causes, and hence have N-properties. Cow-thoughts are not the only things reliably caused by cows: so are mooing noises, stampedes, and cowpies, to name a few. The CCTI cannot be a viable demarcation criterion of meaningfulness, because it does not distinguish thoughts about cows from stampedes and cowpies. And this is surely a demarcation we should expect a theory that accounted for meaningfulness to entail. So either we must impute mental-semantic properties to all kinds of objects and events, endowing much of nature with content, or we must allow that something more than N-properties is required to explain mental-semantics.
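The shape of the problem can be displayed in a deliberately crude sketch (mine, not the text's; the sets and names are stipulated purely for illustration): if reliable causation by cows sufficed for meaning "cow," the content would spread to everything cows reliably cause.

    # If causal covariation with cows sufficed for meaning "cow," then anything
    # reliably caused by cows would mean "cow."
    reliably_caused_by_cows = {"cow-thought", "mooing noise", "stampede", "cowpie"}

    def covariation_meaning(item):
        # Assign content purely on the basis of reliable causation by cows.
        return "cow" if item in reliably_caused_by_cows else None

    for item in sorted(reliably_caused_by_cows):
        print(item, "means", covariation_meaning(item))
    # The cow-thought comes out meaning "cow", but so, absurdly, do cowpies,
    # mooing noises, and stampedes. The N-property draws no line between the
    # meaningful and the meaningless.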
The obvious strategy for sidestepping this objection is to point out that, while representations may share N-properties with many other sorts of objects, it is only mental representations that take part in the relations characteristic of intentional states. There may appear to be a threat of endowing the world with content—namely, with MR-semantic properties. But remember that the word 'semantic' in "MR-semantic" is not doing much work, since we have defined the expression 'MR-semantic properties' in terms of causal covariation. Thus in allowing most of nature to have MR-semantic properties, we have not endowed it with anything counterintuitive, even though the word 'semantic' might suggest as much. Moreover, CCTI, as we have formulated it, involves more than causal covariation: it involves an explicit stipulation that the items that have MR-semantic properties are also part of an intentional state . It is this additional fact that differentiates them from objects in nature generally. To use some terminology that has not yet been used here, we might say that indication or natural meaning plays a role in the production of mental-meaning only when the indicator is present in an organism in one of the functional relations characteristic of intentional attitudes . Or, to put it slightly differently,
the domain over which the CCTI is quantified is not all objects, but all objects that are representations involved in intentional states.
There is something appealing about this strategy, but it is important to note that it violates one of the fundamental canons of CTM: namely, that the semantic properties of mental states be "inherited" from the "semantic properties" of representations. According to the formulation in the previous paragraph, however, this is not the case: mental-semantic properties are not explicable solely in terms of MR-semantic properties of representations, but in terms of MR-semantic properties of representations plus something else . Worse yet, this "something else" seems to consist precisely in the fact that the representations are elements of an intentional state! But if we must allude to the fact that representations are part of an intentional state to make CCTI proof against the semantification of nature, we have failed to provide a naturalistic explanation of mental-meaning, since part of our account still presumes the intentional rather than explaining it. It is, of course, possible to begin by assuming intentionality, and then asking the question of what kinds of natural properties are involved in the realization of intentional states; and if we do this, we need not worry about the fact that part of what differentiates mental representations from other things that participate in causal covariation is that they also play a role in intentional states. But if we do this, we are no longer seeking an account that provides supervenience or explanatory insight. And this, it would seem, is less than CTM's advocates generally desire by way of an "account of intentionality" (even if it is, in my view, a far more sensible strategy).
The upshot of this is that CCTI does not succeed in providing a criterion for the demarcation of the meaningful from the meaningless. It is not really clear that it was intended to provide such a criterion, but it fails to do so regardless. It follows from this a fortiori that it does not provide an explanation of meaningfulness, since an explanation would also provide a demarcation criterion.
8.4.4.2—
Why CCTI Does Not Explain Meaningfulness
It is also possible to tackle the issue of the explanation of meaningfulness by way of a frontal assault. And it seems prudent to do this, since someone might be inclined to try to rescue CCTI as a potential demarcation criterion for meaningfulness by way of some clever patchwork, much as Fodor has tried to rescue it as a criterion for meaning assignment by way of the notion of asymmetric dependence. To do so, however, would be to miss a much more serious point. The deep problem with CCTI is not that
I have some clever counterexamples that it has failed to catch in its net, and that might be brought into line with the insertion of an additional clause or two. The deep problem, rather, is that causal covariation is just not suited to explaining why some X is capable of meaning something rather than nothing. Causality is just too bland a notion for that task, and fancy patchwork would only serve to reveal this problem rather than to remedy it.
Now the way I should like to be able to proceed here would be to provide a really tight and compelling analysis of explanation and then give a knock-down argument to the effect that CCTI does not fit that analysis if the explanandum is meaningfulness. Explanation, however, is a notion that is notoriously difficult to analyze, and I shall have to content myself with a slightly more roundabout course for getting to the same conclusion: I shall attempt to establish one of the crucial "marks" of successful explanations, and then attempt to argue that the account of intentionality offered by CTM lacks this mark.
One characteristic of successful explanation is the kind of reaction it produces: the "Aha!" reaction that comes with new insight. Suppose I have some familiarity with some phenomenon P , with a set S of notable features. Now suppose that I try to explain P by means of an explanation E , cast in terms of some set of entities and relations X . Now E succeeds as an explanation to the extent that understanding X gives me insight into S —that is, to the extent that upon understanding X I become inclined to say, "Ah, now I see why things in S are as they are." Indeed, in the ideal case, understanding of X should be sufficient for me to infer S , even if I have no prior knowledge of S . Someone with an adequate knowledge of the behavior of physical particles, for example, would be able to derive the notion of "valence" and the laws of thermodynamics, and hence particle theories provide first-rate explanations for these other phenomena. Of course, in practice the process of explanation progresses in the other direction, but an ideal grasp of the explaining phenomena could be sufficient to allow for the derivation of the explained phenomena. This idea that an ideal explanation should allow the derivation of one phenomenon from another (e.g., a more complex one from a simpler one) is part and parcel of the Galilean method of resolution and composition that has informed much of modern science and modern philosophy of science, and is found notably in recent philosophy of science in both reductionist and supervenience accounts.
8.4.4.3—
Instantiation and Realization
I think that the weakest sort of explanation meeting this strong requirement is what Robert Cummins (1983) calls an "instantiation analysis."
(There are stronger sorts of explanation meeting it as well, of course, such as reductions.) Cummins proposes the notion of an "instantiation analysis" as a way of understanding theories that identify instantiations of a property P in a system S by specifying organizations of components of S that would count as instantiations. An instantiation analysis of a property P in a system S has the following form:
[Table: the general form of an instantiation analysis of a property P in a system S, in terms of components C1 . . . Cn and their mode of organization O.]
Instantiation analyses are distinguished from reductions (ibid., 22-26) by the fact that a single property can have multiple instantiations in different systems, whereas the reduction of a property requires a unique specification of conditions under which it is present. But the instantiating property is intended to explain the presence of the instantiated property. Indeed, Cummins writes that one should be able to derive a proposition of the form (6i) from a description of the properties of the components of the system, and that when we can do this we can "understand how P is instantiated in S" (ibid., 18, emphasis added). That is, from a specification of the properties of the components of the system in the form
(6a) The properties of C1 . . . Cn are <whatever>, respectively,
we should be able to derive
(6i) Anything having components C1 . . . Cn organized in manner O—i.e., having analysis [C1 . . . Cn , O]—has property P.
Thus, with an instantiation analysis, supplying a description of the interrelations of the components of a system S should be enough to show that a property P is instantiated in S , because one can derive the conclusion that S has P from a statement such as (6i), and one can, in turn, derive (6i) just from a description of the components of S —that is, from a statement such as (6a). And since one can derive the conclusion that P is instantiated in S in this way, providing such an analysis should be sufficient to allay doubts that P can be instantiated in S: given a proper description of the components of S , one can, quite simply, infer the instantiation of P in S .
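To make the derivational structure fully explicit, the intended inference can be laid out schematically as follows. The labels (6s) and (6c) are mine, added only for bookkeeping; (6a) and (6i) are as above:

(6a) The properties of C1 . . . Cn are <whatever>, respectively.
(6i) Anything having analysis [C1 . . . Cn , O] has property P. [derivable from (6a)]
(6s) S has components C1 . . . Cn organized in manner O, i.e., S has analysis [C1 . . . Cn , O].
(6c) Therefore, P is instantiated in S . [from (6i) and (6s)]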
We may also distinguish the notion of an instantiation analysis from that of a weaker sort of account, which I shall call a realization account . A realization account provides a specification of how a property P is realized in a system S through the satisfaction of some set of conditions C1 , . . . , Cn —but without any implication that the satisfaction of C1 , . . . , Cn provides a metaphysically sufficient condition for the presence of P . I shall give several examples:
(1) There are individual objects that have a particular status, such as the Victoria Crown kept in the Tower of London or the Mona Lisa. One could, in principle, give a complete physical description of the matter through which the Mona Lisa is realized. But meeting that description does not provide a sufficient condition for being the Mona Lisa. Additional objects meeting that description would not be additional Mona Lisas, but perfect forgeries. Likewise, there are object-kinds such as "dollar bill" that must be realized through objects with a particular physical description. But once again, meeting that description alone does not make something a genuine dollar bill. If you or I make one, it is a forgery. Dollar bills are realized through particular material configurations, but no instantiation analysis of dollar bills is possible.
(2) Some kinds of human attributes are realized through a person's behavior without the behavior itself providing a sufficient criterion for the presence of the attribute. For example, Jones and Smith may both give a substantial portion of their resources to persons in need, yet in a very different spirit. It may be that Jones does so because he is generous, while Smith does so only because he believes that it is the sole way of saving himself from the flames of hell. Jones's behavior is a realization of generosity, while Smith's is not, even if the behaviors themselves are indistinguishable.
(3) We have seen that there are certain senses in which a computer may be said to perform such operations as adding two numbers. Such operations may be said to be realized through the processes that take place in the computer's components. But a specification of the processes that take place in the computer's components does not provide a sufficient condition for the computer's overall behavior counting as addition, because it only counts as addition by virtue of meaning-bestowing intentions or conventions of designers, programmers, or users, and these are not mentioned in specifications of the interactions of the components through which the adding process is realized in the machine.
Now there is an important methodological and theoretical difference
between instantiation analyses and realization accounts. Realization accounts proceed on the assumption that one may sensibly talk about the property P being realized in some system S . They do nothing, and can do nothing, to show that the organization of components of S would result in the presence of P . Indeed, it need not result in it—a particular set of behaviors might be a realization of jealousy or a realization of a fear of perdition, and a certain configuration of matter only counts as the Victoria Crown or a dollar bill in the context of particular institutional facts and historical acts. Realization accounts do not require even supervenience.
As a consequence, a realization account could not do anything to allay doubts about P 's being susceptible to realization in S: it proceeds on the assumption that P can be realized in S , and hence cannot justify that assumption. In the case of instantiation analysis, by contrast, one can infer the conditions for the ascription of P from a description of the components of S . As a result, providing an instantiation analysis of P in S also serves to vindicate the claim that P can be instantiated in a system like S . It vindicates it because it shows that it can be so. A realization account, on the other hand, does not in any comparable sense show that a property P can be instantiated in a system S . If someone is inclined to doubt that Jones is capable of generosity, for example, pointing to Jones's sizable donations to various charities will not prove the doubt to be mistaken. The donations might, of course, be realizations of generosity in Jones, but it might alternatively be the case that Jones really is incapable of generosity, and is merely giving of his wealth because he is trying to buy his way into heaven. Showing how a property is realized in a system gives us insight into the property and the system in which it is realized, but the resulting description cannot be used to demonstrate that the property is realized in the system or even that it can be .
8.4.4.4—
Instantiation and the Explanation of Meaningfulness
Now I think it should be clear that in order to explain meaningfulness in naturalistic terms, it would be necessary to provide something on the order of an instantiation analysis for meaningfulness—that is, to provide an account such that an adequate understanding of the explaining properties would be sufficient to ground inferential knowledge of the properties explained as well. It also seems clear that, as an explaining property, causal covariation does not come within a country mile of meeting this condition. Causal covariation might very well provide what is needed for
seeing why some thoughts are about one thing and other thoughts are about something else. (Then again it might not—I have no interest in taking sides here.) What it does not do is provide understanding of why causal regularities might contribute to meanings in the case of mental states while failing to do so in all of the other cases of causal covariation occurring in nature . And it is precisely here that the problem of meaningfulness lies.
Nor will any minor patchwork help in the slightest. Asymmetric dependence, for example, is of no assistance here. That can, at best, explain why my thought does not mean "dingo" or "dog-or-dingo." Why it means "dog"—or, more to the point, why it means something while other things caused by dogs mean nothing (let the reader's imagination run wild)—is in no wise clarified by the notion of causal covariation.
Robert Cummins has suggested to me an alternative way of making this point: theoretical identifications, such as the identification of heat with a kind of motion, are of interest only insofar as they help us to understand something about the phenomena that are being explained. Descartes (Le Monde, chap. 2), for example, rejects the Scholastic view that "fire" or "heat" names a kind of substance in favor of the view that fire involves a kind of change of state in the matter of the combustible material, and that heat consists in the increased level of agitation of the matter. Other theorists were impressed by such factors as the ability to convert mechanical force into heat (as when a nail gets hot when it is driven by a hammer) and back again (as in the case of a steam engine). Viewing heat in terms of the motion of matter (and ultimately in terms of kinetic energy) allows us to understand why iron glows when heated and why nails get hot when pounded with a hammer. Now if CCTI is to be of interest as an explanation of intentionality, one would at very least expect there to be something about intentional states that we are able to understand better once we view them through the lens provided by the theory. But in fact there seems to be nothing of the sort. There was perhaps once hope of such a result when causal theorists were more inclined to identify content with information, and hence to view the causal chains involved in their accounts as being chains of information transmission. But the incompatibility of strict information accounts with misrepresentation has caused causal theories such as CCTI to abandon this identification. Information at least looked like an intuitively plausible candidate for explaining "aboutness" in a way that causation does not. If there is anything about intentional states that is explained by CCTI, its nature needs to be more clearly shown. In short, it does not
seem that CCTI explains the nature of intentionality; and indeed, it is not clear that there is anything of interest about intentionality that it does explain.

[Figure 10]
In summary then, CCTI seems at best to supply a demarcation criterion for meaning assignments, and neither an explanation of the same nor any sort of account of meaningfulness (see fig. 10).
8.4.5—
Some Telling Comparisons
The issue might be put into further perspective by contrasting the explanatory power of CCTI with that of some other "accounts of intentionality." There are a number of writers who address the issue of intentionality, either in general or in specific contexts such as visual perception, whose accounts seem to me at least to provide a certain degree of explanatory insight that CCTI fails to provide. The accounts that come most quickly to mind for me in this regard are Ruth Millikan's (1984) explanations of features of mind and language in terms of reproductively established categories with a selectional history, Kenneth Sayre's (1986) and Fred Dretske's (1981, 1988) information-theoretic accounts of intentionality in perception, and David Marr's (1982) account of vision. Each of these accounts is in some sense an attempt to reduce some kind of intentionality to some set of states and processes and relationships that can be specified naturalistically. (Or, if information is not a natural but a formal category, each tries to give a nonintentional specification of intentionality.)[2] And in each of their accounts causality
plays some explanatory role (in contrast, for example, with Searle's [1983] account, which is largely an ontologically neutral analysis of intentionality). But in each of these accounts, causality fits into the picture only within the framework of a much richer story about the mechanisms through which perception and cognition are accomplished.
Now each of these accounts is extremely complex and strongly resists presentation by way of a thumbnail sketch. I shall thus assume that the reader may refer back to the original sources for any details beyond the following brief sketches. Sayre (1986) tells a story of how information (in the technical sense of Shannon and Weaver [1949]) is conveyed, in a well-defined series of stages, from an object perceived to a stage of cognitive processing that might be rich enough to merit the name "intentionality." The account is an attempt to build "information," in the semantically pregnant sense of the term, out of "information" in the technical sense of "reduced uncertainty" or "negentropy," and assumptions about the functions of perceptual systems as describable as processors of information in the technical sense. Dretske employs a somewhat looser sense of "information" to similar ends. Both have stories about what it is for a thought to be about an object, stories that involve answers to questions about, for example, fidelity of perception and about what it is that connects object to intentional state and is common to both.[3] Millikan's account of belief also makes use of causal connections between the intentional state and its object, but these are embedded in a larger story about the function of belief and how it has been selected for within our species. To understand intentional states, on Millikan's view, is to understand a relationship between an organism and its environment that is the product of a history of adaptation and selection within the species. Marr presents an elaborate and detailed account of how the mind transforms sensory input into a three-dimensional visual representation through the application of a series of computational algorithms involving several distinct levels of representation of visual information.
Now these accounts do several things, in varying measures, that could contribute something towards legitimate insight into the phenomena they set out to discuss. (Of course this merits the description of insight only insofar as it turns out to be correct in the long run, but at least these accounts, if correct, yield new insights.) First, they subsume the phenomena to be explained (e.g., intentionality) under more general categories, and thereby provide a characterization, in nonintentional terms, of what kind of phenomenon it is. Millikan uses the notions of a
reproductively established kind and selection history to do this for intentionality generally. Sayre treats perception and perceptual intentionality as a very rapid kind of adaptation to environmental features (much as learning and evolution are much slower sorts of adaptation), further characterized by a state of high mutual information. Second, these accounts give some insight into what kinds of mechanisms are necessary to the realization of particular kinds of mental states, whether the formal properties of these mechanisms be characterized in terms of algorithms from computer science (Marr) or in terms of the Mathematical Theory of Communication (Sayre). There is, to be sure, a purely empirical component in this latter enterprise, but there is also a component that one might describe as "transcendental." Talk of things such as intentionality of perception is primarily motivated by our own case, and it therefore makes sense to ask what must be true of creatures who perceive as we do, much as it made sense for Kant to ask what must be true of beings whose only contact with an external world is through sensuous intuitions. Insofar as we take the phenomena going on in our own mental lives as given and try to provide an account of them, we gain substantial insight from accounts that succeed in telling us what sorts of processes must go on for such phenomena to take place.[4]
Now I do not think that any of these accounts goes so far as to provide an instantiation analysis for intentionality or any particular variety thereof. I shall present my reasons for this conclusion in the next chapter. There are, however, ways of providing more or less insight—and hence of coming closer to providing an adequate explanation—short of an instantiation analysis. My intent here has been to indicate that, in comparison with these other accounts, CCTI fares comparatively poorly in explanatory merits. For while the accounts offered by Millikan, Sayre, or Marr may not provide an instantiation analysis for intentionality, they do (if successful) provide at least the two kinds of insight already mentioned. If, for example, the things Millikan says are essentially correct, and I take the time to master her theory, I will have gained substantial insight into the nature of intentionality. As far as I can see, the same cannot be said for causal covariation accounts. It may well be that an adequate account of intentionality would have to involve a causal component, but when I entertain this proposition, I do not have a sense that any fundamental secrets about intentionality have thereby been revealed, or that I have achieved a grasp of even one principal aspect of the nature of intentionality. My own sense is that, if it is a fact about intentional states that they (characteristically) involve representations standing
in a relationship of causal covariation with the intentional objects of those states, this fact stands with respect to intentionality in a relationship analogous to that in which being the shape of a face of an octahedron stands to triangularity, or perhaps the relation that being a featherless biped stands to being human (that is, if we are talking about intentional states generally, and not about specific kinds of intentional states, such as perceptual judgments, in which causal connections do seem to be essential). Causal covariation might provide some kind of demarcation criterion, but it seems to me that it provides no insight into meaningfulness, and indeed can be invoked only with the prior assumption of meaningfulness. It does not provide an explanation of mental-meaning or intentionality. (I have grave doubts about causal covariation even as a demarcation criterion for meaning assignments. These will be a special case of the arguments against "strong naturalization" in the next chapter.)[5]
8.4.6—
The Tension between Generality and Explanatory Force
Now the consideration of accounts such as those offered by Millikan, Sayre, Dretske, and Marr raises an additional issue worth examining. On the basis of the sample presented by these accounts, it would seem that accounts of intentionality become more plausible as explanations of what it is to be about something or to mean something as they become more detailed in their descriptions of how a system is related to its environment. But as they become more detailed, they become correspondingly more specific and less general . And this has the consequence that as they become more explanatory, they stray further from being general accounts of intentionality, and look more like accounts of, say, the realization of intentionality in the visual perceptual apparatus of human beings . What would seem to be required for a general account of intentionality or mental-semantics, however, would be a characterization that applied equally well to different kinds of cognizers (human, Martian, angelic, silicon-based) and that was indifferent to the intentional modality (perception, judgment, will, etc.). This kind of generality, moreover, is absolutely essential if we want to view cognition as computation over meaningful representations of the sort that Fodor postulates, because the MR-semantic properties of the representations must be independent of what kind of propositional attitude they are involved in. (Indeed, even if one is not committed to computationalism, this would
seem to be implicit in the familiar attitude-content analysis of intentional states.)
To take an illustrative example, consider the account of the intentionality of visual perception in Sayre (1986). Sayre's account is compelling insofar as it makes a case for how some features of perceptual intentionality could be accounted for by viewing certain environmental conditions and features of the perceptual apparatus in information-theoretic terms. While Sayre's account does not supply logically sufficient conditions for getting semantics out of "information in the technical sense," it is a compelling attempt to show how the realization of perceptual intentionality is accomplished. But the details that make Sayre's account compelling also render it too local to be a general account of intentionality. For example, Sayre's account is concerned with mechanisms involved in perception, and hence is oriented towards successful cases of perception and towards transparent construals of ascriptions of intentionality. Familiar philosophical problem cases such as brains in vats and Cartesian demons lie far afield of Sayre's paradigm cases, and it is not clear how his model could address the problems they present for giving an account of intentionality that accommodates intuitions about opaque construals of intentional verbs. Second, Sayre's account of perceptual intentionality treats the intentionality involved in perception as directed towards an object rather than a proposition or proposition-like psychological state. It is quite possible that perception differs from other intentional modalities in this regard, however, and so the extension of Sayre's account to higher cognitive functions may well require a significantly different sort of account from his account of perceptual intentionality. Third, while Sayre's account is sufficiently abstract to avoid being specific to a species, it does seem to be based upon a construal of the abstract nature of the processes that beings such as ourselves undergo in perception. It is conceivable that other beings might reach a similar goal (perceptual intentionality) by a different path, one not describable by Sayre's story.
Millikan's story about intentionality has features that make it arguably even more local: to explain intentionality you have to tell a story about adaptive role and selection history. And selection history is dependent upon lineage. Indeed, according to Millikan, if a being were suddenly to emerge into existence that was identical with one of us in structure, in input-output conditions, and in subjective experiential states, this being would nonetheless have no beliefs or desires, because, according to Millikan, what it is to be a belief or a desire involves being the product of a certain kind of selection history. This would seem to have the
consequence that we would have to tell separate stories about intentionality in species where the relevant functions did not develop in a common evolutionary history. (Perhaps even if the histories were completely parallel to one another.) This might not mean that we would have to tell separate stories for humans and chimps (since the relevant selection process may have taken place before the species diverged), but we would have to tell separate stories for humans and Martians, or even humans and Twin-Earthers. (How we would tell such a story about beings without an evolutionary history—such as God, angels, and intelligent artifacts—is quite beyond me.)
Now it is not fully clear what moral one ought to draw from this. One distinct possibility is that what we have here is evidence that, contrary to commonsense assumptions, there is no one phenomenon called "intentionality," but several different phenomena which require rather different sorts of accounts. A slightly more modest moral would be that we have evidence here that the direction of inquiry ought to be to begin with more local phenomena that sometimes receive the label "intentionality"—for example, "intentionality" as it appears in visual perception—and proceed to an attempt at a general theory only when we have a good understanding of specific kinds of intentionality already in hand.[6]
There is, however, a very different possibility, which will be developed more fully in the next chapter: namely, that the problem may lie not with the notion of intentionality, but with attempts to provide a "naturalization" of it. In particular, it may be that all a naturalistic theory can hope to do with respect to the mental is to spell out how mentalistic properties are realized in particular kinds of physical systems, in which case it comes as little surprise (a ) that what is common to different cases is not captured by the naturalistic theory, or (b ) that different kinds of accounts may be required for different kinds of beings having the same intentional properties, since the same mentalistic properties might need to be realized through different means in different kinds of beings.
8.4.7—
Compositionality Revisited
Even if CCTI were to succeed as an account of the semantics of the primitive elements in the hypothesized language of thought, CTM would not thereby be immune to criticism. For in addition to telling a story about the semantic properties of the primitives, CTM attempts to tell a compositional story about the semantics of the complex representations. Unfortunately, the only way we know of telling a story about compositionality is to tell a story about symbols whose semantic properties, in conjunction with syntactically based rules, generate meanings for symbolic expressions. Now on the one hand it is not clear that there is any real force left to speaking of representations as symbols if one is no longer endowing them with symbolic meaning (i.e., semiotic-meaning). On the other hand, we still have no nonconventional way of generating meanings for complex expressions (i.e., complex machine-counters) out of concatenations of simple expressions, even if we take the meanings of the simple expressions for granted. At best, the account leaves the existence of such compositional functions as an unexplained brute fact. What we need, in addition, is some rule that makes it the case that, for example, things of the form x-&-y will mean "X and Y ." In overt languages, this is accomplished through convention. It is not clear that it could be accomplished in any other way. For it is not clear that there is any other pathway that will yield the kind of specificity of interpretation that we are able to get by dint of arbitrary conventions in a natural language. At the very least, even if advocates of CCTI could make their analysis of semantic primitives stick, they would further need to provide a naturalistic account of compositionality before their account could be regarded as viable. The notion of syntax that yields compositionality is conventional to the core, as argued in chapter 6, and no theory of compositionality has been developed for machine-counters.
8.5—
A Second Strategy: Theoretical Definition
If this stipulative definition of the semantic vocabulary will not save CTM's account of intentionality, it behooves us to examine a second possible reinterpretation as well: namely, that the semantic vocabulary employed in CTM is to be understood as a theoretical vocabulary whose interpretation is fixed by the work it does in the theories in which it is employed. Does this reinterpretation fare any better? The very brief answer, I shall argue, is no: if the semantic vocabulary of CTM is defined theoretically, then we do not have an explanation of intentionality (and hence no vindication of intentional psychology) until the underlying nature of these properties that are initially specified theoretically is spelled out. Until then, the so-called "explanation" of intentionality by appeal to "semantic properties of representations" really amounts to an appeal to dormative virtues.
Now what do we mean by "theoretical definition"? Sometimes terms employed in scientific theories mean precisely what they meant all along in ordinary language. In other cases, however, scientific theories appropriate ordinary-language terms and use them in new ways. Terms like 'matter' and 'particle' probably at one time had as part of their meaning all of the notions bound up in the Cartesian notion of "extension," such as size, shape, and definite location. Modern physics, however, countenances the use of these terms even for objects that lack one or more of these properties. Whatever the ordinary connotations of 'work', it has a very specific technical definition in physics. And naturally the property of "charm" attributed to quarks has nothing to do with good breeding and etiquette. Of course, science also countenances the introduction of new terms as a part of theories. And sometimes these also have their semantic values fixed by the theories in which they play a part. The word 'gene' in biology, for example, was at one time defined only by the theory in which it played a role: a gene was, by definition, the kind of thing, whatever it would turn out to be, that accounted for phenotypes of living things. When Watson and Crick discovered that the locus of this genetic encoding was the DNA molecule, the term perhaps underwent a change in meaning; but before that time it was a purely theoretical term —that is, a term whose meaning was fixed solely by the role it played in a theory.
The suggestion I wish to explore is that when CTM speaks of "semantic properties of representations," the words 'semantic properties' express properties that are theoretically defined in much the same fashion. These properties, which we have called "MR-semantic properties," might thus be defined as follows:
MR-semantic properties = df Those properties of mental representations, whatever they turn out to be, that explain the mental-semantic properties of mental states.
The actual nature of these properties is thus left unspecified at the outset, though presumably it may be determined in the course of further research. This reconstruction of the semantic vocabulary employed in CTM provides a new way of interpreting that theory that avoids the problems involving conventions and intentions.
8.5.1—
Does Theoretical Definition Explain Intentionality?
Let us then look at the claim that the kind of theoretical definition of semantic terms employed in BCTM provides us with an account of the intentionality of mental states. Earlier, we proposed a schematic version of CTM's account of intentionality:
Schematic Account
Mental state M has mental-semantic property P because
(1) M involves a relationship to a mental representation MR , and
(2) MR has MR-semantic property X .
Having specified that MR-semantic properties are defined in theoretical terms, we can substitute our theoretical definition into our schematic account. But there are two different ways of making this substitution, which we may think of as the de dicto and de re substitutions. The de dicto substitution simply replaces the expression 'MR-semantic property X ' with its theoretical definition as follows:
De Dicto Interpretation
Mental state M has mental-semantic property P because
(1) M involves a relationship to a mental representation MR , and
(2) MR has that property of MR , whatever it is, that accounts for mental-semantic property P .
The de dicto interpretation yields a pseudo-explanation of a well-known type. On this reading, MR-semantic properties fail to explain for precisely the same reason that we cannot explain the soporific powers of a medicine by appeal to its "dormative virtues." If saying "mental states inherit their semantic properties from mental representations" amounts to nothing more than saying "mental states get their semantic properties from something that has the property of giving them semantic properties," we do not have a legitimate explanation of semantics or intentionality.
However, it is also possible to substitute our theoretical definition into the schematic account in another way that does not share this problem: namely, by substituting a de re reading of the theoretical definition as follows:
De Re Interpretation
Mental state M has mental-semantic property P because
(1) M involves a relationship to a mental representation MR ,
(2) MR has some property X ,
(3) the fact that MR has X explains the fact that M has P , and
(4) X is called an "MR-semantic property" because
(a ) it is a property of a mental representation, and
(b ) it is the property that explains the fact that M has P .
On this interpretation, there are no dormative virtues lurking in the wings. Unfortunately, as the account stands, there is no explanation of intentionality either until we know (1) what the all-important property X might be, and (2) how we can derive the intentionality of mental states from the fact that cognitive counters have this wonderful property (the way we can, say, derive thermodynamic laws from statistical mechanics). BCTM does not supply us with this information; therefore BCTM does not supply an account of intentionality. BCTM no more explains intentionality than nineteenth-century genetics explained phenotype. With regard to intentionality, on a best-case scenario (that is, on the assumption that BCTM is on the right track with respect to the functional shape of the mind and the ultimate possibility of explaining intentionality by appeal to the properties of localized states), BCTM is in the position genetics was in before Watson and Crick: it is a functional-descriptive theory in search of an underlying explanation. (Of course, in the worst-case scenario, mental representations and their MR-semantic properties go the way of heavenly spheres and Piltdown man.)
In short, it seems to me that BCTM makes no progress at all on the semantic front. It does not so much provide an explanation of intentionality as it makes evident the absence of such an explanation. This fact has generally been obscured by confusions that result from assuming that the semantic vocabulary can be applied univocally to mental states, symbols, and representations. If we say, "Mental states inherit their meanings from mental representations," it looks as though there is progress on the semantic front, because we have reduced the problem of mental meaning to a problem about the meanings of symbols in the brain. Meaning, at any rate, looks like the right sort of thing to be a potential explainer of meaning, because we do not have to explain how meaning came upon the scene in the first place in order to explain mental-semantics. However, if it turns out that the semantic vocabulary applied to representations is a truly theoretical vocabulary, the appearance of progress begins to look like smoke and mirrors. As we noted earlier in the chapter, it is one thing to claim
(1) Mental state M has property P because M involves MR , and MR has P .
But it is quite another to claim
(2) Mental state M has property P because M involves MR , and MR has X , and X ≠ P .
Claim (1) proceeds on the assumption that property P is in the picture to begin with, and just has to explain how M gets it, while claim (2) has to do something more: namely, to explain how P (in this case, mental intentionality) comes into the picture at all . CTM simply does not do this, and to describe CTM as "explaining intentionality" is simply a gross distortion of what it actually accomplishes.
8.6—
MR-Semantics and the Vindication of Intentional Psychology
The reader will recall that the explanation of intentionality was the first of two philosophical treasures that CTM was supposed to have unearthed, the second being a vindication of intentional psychology. Let us now return to the problem of vindication. Recall how the attempted vindication was inspired by the computer model. In a computer, the semiotic-semantic properties of the symbols are coordinated with the causal role symbol tokenings can play in the system. It is a useful contrivance to speak of the relationship between symbols and causality as being mediated by syntax, but speaking of the "syntactic properties" of the symbols—indeed, talking about computer states as symbols—is largely a matter of convenience. The symbolic and syntactic character of the symbols is conventional in origin and etiologically inert. What matters is that the semiotic interpretations of symbols are coordinated with the functional-causal role they can play. Now the hope CTM presented was that the mind was a computer, and hence it might be that the mental-semantic properties of mental states could be coordinated with the causal roles they play in inference, thus showing that (contrary to appearances) intentional explanation is grounded in lawlike causal regularities.
Notice that purging CTM of dependence upon symbols and syntax has thus far done nothing to weaken the case for the vindication of intentional psychology. For in point of fact, the notions of symbol and syntax
played less of a role in the case of computers than was commonly believed. But notice also that there is an important difference between coordinating the semiotic-semantic properties of symbols in computers with their functional-causal roles, and coordinating the mental-semantic properties of mental states with their functional-causal roles: the former is done directly, the latter is done (according to CTM) by an intermediate step: namely, coordinating the MR-semantic properties of representations with their causal roles. The difference is represented graphically in figure 11.

[Figure 11]
This illustration reveals several respects in which the computer paradigm itself falls short of providing a vindication of intentional psychology. These are not reasons that one cannot vindicate intentional psychology in the manner suggested, but they do show what more one needs if such a vindication is to proceed as planned.
(1) The computer paradigm shows that semiotic-semantic properties can be coordinated with functional-causal properties. What one needs for CTM, however, is a demonstration that some other kinds of "semantic" properties (immediately, the MR-semantic properties of mental representations) can be coordinated with functional-causal properties. The computer paradigm by no means assures that this can be done. (After all, there might be something special about semiotic-semantics.)
(2) The computer paradigm only shows how two sets of properties of one sort of object can be coordinated. CTM needs something more: it needs to show that, by coordinating the MR-semantic properties of representations with their causal roles, it can thereby coordinate the mental-
semantic properties of mental states with their causal roles as well. This would seem to place some additional constraints upon the "vindication" beyond what is involved in saying the mind is a computer.
In what follows, I should like to build a case that each of these problems is potentially very serious. First, there is good reason to hesitate in concluding that other types of "semantic" properties can be coordinated with causal role in the fashion that semiotic-semantic properties are so coordinated in computers. Second, in order for BCTM to license a vindication of intentional psychology, it would have to be able to show that the coordination of MR-semantic properties with causal role would thereby secure the coordination of mental-semantic properties of mental states with causal role as well; and in order to do this, it would have to supply an instantiation analysis of mental-semantics in terms of MR-semantics—a realization account is not enough for vindication.
8.6.1—
The Special Case of Semiotic-Semantic Properties
The computer paradigm shows that a symbol's semiotic-semantic properties can be correlated with the causal role the symbol can play, so long as all semiotic-semantic distinctions between symbols are reflected in syntactic distinctions. What links the semiotic-semantic properties to the marker type, however, are the conventions and intentions of symbol users. So if an adding circuit has the binary pattern 0001 tokened in one register and 0011 in a second and produces a tokening of 0100 in a third as a result, the tokening of the third is accounted for by the functional architecture of the machine and the specific patterns present in the registers, but the overall process is said to be an instance of adding one and three and obtaining a sum of four only because of the interpretive conventions that are being applied.
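The division of labor here can be made vivid with a small illustration. The following sketch is purely hypothetical (it is not drawn from any actual hardware description): the pattern-to-pattern rule stands in for the functional architecture, and the interpretation function stands in for the conventions of symbol users; only the latter licenses the description "adding one and three to obtain four."

```python
# A minimal, hypothetical sketch: a 4-bit adder characterized purely as a
# rule from marker patterns to marker patterns, with semantic interpretation
# supplied separately, by convention.

def add_4bit(reg_a: str, reg_b: str) -> str:
    """Return the bit pattern tokened in the result register.

    Nothing here mentions numbers: it is just the 'functional architecture'
    taking patterns to patterns."""
    carry, out = 0, []
    for a, b in zip(reversed(reg_a), reversed(reg_b)):
        total = int(a) + int(b) + carry
        out.append(str(total % 2))
        carry = total // 2
    return "".join(reversed(out))  # any final carry is discarded (4-bit register)

def as_unsigned(pattern: str) -> int:
    """An interpretive convention the machine itself never consults."""
    return int(pattern, 2)

result = add_4bit("0001", "0011")
print(result)  # '0100' -- fixed by the functional story alone
print(as_unsigned("0001"), as_unsigned("0011"), as_unsigned(result))  # 1 3 4 -- by convention
```

On this toy picture, the functional story already suffices to determine which pattern will be tokened; talk of sums enters only with the interpretive layer.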
Now what, in this paradigm, accounts for the "coordination" of syntax with semantics? On the one hand, the functional properties of the system provide necessary conditions for the reflection of semantic distinctions in the syntax. On the other hand, it is the conventions of symbol users that actually establish (a ) the marker types employed, (b ) the syntactic types by virtue of which markers can be counters, and (c ) the semantic interpretation schemes by virtue of which the markers may be said to have semantic properties. The "coordination" of syntax and semantics depends upon the relationship between semantic and syntactic conventions, and so is highly convention-dependent.
I should like to suggest that this convention-dependence is precisely
what gives the "coordination" of syntax with semiotic-semantics in computers one of its more useful features, and that we should not expect syntax—or, more exactly, functional role and syntactic interpretability-in-principle—to be "coordinated" with non-semiotic-semantic properties in the same sort of way. For one thing that interpretive conventions (or intentions) can do is pick out a unique interpretation for each marker that is to serve as a counter. This is significant because (notoriously) any symbol system is subject to more than one consistent interpretation. (Notably, there will always be an interpretation entirely within the domain of number theory.) It is the conventions and intentions of symbol users that account for the fact that a token in a given symbol game means (for example) dog and not the set of prime numbers . And it is these conventions and intentions that determine which semantic properties are coordinated with which syntactic properties.
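The point about multiple consistent interpretations can likewise be illustrated with a hypothetical sketch. Here the same pattern-to-pattern behavior is given two interpretation schemes, both of which fit every transition; nothing in the machine itself selects between reading the process as addition modulo 16 and reading it as a different (but equally coherent) operation.

```python
# Hypothetical illustration: the same bit-level behavior admits more than one
# consistent arithmetical interpretation; only convention singles one out.

def as_unsigned(p: str) -> int:
    return int(p, 2)

def as_complement(p: str) -> int:
    # An alternative scheme: read each pattern as 15 minus its usual value.
    return 15 - int(p, 2)

def machine_op(a: str, b: str) -> str:
    # Stand-in for the adder circuit: pattern to pattern, modulo 16.
    return format((int(a, 2) + int(b, 2)) % 16, "04b")

# Under the standard scheme the circuit computes (x + y) mod 16.
# Under the alternative scheme the very same transitions consistently
# compute (x + y + 1) mod 16 -- a different, but equally coherent, operation.
for a, b in [("0001", "0011"), ("0110", "1001"), ("1111", "0000")]:
    out = machine_op(a, b)
    x, y, z = as_unsigned(a), as_unsigned(b), as_unsigned(out)
    assert z == (x + y) % 16
    u, v, w = as_complement(a), as_complement(b), as_complement(out)
    assert w == (u + v + 1) % 16
print("Both interpretation schemes fit every transition.")
```

Which operation the circuit "really" performs is settled, if at all, only by the conventions and intentions of its users.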
Now there is really something at once unique and mundane about the coordination between semiotic-semantic and syntactic properties of symbols. If someone asks why a given counter type is associated with (i.e., is interpretable as bearing) a particular interpretation, the answer is not at all mysterious: it is associated with that interpretation because there is a convention to that effect among a particular group of symbol users. And if someone asks why it is not associated with (i.e., is interpretable as bearing) another interpretation, the answer is that there is no convention linking it to that interpretation. It may indeed be surprising that symbol games as large as geometry and significant portions of arithmetic can be formalized, and it may be surprising that formalizable systems can be automated in the form of a digital computer, but the basis of the connection between counter types and semiotic-semantic interpretation is not at all arcane.
What would seem to be unique about this kind of association between semantic values and marker types is that the relationship between semantic value and marker type is determined by stipulation —and it is this that allows for the association of marker types with unique interpretations. Now it might be the case that there are other factors that could determine how syntactic features of mental representations are to be connected to particular (nonsemiotic) semantic properties and not to others. But it is not at all clear that we ought to expect it to be the case. For one might well think that it is only the stipulative character of semiotic conventions and meaning-bestowing acts that can provide the kind of unique correlation of semantic value with counter type that one finds in symbolic representations in a computer. I know of no convincing argu-
ment that would absolutely rule out the possibility that some other factor could provide such a unique correlation, but I must say that it seems a bit mysterious just what other kind of factors could provide a unique association between the syntactic properties of any mental representations there might be and their MR-semantic properties. It must not be a matter of stipulation, because that would lead to the kind of semantic regress discussed in the previous chapter. But without stipulation, it is unclear how one could get uniqueness of interpretation. The prospects of applying the computer paradigm analogously are thus rendered doubtful, though not precluded entirely.
8.6.2—
Instantiation, Realization, Vindication
Now even if it is possible to coordinate MR-semantic properties with causal role, this is not enough for the vindication of intentional psychology. For that one also needs it to be the case that coordinating the MR-semantic properties of representations with their causal roles secures the further coordination of the mental-semantic properties of mental states with their causal roles. When the case was originally presented, on the assumption that the "semantic" properties of mental states were the very same properties as those of their representations, securing this further coordination seemed almost trivial. It was expressed by the following argument, presented earlier in the chapter:
Argument V2
(1) Mental states are relations to mental representations.
(2) Mental representations have syntactic and semantic properties.
(3) The syntactic properties of mental representations determine their causal powers.
(4) All semantic distinctions between representations are preserved syntactically.
(5′) There is a strict correspondence between a representation's semantic properties and its causal powers.
(6′) A mental state M has semantic property P if and only if it involves a representation MR that has semantic property P .
∴ (7′) There is a strict correspondence between a mental state's semantic properties and its causal powers.
But of course once one has distinguished different kinds of semantic properties, the argument has to be adapted as follows:
Argument V3
(1) Mental states are relations to mental representations.
(2) Mental representations have syntactic and MR-semantic properties.
(3) The syntactic properties of mental representations determine their causal powers.
(4) All MR-semantic distinctions between representations are preserved syntactically.
(5* ) There is a strict correspondence between a representation's MR-semantic properties and its causal powers.
(6* ) A mental state M has mental-semantic property P if and only if it involves a representation MR that has MR-semantic property X .
∴ (7* ) There is a strict correspondence between a mental state's mental-semantic properties and its causal powers.
The issue here turns upon (6* ), the claim that mental-semantic properties of mental states can be coordinated with MR-semantic properties of representations, and the inference to (7* ), the claim that mental-semantic properties of mental states would thereby be coordinated with causal powers. In order for (6* ) to be true, the mental-semantic properties of mental states would have to be at least correlated with the MR-semantic properties of representations. But in order for this argument to provide a vindication of intentional psychology, something more is required: one must be able to show that the MR-semantic properties of representations determine the mental-semantic properties of mental states. For in order to vindicate something, one must show that it could be the case. To vindicate intentional psychology, one would have to show that the mental-semantic properties of mental states can be coordinated with causal roles, and not merely show what benefits would be derived if they were so coordinated. Given that we can show that MR-semantic properties of representations can be coordinated with causal roles, we would still have to show that, as a consequence, mental-semantic properties of mental states would be coordinated with causal role as well.
Now what sort of account of mental-semantic properties would be
needed to achieve this end? What is required is an instantiation analysis of mental-semantics in terms of MR-semantics—a realization account is not enough. For recall a key difference between instantiation and realization: since an instantiation account provides conditions from which one can infer the instantiated property, it provides a vindication of existence claims for that property, given that the instantiating properties are satisfied. But with a realization account, no such benefit accrues: since the realizing properties are not a sufficient condition for the realized property, they do not provide proof for someone who doubts that such a property can be realized. Now we are seeking an account that vindicates the claim that the mental-semantic properties of mental states can be coordinated with their causal powers. An account of how mental-semantic properties are instantiated through the MR-semantic properties of representations could provide such a proof, because one would be able to infer the mental-semantic properties of the mental states from the MR-semantic properties of the representations. A realization account, on the other hand, merely presupposes that there is some special relationship between the properties picked out in the intentional idiom and those picked out by the functional-causal account, without either specifying the nature of the relationship or showing why it obtains. Such a presupposition may have great advantages if you are doing empirical psychology, because you can do your research without waiting for definitive results of debates about dualism, reduction, supervenience, or psychophysical causation. But for this version of the vindication of intentional psychology to work, we must not assume such a special connection, because the possibility of such a connection is precisely what has been called into doubt . If someone doubts that the semantic and intentional properties of mental states can be coordinated with naturalistic properties, and one gives a realization account for the intentional and semantic properties of mental states that just assumes that they are specially connected to some naturalistic properties, one has not assuaged the doubt so much as begged the question.
8.7—
Summary
The general conclusion of these past two chapters is that CTM does not, in fact, provide an account of intentionality. It provides the illusion of such an account by saying that the semantic properties of mental states are inherited from those of mental representations. But on closer inspection, we have not found any properties of "mental representations"
(i.e., our hypothetical cognitive counters) that could serve to explain mental-semantic properties of mental states. Semiotic-semantic properties, as we saw in the last chapter, fail on a number of grounds, including the fact that they render the explanation circular and regressive.
One focus of this chapter was upon the possibility that the kind of causal covariation account of semantics championed by Fodor might actually be able to serve as a stipulative definition of semantic terms as applied to representations. I have serious doubts that this was Fodor's intention. But if one were to make such a move, it would seriously undercut the persuasive force of Fodor's apologia for CTM, since that involved explicit and implicit arguments that turn out to be blatantly fallacious if notions such as meaning and intentionality are defined in causal terms for mental representations. Moreover, causal covariation stories do not go very far towards providing an account of what it is for a mental state to be mental-meaningful or mental-intentional—they don't provide an explanation . First, the causal covariation story just seems like the wrong kind of "account": it appears to give a demarcation criterion that does not explain, and it seems to distinguish states that have different meanings instead of distinguishing the meaningful from the meaningless. That is, it seems to assume that it is dealing with meaningful entities, and then asks, "How can we distinguish the ones that mean X from the ones that mean Y ?" In addition, I have tried to make a case that, if the notion of causal covariation is too bland a notion to provide an explanation of intentionality or meaningfulness, this blandness seems the price one must pay for generality: naturalistic accounts become more explanatory as they become more detailed, but in the process they lose the generality one would want from an "account of intentionality." Finally, I have argued that even if CCTI were to succeed as an account of semantics for the primitive representations, it would need to be supplemented by a naturalistic account of compositionality as well, and it is hard even to imagine how such an account might proceed. The upshot of this is that causal covariation does not provide us with a notion of representational meaning that can explain mental-meaning or vindicate intentional psychology.
The theoretical definition of the semantic vocabulary for representations fares no better. On one construal (the de dicto construal), it provides a fallacious pseudo-explanation that appeals to dormative virtues. On another (the de re construal) it provides no explanation at all. This, I think, is as far as CTM can be made to stretch: it is a theory of the form of mental processes that stands glaringly in need of an account of semantics to supplement it. We saw as well that we cannot "vindicate" in-
tentional psychology in the way envisioned by CTM's advocates unless we have such an account—and indeed a naturalistic account—of semantics and intentionality in hand. In the next chapter, we shall explore the prospects for such a "naturalistic theory of content." In the final section of the book, we shall explore an alternative way of looking at the computer paradigm in psychology that renders unnecessary both the naturalization of the mental and its vindication.
Chapter Nine—
Prospects for a Naturalistic Theory of Content
In the previous chapters it has been argued that CTM does not itself provide the explanation of intentionality that is claimed for it, and as a result it cannot produce the kind of "vindication" of intentional psychology it set out to perform. At best, a bowdlerized version of CTM might provide a way of describing the form of mental processes, and this in turn might form a part of a larger theory that would supply an independent theory of content. In our project of assessing CTM, it would not be completely unjust to leave the matter where it now stands. It is strictly speaking false that CTM explains intentionality, and this belies much that is commonly said about it. With the imposture unmasked, we could go straight to the credits and the final curtain without being truly unjust. However, one does not have to look very hard to see that, while BCTM does not itself supply an account of intentionality, it could be a part of a larger theory that does so if it were only to be supplemented by what is commonly called "a theory of content for mental representations." Indeed, in at least some places (e.g., the introduction to RePresentations ) Fodor himself seems to view the situation in this way. And however you slice the pie, the overall explanatory agenda for the computational-representational project is pretty much the same. Perhaps the more common (if mistaken) interpretation has been that the semantics of mental states have been explained by appeal to meaningful symbols, but now the meaningfulness of the symbols needs explanation, and that is what calls for a naturalistic theory of content (see fig. 12). If you take the semantic vocabulary for mental representations to be theoretical in character, the middle level simply falls out, and you need an account that directly ties the meaningfulness of mental states to some unknown properties of functionally delimited proper parts of the mind or brain that are sufficient to explain mental-semantics. Either way, you ultimately need a pathway from nonsemantic and nonintentional properties of your "representations" or cognitive counters to mental-semantic properties of intentional states. All that is lost in moving from Fodor's narrative to BCTM is the (paralogistic) illusion of having made some progress on the semantic front along the way. So the idea that BCTM is really a theory of the form of mental states and processes that is still in search of an explanation of semantics and intentionality might not be all that repugnant to many in the computationalist camp. The burgeoning industry of naturalizing content, after all, is keeping plenty of philosophers employed, and holds out the hope of someone playing Watson and Crick to Fodor's, Putnam's, or Pylyshyn's Mendel.

Figure 12
It thus behooves us to give at least a brief examination of the prospects of completing this project by explaining the mental-semantic properties of mental states in nonintentional terms in a fashion compatible with
BCTM. This is a big undertaking, and it is very different in character from the rest of this book. The preceding sections have been concerned with assessing the limitations of a particular theory. A complete assessment of the prospects for a naturalistic account of intentionality, by contrast, would require us to examine not only all those theories that have actually been proposed (variations on which seem to multiply by the hour) but also all possible theories that have not been thought of as well. Quite a daunting task, really, and definitely beyond the intentions of this book.
What I propose to do in this chapter is much more modest. I shall endeavor to do four things: First, I shall distinguish weaker and stronger ways of "giving an account," which I will refer to respectively as "weak naturalization" and "strong naturalization" of the mental. Second, I shall point to some different classes of mental states to which the word 'intentionality' is applied and make a case that what needs to be explained in these different classes may indeed be very different (e.g., broad versus narrow content, phenomenology versus functional relations and behavior). Third, I shall try to make a case that at least some kinds of "intentional states" (the ones with a phenomenology) have properties that it seems unlikely that we shall be able to naturalize. And finally, I shall make a case that, with the remainder of "intentional states," it seems dubious that the explanation of meaningfulness (as opposed to the demarcation of meaning assignments) will focus on localized cognitive counters, as required by BCTM, but rather will require an examination, at the very least, of an entire thinker or organism, and very likely its situation in its social and ecological environment as well.
9.1—
Strong and Weak Naturalization
We are thus brought to the question of evaluating the prospects for a naturalistic theory of content that could be grafted onto BCTM. In recent years it seems to have become almost a kind of religious commitment in some corners of the philosophy of mind that one believe that there can be a naturalization of content. Upon closer inspection, however, it becomes clear that naturalism is not only loosely argued for, but loosely defined as well. For even among people espousing a commitment to "naturalism" or "naturalization" you will find enormous disagreement about what would count as a naturalization of the mind, including differences as to what is constitutive of the "natural" (is it the domain of physical objects? of causal interactions? of lawful causation? the non-normative and nonteleological?) and differences as to what kind of
"account" or "theory" is at issue. Is it enough to count as naturalization if you specify brain states (or abstract states realized in brains) with which content varies without specifying any relationship stronger than logically contingent covariation? Or does a naturalization of psychology require something more: say, a metaphysical relationship such as reduction or supervenience, or an explanatory relationship such as conceptual adequacy? Just getting a grip on the different possible moves here is a daunting task, and would probably require a book entirely devoted to that topic. What I wish to do here is to make a kind of first Dedekind cut that will separate two very different kinds of projects.
First, consider an ambitious form of naturalism: a naturalism that seeks to bring the mind wholly within the realm of nature by showing how it is possible to subsume our special discourses about thought within the framework of the natural sciences. As a model for the kind of strong explanatory relationship such a project seeks, we might take such strong intertheoretic relationships as the famous proofs that thermodynamics can be derived from the mechanics of particle collisions, or the ability of the atomic theory to explain features of the periodic table and combinatorial laws of nineteenth-century chemistry. Statistical mechanics provides a kind of explanation of thermodynamics that has important properties both metaphysically and as explanation. Metaphysically, the mechanical laws are logically sufficient for the thermodynamic laws: that is, basic mechanical laws, in combination with necessary truths of logic and mathematics, are enough to entail the thermodynamic equations. Moreover, this entailment is epistemically transparent: a person with an adequate understanding of mathematics and mechanics could derive the thermodynamic equations even if she lacked a prior acquaintance with thermodynamics as a branch of physics. I call this kind of explanation "conceptually adequate explanation." A is a conceptually adequate explanation of B just in case the conceptual content of A is enough to derive the conceptual content of B without the addition of contingent bridge laws.[1]
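As a rough illustration of the intended standard (schematic and idealized, and treating the identification of temperature with mean kinetic energy as definitional rather than as a contingent bridge law), consider the textbook route from particle mechanics to the ideal-gas law:

```latex
% A schematic illustration: mechanical premises plus definitions entail a
% thermodynamic law, with no contingent bridge laws added.
% For N non-interacting particles of mass m in a container of volume V,
% momentum transfer to the walls gives
\[
  P \;=\; \frac{N\, m\, \langle v_x^{2} \rangle}{V}
    \;=\; \frac{N\, m\, \langle v^{2} \rangle}{3V},
\]
% and if temperature is defined (not merely correlated) by
\[
  \tfrac{1}{2}\, m\, \langle v^{2} \rangle \;=\; \tfrac{3}{2}\, k_{B} T,
\]
% then substitution yields the ideal-gas law:
\[
  P V \;=\; N k_{B} T .
\]
```

Someone who understood the mechanics and the mathematics could, at least in principle, arrive at the thermodynamic law without prior acquaintance with thermodynamics; that is the kind of epistemic transparency at issue.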
I shall refer to the project of explaining the mind in a fashion that is similarly metaphysically sufficient and conceptually adequate as strong naturalization . A strong naturalization of an intentional property I would explain I by appeal to some "naturalistic" properties N , where the term 'naturalistic' implies at least (a ) that the properties that comprise N are themselves nonintentional, and (b ) that they do not presuppose intentional properties. (For example, conventions are not themselves intentional, but arguably presuppose intentional states.) Obviously,
important candidates for the properties in N are properties found in the discourses of sciences such as neurology and biology, but I have deliberately left the description of the "natural" open to the possibility that properties of natural objects not relevant to the other sciences might prove important for psychology.[2]
In contrast with strong naturalization, consider a much weaker kind of project: that of specifying, so far as possible, the mechanisms in the nervous system through which mental states are "realized"—where "realization" implies some special connection whose metaphysical nature may be left vague. (Such a project need not confine itself to relations between minds and single organisms—it could also, of course, specify any crucial relationships between the thinker and her social or ecological environment with similar metaphysical neutrality.) Such a project need not produce intertheoretic relationships that are necessary or sufficient, and the naturalistic properties specified need not explain the mental properties to which they are linked. This kind of account suffers no lack of precedent. The relationships between variables within a given theory are generally of this sort (though they are sometimes explained by an additional theory that provides a microexplanation), as are bridge laws and statements such as that of the wave-particle duality of matter. The psychophysical regularities in such a theory would serve as a kind of contingent bridge law between an intentional psychology and a nonintentional neuroscience.
We might call this kind of project in psychology weak naturalization in contrast to the "strong naturalization" described above. However, it is with some misgivings that I apply the name "naturalization" to it at all, as (a ) most people calling themselves "naturalizers" seem to have strong naturalization in mind, and (b ) many people who would normally be considered something other than naturalists could subscribe to this "weak naturalization" project as well. Indeed, it is a project in which Descartes was an important pioneer, to which Spinoza explicitly subscribed, and which even Berkeley might have been able to endorse in connection with empirical research. As a result, I am sometimes more inclined to refer to it as the "Neutral Project."
BCTM can be located, with minor variations, within either kind of project: strongly or weakly naturalistic. However, a strong naturalization of the mental is required if CTM is to accomplish either of the two philosophical goals that it has set out for itself. To account for the intentionality of mental states, it is not enough to specify some contingent correlations between mental-state type and some physical or abstract
property. For this would not explain why meaningfulness appears on the scene at all; and that, after all, is the primary puzzle for the naturalist. Contingent correlations are simply not explanatory. And to vindicate intentional psychology, it is necessary to show that mental states can be understood in a way that meets the desired criteria. And this, in turn, requires explanation that is epistemically transparent.
Machine computation shows, for example, that for formalizable domains, the semiotic-semantic properties of the symbols can be linked to the physical-causal properties of the machine. The physical-causal properties of the machine, indeed, entail its description (or describability) in terms of a machine table (though not uniquely). Yet the physical-causal properties of the machine do not explain the semiotic-semantic properties, because these depend upon conventions as well. I think that this much is likely to prove to be much the same in the case of mental states. Where the two situations diverge (and this is what affects vindication) is the fact that, in the case of symbols in computers, we can make it transparent that the objects of the semiotic description are the very same objects as the objects of physical-causal description (the series of bistable circuits and whatnot), whereas identity ascriptions between mental and physical states are at best mere guesswork. The reason you can see this in the case of symbols in computers and not in the case of mental states turns upon the fact that there is something about the notion of a symbol that entails that a symbol have criteria involving a physical pattern. A token signifier is necessarily a token marker, and a token marker is necessarily a token physical object. But there is no similar connection with material objecthood built into the notion of a mental state. The connection between symbolhood and physical objecthood is conceptually necessary. That between mental states and physical objecthood is contingent at best. And to show the compatibility of mentalism with materialism, you need more than guesswork; you need to make the identity transparent. Otherwise there is no proof of compatibility, hence no vindication. This only makes a difference to those who are really sold on the premise that intentional psychology is in need of vindication, but it should matter quite a lot to them.
9.2—
What Is "The Mental"?
If assessing the possibility of "naturalizing the mental" requires some discussion of the notion of naturalization, it is equally in need of some discussion of its intended domain, "the mental" and even "the intentional."
Thus far, with the exception of a few hedges in chapter 1, we have proceeded as though there were a clear and shared understanding of the population of the intentional bestiary and of the "ordinary" or "pretheoretic" notion of intentionality. However, I have become convinced in recent years that this is not so. There are really several different kinds of things that are called "mental" and even "intentional states." Most important, I think, is the distinction between conscious episodes like perceptual experiences, conscious judgments, and episodes of recollection on the one hand, and dispositional states like beliefs and desires on the other. Their salient properties are very different from one another, and hence require very different accounts. Moreover, different groups of philosophers take different classes of states as their paradigm examples and, as a result, operate under very different assumptions about what a "theory of mind" or an "account of intentionality" would have to explain.
9.2.1—
Four Kinds of "Mental State"
I have argued elsewhere (Horst 1995) that we may usefully distinguish four kinds of entities that go under the name of "mental."
(1) Conscious Occurrent Episodes (judgments, perceptions) . Until fairly recently, people interested in the mental in general and intentionality in particular tended to concentrate on episodes of conscious thought in which some object or state of affairs was, as it were, "before the mind's eye." It seems quite clear that this is the sort of thing that the pioneers of modern work in intentionality like Brentano and Husserl had in mind, and it is surely true as well of work on the mind by most of the Early Modern philosophers such as Descartes, the British empiricists and Kant, as well as living philosophers such as Geach (1957), Nagel (1986), Goldman (1992, 1993a, 1993b), and Searle (1983, 1992). Such states would include things like perceptual gestalts, in which an object or scene is presented visually, occurrent judgments ("By gum! That's a dingo!"), conscious wishes ("Oh, that Rhett would come back to Tara!"), recollections, imagination, free fancy, and so on. Such things are events, they are conscious or at least consciously accessible, they have a phenomenology, and there is a quite palpable sense in which it makes sense to say they are "directed" towards something and have an "intentional object" that need not be a real object. In these cases the mind in some sense not only intends the object, but attends to it as well. Such episodes are, to a certain extent (and not infallibly), susceptible to introspection, and are certainly
not "purely theoretical" in the sense that protons are theoretical or that Pluto was theoretical before its existence was confirmed by telescopy. (That is, we have direct, quasi-observational evidence for their existence as well as retroductive evidence.)
(2) Dispositional States (beliefs, desires) . Most recent writers in cognitive science have concentrated, by contrast, on things like beliefs and desires, generally construed (with varying degrees of strictness) in dispositional terms. Dispositions are by definition unobservable. And where 'belief' means something other than "conscious judgment" (which it is sometimes used to mean), it does seem to indicate something that is truly theoretical and indeed cannot be confirmed through direct observation. Perhaps some dispositions have a phenomenology—say, believing that there is a loving God fosters a sense of inner peace and believing that the Mob has put out a contract on you produces a sickening anxiety—but the connection between the dispositional belief or desire and its phenomenology is far less direct (and arguably less essential) than that between occurrent states and their phenomenology. The "aboutness" of a perceptual gestalt is very closely related to the fact that I am appeared to in a fashion that involves an image of a dog, presented from a particular perspective (say, from behind), and under a particular interpretation (i.e., "That's Marco's dog, and she's chewing on my shoe!"). And all of this has a phenomenology. For the most part, beliefs only acquire a distinctive phenomenology when they eventuate in conscious episodes.
(3) The Freudian Unconscious . Freud speaks of "unconscious" mental states. These seem to be built on the model of conscious states, and are taken to be of the same kind, with the sole proviso that they are repressed. They can (it is said) be brought to conscious awareness in therapy. I do not intend to pursue Freudian theory here, but merely to point out that such events start out as theoretical entities, and particular ones may cease to be purely theoretical when made conscious. They may have a vague and extrinsic phenomenology that manifests itself in some of the complaints that bring the patient to the therapist's couch, but these are not particular to the content of the state in the way that, say, the phenomenology of perception is connected to how I am thinking of the object of perception (for a similar view, see Searle 1992).
(4) Infraconscious States . Finally, cognitive scientists often speak of things lying below the level of the consciously accessible in mentalistic terms. We hear talk of cognitive subsystems, for example, cashed out in terms of "beliefs" and "desires" of the subsystems. Such states are surely nonconscious, have a phenomenology only incidentally, and indeed may
bear no more than analogical relations to other things called "beliefs" and "desires," as argued by Searle (1992). Such states are also clearly theoretical in the strong sense that protons are theoretical. (In other words, our only warrant for believing in them is that doing so gives us a certain amount of explanatory payoff.)
Now clearly, when you are asking for an account of "mental states" it will make a great deal of difference what kinds of "mental states" you have in mind. In fact, there are plenty of people who are committed to one or more of these categories while remaining skeptical about others. Many people think Freudian psychology is bunk, for example, but believe in conscious states or beliefs; and outside of cognitive science it is common to find people who agree with Searle and myself that many of the attributions of "beliefs" to infraconscious states and processes are true only if interpreted metaphorically. Indeed, some of us think that nothing could be more clearly real than conscious states but harbor deep-seated misgivings about dispositional beliefs and desires. Conversely, some people seem not to understand talk of phenomenology and subjectivity at all (perhaps in the way some people do not experience imagery), and others think that the conscious experience of mental states is merely a gaudy epiphenomenon that is irrelevant to the "real" (i.e., causal) nature of beliefs and desires.
What you choose as your paradigm examples will have a significant impact on what you consider "essential" to the "mental" and hence what stands in need of explanation. Perception, imagination, recollection, judgment, conscious yearnings, and the like all involve a kind of directedness of the sort reported by Brentano, which in turn involves at least the possibility of consciousness, a phenomenological "what-it's-like," a perspectival character of the object-as-presented (we see and think about objects under only some of their aspects), and a kind of subjectivity (this experience is essentially my experience). All of this seems to be bound up in what writers like Brentano, Husserl, Searle, and Nagel mean when they talk about intentionality in particular and the mental in general. If this is how you are using those words, on the one hand, it is only natural to assume that an "account" of "the mental" or of "intentionality" should account for all these features. If your paradigm example of the mental is a dispositional belief, on the other hand, you are unlikely to include such features in your list of things needing explanation, and rightly so.
I happen to think that these distinctions explain a lot of the contemporary impasses in the philosophy of mind. People who think mental states are "theoretical" tend to be thinking of dispositional beliefs, the
unconscious, or the infraconscious. People who are thinking of perception and judgment regard characterizations of the mental as "theoretical" as outrageous. People in the occurrent-state camp also tend to regard phenomenology, subjectivity, and consciousness as crucial to the mental in general and to intentionality in particular, while those concerned with beliefs and desires often do not. It seems to me (see Horst 1995) that there is room for a dissolution of these impasses that saves face for all: namely, that things like judgments, imagination, and perception are not theoretical entities, and do essentially involve phenomenology, subjectivity, and consciousness, while dispositional states and infraconscious states are theoretical in character and do not involve these features, except incidentally.[3]
9.2.2—
Intentionality and Directedness
I think that there is likewise some variety in the literature in how the words 'intentionality' and 'intentional state' are used. When the word was reintroduced into philosophical parlance by Brentano (1874), it seems clear that he meant 'intentionality' to denote a feature of certain kinds of whole mental states (and not their proper parts). Indeed, Brentano speaks of intentionality as being the distinctive feature of his "mental" as opposed to "physical" phenomena, but it is clear on closer inspection that his "physical" phenomena are not physical objects but qualia! This may seem mysterious at first glance, but the mystery is resolved when one recognizes that Brentano is starting from the empiricist starting point of examining the contents of the mind from the first-person perspective (see McAlister 1974, 1976). His "phenomena" are literally "things that appear"—some of which (those he unfortunately calls "physical") involve only sensation, others of which (those he calls "mental") involve the presentation of some object as an object. Brentano's empiricist foundations, as well as his examples, make it clear that he is dealing with mental episodes in which one is conscious of some "intentional object" as it is presented, as it were, "before the mind's eye." The reason for speaking of the "directedness" of such states is quite palpable: when I have perceptual experience of a dog, or imagine a dog, or have a recollection of the family dog, my mental gaze is, as it were, directed towards the object of my thought. And famously, of course, this kind of "directedness" does not require the existence of an extramental object corresponding to our ideas. From the empiricist standpoint, or under Husserl's phenomenological "bracketing," "directedness" is a feature of experience itself—
the fact that it is an experience that presents us with a putative object and not just a sensation—and not a relation to extramental reality.
So in Brentano, the "mental states" that are characterized by his notion of "intentionality" are conscious episodes and not dispositional beliefs or desires. Indeed, it is not clear that the kind of "directedness" one finds in Brentano's examples can be applied to unconscious dispositions. Brentano also uses the term 'intentionality' to apply to whole mental states, and not to their proper subparts. This leaves the exact application of the term open to some interpretation. Writers like Husserl and Searle have taken the notion of intentionality to include the whole phenomenologically rich network of mental states that is involved in the directedness of conscious thoughts. When my thoughts are directed towards an object, there is a conscious experience in which
—I am present as the subject of the thought,
—an object is presented under certain aspects and not others, and
—the experience has a phenomenology.
Someone starting from this vantage point will naturally expect an "account of intentionality" to explain all of the salient aspects of such states, including their phenomenological feel and subjectivity.
Through the middle part of the century, however, discussions of intentionality interbred with discussions of the semantics of linguistic entities, with the result that many people now seem to treat the 'intentionality' or 'directedness' of mental states as more or less equivalent to the linguistic notions of meaning and reference. And those influenced by the view of formal semantics argued against in chapter 6 may be inclined to view both simply in terms of whatever establishes a mapping from words or thoughts to world. This notion of intentionality, unlike its predecessor, seems applicable to beliefs and desires as well as to conscious mental episodes. And it seems natural, if you use the word 'intentionality' in this way, not to view things like consciousness and subjectivity as essential to intentionality.
9.2.3—
Broad Content, Narrow Content, Phenomenological Content
Significantly, the problem of accounting for "content" shapes up differently depending on which tradition you are starting from. In recent years, analytic philosophy has given a great deal of discussion to "broad" ver-
sus "narrow" content. But the natural construal of "content" from the phenomenological standpoint does not exactly map onto either of these. There the natural distinction is between what we might call the "intentional character" of mental states (the features that are invariant over all possible assumptions about extramental reality) and "veridicality" (hooking up to the world in a felicitous way). The notion of "content" that is a part of intentional character is neither wide nor narrow content exactly.
The basic idea behind the distinction between broad and narrow content is that at least some words and concepts depend for their semantics upon things outside of the mind. Writers like Kripke (1971) and Putnam (1975) have argued, for example, that it is part of the semantics of our notion of "water" (and likewise the word 'water') that it refer to H2O, and that it did so even prior to the discovery that water was H2O. Indeed, on this view, "water" would have referred to H2O even if we all believed that water was of some other molecular type. If there were beings on Twin Earth who were phenomenologically, functionally, and physically identical to us but were exposed to some other compound XYZ in the same contexts we are exposed to water, their concept "water" would mean not H2O but XYZ. (Of course, to make this work, you have to bracket the problems that arise from using a substance that comprises most of our body weight for the example. I suggest substituting another kind of substance if this distracts you.) A second kind of argument is raised by Burge (1979, 1986), who claims that many words, such as 'arthritis', are often used by people who do not know their full sense. According to Burge, we may use such words felicitously even without knowing their sense because we are tied into a social-linguistic network with experts who do know the sense of the words: when I say 'arthritis', I intend to refer to whatever condition it is that the experts refer to when they employ the word. "Broad" content—or perhaps the broad notion of content—is thus something that depends on mind-world relations. This kind of "externalist" view comes in two varieties: the "ecological" kind, which ties semantics to the thinker's environment through relations like causation, adaptation, learning, and selection, and the "social" kind, which embeds semantics within a social, and particularly a linguistic framework. "Narrow" content (or the narrow notion of content), by contrast, is often characterized as what is "in the head." It is often said that molecular (or functional) duplicates (quaintly called doppelgängers ) would necessarily share narrow content, though they might differ with respect to broad content due to being thrust into different social and natural environments.
From the phenomenological starting point, however, the natural distinction to make is not the distinction between broad and narrow content, but between those properties that are contained within the experience itself, regardless of the relation of the experience to extramental reality, and those properties that depend upon extramental reality as well. Thus Husserl invites the reader to perform an epoché or "bracketing" of everything that is dependent upon extramental reality in order to study intentional states as they are in their own right. And Chisholm and others resort to turns of phrase like "seeming to see a tree" or "being appeared-to-treewise" to distinguish the sense of verbs like 'see' that merely report the character of the experience from those that imply a kind of success as well. I shall mark this distinction by speaking of the notion of intentionality that implies a correspondence with extramental reality as veridical intentionality . The aspect of intentionality that does not vary with assumptions about extramental reality I shall call the intentional character of the mental.[4] What I mean by this latter expression are those aspects of an intentional state that do not vary with variations in extramental reality. And there are two kinds of invariants here: invariants in modality and invariants in content .
Let us consider an example of an intentional state. Suppose, for example, that I experience a perceptual gestalt of a unicorn on my front lawn. That is, I have an intentional state with the intentional modality VISUAL PRESENTATION and the content [unicorn on my front lawn]. Now there are certain things that one can say about such a mental state that do not depend upon issues such as whether there really is a unicorn there (or anywhere) or what causes me to have the experience that I have. Regardless of whether there is a unicorn there (or anywhere), it remains the case (a ) that my experience has the intentional modality of VISUAL PRESENTATION (it appears to me as though there is a unicorn on my lawn), and (b ) that my experience has the content of presenting a beast of a certain form and with certain associations (it appears to me as though there is a unicorn —rather than a cat or a rock—and it appears as though it is on my lawn ). Each of these aspects of my experience has a certain phenomenology to it. There is a "what-it's-like" to having a perceptual gestalt, and it is different from what it is like to have a recollection, however vivid, or to have a desire accompanied by imagery, and so forth. Perhaps there are pathologies in which such distinctions are lost, and in some cases we may not differentiate adequately between modalities (e.g., between different strengths of conviction of belief or between imagination and perception); but in ordinary cases, we can quite simply tell
what intentional modality is at work. Imagine how much more complicated life would be if we were systematically unable to distinguish experiences that were perceptual gestalts from those that were memories!
There is likewise a "what-it's-like" for having an experience with the content [unicorn on my front lawn], and it is very different from what it is like to be presented with an experience having the content [cat on my front lawn]. To determine whether I am having a gestalt of a cat or a unicorn, I do not have to consider my behavioral dispositions or the functional relations of my state of mind to other states of mind, any more than I have to do so to identify the feeling of pain as pain.[5] There is simply a difference in what different kinds of intentional states are like. So occurrent states have an intentional character that arguably dispositional beliefs do not have, and the notion of "content" that emerges from this perspective—which we may call phenomenological content —is a proper part of intentional character, which also involves an intentional modality as well.
It should be clear that phenomenological content is not equivalent to broad content, since the former partitions the mental in a way that is insensitive to relations to extramental reality while the latter depends essentially upon such relations.[6] The relationship between phenomenological content and narrow content is more difficult. Narrow content is sometimes associated with the notion of "methodological solipsism" (Fodor 1980), which seems to imply slicing the intentional pie according to things that are invariant for the thinker qua thinker. (It seems hard to see how a third -person functionalist approach could merit the name of "solipsism"!) This would seem to imply in turn that narrow content is just phenomenological content. But narrow content has also become associated with characterization in terms of what is (necessarily) shared by physical or functional doppelgängers, and that seems to be different from phenomenological content. After all, it seems epistemically possible both that I do have a body and that I do not (the Cartesian demon scenario). Similarly it seems conceivable, hence logically possible, that there be a being that is my phenomenological doppelgänger but not my physical or functional doppelgänger, and vice versa. In the absence of any way of deriving a particular phenomenology from a particular physical or functional description (or vice versa), it seems to me we should assume that these notions diverge—perhaps in real cases, but certainly in counterfactual ones. I suspect and hope that talk of narrow content is really a way of getting at phenomenological content, with incorrect assumptions being made about the necessity of relationships between the
two. But for purposes of clarity, I shall treat the notion of narrow content here as though it were defined in terms of what physical or functional duplicates would necessarily share in common.
9.2.4—
The Plan of Attack
My plan of attack on naturalistic theories of content, then, is as follows. There are different issues about explaining the phenomenologically pregnant notion of directedness associated with occurrent states, on the one hand, and explaining the broad and narrow content of dispositional states like beliefs and desires, on the other. I shall argue that, if one is concerned with things like perceptions, recollections, and judgments, then explaining the directedness of these does involve one in explaining their subjectivity, perspectival character, and phenomenology, and that writers like Searle and Nagel are right in saying that these features cannot be reduced to a third-person naturalistic discourse. Moreover, no naturalistic discourse can provide necessary or sufficient conditions for the invariants distinctive of intentional character and phenomenological content. But these arguments do not transfer directly to beliefs and desires. There I shall argue not that no naturalistic theory can provide an account of content (though I happen to believe it), but merely that the likely form of any such theory, were it to emerge, would not place the explanation of meaningfulness where BCTM says it ought to be—namely, in the so-called "representations." This is fairly obvious in the case of broad content. I shall argue that it is very likely true of narrow content as well.
9.3—
Phenomenology and the Mental
Our first aim, then, is to examine phenomenological content and the phenomenologically rich properties of consciousness, perspective, aspect, and subjective "feel." In what follows, I wish to separate three major sorts of issues concerning phenomenologically typed mental states. First, we shall examine the legitimacy of the phenomenological approach: whether the phenomenological features are real , whether they are essential to intentional states (or particular kinds of intentional states), and whether they make for a viable classification of mental states. Second, we shall examine the question of whether phenomenological properties, however legitimate or real they might be, are likely to play much of a role in the formation of a scientific psychology. Finally, we shall consider
whether phenomenological properties are the sorts of things that can be strongly naturalized.
9.3.1—
The Legitimacy of the Phenomenological Approach
It is one of the strange turns of twentieth-century philosophy that the phenomenological properties that provided the epistemic bedrock of seventeenth- and eighteenth-century philosophy are now thought by many to be in need of legitimation. There are really a number of separate issues here. One important issue is that of the connection between phenomenology and science. That will be considered in a later section. In this section we shall consider the following questions:
(1) Are phenomenological properties
—real as opposed to unreal?
—observational as opposed to theoretical?
—accurately described as opposed to inaccurately described?
—fundamental as opposed to nonfundamental?
(2) Are phenomenological features such as subjectivity, perspective, and "feel" essential to the occurrent conscious states to which they attach themselves, and more particularly, are they essential to the intentionality of those states?
(3) Does the phenomenological approach provide the basis for a classification of mental states (especially a classification according to "phenomenological content")?
9.3.2—
The Reality of Phenomenological Features
First, let us consider whether phenomenological features are real features. But "real" as opposed to what? They are certainly not unreal in the sense that fictions are unreal. I suppose that it is possible that there are people who do not have the kinds of phenomenological properties that I have, or that they do not have any at all, in much the way that it appears likely that some people do not experience any mental imagery while others do so very vividly. But for those of us who do report phenomenological properties, it seems as clear as anything could be that there is a what-it's-like to, say, seeing a dog in the yard, and that it's different from the "feel" of imagining the same scene or seeing something different. Likewise, subjectivity and perspective seem to be indubitably legitimate features to
attribute to my experience. For those of us who report a phenomenology, the claim that phenomenological features fail to be real the way fictions fail to be real is clearly a nonstarter.
It is quite another matter, however, if the issue is one of whether particular claims about phenomenology, or even particular descriptions of it, are as accurate as they might be. People who complain about phenomenology are often really concerned only about claims of special access that imply incorrigibility . But this is a red herring. I do not know any major philosopher in the phenomenological camp who has claimed that phenomenology was easy, or that we could not be mistaken about it, especially at the level of abstract characterization. Husserl was continually stressing the difficulty of phenomenological description to the point of describing himself as a "perpetual beginner" at it; and contrary to the common libel, Descartes acknowledged that we could be quite mistaken about our mental states, even in such seemingly straightforward cases as pain (see Principles 1.67 [AT VIIIA.32-33]). I am not aware of anyone who seriously thought that a thoroughgoing phenomenological account could be naively "read off" from introspection of one's own experience. (Though British empiricists and common sense philosophers sometimes spoke this way.) If the existence of phenomenological features is indisputable, it is equally clear that we have no definitive word on the topography of phenomenological space, nor even firm evidence that such a definitive description might be forthcoming.
I think, however, that there is an important sense in which this implies that our talk about phenomenology is "theory-laden," but also an important sense in which phenomenological properties are not "theoretical." There is a weak sense of "theory-ladenness" which implies only that the way we describe a thing (any thing) is set against a set of background assumptions about the world and a network of interrelated concepts or words. If this kind of network theory of meaning is true of language generally, it is surely true of our language for describing our own minds as well (unless, perhaps, one embraces the kind of phenomenalist atomism that Russell espoused at one point). But there is also a stronger sense of "theory" that implies retroduction, and this has implications about the kind of epistemic access we have to a thing. An entity or property that is "theoretical" in this strong sense is one supposed to exist just because this supposition explains something else. Pluto was "theoretical" in this sense until it was observed with a telescope. Protons are still "theoretical" in this sense. But it seems clear to me that phenomenological properties are almost by definition not "theoretical" in this strong sense (unless perhaps to someone who has heard about them but not experienced them, if that is indeed possible). If you experience phenomenological properties, it cannot be the case that your only access to them is inferential. You may, of course, hold some theory-laden beliefs about them (especially if you are a philosopher), just as we may still hold many theory-laden beliefs about Pluto (or, for that matter, about rocks and rabbits). But they are not retroductive in origin or warrant.
Finally, questions about the "legitimacy" of phenomenological categories are sometimes questions about whether such categories "cut nature at the joints." In particular, one might wonder if they are (a ) fundamental as opposed to derivative properties, and (b ) relevant to the systematic description of the world characteristic of science or merely epiphenomenal. Now I think that raising the question of whether phenomenological properties are fundamental is important and appropriate at some point. But it is surely a ridiculous issue to bring up early in the game as an attempt to discredit phenomenology. Cartesian physics taught that light, magnetism, and gravitation were derivative from mechanical collision. Newtonian physics treated gravitation, light, and mechanical force as separate fundamental forces. Many people objected to the Newtonian view on the grounds that it seemed to involve action at a distance. And perhaps they were right and perhaps they were wrong to do so. But no one (at least no one whom we remember) suggested that the irreducibility of gravitation to contact interactions would undercut the legitimacy of the phenomenon (as opposed to the theory) of gravitation. To do so would have been sheer madness, not to mention bad scientific practice. Science aims at being systematic and universal, but it does so by integrating discourses that are initially local and particular. If we should arrive at a unified field theory in physics, it will be because we first had serious theories of mechanics, gravitation, electromagnetism, and strong and weak force. We reduced chemistry to physics because we first had a serious chemistry. Likewise, if phenomenology is reducible to something else, the only way we will discover this is by taking phenomenological properties seriously in their own right, and this means countenancing the possibility that they might be fundamental in the sense of not being derivable from nonphenomenological properties. A posteriori arguments on this subject are for the endgame, not the outset. I have never heard a vaguely plausible a priori argument to the effect that mental properties must not be fundamental.
9.3.3—
Is Phenomenology Essential to Some Mental States?
Next, let us consider whether phenomenological properties are essential to certain kinds of intentional states. Questions of essentiality are always difficult, but we might approach the issue by considering some examples of conscious mental episodes and then ask whether they could remain the same kind of episode if deprived of their phenomenology. Consider first a simple kind of perceptual experience, such as having a perceptual experience of a square, where the expression 'perceptual experience of a square' is interpreted in that distinctively intentionalistic way that does not imply a relation to an actual square. Of course, one never simply has perceptual experiences; they are always perceptual experiences in some particular modality—a tactile experience, say, or a visual experience. So let us say the experience in question is one of VISUAL PRESENTATION [square]. Normally, such an experience has a particular kind of phenomenology, both in terms of its qualitative elements (not just any configuration of qualia can be constituted as a square) and its conceptual ones (squares have a different "feel" from circles or triangles).[7] Normally, such experiences have very complicated relations to environmental and behavioral counterfactuals as well. Our natural-language attributions tend to be based on assumptions about such normal cases. But suppose that a being were to have states that were very similar to ours in its relations to the environment and behavior, but a radically different phenomenology or no phenomenology at all. We might well say that it was in some kind of perceptual state, but would we want to say that that state was VISUAL PRESENTATION ? The answer, I think, is not easy.
Consider first that we can ourselves have perceptions of the same things, and behave in similar ways, on the basis of several perceptual modalities. We can feel squares as well as see them, and blind humans can form most of the same concepts and negotiate most of the same environments as sighted humans. It is just that none of their perceptual states is visual in nature. The same goes for echolocation in bats: presumably, echolocation plays much the same role in bat navigation that sight plays in human navigation, but it is a different modality and presumably has a different phenomenology.
But to make the point more clearly, Sur, Garraghty, and Roe (1988) performed experiments with ferrets in which the optic nerve was severed and reconnected to nonvisual tissue in the brain. The ferrets were able to respond to visual stimuli in a striking display of equipotentiality. Suppose that the same thing could be done with human beings: the evil Dr. No rewires your nervous system so that your optical signals do not go to the visual cortex, but somewhere else. Now the human brain is probably significantly more specialized than are ferret brains, which lessens the probability that the special-purpose functions of the human visual cortex could be duplicated by other tissue; but it is at least worth entertaining the possibility (a) that visual stimuli would produce, say, auditory qualia, and (b) that you could be conditioned to distinguish some kinds of objects on the basis of these stimuli, thus forming a new kind of perceptual gestalt. Your experience might have the content [square], but would be accompanied by acoustical rather than visual qualia. Now ordinary language might well describe such an experience as "hearing shapes" or the like, but a more sober assessment would probably be that the victim of such rewiring was in fact having a new kind of perceptual experience. Even if the process could be done so seamlessly that the patient could respond to the full panoply of visual stimuli that normal humans do with the same range of behaviors, I think most of us should be loath to call his experiences VISUAL PRESENTATION, precisely because of the differences in qualia. Indeed, even if someone's brain were wired like a normal human brain, I should be disinclined to call his states VISUAL PRESENTATIONS if I somehow came to believe that their phenomenology was acoustical.
Likewise with other intentional states. Suppose I have a recollection of my first day at college. This may or may not be accompanied by visual or auditory imagery; but in order to be a RECOLLECTION it must be presented as something that happened to me in the past. This is really a bit tricky, though. It is possible to become so engaged in memories, imagination, and particularly dreams that one mistakes them for current experiences. However, it is important to distinguish two different issues here. Sometimes, calling something a "memory" reports its causal history. Memories are experiences whose contents are dredged up out of previous experiences, whereas, say, perceptions are caused by one's environment. Thus the distinction between memory and perception can be a distinction of the source of the experience. But one might also use the same words to mark a distinction in the kind of experience involved: that is, a difference in intentional character—and more specifically in modality. In the ordinary cases, experiences that are dredged up from memory have the modality of RECOLLECTION and those caused by our environments have the modality of PERCEPTUAL PRESENTATION. In pathological cases and in dreams, however, this need not be so. We may take an image
from memory in a dream and have it presented under the modality of PERCEPTUAL PRESENTATION . (That is, we mistakenly believe that we are having veridical perceptions when in fact we are replaying old imagery under the modality of PERCEPTUAL PRESENTATION .) Likewise it is possible for imagination to cause states with the modality of PERCEPTUAL PRESENTATION . And of course it is possible to have states of RECOLLECTION that are false memories, or episodes presented as FREE FANCY that are in fact images that are remembered, and so on. So when I say that, say, states of recollection have a distinctive phenomenology, I mean precisely that states that present themselves as recollections do so, and not that states that in fact draw upon memory share a phenomenology.
The same may be said for many other intentional states. Some, for example, have a particular emotional phenomenology. I cannot experience remorse about some action of mine, for example, without having certain experiences, regardless of how I act. A sociopath might fake remorse even if he cannot feel it. Likewise, I cannot feel remorse over an action unless I represent it as my action, and so on. The point here is that if we take away the experiential character of such states, or change it too drastically, we are no longer left with the same kind of state. Let me hasten to caution the reader, however, about several things that are not implied by this.
(1) The phenomenological properties of such states need not be noticed or attended to . One can, for example, see features of a scene that one does not actively notice . One sign of this is the ability to notice later things about a previous experience that were not noticed at the time. One notices a square and later realizes that it was set against a lighter background.
(2) Not all psychological distinctions need be reflected in phenomenological distinctions. It is not clear, for example, that different kinds of judgment—judgment with certainty, conjecture, scientific hypothesis—are distinguishable by phenomenological features for everyone.
(3) Phenomenological typing need not be the only valid typing of psychological states, and states that differ with respect to phenomenology may be grouped together under a different typing. For example, there are undoubtedly typings that group together psychological mechanisms we share with other species regardless of whether animals are experiencing subjects. There is nothing particularly out of the ordinary for two objects or events to share one typing and diverge with respect to another, nor for two divergent typings each to be useful for a different kind of inquiry.
9.3.4—
Does Phenomenology Yield a Classification of the Mental?
There are really a variety of questions here. It certainly seems true that at a certain level of granularity of description, our natural distinctions between conscious mental states (e.g., between judgments and perceptual gestalts and imaginings) are accompanied by corresponding phenomenological differences. Likewise, it seems clear that we are in a significantly different epistemic position with respect to states that have a phenomenology and those, such as beliefs and desires, that do not. If the latter are truly dispositional in nature, there is arguably a significant ontological difference there as well. It is far less clear that all meaningful psychological distinctions, even between states that have a phenomenology, are reflected in phenomenological differences. For the ordinary language classification of mental states is likely to prove as much a mixed bag of phenomenological, behavioral, and theoretical features as is the ordinary language classification of speech acts, which includes lots of cognitive, social, and emotional features as well as distinctions in illocutionary force. The project of taxonomizing speech act verbs turned out to be a mare's nest because of this (see Austin 1962, McCawley 1973, Vendler 1972, Fraser 1981, Bach and Harnish 1979, and Searle 1969 and 1971), and the same may hold true of the commonsense list of mental states. The difference between, say, speculating and hypothesizing may not consist in something that has a phenomenology, but in something like our social conventions about kinds of thinking.
The really vital question for our purposes, however, concerns the typing of intentional attitudes and contents according to experiential invariants. Now, whatever experiential invariants there are, it seems clear that they will yield some partition of possible worlds: for example, between those in which I (or my counterpart) have exactly the same phenomenological properties that I actually have and all the rest. The issue is not whether phenomenology yields some classification, but whether the classification it yields is a good one. But good for what? It is certainly a good one for describing the mental from a first-person viewpoint. (What kind of classification could be better for that?) And if you think that phenomenology is crucial to the mental, this is itself good reason for liking this classification. But there is also another reason for liking it: it seems really crucial to all the other ways we have of classifying the mental.
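The point that invariants induce a partition can be put more explicitly. In standard set-theoretic notation (the symbols $W$, $\Phi$, and $\sim$ are introduced here purely for illustration), let $W$ be the set of possible worlds and let $\Phi(w)$ be the totality of phenomenological properties I (or my counterpart) have in world $w$. Define
\[
  w \sim w' \quad\text{iff}\quad \Phi(w) = \Phi(w').
\]
Since $\sim$ is reflexive, symmetric, and transitive, its equivalence classes partition $W$; the simplest case is the two-cell partition consisting of $\{\,w : \Phi(w) = \Phi(w_{@})\,\}$, where $w_{@}$ is the actual world, and everything else.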
It seems to me that all of the talk about "functional classification of
the mental" is deeply misleading at best. People speak of functional classification of intentional modalities and even of contents. But you never see such a characterization produced. I think this is quite ironic, as one of the stock arguments against the behaviorists turns upon exactly the same inability actually to produce a single definition of the sort their theory depends upon. When characterizing intentional modalities, rather, writers like Fodor appeal to the kind of mental state we are in when we think, as it were, "Lo! a horse!" But this is clearly an appeal to something on the model of a conscious occurrent state. We all know what kind of mental state is meant, but only because we associate the description with a kind of state we have experienced. It might be the case that an ideal psychology could produce a psychological Turing table from which one could derive characterizations of each kind of mental state holding the rest as constant. But this is surely not how we actually go about classifying the mental—probably not even in the case of beliefs and desires, and certainly not in the case of perceptual gestalts and judgments and imaginings. Rather, phenomenology gives us at least a rough initial classification to start with, and we test this against observations of people's behavior and try to systematize and refine it through rigorous modeling (including computer modeling). It is not as though the "functional classification" of the mind implied by some discussions of narrow content was carried out in isolation from a phenomenologically based starting point. (Indeed, it is not as though such a classification has ever actually been carried out at all—a point that is missed with shocking regularity.) Any functional classification of the mental there might be is a distillation of a classification that started out in phenomenology—and which, I shall argue in the next section, must answer to phenomenology as well. The notion of "narrow content" is really a kind of theory-laden abstraction from phenomenological content. (And, I expect, the functional notion of belief is ultimately a theory-laden abstraction from conscious judgments as well.)
As for broad content, that certainly goes beyond what is present, strictly speaking, in phenomenology (that is, in intentional character). But, first, it contains implications of intentional character: a veridical perception is, among other things, a perceptual gestalt. And, second, it seems to me that writers like Husserl have been correct in saying that intentional states in some sense carry with them their own "conditions of satisfaction." Having a veridical perception of a dog requires us to be in the right causal relationship with a dog. Why? Because that is built into the notion of the intentional modality of PERCEPTUAL PRESENTATION. This
is no great empirical discovery. It is simply an explication of what is implicit in the phenomenology of this particular intentional modality. Likewise, if the broad content of "water" is fixed by something in the environment, it is because the intentional character of the state implies that it should be so. So in short it seems to me that it is simply bootless to deride phenomenological classification in favor of some other kind of classification, since the other kinds of classification that have been proposed turn out to depend heavily upon our prior phenomenological understanding.
9.4—
Phenomenology and Scientific Psychology
As often as not, those who minimize the role of phenomenological properties (or, for that matter, of the mental in general) do so not because they reject the reality of such properties or their utility in commonsense predictions, but because they reject the idea that such properties will play a role in an explanatory science of psychology. Of course, it is not uncommon on the current scene for a concept's inclusion in the theoretical vocabulary of a science to be held up as a standard of its ontological legitimacy—a view I shall argue against in chapter 11—but really that is a stronger position than one need take here. It is enough for the moment to say that phenomenological properties, albeit real, are not the sorts of properties that enter into causal-nomological relations (except perhaps insofar as they are reliably produced as epiphenomena of brain events), and that phenomenological typing will correspond to the typing of a mature psychology accidentally if at all.
I think that there are certain things that are right about this view, but many more that are mistaken. On the one hand, it is surely right that there are large domains of psychology that cannot be explained in terms of conscious mental states at all, much less in terms of their phenomenological features. While perception eventuates in conscious states with a phenomenology, the processes that produce this product are almost entirely infraconscious. Likewise memory and imagination have conscious products, but also involve mechanisms that must be of an entirely different sort. And while there are conscious processes of reasoning, association, and inference, there are also nonconscious processes that go by the same names—and even the conscious ones must have their own nonconscious mechanisms which support them. So if the issue is one of whether conscious states with a phenomenology can provide the bulk of
the explanatory resources needed by psychology, the answer is surely no.

[Figure 13. From Gaetano Kanizsa, "Subjective Contours," Scientific American 234 (April 1976): 51. Copyright © 1976 by Scientific American, Inc. All rights reserved.]
On the other hand, there are clearly some kinds of explanation that do call for appeal to states with a phenomenology. Notably, when we ask why a person spoke or acted in the manner that she did, we will often appeal not just to dispositional beliefs and desires, but to conscious judgments and perceptions—and in particular, we will appeal to the phenomenological content of her judgments and perceptions. Why did Jane pick up the flyswatter? Because the thing flying around looked like a fly to her. Note that questions of broad content are irrelevant here—the explanation is unaffected if all of Jane's fly-gestalts were caused by midges. Likewise narrow content, if defined in purely functional-causal terms, does us no good here: it won't do to say that Jane picked up the flyswatter because she was in the kind of mental state caused by flies and resulting in flyswatter-grabbing behavior.
Perhaps even more clearly, we need to appeal to phenomenological content to explain why people behave the way they do in the case of optical illusions like subjective contour figures, in which the subject "sees" a figure that is "not really there" in the sense that there is no objective reflectance gradient that makes up a figure of the type that is seen. For example, a subject seeing the Kanizsa square (fig. 13) will report seeing a light square against a slightly darker background, and will experience borders making up the edges of the square, even though there is no reflectance gradient to be found in those positions in the stimulus (see Kanizsa 1976 and 1979). Now suppose we ask our subject to respond in one way when she sees a square and another way when she sees a figure that is not a square. When presented with the Kanizsa square, she behaves as though presented with a square. How are we to explain this? What unites the cases of being presented with an actual square with the
cases of being presented with the Kanizsa square, and hence unites the behaviors involved? I submit that it is precisely that they share a certain phenomenology—namely, the phenomenology of experiences having the intentional character of VISUAL PRESENTATION [light square against darker background].
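For readers who want to see how such a stimulus is put together, the following sketch draws the standard four-inducer construction. It assumes Python with matplotlib (nothing in the argument depends on this), and the radii, spacing, and coordinates are merely illustrative choices, not values taken from Kanizsa's papers.

import matplotlib.pyplot as plt
from matplotlib.patches import Wedge

# Four "pac-man" inducers whose missing quarters face the corners of an
# imaginary square. Subjects report a square that looks slightly brighter
# than its surround, with illusory contours along its edges, even though
# no reflectance gradient corresponds to those edges in the display.
fig, ax = plt.subplots(figsize=(4, 4))

r = 0.3                      # inducer radius (illustrative value)
# (center, theta1, theta2): each wedge spans 270 degrees, leaving a
# 90-degree notch that opens toward the interior of the illusory square.
inducers = [
    ((0.0, 0.0),  90, 360),  # lower-left: notch faces up and to the right
    ((1.0, 0.0), 180, 450),  # lower-right: notch faces up and to the left
    ((1.0, 1.0), 270, 540),  # upper-right: notch faces down and to the left
    ((0.0, 1.0),   0, 270),  # upper-left: notch faces down and to the right
]
for center, t1, t2 in inducers:
    ax.add_patch(Wedge(center, r, t1, t2, facecolor="black"))

ax.set_xlim(-0.5, 1.5)
ax.set_ylim(-0.5, 1.5)
ax.set_aspect("equal")
ax.axis("off")
plt.show()

The four inducers are all the display contains; the square itself is contributed by the perceiver.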
In short, it seems to me that, whenever it is necessary to appeal to conscious states like judgments or perceptions to explain behavior, it will very likely be a typing according to phenomenological content that will be relevant. Now whether typing by phenomenological content will produce the kinds of regularities needed for something systematic enough to count as a nomological science is still to be determined (as is the distinct yet related question of whether we could catch such regularities if they were there). But it does seem plausible that at least some such explanations will resort to phenomenological typing.
But there is another connection between phenomenology and scientific psychology that is, to my mind, far more important. If phenomenological properties make up a relatively small portion of the explanatory apparatus of psychology, they comprise a significantly larger portion of the phenomena that a scientific psychology needs to explain . That is, they make up much of the data of psychology. I think that the case can be made most forcefully here in the case of the relationship between psychophysics and theoretical work in perception. Psychophysics, which is viewed by many as the one area of psychology that has already attained some of the benchmarks of scientific maturity, is largely concerned with the measurement of relationships between stimuli and the percepts that they produce. The properties of the stimuli to be studied include things such as the objective intensity of the stimulus and the spatial and temporal patterns of intensity in stimuli. Percepts, however, are experiences—they are phenomenological in character. They involve properties like how intense a stimulus seems or whether one seems more intense than another, or the way the percept is organized into a perceptual gestalt. I shall discuss two well-known experimental results from the psychophysical literature and show how phenomenology is essential to psychophysics.
First, consider the Weber laws that describe the relationship between stimulus intensity and percept intensity in terms of a logarithmic law (Fechner 1882), or, in alternative versions, a power law (Stevens 1951, 1975). Here the relata are an absolute property of a stimulus (say, its luminance) and the subjective property of a percept (the word 'brightness' is often used in contrast with 'luminance' for this subjective property).
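Stated in the usual textbook notation (with $I$ the objective intensity of the stimulus, $I_0$ a threshold intensity, $S$ the subjective magnitude, and $k$, $c$, and $a$ constants that vary with modality and units; none of this notation is essential to the point), the laws in question run roughly as follows:
\[
  \frac{\Delta I}{I} = k \quad\text{(Weber)}, \qquad
  S = c \,\ln\!\frac{I}{I_0} \quad\text{(Fechner's logarithmic version)}, \qquad
  S = c\, I^{\,a} \quad\text{(Stevens' power law)}.
\]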
Now, first of all, it seems just inescapable here that the phenomenon we are after involves a phenomenological property. Take away the property of perceived brightness and there is no Weber law left. Second, it seems equally clear that the kind of description of human perception that the Weber law presents is exactly the sort of thing that we should require of our theories of perception. A model of perception that does not obey the Weber law or that does not produce the optical illusions that humans experience is, to that extent at least, a bad (or at least an incomplete) model (see Todorović 1987). Qualitative phenomenology is essential to psychophysical data such as the Weber law and provides much of the data for theories of vision and other perceptual modalities.
The same can be said for phenomena involving at least simple forms of intentionality. Consider again the Kanizsa square. Here the psychophysical data show that there is a certain class of conditions under which we "see" something that "is not there"—in this case, we perceive a square where there is no square and perceive it as brighter than its background when in fact the "interior" of the "square" and its "background" are actually equal in luminance. This kind of mismatch between the "objective" features of the stimulus and the "subjective" features of the percept tends to be what makes a given stimulus-percept pair an "effect" and renders it of particular psychological interest. (You can't make your reputation in experimental psychology by finding that people see squares when they are presented with squares; if they see squares when presented with circles, you get to have an effect with your name in front of it.) And the ability to reproduce such effects is precisely the sort of thing that can be used to test the adequacy of a particular model of how perception works in human beings. Again, our data involve a relationship (a mismatch) between an objective property of luminance distribution and a phenomenological property of seeing a particular kind of figure. Take away the phenomenology and there is no effect. Take away the effects and there is no psychophysics. Take away the psychophysics and there is nothing for theoretical psychology of perception to explain.
Unlike the Weber law, moreover, subjective contour figures involve at least a primitive form of intentionality. The subject does not merely experience more and less intense qualia—she constitutes them as a figure of a particular kind and shape and constitutes the figure as being in a particular relationship to its background. Indeed, this kind of illusion vividly illustrates what Chisholm has cited as the cardinal property of intentionality and intentional objects: the subject can "see" a square when there is no square there to be seen.
I think that some other phenomenological properties likewise provide data that set tasks for psychological theories. It seems clear, for example, that the object-directed character of intentional states is something that needs to be mirrored in any successful theory. (It has surely motivated much work in artificial intelligence.) Likewise the perspectival character of conscious experiences: a theory of thinking must do more than provide for the fact that we think about objects; it must provide for the fact that we think about them under particular aspects and from particular points of view. It must, for example, account for the fact that we can infer the hidden edges of familiar three-dimensional objects, or move between different things we know about an object we are viewing without keeping all of its known properties before the mind's eye at once. The ultimate source of our knowledge that thought has these properties is phenomenological, and so once again phenomenology sets constraints on the form of a scientific psychology.
9.5—
Why Phenomenology Cannot Be Naturalized
A number of kinds of arguments have been offered over the years to the effect that some one or more features of the mental cannot be naturalized—features such as subjectivity, the what-it's-like of experience, the first-person perspective, and consciousness. I shall examine some variations on arguments of this sort in this section, as well as adding one of my own at the end.
9.5.1—
The Argument from Epistemic Possibility (Cartesian Demons Revisited)
The kinds of epistemological issues involved in old-style thought experiments involving Cartesian demons stem from the phenomenological perspective on content. The Cartesian demon experiment is, if nothing else, a marvelous tool for driving a wedge between the intentional character of my mental states and all questions of their veridicality. As Descartes points out, I can be mistaken about the causes of my experiences and about whether they correspond to extramental reality, but I cannot be mistaken in the same way about what kind of ideas I am experiencing.[8] I can be sure that I am experiencing a particular kind of perceptual gestalt, but I cannot be similarly sure, for instance, that there is indeed a cat before me.
Thought experiments involving such exotica as brains in vats and Cartesian demons do not enjoy the popularity they once did. There are no doubt a number of factors contributing to their decline. One would probably be the shift away from epistemological interests in the philosophy of mind. Another would be a shift in interest from providing accounts involving logically necessary and sufficient conditions to finding accounts that are empirically adequate. Considerations of necessity and sufficiency do seem to be in order with accounts that purport to provide a strong naturalization, though. If mental-semantic properties are to supervene upon naturalistic properties, those naturalistic properties must provide sufficient conditions for them. And if the resulting account is to be an account of the nature or essence of mental-semantics or intentionality, it had best be necessary as well: if an object could have a property A while lacking B, then B cannot be essential to A.
Now I think that some of the traditional thought experiments are well suited to showing that naturalistic properties are neither necessary nor sufficient for intentionality or mental-semantics. Let us begin with necessity. The notions of supervenience and of instantiation analyses themselves claim nothing about the necessity of the conditions they provide. If A supervenes upon B , it does not follow that B is a necessary condition for A ; and if A is given an instantiation analysis in terms of B , it similarly does not follow that B is a necessary condition for A . But this is in some ways very misleading. When people say that the supervenience of A upon B does not involve a necessary relation from A to B , what they tend to be concerned with is the lower-order physical properties through which a mental property is realized—with the fact that it does not matter whether the underlying structure is wetware or hardware or whatever. But when people try to give a naturalistic account of intentionality, they tend not to be specifying the instantiating system at that low a level, but in terms of notions such as causal covariation, adaptational role, or information content. These notions form an intermediate level of explanation that is neutral as to underlying structure. And theorists who propose such theories generally do take it that the conditions they articulate at these intermediate levels are necessary conditions for intentionality and mental-semantics. Millikan, for example, is quite clear about this: a being that does not share our adaptational history not only does not share our particular beliefs, it does not have beliefs at all! Similarly strong views might be imputed to causal covariation theorists. In Fodor's account, it is a necessary condition for a representation of type MR to mean "P " that MR 's are sometimes caused by P 's. Similarly, with
Dretske's account, a representation cannot mean "P " if its type was never caused by a P in the learning period. So while the language of supervenience and token physicalism suggests that naturalistic explanations do not provide necessary conditions, this is belied by the actual practice of these theorists. Either accounts in terms of causal covariance and adaptational role are not naturalistic accounts, or the best-known contemporary naturalistic accounts of intentionality involve a commitment to providing necessary conditions. And this seems quite appropriate in a way, since such theorists claim to provide accounts of the nature or essence of mental-semantics and intentionality.
This being said, I think that there is good reason to believe that naturalistic accounts of these sorts do not succeed in providing necessary conditions, for reasons that may be developed by way of some familiar sorts of thought experiments. Consider the Cartesian scenario of a being that has experiences just like ours, not because it is in fact coming into contact with elm trees and woodchucks, but because it is being systematically deceived by a malicious demon. Such a scenario is clearly imaginable, since one cannot reach Cartesian certainty that it is not in fact an accurate description of one's own case. (There is, after all, no experiment one can perform to determine whether one's experiences are veridical or systematically misleading.) And there seems little reason to deny that such a scenario is logically possible. Now a being in such a state would be in many of the same sorts of intentional states that we are in: that is, states with the same attitude and the same phenomenological content. (Whether you have perceptual gestalts or recollections, after all, does not depend on whether you turn out to be the victim of a Cartesian demon.) But it would not share most of our naturalistic properties. In particular, the intentional states it has would not be hooked up to the world in the ways called for by a respectable naturalistic psychology. Thoughts about dogs are not caused by dogs, nor are beliefs about elm trees caused by elm trees, and the being may not even have the ancestors requisite for an adaptational history. All of its beliefs are demon-caused (although they are not about demons).
Here we have an example of a being that has meaningful intentional states but does not share the naturalistic descriptions that apply to us. A fortiori, it is possible for a being to be in a state with a mental-semantic property M while lacking naturalistic property N . Therefore N cannot be a necessary condition for M . Therefore naturalistic properties cannot be necessary conditions for mental-semantic properties.
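Schematically, and using $M$ for a mental-semantic property and $N$ for the naturalistic condition offered as an account of it (the letters are only a convenience), the argument has the form
\[
  \Diamond\,\exists x\,(Mx \wedge \neg Nx) \;\Rightarrow\; \neg\,\Box\,\forall x\,(Mx \rightarrow Nx).
\]
The demon scenario supplies the possibility on the left: a being whose states are mental-semantically just like ours yet satisfy neither the causal-covariational nor the adaptational conditions. Hence $N$ is not metaphysically necessary for $M$.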
It remains to consider sufficiency. In order for there to be an instantiation analysis of some mental-semantic property M in terms of some naturalistic property N, it must be the case that N is sufficient for M. Indeed, it must be the case that someone who had an adequate understanding of N would be able to infer M from N. So if there can be cases of an entity possessing N but lacking M, N is not a sufficient condition for M, and hence one cannot have an instantiation analysis of M in terms of N.
Let us now bring some modal intuitions into play. It seems to be imaginable, and hence plausibly metaphysically possible, that there might be beings who were completely like us in physical structure and in behavioral manifestations, yet lacked the kind of interiority, or first-person perspective, that we have. When one of these beings stubs her toe, she says "Ouch!" and withdraws her foot, but she has no experience of pain. When one of them is asked to comment upon Shakespeare, she utters things that sound every bit as intelligent as what a randomly selected human being might say, but she never has any mental experiences of pondering a question or hitting upon an insight. If one could come up with a talented telepath, the telepath would deliver the verdict that nothing mental is going on inside this being. These beings, by stipulation, share all of our natural properties, yet they do not enter into any of the paradigm examples of mental states. Hence naturalistic properties do not provide sufficient conditions for intentional states, either.
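The argument for the failure of sufficiency is simply the dual of the schema given above:
\[
  \Diamond\,\exists x\,(Nx \wedge \neg Mx) \;\Rightarrow\; \neg\,\Box\,\forall x\,(Nx \rightarrow Mx),
\]
and without that necessitation there can be no instantiation analysis of $M$ in terms of $N$, since such an analysis requires that $N$ metaphysically suffice for $M$.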
9.5.2—
An Objection: Metaphysical and "Nomological" Sufficiency
One concern I can expect this argument to raise would be that people interested in supervenience accounts tend to view the kind of sufficiency involved not as logical or metaphysical sufficiency, as I have assumed, but as something called "nomological sufficiency." I must confess to some puzzlement about what is meant by "nomological sufficiency." It must mean something more than material sufficiency, since materially sufficient conditions may be completely unrelated to what they are conditions for. If the tallest man who ever lived was in fact married to the first woman to climb Everest, and was her only husband, then being married to the first woman to climb Everest is materially sufficient for being the tallest man who ever lived. But surely nomological sufficiency amounts to more than this. Perhaps nomological sufficiency amounts to something like "material sufficiency in all possible worlds that have the same natural laws as the actual world." But, according to the thought
experiment above, the world described is like the actual world in all physical laws. If these laws ensured that the psychophysical relationships must be the same, in the way that fixing the statistical mechanics fixes the thermodynamics, we should be able to derive this fact, just as we can in the case of thermodynamics. But this seems plainly to be impossible. It seems, then, that naturalistic conditions would not be nomologically sufficient for intentionality either. But perhaps nomological sufficiency does not apply to all worlds with natural laws like our own, but only to ones specified by a certain counterfactual. But which counterfactual? And how do we know that a world like the one described above does not fall within the scope of it? Indeed, how does one know that the actual world meets the desired criterion? But perhaps nomological sufficiency is material sufficiency in all worlds sharing psychophysical laws with the actual world. This stipulation, however, would be inadmissible for two reasons. First, this violates the condition of strong naturalism that the relation be metaphysically necessary and epistemically transparent. Second, we do not know that the naturalistic criteria are met in the actual world.
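For concreteness, the candidate notion being probed here might be spelled out as follows (one way of making it precise, not the only one): $N$ is nomologically sufficient for $M$ just in case
\[
  \forall w\,\bigl(L(w) \rightarrow \forall x\,(N_{w}x \rightarrow M_{w}x)\bigr),
\]
where $L(w)$ says that $w$ has the same natural laws as the actual world. A world of the sort described in the previous section, sharing our natural laws but containing a being that satisfies $N$ without satisfying $M$, is precisely a world that falsifies the right-hand condition.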
Finally, let us be quite clear about separating the question of logical possibility from the question of warranted belief. No one is claiming that it is reasonable to believe that one is, for example, in the clutches of a Cartesian demon. And while some people do claim that there are nonmaterial thinking beings, their use in this kind of example is not based upon the likelihood of their existence, but upon their possibility. If one has an account of what it is to be in a meaningful mental state, it had better apply to all possible beings that could have such mental states. Regardless of the likelihood of Cartesian demons or nonembodied spirits, if they are possible, then an account of the nature of intentionality had best apply to them too.
9.5.3—
The Phenomenological "What-It's-Like"
A number of writers have argued that at least some mental states (the conscious ones) have an experiential quality for the subject of the experience that is not captured in any third-person "objective" characterizations. This point now seems widely accepted with respect to qualitative states such as pain: even if we know that C-fiber firings are the physiological basis of pain, a complete knowledge of the neurology of C-fiber firings could not yield an understanding of what pain feels like. To know what pain feels like, you have to feel it. And likewise for other qualia: a blind person who knows state-of-the-art theory in electromagnetism, optics, and the physiology of vision will not thereby gain a knowledge of how magenta looks, and so on (see Jackson 1982). Thomas Nagel has developed this point famously in an article entitled "What Is It Like to Be a Bat?" (1974), in which he points out that a sensory modality like echolocation would, like vision, have its own phenomenology; and lacking this faculty, we cannot imagine what it would be like to have it.
While many writers in philosophy of mind acknowledge that there is a problem for naturalistic theories in trying to explain qualia, it is less often recognized that there is a similar problem for intentional states, which also have a phenomenology. Take perceptual experiences, like seeing a dog in the yard. There is a what-it's-like to seeing a dog in the yard, and it is different from what it's like to see a pine tree in the yard (change of content) and from what it's like to imagine a dog in the yard (change of intentional modality). And the differences here are not just differences in qualia. Suppose you are at the wax museum. You turn the corner and see a familiar face and say, "My gosh! That's Bill Clinton!" You have an intentional state of the form: VISUAL PRESENTATION [Bill Clinton]. But then you remember where you are and correct yourself. "Oh," you say, "that's just a wax replica of Bill Clinton! Boy am I a dope!" Your intentional state changes from VISUAL PRESENTATION [Bill Clinton] to VISUAL PRESENTATION [wax statue of Bill Clinton]. The qualia have not changed; it is just the content of the gestalt that has changed. But part of that gestalt is conceptual, and that conceptual part has a phenomenology. The difference between having an experience of seeing Bill Clinton and that of seeing a replica of Bill Clinton is not just a functional difference in how they relate to behavior and other mental states—they are different as experiences as well. Likewise in perceptual illusions like the Necker cube and the faces-vase illusion: the qualia remain the same while the interpretation changes; but clearly there is a difference in what it is like to see the faces and what it is like to see the vase.
The same point can be made with Nagel's bat. Perceptual modalities are among the sorts of things that have a phenomenology. But this phenomenology is not confined to individual qualia. There are ways of constituting things as objects in visual perception, in touch, in hearing; and in perception one situates oneself relative to the objects one constitutes as being in one's presence. A person lacking one of the sensory modalities is indeed unable to understand the qualia associated with that modality; but she is also unable to understand what it is like to constitute objects under that modality. For example, there are people blind from birth who have had operations that restore the integrity of the visual pathway and who, as a consequence, suddenly experience visual qualia. Many such people are already competent at identifying objects and persons by sound and touch, but this ability does not translate to the formation of visual gestalts. Such people suddenly know what visual qualia are like, but not what visual perceptions are like. (In fact, they tend to feel quite disoriented by vivid but uninterpretable visual qualia.) Perception is characterized by a particular kind of intentional as opposed to qualitative experience that essentially involves constituting something as an object. Object experiences involving a sensory modality involve object-constituting operations that are modality-specific. Presumably the same would hold true with echolocation. We could perhaps build prosthetic devices that would duplicate the function of the bat's vocal cords and ears and surgically connect their output to some portion of the human brain. Perhaps the subject would even experience some new qualia. But this in itself would not add up to echolocation until there were also experiences corresponding to the conceptual representation of objects under particular aspects within this sensory modality. To know what it is like to be a bat, it is not enough to know what it is like to have the bat's qualia; we would also have to have the bat's experiences of constituting objects on the basis of those qualia.
Nagel and others urge upon us the idea that the what-it's-like of experiences cannot be accounted for in nonexperiential terms. In some cases, the argument appears to be an epistemic one: Jackson (1982), for example, appears to argue as follows:
(1) A person could know the neurophysiology of a mental state but fail to know what mental state it was.
(2) If you can give an account of P in terms of Q , then an adequate knowledge of Q should let you know you were dealing with P .
∴ (3) You cannot give an account of mental states in terms of their neurophysiology.
Searle and Nagel, however, claim that their point is metaphysical as well: namely, that the phenomenological what-it's-like is a property of conscious mental states. Searle points out, for example, that some things have a what-it-feels-like while others do not, and argues further that the uniting feature for those that do is consciousness:
The discussion of intentionality naturally leads into the subjective feel of our conscious states. . . . Suffice it to say here that the subjectivity necessarily involves the what-it-feels-like aspect of conscious states. So, for example, I can reasonably wonder what it feels like to be a dolphin and swim around all day, frolicking in the ocean, because I assume dolphins have conscious experiences. But I cannot in that sense wonder what it feels like to be a shingle nailed to a roof year in and year out, because in the sense in which we are using the expression, there isn't anything at all that it feels like to be a shingle, because shingles are not conscious. (Searle 1992: 131-132)
In a sense, though, the real crux of the matter is neither purely epistemological nor purely metaphysical: the real issue is whether you can give an account of the experiential what-it's-like in third-person naturalistic terms. If the kind of "account" you want is a strong naturalization, you need logical sufficiency and conceptual adequacy. And it does not look as though you are going to get either of those things. A person who did not have a commonsense notion of heat could still derive thermodynamic laws from the mechanics of particle collisions. But a person who did not know what a visual gestalt was like could not derive that from a knowledge of optics and the physiology of vision, or indeed from any list of sciences you might give. The sciences as we know them just do not seem to have the right conceptual resources to generate the necessary concepts. To be sure, the physiology of vision can explain why our phenomenological color space has some of the properties it has. (Given contingent relations between particular qualia and particular bodily states, it can explain why certain forms of color blindness occur, how color perception is affected by saturated lighting, why particular optical illusions occur and not others, etc.) Likewise, an account of the visual cascade through the visual cortex may explain why we can detect certain primitive shapes and not others and why we are subject to certain illusions. And they will hopefully tell us what brain processes are involved in the very experiences we describe in phenomenological terms. What they do not seem to have the resources to do is explain the phenomenological "feel" of those experiences. It is, of course, risky to make arguments about what cannot be done. On the other hand, it seems clear at this point that any assurance that we can derive phenomenology from neuroscience the way we can derive thermodynamics from statistical mechanics places a great deal of nonempirically based faith in the idea that
a particular paradigm of explanation can be applied universally. This kind of naturalism seems to be more ideology than well-argued position.
9.5.4—
Perspective, Subjectivity, and the Logical Resources of Natural Science
Next, let us consider two other features of intentional states that some writers think render them insusceptible to naturalization. First, Searle points out that intentional states are perspectival in character:
My conscious experiences, unlike the objects of the experiences, are always perspectival. They are always from a point of view. But the objects themselves have no point of view. . . . Noticing the perspectival character of conscious experience is a good way to remind ourselves that all intentionality is aspectual . Seeing an object from a point of view, for example, is seeing it under certain aspects and not others. . . . Every intentional state has what I call an aspectual shape . (Searle 1992: 131)
Second, an experience always involves a first-person perspective. And that first-person perspective is one of the identity conditions for the experience. You can have an experience just like mine, but you cannot have my experience. Even if you were a telepath or empath like the ones depicted in science fiction stories, you would not be experiencing my thoughts and emotions, but reproducing them in your own mind under some intentional modality distinctive to telepaths or empaths. Or, as Searle puts it, "For it to be a pain, it must be somebody's pain; and this in a much stronger sense than the sense in which a leg must be somebody's leg, for example. Leg transplants are possible; in that sense, pain transplants are not" (ibid., 94).
Here again it is possible to interpret the case in epistemic or in metaphysical terms. But here again I think the real issue lies in the possibility of explaining subjectivity and aspectual shape in third-person, "objective," naturalistic terms. And there is a weaker and a stronger variation of the case against naturalization here. First the weaker one. The project of explaining intentionality in naturalistic terms is one of uniting two bodies of discourse—the languages of two sciences, if you will. (Or, if you do not think discourse about experience is scientific, a science and a nonscience.) Let us call the language of our naturalistic discourse N and that of our phenomenological psychology P . The question is, does N have the right kind of conceptual resources for us to derive P from N
in the way, say, that we derive thermodynamics from statistical mechanics, or perhaps even the way we "derive" arithmetic from set-theoretic constructions? And there are features of aspectual shape and subjectivity that give us reason to suppose that the answer may well be no .
The reason subjectivity and aspectual shape pose problems for the would-be naturalizer is that a discourse that encompasses subjectivity and aspectual shape would seem to require logical features that do not seem to be present in the languages used for the natural sciences. This, I think, is what Searle is after when he says that "the world itself has no point of view, but my access to the world through my conscious states is always perspectival, always from my point of view" (Searle 1992: 94-95) and "my conscious experiences, unlike the objects of the experiences, are always perspectival. They are always from a point of view. But the objects themselves have no point of view" (ibid., 131). But if Searle is right about the basic issue here, he is wrong about the specific form it takes with respect to aspectual shape. It is true of course that objects themselves are nonperspectival; but it is also true that all of the sciences do represent objects under particular aspects: say, as bodies having a mass or as living beings. The problem is not in getting a perspective into our discourse, but with the fact that discourse about mental states requires that we build a second layer of perspective into that discourse: to attribute an intentional state to someone is not merely for us to represent an object under an aspect, but to represent a person as representing an object under an aspect. And it is not at all clear that the resources for this are present in the kind of discourse found in the natural sciences.
Likewise with subjectivity. The special problem here is that, in order to talk about my experience as experience, I have to talk about it as essentially mine, as experienced from a first-person perspective. And this seems to require a language that has resources for expressing first-person as well as third-person statements. But the languages of the natural sciences arguably lack such resources. As Nagel argues, a complete description of the world in third-person terms, including the person I happen to be, seems to leave out one crucial kind of fact: the fact that that person is me . I interpret Nagel to mean by this that third-person discourse, even third-person psychological discourse, lacks a way of linking itself into the first-person discourse that is vital to our description of our mental lives.
This seems to me to be a powerful objection to the project of strong naturalization. If the kinds of discourse employed in the natural sciences lack the logical and conceptual resources to generate the kind of discourse
needed to talk about subjectivity and aspectual shape, then these features of our mental lives cannot be strongly naturalized. And if these features are part and parcel of the phenomenon we call "intentionality," then intentionality cannot be strongly naturalized either.
9.5.5—
The Objective Self and the Transcendental Ego
An even more radical variation on the same sort of claim is, I think, to be found in the writings of Kant, Husserl, and Wittgenstein. These writers seem to note that every intentional thought requires an analysis that involves at least three features: (1) a thinker (the "transcendental ego"), (2) a content (meaning, or Sinn), and (3) an object aimed at (the "intentional object"). However, it is important to note—as Kant, Wittgenstein, and Husserl do and many other writers do not—that these "features" in the analysis of intentional states do not function in experience as three things, but as aspects or features of a seamless unity. Wittgenstein makes this point in the Tractatus when he denies that the thinking, presenting subject is itself anything to be found within the world: the subject is not an object of experience but a limit of the world (Tractatus 5.631-5.632).
Husserl similarly speaks of intentional experience as a unity encompassing subject, meaning, and object. He writes that
the experiencing Ego is still nothing that might be taken for itself and made into an object of inquiry on its own account. Apart from its "ways of being related" or "ways of behaving," it is completely empty of essential components, it has no content that could be unravelled, it is in and for itself indescribable: pure Ego and nothing further. (Ideas §80)
Kant likewise speaks of the transcendental ego only in the context of the transcendental unity of apperception—that is, the possibility of the "I think" accompanying every possible thought (Critique of Pure Reason, Sec. 2, §16, B131).
The reason this distinction seems important is that, if writers like Wittgenstein and Husserl are right, the great divide lies not so much between mental and physical objects as between discourse about the (logical) structure of experience and discourse about objects generally (including thoughts treated as objects). On this view, when one comes to a proper understanding of thinking, what one finds there are not several interrelated things (the self, the intentional state, the content, and the object-as-intended), but a single act of thinking that has a certain logical structure that involves it being (a ) the thinking of some subject (b ) aiming at some object (c ) by way of a certain content being intended under a certain modality. It is possible, of course, to perform an act of analysis whereby one directs one's attention separately to self, content, modality, and intentional object. And when one does that, each of these things comes to occupy the "object" slot of another intentional act. Indeed, from the perspective of the analysis of experience, what it is to be an object is to be a possible occupant of the object-slot of an intentional act .[9] But if this is so, then the logical structure of intentional states is in some sense logically prior to the notion of object, and the tags 'experiencing self', 'content', and 'object', as they are applied to moments or aspects of experiencing, are not names of interrelated objects. Indeed, they are not objects and hence are not related (since relations can only relate objects).[10]
Now if this is right, the task of relating objectival and experiential discourse becomes all the harder: relations are things that obtain between objects. If the "I" and the content that appear in experiential analysis do not appear there as objects, there can be no question of relating them to things appearing in discourse about objects. There can be no question of objectival-experiential relations, because in the experiential analysis,
the experiencing "I" and the content do not appear as objects at all. Nor is it possible to "cash out" the logical structure of intentional experience in terms of relations between objects, for reasons already described. (Or, as Husserl suggests, at least doing so necessarily involves a distortion of one's subject matter.) The only other way to bridge the Cartesian divide between mind and nature, it would seem, would be to find a way to subsume objectival discourse within experiential discourse, as Husserl tries to do in his transcendental phenomenology. I shall not pursue this possibility here, but shall point out that it seems right in at least one regard: namely, that intentional character is in a certain way conceptually anterior to the notion of an object in the world. For it is the content of an intentional state that lays down the satisfaction conditions determining what kind of object or state of affairs would have to exist in order for the state to be fulfilled. It is the content "unicorn" that specifies what criteria something would have to fulfill to be a real unicorn, and not vice versa. (It is, of course, possible simply to live with the dissatisfying result that there is an unbridgeable gap between two disparate realms of discourse. To those uneasy with such a gulf, I heartily recommend a careful consideration of the kind of combination of transcendental idealism and transcendental realism advocated by Husserl.)
9.5.6—
The Argument from the Character—Veridicality Distinction
Finally, it seems to me that there is a fairly straightforward argument to the effect that intentional character cannot be accounted for in naturalistic terms. Intentional character was defined in terms of the aspects of intentional states that are invariant under alternative assumptions about extramental reality. Hence, it should be clear that any analysis we might give of intentional character must not depend upon anything outside the domain of experience. Notably, it must not depend upon any presumptions about (a) correspondence to extramental objects, (b) the causes of the intentional states, or (c) ontological assumptions about the mind. For having an experience with the character of, say, VISUAL PRESENTATION [unicorn on my front lawn] is compatible (a′) with there being or not being a unicorn there, (b′) with the experience being caused by a unicorn under normal lighting conditions, a dog under abnormal conditions, LSD, or a Cartesian demon, and (c′) with materialism, dualism, transcendental idealism, Aristotelianism, and Middle Platonism, to name a few possibilities. And it seems to follow straightforwardly from this that
any account of intentionality that is not similarly neutral cannot serve as an account of intentional character, because an adequate account of intentional character would have to be valid for all possible instances of the phenomenon it explains. In particular, an account framed in terms of assumptions about the actual nature of the physical world, including human physiology, cannot be broad enough to cover all possible cases that would share a particular intentional content. Hence one cannot have a naturalistic theory of content—at least if by a "theory of content" one means something like "an account of the essential features of intentional character" as opposed to, say, "a specification of the natural systems through which intentional character is realized."[11]
9.5.7—
Summary of Problems for Naturalizing Phenomenology
In short, then, the prospects for strongly naturalizing the phenomenological properties of mental states appear to be rather dim. Thought experiments about brains in vats and Cartesian demons cast significant doubt on whether there could be metaphysically necessary relations between phenomenologically typed states and naturalistic states. And properties like subjectivity, perspectival character, and the "what-it's-like" alluded to by Nagel do not seem to be susceptible to conceptually adequate explanation in naturalistic terms. Moreover, typing by intentional character necessarily classifies mental states in a way that is insensitive to extramental realities, so that it is impossible for a naturalistic theory to capture the same invariants. And finally, there is the tantalizing suggestion that discourse about "the experiencing self," "the thought," and "the intentional object" is not really discourse that relates objects at all, in which case it is hard to see how naturalistic discourse could have the right sorts of logical-grammatical resources to subsume it. If the kind of "content" we wish to naturalize is the kind that is delimited along phenomenological lines, weak naturalization (i.e., mathematical description and localization) is the best we are entitled to hope for.
9.6—
Naturalizing Broad Content
A very different set of issues confronts us when we turn to the broad notion of content. There has been a great deal of discussion in recent philosophical publications about the implications of broad content for a representational theory of the mind. It is not my intention to canvass these debates
or to engage this already large literature in any detail. Instead, I wish to focus on a very specific point. Unlike in the previous section, I shall not try to argue that broad content cannot be naturalized. (Though I suspect this is so.) Rather, I shall argue that the kind of theory that would be needed to naturalize broad content would not be able to focus its explanation on the properties of localized representations, as required by CTM, but would have to appeal to broader relations involving the entire organism and its environment as well.
Suppose, for example, that we want a naturalistic account of why a particular kind of thought means "arthritis," and that we accept Burge's contention that this story will have to be dependent upon the way the individual language user's words and concepts are tied in with those of expert users. We ask, "Why does this mental state M have the mental-semantic property (call it P ) of meaning (broad sense) 'arthritis'?" And here we might be asking one of two things: (1) why does M mean "arthritis" as opposed to meaning something else (the problem of meaning assignments), or (2) why does M mean "arthritis" as opposed to not meaning anything at all (the problem of meaningfulness)?
9.6.1—
Meaning Assignments
First, let us consider how CTM's explanation of semantics could be applied to the assignment of broad content. The schematic form of CTM's explanation of mental-semantics went as follows:
Schematic Account
Mental state M has mental-semantic property P because
(1) M involves a relationship to a mental representation MR , and
(2) MR has MR-semantic property X .
Now under the assumptions we have adopted, this schema might be unobjectionable if the word 'because' were replaced by 'iff'. But in fact the schema above is not intended merely as a biconditional but as an explanation. And as an explanation of broad content assignment, it seems to be barking up the wrong tree. The key issue in broad content assignment is what makes it the case that cognitive counters hook up with particular objects and properties and not with others. Whatever answer we give must explain the web of relations between organism and environment
that accounts for the connections that are thus established. It may be true that my brain has a state that has the broad content "arthritis" and my counterpart has a structurally identical state that means "osteoporosis" due to the fact that people on Twin Earth say "arthritis" when they want to refer to osteoporosis ('water' for XYZ, etc.). We might even say that my brain state has a property of meaning-arthritis (meaning-H2O, etc.) while his has a property of meaning-osteoporosis (meaning-XYZ, etc.). But the fact that there is such a property would not explain broad content assignment; it would be a by-product of such an explanation. The explanation of broad content assignment would have to focus not on localized properties of cognitive counters, but on the network of relations between organism, cognitive counter, and environment that endows those cognitive counters (and the mental states in which they play a part) with broad content. Properties of representations, in and of themselves, are just not the right sorts of things to explain broad content assignment. Moreover, once we have to appeal to properties of whole organisms and their environments to explain (simultaneously) the broad content of the cognitive counter and that of the mental state, it is no longer clear why we should try to localize the mental-semantics of thoughts to some properties of their proper parts in the first place.[12]
9.6.2—
Meaningfulness
CTM fares no better with the explanation of broad-meaningfulness. Here the issue, above and beyond the issues involved in narrow-meaningfulness, is that of how thoughts manage to attach themselves to particular real-world objects and properties in a way that is underdetermined by their sense and by other factors internal to the organism. Now one might think that because cognitive counters are themselves internal to the organism, this should disqualify them from being explainers of broad contentfulness. But this is not exactly right: even though the cognitive counters themselves are internal to the organism, their properties may nonetheless be relational properties whose relata include ecological and social factors.
The problem, rather, lies once again with the focus of the explanation. If we ask why a particular thought is about water, as opposed to just consisting of a bunch of descriptions, it would seem that what we need is a story that shows us how to get broad content out of an amalgam of (1) the organism, (2) its narrow-contentfulness, (3) the cognitive
counters, and (4) the environment. The explanation cannot focus solely on properties of the cognitive counters, as CTM would seem to have it, because the properties that can be explained by looking just at these entities remain constant over the cases of me, my counterpart on Twin Earth, and indeed over counterparts who may fail to have broad-contentful states at all.[13]
Let me make it clear what points I am and am not trying to make here. I am not saying that broad content cannot be strongly naturalized. (Though I happen to believe that it cannot.) Nor am I saying that some form of CTM—say, BCTM—cannot be making true assertions about the form of mental processes. Rather, I am saying that even if we grant both of these assumptions, and grant that semantic properties of mental states covary with properties of local states of cognitive counters, we cannot explain broad content merely by looking at the properties of these localized units, but must look back at the larger system embracing the whole thinker and her environment. And once we have done this, we might do well to reassess what is bought by trying to "reduce" the mental-semantic properties of mental states to MR-semantic properties of their proper parts.
9.7—
Naturalizing Narrow Content
Finally, let us consider the question of whether narrow content can be strongly naturalized. The first problem we face here is in determining just what narrow content is supposed to be. An intuitive way of looking at narrow content is that it is the kind of content, or the portion of content, that is not dependent upon extramental factors such as ecological and social relations. This, however, sounds a great deal like phenomenological content. So one hypothesis about narrow content would be that it is the same thing as phenomenological content—that is, two mental states have the same narrow content just in case they are indistinguishable to the experiencing subject, and nothing lying outside of experience can be constitutive of a difference in narrow content. This also seems consistent with some discussions of "methodological solipsism" in the philosophy of mind (see Fodor 1980). However, most discussions of narrow content have concentrated not upon invariants of experience but invariants of structure and function. Narrow content is characterized as the property that molecular or functional duplicates would necessarily share. Now this would be consistent with the thesis that narrow content
is phenomenological content if it could be shown that molecular or functional duplicates were necessarily phenomenological duplicates as well, but that thesis is contentious at best. So rather than make assumptions about the nature of narrow content, I shall explore four possibilities: (1) narrow content is phenomenological content, (2) narrow content is defined in terms of the properties molecular duplicates would share, (3) narrow content is defined in terms of the properties functional duplicates would share, or (4) narrow content is defined in terms of some other property of cognitive counters.
First, if narrow content is just phenomenological content, then all of the problems of accounting for phenomenological content accrue to it as well. To account for phenomenological content, your theory has to explain subjective feel, consciousness, subjectivity, and the perspectival character of intentional states. We have argued in the previous section that there are significant obstacles to this kind of explanation, and these would simply carry over as problems for any strong naturalization of narrow content.
If narrow content is defined in terms of structural properties, one is faced with a number of messy problems. First, it is not transparent why something defined in structural terms ought to be called "content" at all. The mentalistic overtones require explanation here. Second, if narrow content is defined in structural terms, it is trivial that it can be strongly naturalized, as presumably structural descriptions are already cast in a discourse that is patently naturalistic. But, third, if this is the case, it is not clear that any explanation of the mental has taken place. Defining narrow content in structural terms would simply shuffle the mystery around—the mystery would then be how you get from structurally defined narrow content to the mental. To arrive at a strong naturalization of the mental, you need to have a road from your naturalistic description (in this case, a structural description) to your mentalistic description that enjoys metaphysical necessity and explanatory transparency. As anyone who has tried to supply such explanations knows, this is no trivial task. Defining narrow content in structural terms does not solve the problem here; it merely presents us with a special variation on the problem.
Similar problems arise if we define narrow content in functional terms. First, we must be careful to specify what use of the word 'functional' is operative here. At one level of abstraction (the "rich" level), we can look at the mental, qua mental, as a math-functionally describable system. At a second level (the "sparse" level), we can look at the functional description
in purely formal terms, abstracting from the fact that it was originally a description of mental states. A richly construed functional model of the mind does not really define mental states in functional terms, though it may provide a unique characterization of each particular kind of mental state in terms of its functional relations to the others and to stimuli and behavior. A rich model assumes some knowledge of what content is, and does not explain its nature in functional terms, any more than, say, Newton's laws explain what gravity is. But as we have seen, a sparse functional model cannot serve as a definition or explanation of the mental, on the grounds that (1) the same functional description can apply to many things (e.g., abstract number-theoretic entities) that are not mental, and (2) nothing about the functional description has the conceptual riches to generate the distinctively mental character of intentional states.
So it looks as though narrow content will have to be defined in some other way if it is to be a viable notion at all. I wish that I had a candidate for such a definition, but I don't. The intuitive notion of content that I work with is the notion of phenomenological content. Perhaps other people operate with a different notion, but if they do they have not made it very clear, beyond the constraints (1) that it is not to be broad, (2) that it is not to be phenomenologically based, (3) that it is somehow to map things in the head onto things in the world, (4) that it is necessarily to be shared by molecular or functional duplicates, and (5) that it is to be unproblematically mental in nature. I am not sure that there is anything that does fit this bill. (Similar skepticism about the category of narrow content is voiced by Baker [1987] and Garfield [1988].) But let us consider the possibility in any case, however vaguely specified.
First, note that this notion of content bears a peculiar dialectical relation to the project of strong naturalization. If one of the defining features of narrow content is that molecular duplicates must share narrow content, then the metaphysical side of strong naturalization is assured. Either there is such a thing as narrow content and narrow content assignments are implied by physical description, or else there is no such thing as narrow content. The other side of strong naturalization, however—the explanatory side—is probably less easy to come by. Whatever narrow content is supposed to be, a strong naturalization must not merely bind it to naturalistic conditions by contingent bridge laws, but demonstrate its presence from some lower-level theory. The viability of this will of course turn heavily upon what one means by "narrow content," but it is a tall order to fill in any case.
Second, it seems to me that there is a problem with explaining narrow content in terms compatible with CTM, just as there was a problem with explaining broad content under those constraints. If we assume for purposes of argument that there are some naturalistic features of cognitive counters that covary with narrow content, we still do not have a strong naturalization of narrow content unless we can show why the fact that a proper subpart of an intentional system has particular naturalistic properties makes it the case that the system has a thought about objects or states of affairs in the world. I am not at all convinced that such an explanation is to be had, but if it is, it seems clear that the place to look is not in properties of the cognitive counters themselves, but in relations between the overall system and its environment. Again, the point here is not that BCTM must be wrong as a theory of the form of mental states. Rather, the point is that, even if BCTM is right about its functional description of the mind, and cognitive counters are the things that covary with meaning assignment, we need a theory with a very different focus to account for meaningfulness. If the question is "Why does this thought mean X rather than Y?" it may be appropriate to look at cognitive counters. If the question is why the properties of cognitive counters are so intimately associated with the mentalistic property of meaningfulness, it seems that we shall have to look elsewhere, at a theory that embraces a larger system.
9.8—
Conclusion
This chapter has been a very quick examination of the prospects for what is commonly called a "naturalistic theory of content." We have seen that there are many issues lurking in the wings here—issues about what counts as "naturalization," what counts as a "theory," and what kind of "content," "intentionality," and "mental states" are at issue. What I have tried to argue here may briefly be summarized as follows: (1) If we are talking about conscious thoughts (the paradigms embraced by writers like Brentano, Husserl, and Searle, among others), these states do indeed seem indefeasibly to have properties like phenomenological feel, subjectivity, and the like, and there do seem to be serious obstacles to strongly naturalizing such properties. (2) If we are talking about dispositional states like beliefs and desires, and about their broad and narrow content, it may or may not prove possible to strongly naturalize these properties. But if it is possible to do so, it looks as though the kind of explanation we need will not focus on local properties of cognitive counters,
as does CTM, but will appeal to relational properties of cognitive counters, the entire organism that is doing the thinking, and its environment.
I wish to draw both a weak moral and a strong one from this. The weak moral concerns where the burden of proof should lie. Much of the current discussion in the philosophy of mind assumes the possibility of a fairly strong sort of naturalization. But once we have distinguished the projects of strong and weak naturalization, we can see that strong naturalization calls for some fairly exacting (and rare) kinds of connections between discourses: namely, metaphysical sufficiency and conceptual adequacy. And once we look closely at the prospects of "explaining" the mental in these very stringent senses of "explanation," there seem to be some very large and glaring problems, especially if we add the further constraint of locating the nexus of meaning in properties of localized representations. Thus it seems to me that the burden of proof ought to be on the would-be strong naturalizer: we have reason to believe in strong naturalism when we see it accomplished and not before.
There is also a stronger moral one might draw, one that I want to state in a more assertive voice. All in all, it looks doubtful that we shall ever have a strong naturalization of the mental. It looks even less likely that we shall have one of the sort called for by CTM. CTM certainly does not provide such a theory of intentionality, and without it, those who doubt the propriety of intentional psychology have not been refuted. It is hard to see how to view computational psychology as a success if its success is to be judged by the standard of how well it naturalizes intentionality and vindicates intentional psychology. But perhaps these are not the right standards by which to judge computational psychology in the first place. Perhaps it is possible to separate issues about computational psychology as an empirical research programme from its relationship to more purely philosophical problems about the mind. In the final section of this book I shall attempt to present the beginnings of an alternative philosophical approach to computational psychology that frees it from the constraints of strong naturalization and vindication.