The Language of Thought Hypothesis

First published Tue May 28, 2019

The language of thought hypothesis (LOTH) proposes that thinking occurs in a mental language. Often called Mentalese, the mental language resembles spoken language in several key respects: it contains words that can combine into sentences; the words and sentences are meaningful; and each sentence’s meaning depends in a systematic way upon the meanings of its component words and the way those words are combined. For example, there is a Mentalese word whale that denotes whales, and there is a Mentalese word mammal that denotes mammals. These words can combine into a Mentalese sentence whales are mammals, which means that whales are mammals. To believe that whales are mammals is to bear an appropriate psychological relation to this sentence. During a prototypical deductive inference, I might transform the Mentalese sentence whales are mammals and the Mentalese sentence Moby Dick is a whale into the Mentalese sentence Moby Dick is a mammal. As I execute the inference, I enter into a succession of mental states that instantiate those sentences.

LOTH emerged gradually through the writings of Augustine, Boethius, Thomas Aquinas, John Duns Scotus, and many others. William of Ockham offered the first systematic treatment in his Summa Logicae (c. 1323), which meticulously analyzed the meaning and structure of Mentalese expressions. LOTH was quite popular during the late medieval era, but it slipped from view in the sixteenth and seventeenth centuries. From that point through the mid-twentieth century, it played little serious role within theorizing about the mind.

In the 1970s, LOTH underwent a dramatic revival. The watershed was publication of Jerry Fodor’s The Language of Thought (1975). Fodor argued abductively: our current best scientific theories of psychological activity postulate Mentalese; we therefore have good reason to accept that Mentalese exists. Fodor’s analysis exerted tremendous impact. LOTH once again became a focus of discussion, some supportive and some critical. Debates over the existence and nature of Mentalese continue to figure prominently within philosophy and cognitive science. These debates have pivotal importance for our understanding of how the mind works.


1. Mental Language

What does it mean to posit a mental language? Or to say that thinking occurs in this language? Just how “language-like” is Mentalese supposed to be? To address these questions, we will isolate some core commitments that are widely shared among LOT theorists.

1.1 The Representational Theory of Thought

Folk psychology routinely explains and predicts behavior by citing mental states, including beliefs, desires, intentions, fears, hopes, and so on. To explain why Mary walked to the refrigerator, we might note that she believed there was orange juice in the refrigerator and wanted to drink orange juice. Mental states such as belief and desire are called propositional attitudes. They can be specified using locutions of the form

X believes that p.

X desires that p.

X intends that p.

X fears that p.

etc.

By replacing “p” with a sentence, we specify the content of X’s mental state. Propositional attitudes have intentionality or aboutness: they are about a subject matter. For that reason, they are often called intentional states.

The term “propositional attitude” originates with Russell (1918–1919 [1985]) and reflects his own preferred analysis: that propositional attitudes are relations to propositions. A proposition is an abstract entity that determines a truth-condition. To illustrate, suppose John believes that Paris is north of London. Then John’s belief is a relation to the proposition that Paris is north of London, and this proposition is true iff Paris is north of London. Beyond the thesis that propositions determine truth-conditions, there is little agreement about what propositions are like. The literature offers many options, mainly derived from theories of Frege (1892 [1997]), Russell (1918–1919 [1985]), and Wittgenstein (1921 [1922]).

Fodor (1981: 177–203; 1987: 16–26) proposes a theory of propositional attitudes that assigns a central role to mental representations. A mental representation is a mental item with semantic properties (such as a denotation, or a meaning, or a truth-condition, etc.). To believe that p, or hope that p, or intend that p, is to bear an appropriate relation to a mental representation whose meaning is that p. For example, there is a relation belief* between thinkers and mental representations, where the following biconditional is true no matter what English sentence one substitutes for “p”:

X believes that p iff there is a mental representation S such that X believes* S and S means that p.

More generally:

  • (1) Each propositional attitude A corresponds to a unique psychological relation A*, where the following biconditional is true no matter what sentence one substitutes for “p”: X As that p iff there is a mental representation S such that X bears A* to S and S means that p.

On this analysis, mental representations are the most direct objects of propositional attitudes. A propositional attitude inherits its semantic properties, including its truth-condition, from the mental representation that is its object.

Proponents of (1) typically invoke functionalism to analyze A*. Each psychological relation A* is associated with a distinctive functional role: a role that S plays within your mental activity just in case you bear A* to S. When specifying what it is to believe* S, for example, we might mention how S serves as a basis for inferential reasoning, how it interacts with desires to produce actions, and so on. Precise functional roles are to be discovered by scientific psychology. Following Schiffer (1981), it is common to use the term “belief-box” as a placeholder for the functional role corresponding to belief*: to believe* S is to place S in your belief box. Similarly for “desire-box”, etc.
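
To make the box metaphor concrete, here is a minimal illustrative sketch of schema (1), assuming a toy encoding in which a mental representation is identified simply by its meaning; the names Thinker, MentalSentence, and believes_that are hypothetical and nothing here is drawn from Fodor's own formulation:

```python
# A minimal, purely illustrative sketch of schema (1): believing that p amounts to
# having some mental representation meaning that p in one's "belief box".
# All names here (Thinker, MentalSentence, believes_that) are hypothetical.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class MentalSentence:
    """A Mentalese sentence type, identified here simply by its meaning."""
    meaning: str                                     # stand-in for "S means that p"

@dataclass
class Thinker:
    belief_box: set = field(default_factory=set)     # placeholder for the belief* role
    desire_box: set = field(default_factory=set)     # placeholder for the desire* role

def believes_that(x: Thinker, p: str) -> bool:
    """X believes that p iff some S in X's belief box means that p."""
    return any(s.meaning == p for s in x.belief_box)

mary = Thinker()
mary.belief_box.add(MentalSentence("there is orange juice in the refrigerator"))
print(believes_that(mary, "there is orange juice in the refrigerator"))  # True
print(believes_that(mary, "there are elephants on Jupiter"))             # False
```

Obviously the sketch abstracts away from everything that matters to the functionalist story: the functional role of the belief box is merely stipulated rather than characterized in terms of inference, desire, and action.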

(1) is compatible with the view that propositional attitudes are relations to propositions. One might analyze the locution “S means that p” as involving a relation between S and a proposition expressed by S. It would then follow that someone who believes* S stands in a psychologically important relation to the proposition expressed by S. Fodor (1987: 17) adopts this approach. He combines a commitment to mental representations with a commitment to propositions. In contrast, Field (2001: 30–82) declines to postulate propositions when analyzing “S means that p”. He posits mental representations with semantic properties, but he does not posit propositions expressed by the mental representations.

The distinction between types and tokens is crucial for understanding (1). A mental representation is a repeatable type that can be instantiated on different occasions. In the current literature, it is generally assumed that a mental representation’s tokens are neurological. For present purposes, the key point is that mental representations are instantiated by mental events. Here we construe the category of events broadly so as to include both occurrences (e.g., I form an intention to drink orange juice) and enduring states (e.g., my longstanding belief that Abraham Lincoln was president of the United States). When mental event e instantiates representation S, we say that S is tokened and that e is a tokening of S. For example, if I believe that whales are mammals, then my belief (a mental event) is a tokening of a mental representation whose meaning is that whales are mammals.

According to Fodor (1987: 17), thinking consists in chains of mental events that instantiate mental representations:

  • (2) Thought processes are causal sequences of tokenings of mental representations.

A paradigm example is deductive inference: I transition from believing* the premises to believing* the conclusion. The first mental event (my belief* in the premises) causes the second (my belief* in the conclusion).

(1) and (2) fit together naturally as a package that one might call the representational theory of thought (RTT). RTT postulates mental representations that serve as the objects of propositional attitudes and that constitute the domain of thought processes.[1]

RTT as stated requires qualification. There is a clear sense in which you believe that there are no elephants on Jupiter. However, you probably never considered the question until now. It is not plausible that your belief box previously contained a mental representation with the meaning that there are no elephants on Jupiter. Fodor (1987: 20–26) responds to this sort of example by restricting (1) to core cases. Core cases are those where the propositional attitude figures as a causally efficacious episode in a mental process. Your tacit belief that there are no elephants on Jupiter does not figure in your reasoning or decision-making, although it can come to do so if the question becomes salient and you consciously judge that there are no elephants on Jupiter. So long as the belief remains tacit, (1) need not apply. In general, Fodor says, an intentional mental state that is causally efficacious must involve explicit tokening of an appropriate mental representation. In a slogan: “No Intentional Causation without Explicit Representation” (Fodor 1987: 25). Thus, we should not construe (1) as an attempt at faithfully analyzing informal discourse about propositional attitudes. Fodor does not seek to replicate folk psychological categories. He aims to identify mental states that resemble the propositional attitudes adduced within folk psychology, that play roughly similar roles in mental activity, and that can support systematic theorizing.

Dennett’s (1977 [1981]) review of The Language of Thought raises a widely cited objection to RTT:

In a recent conversation with the designer of a chess-playing program I heard the following criticism of a rival program: “it thinks it should get its queen out early”. This ascribes a propositional attitude to the program in a very useful and predictive way, for as the designer went on to say, one can usefully count on chasing that queen around the board. But for all the many levels of explicit representation to be found in that program, nowhere is anything roughly synonymous with “I should get my queen out early” explicitly tokened. The level of analysis to which the designer’s remark belongs describes features of the program that are, in an entirely innocent way, emergent properties of the computational processes that have “engineering reality”. I see no reason to believe that the relation between belief-talk and psychological talk will be any more direct.

In Dennett’s example, the chess-playing machine does not explicitly represent that it should get the queen out early, yet in some sense it acts upon a belief that it should do so. Analogous examples arise for human cognition. For example, we often follow rules of deductive inference without explicitly representing the rules.

To assess Dennett’s objection, we must distinguish sharply between mental representations and rules governing the manipulation of mental representations (Fodor 1987: 25). RTT does not require that every such rule be explicitly represented. Some rules may be explicitly represented—we can imagine a reasoning system that explicitly represents deductive inference rules to which it conforms. But the rules need not be explicitly represented. They may merely be implicit in the system’s operations. Only when consultation of a rule figures as a causally efficacious episode in mental activity does RTT require that the rule be explicitly represented. Dennett’s chess machine explicitly represents chess board configurations and perhaps some rules for manipulating chess pieces. It never consults any rule akin to Get the Queen out early. For that reason, we should not expect that the machine explicitly represents this rule even if the rule is in some sense built into the machine’s programming. Similarly, typical thinkers do not consult inference rules when engaging in deductive inference. So RTT does not demand that a typical thinker explicitly represent inference rules, even if she conforms to them and in some sense tacitly believes that she should conform to them.

1.2 Compositional Semantics

Natural language is compositional: complex linguistic expressions are built from simpler linguistic expressions, and the meaning of a complex expression is a function of the meanings of its constituents together with the way those constituents are combined. Compositional semantics describes in a systematic way how semantic properties of a complex expression depend upon semantic properties of its constituents and the way those constituents are combined. For example, the truth-condition of a conjunction is determined as follows: the conjunction is true iff both conjuncts are true.
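
For illustration, here is a small sketch of a compositional evaluator for a propositional fragment, under the assumption (mine, not the article's) that complex expressions are encoded as nested tuples; it computes a complex expression's truth-value from the truth-values of its constituents and their mode of combination:

```python
# Illustrative sketch: the truth-condition of a complex expression is computed
# from the semantic values of its constituents plus their mode of combination.
# Expressions are nested tuples, e.g. ("and", "p", ("not", "q")).

def evaluate(expr, assignment):
    """Return the truth-value of expr given truth-values for atomic sentences."""
    if isinstance(expr, str):                       # atomic constituent
        return assignment[expr]
    op, *parts = expr
    if op == "and":                                 # a conjunction is true iff both conjuncts are true
        return evaluate(parts[0], assignment) and evaluate(parts[1], assignment)
    if op == "or":
        return evaluate(parts[0], assignment) or evaluate(parts[1], assignment)
    if op == "not":
        return not evaluate(parts[0], assignment)
    raise ValueError(f"unknown mode of combination: {op}")

print(evaluate(("and", "p", ("not", "q")), {"p": True, "q": False}))  # True
```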

Historical and contemporary LOT theorists universally agree that Mentalese is compositional:

Compositionality of mental representations (COMP): Mental representations have a compositional semantics: complex representations are composed of simple constituents, and the meaning of a complex representation depends upon the meanings of its constituents together with the constituency structure into which those constituents are arranged.

Clearly, mental language and natural language must differ in many important respects. For example, Mentalese surely does not have a phonology. It may not have a morphology either. Nevertheless, COMP articulates a fundamental point of similarity. Just like natural language, Mentalese contains complex symbols amenable to semantic analysis.

What is it for one representation to be a “constituent” of another? According to Fodor (2008: 108), “constituent structure is a species of the part/whole relation”. Not all parts of a linguistic expression are constituents: “John ran” is a constituent of “John ran and Mary jumped”, but “ran and Mary” is not a constituent because it is not semantically interpretable. The important point for our purposes is that all constituents are parts. When a complex representation is tokened, so are its parts. For example,

intending that P & Q requires having a sentence in your intention box… one of whose parts is a token of the very same type that’s in the intention box when you intend that P, and another of whose parts is a token of the very same type that’s in the intention box when you intend that Q. (Fodor 1987: 139)

More generally: mental event e instantiates a complex mental representation only if e instantiates all of the representation’s constituent parts. In that sense, e itself has internal complexity.

The complexity of mental events figures crucially here, as highlighted by Fodor in the following passage (1987: 136):

Practically everybody thinks that the objects of intentional states are in some way complex… [For example], what you believe when you believe that P & Q is… something composite, whose elements are—as it might be—the proposition that P and the proposition that Q. But the (putative) complexity of the intentional object of a mental state does not, of course, entail the complexity of the mental state itself… LOT claims that mental states—and not just their propositional objects—typically have constituent structure.

Many philosophers, including Frege and Russell, regard propositions as structured entities. These philosophers apply a part/whole model to propositions but not necessarily to mental events during which thinkers entertain propositions. LOTH as developed by Fodor applies the part/whole model to the mental events themselves:

what’s at issue here is the complexity of mental events and not merely the complexity of the propositions that are their intentional objects. (Fodor 1987: 142)

On this approach, a key element of LOTH is the thesis that mental events have semantically relevant complexity.

Contemporary proponents of LOTH endorse RTT+COMP. Historical proponents also believed something in the vicinity (Normore 1990, 2009; Panaccio 1999 [2017]), although of course they did not use modern terminology to formulate their views. We may regard RTT+COMP as a minimalist formulation of LOTH, bearing in mind that many philosophers have used the phrase “language of thought hypothesis” to denote one of the stronger theses discussed below. As befits a minimalist formulation, RTT+COMP leaves unresolved numerous questions about the nature, structure, and psychological role of Mentalese expressions.

1.3 Logical Structure

In practice, LOT theorists usually adopt a more specific view of the compositional semantics for Mentalese. They claim that Mentalese expressions have logical form (Fodor 2008: 21). More specifically, they claim that Mentalese contains analogues to the familiar logical connectives (and, or, not, if-then, some, all, the). Iterative application of logical connectives generates complex expressions from simpler expressions. The meaning of a logically complex expression depends upon the meanings of its parts and upon its logical structure. Thus, LOT theorists usually endorse a doctrine along the following lines:

Logically structured mental representations (LOGIC): Some mental representations have logical structure. The compositional semantics for these mental representations resembles the compositional semantics for logically structured natural language expressions.

Medieval LOT theorists used syllogistic and propositional logic to analyze the semantics of Mentalese (King 2005; Normore 1990). Contemporary proponents instead use the predicate calculus, which was discovered by Frege (1879 [1967]) and whose semantics was first systematically articulated by Tarski (1933 [1983]). The view is that Mentalese contains primitive words—including predicates, singular terms, and logical connectives—and that these words combine to form complex sentences governed by something like the semantics of the predicate calculus.

The notion of a Mentalese word corresponds roughly to the intuitive notion of a concept. In fact, Fodor (1998: 70) construes a concept as a Mentalese word together with its denotation. For example, a thinker has the concept of a cat only if she has in her repertoire a Mentalese word that denotes cats.

Logical structure is just one possible paradigm for the structure of mental representations. Human society employs a wide range of non-sentential representations, including pictures, maps, diagrams, and graphs. Non-sentential representations typically contain parts arranged into a compositionally significant structure. In many cases, it is not obvious that the resulting complex representations have logical structure. For example, maps do not seem to contain logical connectives (Fodor 1991: 295; Millikan 1993: 302; Pylyshyn 2003: 424–5). Nor is it evident that they contain predicates (Camp 2018; Rescorla 2009c), although some philosophers contend that they do (Blumson 2012; Casati & Varzi 1999; Kulvicki 2015).

Theorists often posit mental representations that conform to COMP but that lack logical structure. The British empiricists postulated ideas, which they characterized in broadly imagistic terms. They emphasized that simple ideas can combine to form complex ideas. They held that the representational import of a complex idea depends upon the representational import of its parts and the way those parts are combined. So they accepted COMP or something close to it (depending on what exactly “constituency” amounts to).[2] They did not say in much detail how compounding of ideas was supposed to work, but imagistic structure seems to be the paradigm in at least some passages. LOGIC plays no significant role in their writings.[3] Partly inspired by the British empiricists, Prinz (2002) and Barsalou (1999) analyze cognition in terms of image-like representations derived from perception. Armstrong (1973) and Braddon-Mitchell and Jackson (2007) propose that propositional attitudes are relations not to mental sentences but to mental maps analogous in important respects to ordinary concrete maps.

One problem facing imagistic and cartographic theories of thought is that propositional attitudes are often logically complex (e.g., John believes that if Plácido Domingo does not sing then either Gustavo Dudamel will conduct or the concert will be cancelled). Images and maps do not seem to support logical operations: the negation of a map is not a map; the disjunction of two maps is not a map; similarly for other logical operations; and similarly for images. Given that images and maps do not support logical operations, theories that analyze thought in exclusively imagistic or cartographic terms will struggle to explain logically complex propositional attitudes.[4]

There is room here for a pluralist position that allows mental representations of different kinds: some with logical structure, some more analogous to pictures, or maps, or diagrams, and so on. The pluralist position is widespread within cognitive science, which posits a range of formats for mental representation (Block 1983; Camp 2009; Johnson-Laird 2004: 187; Kosslyn 1980; McDermott 2001: 69; Pinker 2005: 7; Sloman 1978: 144–76). Fodor himself (1975: 184–195) suggests a view on which imagistic mental representations co-exist alongside, and interact with, logically structured Mentalese expressions.

Given the prominent role played by logical structure within historical and contemporary discussion of Mentalese, one might take LOGIC to be definitive of LOTH. One might insist that mental representations comprise a mental language only if they have logical structure. We need not evaluate the merits of this terminological choice.

2. Scope of LOTH

RTT concerns propositional attitudes and the mental processes in which they figure, such as deductive inference, reasoning, decision-making, and planning. It does not address perception, motor control, imagination, dreaming, pattern recognition, linguistic processing, or any other mental activity distinct from high-level cognition. Hence the emphasis upon a language of thought: a system of mental representations that underlie thinking, as opposed to perceiving, imagining, etc. Nevertheless, talk about a mental language generalizes naturally from high-level cognition to other mental phenomena.

Perception is a good example. The perceptual system transforms proximal sensory stimulations (e.g., retinal stimulations) into perceptual estimates of environmental conditions (e.g., estimates of shapes, sizes, colors, locations, etc.). Helmholtz (1867 [1925]) proposed that the transition from proximal sensory input to perceptual estimates features an unconscious inference, similar in key respects to high-level conscious inference yet inaccessible to consciousness. Helmholtz’s proposal is foundational to contemporary perceptual psychology, which constructs detailed mathematical models of unconscious perceptual inference (Knill & Richards 1996; Rescorla 2015). Fodor (1975: 44–55) argues that this scientific research program presupposes mental representations. The representations participate in unconscious inferences or inference-like transitions executed by the perceptual system.[5]

Navigation is another good example. Tolman (1948) hypothesized that rats navigate using cognitive maps: mental representations that represent the layout of the spatial environment. The cognitive map hypothesis, advanced during the heyday of behaviorism, initially encountered great scorn. It remained a fringe position well into the 1970s, long after the demise of behaviorism. Eventually, mounting behavioral and neurophysiological evidence won it many converts (Gallistel 1990; Gallistel & Matzel 2013; Jacobs & Menzel 2014; O’Keefe & Nadel 1978; Weiner et al. 2011). Although a few researchers remain skeptical (Mackintosh 2002), there is now a broad consensus that mammals (and possibly even some insects) navigate using mental representations of spatial layout. Rescorla (2017b) summarizes the case for cognitive maps and reviews some of their core properties.

To what extent should we expect perceptual representations and cognitive maps to resemble the mental representations that figure in high-level human thought? It is generally agreed that all these mental representations have compositional structure. For example, the perceptual system can bind together a representation of shape and a representation of size to form a complex representation that an object has a certain shape and size; the representational import of the complex representation depends in a systematic way upon the representational import of the component representations. On the other hand, it is not clear that perceptual representations have anything resembling logical structure, including even predicative structure (Burge 2010: 540–544; Fodor 2008: 169–195). Nor is it evident that cognitive maps contain logical connectives or predicates (Rescorla 2009a, 2009b). Perceptual processing and non-human navigation certainly do not seem to instantiate mental processes that would exploit putative logical structure. In particular, they do not seem to instantiate deductive inference.

These observations provide ammunition for pluralism about representational format. Pluralists can posit one system of compositionally structured mental representations for perception, another for navigation, another for high-level cognition, and so on. Different representational systems potentially feature different compositional mechanisms. As indicated in section 1.3, pluralism figures prominently in contemporary cognitive science. Pluralists face some pressing questions. Which compositional mechanisms figure in which psychological domains? Which representational formats support which mental operations? How do different representational formats interface with each other? Further research bridging philosophy and cognitive science is needed to address such questions.

3. Mental Computation

Modern proponents of LOTH typically endorse the computational theory of mind (CTM), which claims that the mind is a computational system. Some authors use the phrase “language of thought hypothesis” so that it definitionally includes CTM as one component.

In a seminal contribution, Turing (1936) introduced what is now called the Turing machine: an abstract model of an idealized computing device. A Turing machine contains a central processor, governed by precise mechanical rules, that manipulates symbols inscribed along a linear array of memory locations. Impressed by the enormous power of the Turing machine formalism, many researchers seek to construct computational models of core mental processes, including reasoning, decision-making, and problem solving. This enterprise bifurcates into two main branches. The first branch is artificial intelligence (AI), which aims to build “thinking machines”. Here the goal is primarily an engineering one—to build a system that instantiates or at least simulates thought—without any pretense at capturing how the human mind works. The second branch, computational psychology, aims to construct computational models of human mental activity. AI and computational psychology both emerged in the 1960s as crucial elements in the new interdisciplinary initiative cognitive science, which studies the mind by drawing upon psychology, computer science (especially AI), linguistics, philosophy, economics (especially game theory and behavioral economics), anthropology, and neuroscience.
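
For readers unfamiliar with the formalism, the following toy sketch shows the shape of a Turing-style computation: a rule table, consulted by a central processor, dictates how symbols are manipulated along a linear array of memory locations. The particular machine and rule table are invented for illustration:

```python
# Minimal illustrative Turing machine: a rule table maps (state, scanned symbol)
# to (new state, symbol to write, head movement). This toy machine appends a "1"
# to a block of 1s and halts.

def run(tape, rules, state="start", head=0, max_steps=100):
    tape = dict(enumerate(tape))                    # linear array of memory locations
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")                # "_" marks a blank cell
        state, write, move = rules[(state, symbol)]
        tape[head] = write
        head += {"R": 1, "L": -1}[move]
    return "".join(tape[i] for i in sorted(tape))

rules = {
    ("start", "1"): ("start", "1", "R"),            # scan right past the 1s
    ("start", "_"): ("halt", "1", "R"),             # write a final 1, then halt
}
print(run("111", rules))                            # "1111"
```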

From the 1960s to the early 1980s, computational models offered within psychology were mainly Turing-style models. These models embody a viewpoint known as the classical computational theory of mind (CCTM). According to CCTM, the mind is a computational system similar in important respects to a Turing machine, and certain core mental processes are computations similar in important respects to computations executed by a Turing machine.

CCTM fits together nicely with RTT+COMP. Turing-style computation operates over symbols, so any Turing-style mental computations must operate over mental symbols. The essence of RTT+COMP is postulation of mental symbols. Fodor (1975, 1981) advocates RTT+COMP+CCTM. He holds that certain core mental processes are Turing-style computations over Mentalese expressions.

One can endorse RTT+COMP without endorsing CCTM. By positing a system of compositionally structured mental representations, one does not commit oneself to saying that operations over the representations are computational. Historical LOT theorists could not even formulate CCTM, for the simple reason that the Turing formalism had not been discovered. In the modern era, Harman (1973) and Sellars (1975) endorse something like RTT+COMP but not CCTM. Horgan and Tienson (1996) endorse RTT+COMP+CTM but not CCTM, i.e., classical CTM. They favor a version of CTM grounded in connectionism, an alternative computational framework that differs quite significantly from Turing’s approach. Thus, proponents of RTT+COMP need not accept that mental activity instantiates Turing-style computation.

Fodor (1981) combines RTT+COMP+CCTM with a view that one might call the formal-syntactic conception of computation (FSC). According to FSC, computation manipulates symbols in virtue of their formal syntactic properties but not their semantic properties.

FSC draws inspiration from modern logic, which emphasizes the formalization of deductive reasoning. To formalize, we specify a formal language whose component linguistic expressions are individuated non-semantically (e.g., by their geometric shapes). We describe the expressions as pieces of formal syntax, without considering what if anything the expressions mean. We then specify inference rules in syntactic, non-semantic terms. Well-chosen inference rules will carry true premises to true conclusions. By combining formalization with Turing-style computation, we can build a physical machine that manipulates symbols based solely on the formal syntax of the symbols. If we program the machine to implement appropriate inference rules, then its syntactic manipulations will transform true premises into true conclusions.
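
The following sketch, again purely illustrative, conveys the spirit of a formalized inference rule: conjunction elimination is defined entirely in terms of an expression's syntactic shape, and nothing in the procedure consults what the symbols mean:

```python
# Illustrative sketch of a formal-syntactic inference rule. Sentences are syntax
# trees; conjunction elimination inspects only the tree's shape, never its meaning.

def conjunction_elimination(sentence):
    """From a conjunction ("and", A, B), detach the first conjunct A."""
    if isinstance(sentence, tuple) and sentence[0] == "and":
        return sentence[1]
    raise ValueError("rule applies only to conjunctions")

# The same purely syntactic operation applies to any conjunction, whatever its
# conjuncts happen to mean.
print(conjunction_elimination(("and", "whales are mammals", "Moby Dick is a whale")))
print(conjunction_elimination(("and", "it is raining", "the concert is cancelled")))
```

Because the rule is truth-preserving, a machine that applies it transforms true premises into true conclusions while manipulating the symbols purely syntactically.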

CCTM+FSC says that the mind is a formal syntactic computing system: mental activity consists in computation over symbols with formal syntactic properties; computational transitions are sensitive to the symbols’ formal syntactic properties but not their semantic properties. The key term “sensitive” is rather imprecise, allowing some latitude as to the precise import of CCTM+FSC. Intuitively, the picture is that a mental symbol’s formal syntax rather than its semantics determines how mental computation manipulates it. The mind is a “syntactic engine”.

Fodor (1987: 18–20) argues that CCTM+FSC helps illuminate a crucial feature of cognition: semantic coherence. For the most part, our thinking does not move randomly from thought to thought. Rather, thoughts are causally connected in a way that respects their semantics. For example, deductive inference carries true beliefs to true beliefs. More generally, thinking tends to respect epistemic properties such as warrant and degree of confirmation. In some sense, then, our thinking tends to cohere with semantic relations among thoughts. How is semantic coherence achieved? How does our thinking manage to track semantic properties? CCTM+FSC gives one possible answer. It shows how a physical system operating in accord with physical laws can execute computations that coherently track semantic properties. By treating the mind as a syntax-driven machine, we explain how mental activity achieves semantic coherence. We thereby answer the question: How is rationality mechanically possible?

Fodor’s argument convinced many researchers that CCTM+FSC decisively advances our understanding of the mind’s relation to the physical world. But not everyone agrees that CCTM+FSC adequately integrates semantics into the causal order. A common worry is that the formal syntactic picture veers dangerously close to epiphenomenalism (Block 1990; Kazez 1994). Pre-theoretically, semantic properties of mental states seem highly relevant to mental and behavioral outcomes. For example, if I form an intention to walk to the grocery store, then the fact that my intention concerns the grocery store rather than the post office helps explain why I walk to the grocery store rather than the post office. Burge (2010) and Peacocke (1994) argue that cognitive science theorizing likewise assigns causal and explanatory importance to semantic properties. The worry is that CCTM+FSC cannot accommodate the causal and explanatory importance of semantic properties because it depicts them as causally irrelevant: formal syntax, not semantics, drives mental computation forward. Semantics looks epiphenomenal, with syntax doing all the work (Stich 1983).

Fodor (1990, 1994) expends considerable energy trying to allay epiphenomenalist worries. Advancing a detailed theory of the relation between Mentalese syntax and Mentalese semantics, he insists that FSC can honor the causal and explanatory relevance of semantic properties. Fodor’s treatment is widely regarded as problematic (Arjo 1996; Aydede 1997b, 1998; Aydede & Robbins 2001; Perry 1998; Prinz 2011; Wakefield 2002), although Rupert (2008) and Schneider (2005) espouse somewhat similar positions.

Partly in response to epiphenomenalist worries, some authors recommend that we replace FSC with an alternative semantic conception of computation (Block 1990; Burge 2010: 95–101; Figdor 2009; O’Brien & Opie 2006; Peacocke 1994, 1999; Rescorla 2012a). Semantic computationalists claim that computational transitions are sometimes sensitive to semantic properties, perhaps in addition to syntactic properties. More specifically, semantic computationalists insist that mental computation is sometimes sensitive to semantics. Thus, they reject any suggestion that the mind is a “syntactic engine” or that mental computation is sensitive only to formal syntax.[6] To illustrate, consider Mentalese conjunction. This mental symbol expresses the truth-table for conjunction. According to semantic computationalists, the symbol’s meaning is relevant (both causally and explanatorily) to mechanical operations over it. That the symbol expresses the truth-table for conjunction rather than, say, disjunction influences the course of computation. We should therefore reject any suggestion that mental computation is sensitive to the symbol’s syntactic properties rather than its semantic properties. The claim is not that mental computation explicitly represents semantic properties of mental symbols. All parties agree that, in general, it does not. There is no homunculus inside your head interpreting your mental language. The claim is rather that semantic properties influence how mental computation proceeds. (Compare: the momentum of a baseball thrown at a window causally influences whether the window breaks, even though the window does not explicitly represent the baseball’s momentum.)

Proponents of the semantic conception differ as to how exactly they gloss the core claim that some computations are “sensitive” to semantic properties. They also differ in their stance towards CCTM. Block (1990) and Rescorla (2014a) focus upon CCTM. They argue that a symbol’s semantic properties can impact mechanical operations executed by a Turing-style computational system. In contrast, O’Brien and Opie (2006) favor connectionism over CCTM.

Theorists who reject FSC must reject Fodor’s explanation of semantic coherence. What alternative explanation might they offer? So far, the question has received relatively little attention. Rescorla (2017a) argues that semantic computationalists can explain semantic coherence and simultaneously avoid epiphenomenalist worries by invoking neural implementation of semantically-sensitive mental computations.

Fodor’s exposition sometimes suggests that CTM, CCTM, or CCTM+FSC is definitive of LOTH (1981: 26). Yet not everyone who endorses RTT+COMP endorses CTM, CCTM, or FSC. One can postulate a mental language without agreeing that mental activity is computational, and one can postulate mental computations over a mental language without agreeing that the computations are sensitive only to syntactic properties. For most purposes, it is not important whether we regard CTM, CCTM, or CCTM+FSC as definitive of LOTH. More important is that we track the distinctions among the doctrines.

4. Arguments for LOTH

The literature offers many arguments for LOTH. This section introduces four influential arguments, each of which supports LOTH abductively by citing its explanatory benefits. Section 5 discusses some prominent objections to the four arguments.

4.1 Argument from Cognitive Science Practice

Fodor (1975) defends RTT+COMP+CCTM by appealing to scientific practice: our best cognitive science postulates Turing-style mental computations over Mentalese expressions; therefore, we should accept that mental computation operates over Mentalese expressions. Fodor develops his argument by examining detailed case studies, including perception, decision-making, and linguistic comprehension. He argues that, in each case, computation over mental representations plays a central explanatory role. Fodor’s argument was widely heralded as a compelling analysis of then-current cognitive science.

When evaluating cognitive science support for LOTH, it is crucial to specify what version of LOTH one has in mind. Specifically, establishing that certain mental processes operate over mental representations is not enough to establish RTT. For example, one might accept that mental representations figure in perception and animal navigation but not in high-level human cognition. Gallistel and King (2009) defend COMP+CCTM+FSC through a number of (mainly non-human) empirical case studies, but they do not endorse RTT. They focus on relatively low-level phenomena, such as animal navigation, without discussing human decision-making, deductive inference, problem solving, or other high-level cognitive phenomena.

4.2 Argument from the Productivity of Thought

During your lifetime, you will only entertain a finite number of thoughts. In principle, though, there are infinitely many thoughts you might entertain. Consider:

Mary gave the test tube to John’s daughter.

Mary gave the test tube to John’s daughter’s daughter.

Mary gave the test tube to John’s daughter’s daughter’s daughter.

…

The moral usually drawn is that you have the competence to entertain a potential infinity of thoughts, even though your performance is bounded by biological limits upon memory, attention, processing capacity, and so on. In a slogan: thought is productive.

RTT+COMP straightforwardly explains productivity. We postulate a finite base of primitive Mentalese symbols, along with operations for combining simple expressions into complex expressions. Iterative application of the compounding operations generates an infinite array of mental sentences, each in principle within your cognitive repertoire. By tokening a mental sentence, you entertain the thought expressed by it. This explanation leverages the recursive nature of compositional mechanisms to generate infinitely many expressions from a finite base. It thereby illuminates how finite creatures such as ourselves are able to entertain a potential infinity of thoughts.
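
A toy sketch of the point, with an invented compounding operation standing in for the Mentalese mechanisms: a finite lexical base plus an iteratively applicable operation already generates an unbounded array of distinct sentences:

```python
# Illustrative sketch: a finite lexicon plus a recursive compounding operation
# generates sentences without bound, e.g. "Mary gave the test tube to John's
# daughter's daughter's ... daughter".

def possessive_chain(depth):
    """Build the phrase John's daughter('s daughter)^(depth-1)."""
    phrase = "John's daughter"
    for _ in range(depth - 1):
        phrase = phrase + "'s daughter"             # one more application of the operation
    return phrase

for n in range(1, 4):
    print(f"Mary gave the test tube to {possessive_chain(n)}.")
# In principle the construction can be iterated indefinitely, even though any
# actual thinker only ever tokens finitely many of the resulting sentences.
```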

Fodor and Pylyshyn (1988) argue that, since RTT+COMP provides a satisfying explanation for productivity, we have good reason to accept RTT+COMP. A potential worry about this argument is that it rests upon an infinitary competence never manifested within actual performance. One might dismiss the supposed infinitary competence as an idealization that, while perhaps convenient for certain purposes, does not stand in need of explanation.

4.3 Argument from the Systematicity of Thought

There are systematic interrelations among the thoughts a thinker can entertain. For example, if you can entertain the thought that John loves Mary, then you can also entertain the thought that Mary loves John. Systematicity looks like a crucial property of human thought and so demands a principled explanation.

RTT+COMP gives a compelling explanation. According to RTT+COMP, your ability to entertain the thought that p hinges upon your ability to bear appropriate psychological relations to a Mentalese sentence S whose meaning is that p. If you are able to think that John loves Mary, then your internal system of mental representations includes a mental sentence John loves Mary, composed of mental words John, loves, and Mary combined in the right way. If you have the capacity to stand in psychological relation A* to John loves Mary, then you also have the capacity to stand in relation A* to a distinct mental sentence Mary loves John. The constituent words John, loves, and Mary make the same semantic contribution to both mental sentences (John denotes John, loves denotes the loving relation, and Mary denotes Mary), but the words are arranged in different constituency structures so that the sentences have different meanings. Whereas John loves Mary means that John loves Mary, Mary loves John means that Mary loves John. By standing in relation A* to the sentence Mary loves John, you entertain the thought that Mary loves John. Thus, an ability to think that John loves Mary entails an ability to think that Mary loves John. By comparison, an ability to think that John loves Mary does not entail an ability to think that whales are mammals or an ability to think that 56 + 138 = 194.
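
Schematically, and with an encoding invented for the example, the explanation turns on the fact that the same constituent words can be arranged into different constituency structures with different meanings:

```python
# Illustrative sketch: the mental words JOHN, LOVES, MARY can be arranged into
# two constituency structures with different meanings.

LEXICON = {"JOHN": "John", "MARY": "Mary", "LOVES": "loves"}

def meaning(sentence):
    """Meaning of a predicate-argument structure (predicate, subject, object)."""
    predicate, subject, obj = sentence
    return f"{LEXICON[subject]} {LEXICON[predicate]} {LEXICON[obj]}"

john_loves_mary = ("LOVES", "JOHN", "MARY")
mary_loves_john = ("LOVES", "MARY", "JOHN")          # same constituents, different arrangement

print(meaning(john_loves_mary))   # John loves Mary
print(meaning(mary_loves_john))   # Mary loves John
```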

Fodor (1987: 148–153) supports RTT+COMP by citing its ability to explain systematicity. In contrast with the productivity argument, the systematicity argument does not depend upon infinitary idealizations that outstrip finite performance. Note that neither argument provides any direct support for CTM. Neither argument even mentions computation.

4.4 Argument from the Systematicity of Thinking

There are systematic interrelations among which inferences a thinker can draw. For example, if you can infer p from p and q, then you can also infer m from m and n. The systematicity of thinking requires explanation. Why is it that thinkers who can infer p from p and q can also infer m from m and n?

RTT+COMP+CCTM gives a compelling explanation. During an inference from p and q to p, you transition from believing* mental sentence S1 & S2 (which means that p and q) to believing* mental sentence S1 (which means that p). According to CCTM, the transition involves symbol manipulation. A mechanical operation detaches the conjunct S1 from the conjunction S1 & S2. The same mechanical operation is applicable to a conjunction S3 & S4 (which means that m and n), corresponding to the inference from m and n to m. An ability to execute the first inference entails an ability to execute the second, because drawing the inference in either case corresponds to executing a single uniform mechanical operation. More generally, logical inference deploys mechanical operations over structured symbols, and the mechanical operation corresponding to a given inference pattern (e.g., conjunction introduction, disjunction elimination, etc.) is applicable to any premises with the right logical structure. The uniform applicability of a single mechanical operation across diverse symbols explains inferential systematicity. Fodor and Pylyshyn (1988) conclude that inferential systematicity provides reason to accept RTT+COMP+CCTM.

Fodor and Pylyshyn (1988) endorse an additional thesis about the mechanical operations corresponding to logical transitions. In keeping with FSC, they claim that the operations are sensitive to formal syntactic properties but not semantic properties. For example, conjunction elimination responds to Mentalese conjunction as a piece of pure formal syntax, much as a computer manipulates items in a formal language without considering what those items mean.

Semantic computationalists reject FSC. They claim that mental computation is sometimes sensitive to semantic properties. Semantic computationalists can agree that drawing an inference involves executing a mechanical operation over structured symbols, and they can agree that the same mechanical operation uniformly applies to any premises with appropriate logical structure. So they can still explain inferential systematicity. However, they can also say that the postulated mechanical operation is sensitive to semantic properties. For example, they can say that conjunction elimination is sensitive to the meaning of Mentalese conjunction.

In assessing the debate between FSC and semantic computationalism, one must distinguish between logical and non-logical symbols. For present purposes, it is common ground that the meanings of non-logical symbols do not inform logical inference. The inference from S1 & S2 to S1 features the same mechanical operation as the inference from S3 & S4 to S3, and this mechanical operation is not sensitive to the meanings of the conjuncts S1, S2, S3, or S4. It does not follow that the mechanical operation is insensitive to the meaning of Mentalese conjunction. The meaning of conjunction might influence how the logical inference proceeds, even though the meanings of the conjuncts do not.

5. The Connectionist Challenge

In the 1960s and 1970s, cognitive scientists almost universally modeled mental activity as rule-governed symbol manipulation. In the 1980s, connectionism gained currency as an alternative computational framework. Connectionists employ computational models, called neural networks, that differ quite significantly from Turing-style models. There is no central processor. There are no memory locations for symbols to be inscribed. Instead, there is a network of nodes bearing weighted connections to one another. During computation, waves of activation spread through the network. A node’s activation level depends upon the weighted activations of the nodes to which it is connected. Nodes function somewhat analogously to neurons, and connections between nodes function somewhat analogously to synapses. One should receive the neurophysiological analogy cautiously, as there are numerous important differences between neural networks and actual neural configurations in the brain (Bechtel & Abrahamsen 2002: 341–343; Bermúdez 2010: 237–239; Clark 2014: 87–89; Harnish 2002: 359–362).
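
A minimal sketch of the contrast, with weights and activations invented for the example: there is no processor reading and writing symbols, only activation spreading through weighted connections:

```python
# Illustrative sketch of connectionist computation: no central processor or symbol
# memory, just activation spreading through weighted connections between nodes.
import math

def activate(inputs, weights, bias):
    """A node's activation is a squashed weighted sum of its inputs."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))               # logistic squashing function

input_activations = [0.9, 0.1, 0.4]
hidden_node = activate(input_activations, weights=[0.5, -1.2, 0.8], bias=0.1)
output_node = activate([hidden_node], weights=[1.5], bias=-0.7)
print(round(output_node, 3))
```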

Connectionists raise many objections to the classical computational paradigm (Rumelhart, McClelland, & the PDP Research Group 1986; Horgan & Tienson 1996; McLaughlin & Warfield 1994; Bechtel & Abrahamsen 2002), such as that classical systems are not biologically realistic or that they are unable to model certain psychological tasks. Classicists in turn launch various arguments against connectionism. The most famous arguments showcase productivity, systematicity of thought, and systematicity of thinking. Fodor and Pylyshyn (1988) argue that these phenomena support classical CTM over connectionist CTM.

Fodor and Pylyshyn’s argument hinges on the distinction between eliminative connectionism and implementationist connectionism (cf. Pinker & Prince 1988). Eliminative connectionists advance neural networks as a replacement for the Turing-style formalism. They deny that mental computation consists in rule-governed symbol manipulation. Implementationist connectionists allow that, in some cases, mental computation may instantiate rule-governed symbol manipulation. They advance neural networks not to replace classical computations but rather to model how classical computations are implemented in the brain. The hope is that, because neural network computation more closely resembles actual brain activity, it can illuminate the physical realization of rule-governed symbol manipulation.

Building on Aydede’s (2015) discussion, we may reconstruct Fodor and Pylyshyn’s argument like so:

  (i) Representational mental states and processes exist. An explanatorily adequate account of cognition should acknowledge these states and processes.
  (ii) The representational states and processes that figure in high-level cognition have certain fundamental properties: thought is productive and systematic; inferential thinking is systematic. The states and processes have these properties as a matter of nomic necessity: it is a psychological law that they have the properties.
  (iii) A theory of mental computation is explanatorily adequate only if it explains the nomic necessity of systematicity and productivity.
  (iv) The only way to explain the nomic necessity of systematicity and productivity is to postulate that high-level cognition instantiates computation over mental symbols with a compositional semantics. Specifically, we must accept RTT+COMP.
  (v) Either a connectionist theory endorses RTT+COMP or it does not.
  (vi) If it does, then it is a version of implementationist connectionism.
  (vii) If it does not, then it is a version of eliminative connectionism. As per (iv), it does not explain productivity and systematicity. As per (iii), it is not explanatorily adequate.
  (viii) Conclusion: Eliminative connectionist theories are not explanatorily adequate.

The argument does not say that neural networks are unable to model systematicity. One can certainly build a neural network that is systematic. For example, one might build a neural network that can represent that John loves Mary only if it can represent that Mary loves John. The problem is that one might just as well build a neural network that can represent that John loves Mary but cannot represent that Mary loves John. Hence, nothing about the connectionist framework per se guarantees systematicity. For that reason, the framework does not explain the nomic necessity of systematicity. It does not explain why all the minds we find are systematic. In contrast, the classical framework mandates systematicity, and so it explains the nomic necessity of systematicity. The only apparent recourse for connectionists is to adopt the classical explanation, thereby becoming implementationist rather than eliminative connectionists.

Fodor and Pylyshyn’s argument has spawned a massive literature, including too many rebuttals to survey here. The most popular responses fall into five categories:

  • Deny (i). Some connectionists deny that cognitive science should posit representational mental states. They believe that mature scientific theorizing about the mind will delineate connectionist models specified in non-representational terms (P.S. Churchland 1986; P.S. Churchland & Sejnowski 1989; P.M. Churchland 1990; P.M. Churchland & P.S. Churchland 1990; Ramsey 2007). If so, then Fodor and Pylyshyn’s argument falters at its first step. There is no need to explain why representational mental states are systematic and productive if one rejects all talk about representational mental states.
  • Accept (viii). Some authors, such as Marcus (2001), feel that neural networks are best deployed to illuminate the implementation of Turing-style models, rather than as replacements for Turing-style models.
  • Deny (ii). Some authors claim that Fodor and Pylyshyn greatly exaggerate the extent to which thought is productive (Rumelhart & McClelland 1986) or systematic (Dennett 1991; Johnson 2004). Horgan and Tienson (1996: 91–94) question the systematicity of thinking. They contend that we deviate from norms of deductive inference more than one would expect if we were following the rigid mechanical rules postulated by CCTM.
  • Deny (iv). Braddon-Mitchell and Fitzpatrick (1990) offer an evolutionary explanation for the systematicity of thought, bypassing any appeal to structured mental representations. In a similar vein, Horgan and Tienson (1996: 90) seek to explain systematicity by emphasizing how our survival depends upon our ability to keep track of objects in the environment and their ever-changing properties. Clark (1991) argues that systematicity follows from the holistic nature of thought ascription.
  • Deny (vi). Chalmers (1990, 1993), Smolensky (1991), and van Gelder (1991) claim that one can reject Turing-style models while still postulating mental representations with compositionally and computationally relevant internal structure.

We focus here on the final response: denying premise (6).

As discussed in section 1.2, Fodor elucidates constituency structure in terms of part/whole relations. A complex representation’s constituents are literal parts of it. One consequence is that, whenever the first representation is tokened, so are its constituents. Fodor takes this consequence to be definitive of classical computation. As Fodor and McLaughlin (1990: 186) put it:

for a pair of expression types E1, E2, the first is a Classical constituent of the second only if the first is tokened whenever the second is tokened.

Thus, structured representations have a concatenative structure: each token of a structured representation involves a concatenation of tokens of the constituent representations. Connectionists who deny (6) espouse a non-concatenative conception of constituency structure, according to which structure is encoded by a suitable distributed representation. Developments of the non-concatenative conception are usually quite technical (Elman 1989; Hinton 1990; Pollack 1990; Smolensky 1990, 1991, 1995; Touretzky 1990). Most models use vector or tensor algebra to define operations over connectionist representations, which are encoded as activity vectors across nodes in a neural network. The representations are said to have implicit constituency structure: the constituents are not literal parts of the complex representation, but they can be extracted from the complex representation through suitable computational operations over it.
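For readers who want a concrete picture, the following minimal sketch (in Python with NumPy) illustrates the general idea behind tensor product binding, loosely in the spirit of Smolensky (1990). The vectors, dimensions, and names are illustrative assumptions rather than any author's actual model: fillers are bound to roles by outer products and superposed into a single distributed representation, from which constituents can be recovered only by a computational operation (unbinding), not by being literal parts of the representation.

```python
import numpy as np

# Illustrative sketch of non-concatenative constituency structure.
# Fillers (john, mary, loves) are random unit vectors; roles (agent,
# patient, relation) are orthonormal vectors. A structured representation
# is the sum of filler-role outer products.

rng = np.random.default_rng(0)

def unit(v):
    return v / np.linalg.norm(v)

john, mary, loves = (unit(rng.normal(size=16)) for _ in range(3))
agent, patient, relation = np.eye(3)   # orthonormal role vectors

# "John loves Mary" encoded as one distributed representation (a matrix):
# the constituents are not literal parts of it.
john_loves_mary = (np.outer(john, agent)
                   + np.outer(mary, patient)
                   + np.outer(loves, relation))

# Unbinding: project the composite onto a role vector to recover its filler.
recovered_agent = john_loves_mary @ agent

print(np.allclose(recovered_agent, john))                        # True
print(round(float(recovered_agent @ john), 2),
      round(float(recovered_agent @ mary), 2))                   # high vs. low overlap
```

Because the roles here are orthonormal, unbinding recovers the agent filler exactly; in more realistic models recovery is approximate, which is part of what Fodor and McLaughlin dispute below.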

Fodor and McLaughlin (1990) grant that distributed representations may have constituency structure “in an extended sense”. But they insist that distributed representations are ill-suited to explain systematicity. They focus especially on the systematicity of thinking, the classical explanation for which postulates mechanical operations that respond to constituency structure. Fodor and McLaughlin argue that the non-concatenative conception cannot replicate the classical explanation and offers no satisfactory substitute for it. Chalmers (1993) and Niklasson and van Gelder (1994) disagree. They contend that a neural network can execute structure-sensitive computations over representations that have non-concatenative constituency structure. They conclude that connectionists can explain productivity and systematicity without retreating to implementationist connectionism.

Aydede (1995, 1997a) agrees that there is a legitimate notion of non-concatenative constituency structure, but he questions whether the resulting models are non-classical. He denies that we should regard concatenative structure as integral to LOTH. According to Aydede, concatenative structure is just one possible physical realization of constituency structure. Non-concatenative structure is another possible realization. We can accept RTT+COMP without glossing constituency structure in concatenative terms. On this view, a neural network whose operations are sensitive to non-concatenative constituency structure may still count as broadly classical and in particular as manipulating Mentalese expressions.

The debate between classical and connectionist CTM is still active, although not as active as during the 1990s. Recent anti-connectionist arguments tend to have a more empirical flavor. For example, Gallistel and King (2009) defend CCTM by canvassing a range of non-human empirical case studies. According to Gallistel and King, the case studies manifest a kind of productivity that CCTM can easily explain but eliminative connectionism cannot.

6. Regress Objections to LOTH

LOTH has elicited too many objections to cover in a single encyclopedia entry. We will discuss two objections, both alleging that LOTH generates a vicious regress. The first objection emphasizes language learning. The second emphasizes language understanding.

6.1 Learning a Language

Like many cognitive scientists, Fodor holds that children learn a natural language via hypothesis formation and testing. Children formulate, test, and confirm hypotheses about the denotations of words. For example, a child learning English will confirm the hypothesis that “cat” denotes cats. According to Fodor, denotations are represented in Mentalese. To formulate the hypothesis that “cat” denotes cats, the child uses a Mentalese word cat that denotes cats. It may seem that a regress is now in the offing, sparked by the question: How does the child learn Mentalese? Suppose we extend the hypothesis formation and testing model (henceforth HF) to Mentalese. Then we must posit a meta-language to express hypotheses about denotations of Mentalese words, a meta-meta-language to express hypotheses about denotations of meta-language words, and so on ad infinitum (Atherton and Schwartz 1974: 163).
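To make the HF model concrete, here is a minimal sketch in Python. The candidate hypotheses, observations, and names are illustrative assumptions, not drawn from Fodor's own formulation or from any particular study: the learner entertains rival hypotheses about what "cat" denotes and retains only those consistent with observed applications of the word.

```python
# Hypothetical sketch of hypothesis formation and testing (HF) for word learning.

observations = [
    ("cat", "Felix", True),    # "cat" was applied to Felix
    ("cat", "Fido", False),    # "cat" was withheld from Fido
]

# Candidate hypotheses about the denotation of "cat", each expressed (as LOTH
# would have it) via a mentally represented category.
hypotheses = {
    "denotes cats":    lambda x: x in {"Felix", "Tom"},
    "denotes dogs":    lambda x: x in {"Fido", "Rex"},
    "denotes animals": lambda x: x in {"Felix", "Tom", "Fido", "Rex"},
}

def consistent(hypothesis, data):
    """A hypothesis survives iff it matches every observed application of the word."""
    return all(hypothesis(thing) == applied for _, thing, applied in data)

surviving = [name for name, h in hypotheses.items() if consistent(h, observations)]
print(surviving)   # ['denotes cats']
```

The regress worry arises because each candidate hypothesis is itself couched in some representational medium; the sketch simply makes that presupposition visible.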

Fodor responds to the threatened regress by denying that we should apply HF to Mentalese (1975: 65). Children do not test hypotheses about the denotations of Mentalese words. They do not learn Mentalese at all. The mental language is innate.

The doctrine that some concepts are innate was a focal point in the clash between rationalism and empiricism. Rationalists defended the innateness of certain fundamental ideas, such as god and cause, while empiricists held that all ideas derive from sensory experience. A major theme in the 1960s cognitive science revolution was the revival of a nativist picture, inspired by the rationalists, on which many key elements of cognition are innate. Most famously, Chomsky (1965) explained language acquisition by positing innate knowledge about possible human languages. Fodor’s innateness thesis was widely perceived as going far beyond all precedent, verging on the preposterous (P.S. Churchland 1986; Putnam 1988). How could we have an innate ability to represent all the denotations we mentally represent? For example, how could we innately possess a Mentalese word carburetor that represents carburetors?

In evaluating these issues, it is vital to distinguish between learning a concept and acquiring a concept. When Fodor says that a concept is innate, he does not mean to deny that we acquire the concept or even that certain kinds of experience are needed to acquire it. Fodor fully grants that we cannot mentally represent carburetors at birth and that we come to represent them only by undergoing appropriate experiences. He agrees that most concepts are acquired. He denies that they are learned. In effect, he uses “innate” as a synonym for “unlearned” (1975: 96). One might reasonably challenge Fodor’s usage. One might resist classifying a concept as innate simply because it is unlearned. However, that is how Fodor uses the word “innate”. Properly understood, then, Fodor’s position is not as far-fetched as it may sound.[7]

Fodor gives a simple but striking argument that concepts are unlearned. The argument begins from the premise that HF is the only potentially viable model of concept learning. Fodor then argues that HF is not a viable model of concept learning, from which he concludes that concepts are unlearned. He offers various formulations and refinements of the argument over his career. Here is a relatively recent rendition (2008: 139):

Now, according to HF, the process by which one learns C must include the inductive evaluation of some such hypothesis as “The C things are the ones that are green or triangular”. But the inductive evaluation of that hypothesis itself requires (inter alia) bringing the property green or triangular before the mind as such… Quite generally, you can’t represent anything as such and such unless you already have the concept such and such. All that being so, it follows, on pain of circularity, that “concept learning” as HF understands it can’t be a way of acquiring concept C… Conclusion: If concept learning is as HF understands it, there can be no such thing. This conclusion is entirely general; it doesn’t matter whether the target concept is primitive (like green) or complex (like green or triangular).

Fodor’s argument does not presuppose RTT, COMP, or CTM. To the extent that the argument works, it applies to any view on which people have concepts.

If concepts are not learned, then how are they acquired? Fodor offers some preliminary remarks (2008: 144–168), but by his own admission the remarks are sketchy and leave numerous questions unanswered (2008: 144–145). Prinz (2011) critiques Fodor’s positive treatment of concept acquisition.

The most common rejoinder to Fodor’s innateness argument is to deny that HF is the only viable model of concept learning. The rejoinder acknowledges that concepts are not learned through hypothesis testing but insists they are learned through other means. Three examples:

  • Margolis (1998) proposes an acquisition model that differs from HF but that allegedly yields concept learning. Fodor (2008: 140–144) retorts that Margolis’s model does not yield genuine concept learning. Margolis and Laurence (2011) insist that it does.
  • Carey (2009) maintains that children can “bootstrap” their way to new concepts using induction, analogical reasoning, and other techniques. She develops her view in great detail, supporting it partly through her groundbreaking experimental work with young children. Fodor (2010) and Rey (2014) object that Carey’s bootstrapping theory is circular: it surreptitiously presupposes that children already possess the very concepts whose acquisition it purports to explain. Beck (2017) and Carey (2014) respond to the circularity objection.
  • Shea (2016) argues that connectionist modeling can explain concept acquisition in non-HF terms and that the resulting models instantiate genuine learning.

A lot depends here upon what counts as “learning” and what does not, a question that seems difficult to adjudicate. A closely connected question is whether concept acquisition is a rational process or a mere causal process. To the extent that acquiring some concept is a rational achievement, we will want to say that one learned the concept. To the extent that acquiring the concept is a mere causal process (more like catching a cold than confirming a hypothesis), we will feel less inclined to say that genuine learning took place (Fodor 1981: 275).

These issues lie at the frontier of psychological and philosophical research. The key point for present purposes is that there are two options for halting the regress of language learning: we can say that thinkers acquire concepts but do not learn them; or we can say that thinkers learn concepts through some means other than hypothesis testing. Of course, it is not enough just to note that the two options exist. Ultimately, one must develop one’s favored option into a compelling theory. But there is no reason to think that doing so would reinitiate the regress. In any event, explaining concept acquisition is an important task facing any theorist who accepts that we have concepts, whether or not the theorist accepts LOTH. Thus, the learning regress objection is best regarded not as posing a challenge specific to LOTH but rather as highlighting a more widely shared theoretical obligation: the obligation to explain how we acquire concepts.

For further discussion, see the entry on innateness. See also the exchange between Cowie (1999) and Fodor (2001).

6.2 Understanding a Language

What is it to understand a natural language word? On a popular picture, understanding a word requires that you mentally represent the word’s denotation. For example, understanding the word “cat” requires representing that it denotes cats. LOT theorists will say that you use Mentalese words to represent denotations. The question now arises what it is to understand a Mentalese word. If understanding the Mentalese word requires representing that it has a certain denotation, then we face an infinite regress of meta-languages (Blackburn 1984: 43–44).

The standard response is to deny that ordinary thinkers represent Mentalese words as having denotations (Bach 1987; Fodor 1975: 66–79). Mentalese is not an instrument of communication. Thinking is not “talking to oneself” in Mentalese. A typical thinker does not represent, perceive, interpret, or reflect upon Mentalese expressions. Mentalese serves as a medium within which her thought occurs, not an object of interpretation. We should not say that she “understands” Mentalese in the same way that she understands a natural language.

There is perhaps another sense in which the thinker “understands” Mentalese: her mental activity coheres with the meanings of Mentalese words. For example, her deductive reasoning coheres with the truth-tables expressed by Mentalese logical connectives. More generally, her mental activity is semantically coherent. To say that the thinker “understands” Mentalese in this sense is not to say that she represents Mentalese denotations. Nor is there any evident reason to suspect that explaining semantic coherence will ultimately require us to posit mental representation of Mentalese denotations. So there is no regress of understanding.
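The following minimal sketch (in Python; the names and rules are illustrative assumptions, not anyone's actual proposal) conveys the intended sense of coherence: a purely syntactic inference rule whose operation happens to be truth-preserving under the standard truth-table for conjunction, even though the rule never represents or consults that truth-table.

```python
# Illustrative sketch: a syntactic operation that coheres with the meaning
# of a logical connective without representing that meaning.

def and_elimination(sentence):
    """Syntactic rule: from ("AND", p, q), derive p. No appeal to meanings."""
    connective, left, _right = sentence
    assert connective == "AND"
    return left

def evaluate(sentence, valuation):
    """Semantic interpretation: the standard truth-table for conjunction."""
    if isinstance(sentence, str):
        return valuation[sentence]
    connective, left, right = sentence
    assert connective == "AND"
    return evaluate(left, valuation) and evaluate(right, valuation)

premise = ("AND", "p", "q")
conclusion = and_elimination(premise)

# Under every assignment of truth-values, the syntactic rule is truth-preserving.
for p in (True, False):
    for q in (True, False):
        v = {"p": p, "q": q}
        assert (not evaluate(premise, v)) or evaluate(conclusion, v)
print("conjunction elimination never leads from truth to falsehood")
```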

For further criticism of this regress argument, see the discussions of Knowles (1998) and Laurence and Margolis (1997).[8]

7. Naturalizing the Mind

Naturalism is a movement that seeks to ground philosophical theorizing in the scientific enterprise. As so often in philosophy, different authors use the term “naturalism” in different ways. Usage within philosophy of mind typically connotes an effort to depict mental states and processes as denizens of the physical world, with no irreducibly mental entities or properties allowed. In the modern era, philosophers have often recruited LOTH to advance naturalism. Indeed, LOTH’s supposed contribution to naturalism is frequently cited as a significant consideration in its favor. One example is Fodor’s use of CCTM+FSC to explain semantic coherence. The other main example turns upon the problem of intentionality.

How does intentionality arise? How do mental states come to be about anything, or to have semantic properties? Brentano (1874 [1973: 97]) maintained that intentionality is a hallmark of the mental as opposed to the physical: “The reference to something as an object is a distinguishing characteristic of all mental phenomena. No physical phenomenon exhibits anything similar”. In response, contemporary naturalists seek to naturalize intentionality. They want to explain in naturalistically acceptable terms what makes it the case that mental states have semantic properties. In effect, the goal is to reduce the intentional to the non-intentional. Beginning in the 1980s, philosophers have offered various proposals about how to naturalize intentionality. Most proposals emphasize causal or nomic links between mind and world (Aydede & Güzeldere 2005; Dretske 1981; Fodor 1987, 1990; Stalnaker 1984), sometimes also invoking teleological factors (Millikan 1984, 1993; Neander 2017; Papineau 1987; Dretske 1988) or historical lineages of mental states (Devitt 1995; Field 2001). Another approach, functional role semantics, emphasizes the functional role of a mental state: the cluster of causal or inferential relations that the state bears to other mental states. The idea is that meaning emerges at least partly through these causal and inferential relations. Some functional role theories cite causal relations to the external world (Block 1987; Loar 1982), and others do not (Cummins 1989).

Even the best developed attempts at naturalizing intentionality, such as Fodor’s (1990) version of the nomic strategy, face serious problems that no one knows how to solve (M. Greenberg 2014; Loewer 1997). Partly for that reason, the flurry of naturalizing attempts abated in the 2000s. Burge (2010: 298) reckons that the naturalizing project is not promising and that current proposals are “hopeless”. He agrees that we should try to illuminate representationality by limning its connections to the physical, the causal, the biological, and the teleological. But he insists that illumination need not yield a reduction of the intentional to the non-intentional.

LOTH is neutral as to the naturalization of intentionality. An LOT theorist might attempt to reduce the intentional to the non-intentional. Alternatively, she might dismiss the reductive project as impossible or pointless. Assuming she chooses the reductive route, LOTH provides guidance regarding how she might proceed. According to RTT,

X A’s that p iff there is a mental representation S such that X bears A* to S and S means that p.

The task of elucidating “X A’s that p” in naturalistically acceptable terms factors into two sub-tasks (Field 2001: 33):

  1. Explain in naturalistically acceptable terms what it is to bear psychological relation A* to mental representation S.
  2. Explain in naturalistically acceptable terms what it is for mental representation S to mean that p.

As we have seen, functionalism helps with (1). Moreover, COMP provides a blueprint for tackling (2). We can first delineate a compositional semantics describing how S’s meaning depends upon semantic properties of its component words and upon the compositional import of the constituency structure into which those words are arranged. We can then explain in naturalistically acceptable terms why the component words have the semantic properties that they have and why the constituency structure has the compositional import that it has.
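As an illustration of the division of labor between the two sub-tasks, here is a toy sketch in Python (the lexicon, domain, and rules are illustrative assumptions, not a serious semantic theory): the meaning of a complex expression is computed from its constituents and its structure, while the meanings of the primitive words are simply stipulated, which is precisely the part that a naturalization of intentionality would still need to explain.

```python
# Toy compositional semantics for a fragment of "Mentalese".

DOMAIN = {"Felix", "Tom", "Fido"}

LEXICON = {
    "cat":    lambda x: x in {"Felix", "Tom"},   # primitive word meanings are
    "animal": lambda x: x in DOMAIN,             # simply stipulated here
}

def meaning(expr):
    """Semantic value of an expression: a primitive word or a structured tuple."""
    if isinstance(expr, str):                    # primitive symbol
        return LEXICON[expr]
    operator, *constituents = expr
    if operator == "every":                      # ("every", F, G): all Fs are Gs
        f, g = (meaning(c) for c in constituents)
        return all(g(x) for x in DOMAIN if f(x))
    raise ValueError(f"unknown constituency structure: {operator}")

print(meaning(("every", "cat", "animal")))       # True: all the cats are animals
```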

How much does LOTH advance the naturalization of intentionality? Our compositional semantics for Mentalese may illuminate how the semantic properties of a complex expression depend upon the semantic properties of primitive expressions, but it says nothing about how primitive expressions get their semantic properties in the first place. Brentano’s challenge (How could intentionality arise from purely physical entities and processes?) remains unanswered. To meet the challenge, we must invoke naturalizing strategies that go well beyond LOTH itself, such as the causal or nomic strategies mentioned above. Those naturalizing strategies are not specifically linked to LOTH and can usually be tailored to semantic properties of neural states rather than semantic properties of Mentalese expressions. Thus, it is debatable how much LOTH ultimately helps us naturalize intentionality. Naturalizing strategies orthogonal to LOTH seem to do the heavy lifting.

8. Individuation of Mentalese Expressions

How are Mentalese expressions individuated? Since Mentalese expressions are types, answering this question requires us to consider the type/token relation for Mentalese. We want to fill in the schema

e and e* are tokens of the same Mentalese type iff R(e, e*).

What should we substitute for R(e, e*)? The literature typically focuses on primitive symbol types, and we will follow suit here.

It is almost universally agreed among contemporary LOT theorists that Mentalese tokens are neurophysiological entities of some sort. One might therefore hope to individuate Mentalese types by citing neural properties of the tokens. Drawing R(e, e*) from the language of neuroscience induces a theory along the following lines:

Neural individuation: e and e* are tokens of the same primitive Mentalese type iff e and e* are tokens of the same neural type.

This schema leaves open how neural types are individuated. We may bypass that question here, because neural individuation of Mentalese types finds no proponents in the contemporary literature. The main reason is that it conflicts with multiple realizability: the doctrine that a single mental state type can be realized by physical systems that are wildly heterogeneous when described in physical, biological, or neuroscientific terms. Putnam (1967) introduced multiple realizability as evidence against the mind/brain identity theory, which asserts that mental state types are brain state types. Fodor (1975: 13–25) further developed the multiple realizability argument, presenting it as foundational to LOTH. Although the multiple realizability argument has subsequently been challenged (Polger 2004), LOT theorists widely agree that we should not individuate Mentalese types in neural terms.

The most popular strategy is to individuate Mentalese types functionally:

Functional individuation: e and e* are tokens of the same primitive Mentalese type iff e and e* have the same functional role.

Field (2001: 56–67), Fodor (1994: 105–109), and Stich (1983: 149–151) pursue functional individuation. They specify functional roles using a Turing-style computational formalism, so that “functional role” becomes something like “computational role”, i.e., role within mental computation.

Functional role theories divide into two categories: molecular and holist. Molecular theories isolate privileged canonical relations that a symbol bears to other symbols. Canonical relations individuate the symbol, but non-canonical relations do not. For example, one might individuate Mentalese conjunction solely through the introduction and elimination rules governing conjunction while ignoring any other computational rules. If we say that a symbol’s “canonical functional role” is constituted by its canonical relations to other symbols, then we can offer the following theory:

Molecular functional individuation: e and e* are tokens of the same primitive Mentalese type iff e and e* have the same canonical functional role.

One problem facing molecular individuation is that, aside from logical connectives and a few other special cases, it is difficult to draw any principled demarcation between canonical and non-canonical relations (Schneider 2011: 106). Which relations are canonical for SOFA?[9] Citing the demarcation problem, Schneider espouses a holist approach that individuates mental symbols through total functional role, i.e., every single aspect of the role that a symbol plays within mental activity:

Holist functional individuation: e and e* are tokens of the same primitive Mentalese type iff e and e* have the same total functional role.

Holist individuation is very fine-grained: the slightest difference in total functional role entails that different types are tokened. Since different thinkers will always differ somewhat in their mental computations, it now looks like two thinkers will never share the same mental language. This consequence is worrisome, for two reasons emphasized by Aydede (1998). First, it violates the plausible publicity constraint that propositional attitudes are in principle shareable. Second, it apparently precludes interpersonal psychological explanations that cite Mentalese expressions. Schneider (2011: 111–158) addresses both concerns, arguing that they are misdirected.

A crucial consideration when individuating mental symbols is what role to assign to semantic properties. Here we may usefully compare Mentalese with natural language. It is widely agreed that natural language words do not have their denotations essentially. The English word “cat” denotes cats, but it could just as well have denoted dogs, or the number 27, or anything else, or nothing at all, if our linguistic conventions had been different. Virtually all contemporary LOT theorists hold that a Mentalese word likewise does not have its denotation essentially. The Mentalese word cat denotes cats, but it could have had a different denotation had it borne different causal relations to the external world or had it occupied a different role in mental activity. In that sense, cat is a piece of formal syntax. Fodor’s early view (1981: 225–253) was that a Mentalese word could have had a different denotation but not an arbitrarily different denotation: cat could not have denoted just anything—it could not have denoted the number 27—but it could have denoted some other animal species had the thinker suitably interacted with that species rather than with cats. Fodor eventually (1994, 2008) embraces the stronger thesis that a Mentalese word bears an arbitrary relation to its denotation: cat could have had any arbitrarily different denotation. Most contemporary theorists agree (Egan 1992: 446; Field 2001: 58; Harnad 1994: 386; Haugeland 1985: 91, 117–123; Pylyshyn 1984: 50).

The historical literature on LOTH suggests an alternative semantically permeated view: Mentalese words are individuated partly through their denotations. The Mentalese word cat is not a piece of formal syntax subject to reinterpretation. It could not have denoted another species, or the number 27, or anything else. It denotes cats by its inherent nature. From a semantically permeated viewpoint, a Mentalese word has its denotation essentially. Thus, there is a profound difference between natural language and mental language. Mental words, unlike natural language words, bring with them one fixed semantic interpretation. The semantically permeated approach is present in Ockham, among other medieval LOT theorists (Normore 2003, 2009). In light of the problems facing neural and functional individuation, Aydede (2005) recommends that we consider taking semantics into account when individuating Mentalese expressions. Rescorla (2012b) concurs, defending a semantically permeated approach as applied to at least some mental representations. He proposes that certain mental computations operate over mental symbols with essential semantic properties, and he argues that the proposal fits well with many sectors of cognitive science.[10]

A recurring complaint about the semantically permeated approach is that inherently meaningful mental representations seem like highly suspect entities (Putnam 1988: 21). How could a mental word have one fixed denotation by its inherent nature? What magic ensures the necessary connection between the word and the denotation? These worries diminish in force if one keeps firmly in mind that Mentalese words are types. Types are abstract entities corresponding to a scheme for classifying, or type-identifying, tokens. To ascribe a type to a token is to type-identify the token as belonging to some category. Semantically permeated types correspond to a classificatory scheme that takes semantics into account when categorizing tokens. As Burge emphasizes (2007: 302), there is nothing magical about semantically-based classification. On the contrary, both folk psychology and cognitive science routinely classify mental events based at least partly upon their semantic properties.

A simplistic implementation of the semantically permeated approach individuates symbol tokens solely through their denotations:

Denotational individuation: e and e* are tokens of the same primitive Mentalese type iff e and e* have the same denotation.

As Aydede (2000) and Schneider (2011) emphasize, denotational individuation is unsatisfying. Co-referring words may play significantly different roles in mental activity. Frege’s (1892 [1997]) famous Hesperus-Phosphorus example illustrates: one can believe that Hesperus is Hesperus without believing that Hesperus is Phosphorus. As Frege put it, one can think about the same denotation “in different ways”, or “under different modes of presentation”. Different modes of presentation have different roles within mental activity, implicating different psychological explanations. Thus, a semantically permeated individuative scheme adequate for psychological explanation must be finer-grained than denotational individuation allows. It must take mode of presentation into account. But what is it to think about a denotation “under the same mode of presentation”? How are “modes of presentation” individuated? Ultimately, semantically permeated theorists must grapple with these questions. Rescorla (forthcoming) offers some suggestions about how to proceed.[11]

Chalmers (2012) complains that semantically permeated individuation sacrifices significant virtues that made LOTH attractive in the first place. LOTH promised to advance naturalism by grounding cognitive science in non-representational computational models. Representationally-specified computational models seem like a significant retrenchment from these naturalistic ambitions. For example, semantically permeated theorists cannot accept the FSC explanation of semantic coherence, because they do not postulate formal syntactic types manipulated during mental computation.

How compelling one finds naturalistic worries about semantically permeated individuation will depend on how impressive one finds the naturalistic contributions made by formal mental syntax. We saw earlier that FSC arguably engenders a worrisome epiphenomenalism. Moreover, the semantically permeated approach in no way precludes a naturalistic reduction of intentionality. It merely precludes invoking formal syntactic Mentalese types while executing such a reduction. For example, proponents of the semantically permeated approach can still pursue the causal or nomic naturalizing strategies discussed in section 7. Nothing about either strategy presupposes formal syntactic Mentalese types. Thus, it is not clear that replacing a formal syntactic individuative scheme with a semantically permeated scheme significantly impedes the naturalistic endeavor.

No one has yet provided an individuative scheme for Mentalese that commands widespread assent. The topic demands continued investigation, because LOTH remains highly schematic until its proponents clarify sameness and difference of Mentalese types.

Bibliography

  • Arjo, Dennis, 1996, “Sticking Up for Oedipus: Fodor on Intentional Generalizations and Broad Content”, Mind & Language, 11(3): 231–245. doi:10.1111/j.1468-0017.1996.tb00044.x
  • Armstrong, D. M., 1973, Belief, Truth and Knowledge, Cambridge: Cambridge University Press. doi:10.1017/CBO9780511570827
  • Atherton, Margaret and Robert Schwartz, 1974, “Linguistic Innateness and Its Evidence”, Journal of Philosophy, 71(6): 155–168. doi:10.2307/2024657
  • Aydede, Murat, 1995, “Connectionism and Language of Thought”, CSLI Technical Report 195, Stanford: Center for the Study of Language and Information Publications.
  • –––, 1997a, “Language of Thought: The Connectionist Contribution”, Minds and Machines, 7(1): 57–101. doi:10.1023/A:1008203301671
  • –––, 1997b, “Has Fodor Really Changed His Mind on Narrow Content?”, Mind & Language, 12(3–4): 422–458. doi:10.1111/j.1468-0017.1997.tb00082.x
  • –––, 1998, “Fodor on Concepts and Frege Puzzles”, Pacific Philosophical Quarterly, 79(4): 289–294. doi:10.1111/1468-0114.00063
  • –––, 2000, “On the Type/Token Relation of Mental Representations”, Facta Philosophica, 2: 23–49.
  • –––, 2005, “Computation and Functionalism: Syntactic Theory of Mind Revisited”, in Turkish Studies in the History and Philosophy of Science, Gürol Irzik and Güven Güzeldere (eds.), (Boston Studies in the History and Philosophy of Science 244), Berlin/Heidelberg: Springer-Verlag, 177–204. doi:10.1007/1-4020-3333-8_13
  • –––, 2015, “The Language of Thought Hypothesis”, The Stanford Encyclopedia of Philosophy (Fall 2015 Edition), Edward Zalta (ed.). URL = <https://plato.stanford.edu/archives/fall2015/entries/language-thought/>.
  • Aydede, Murat and Güven Güzeldere, 2005, “Cognitive Architecture, Concepts, and Introspection: An Information-Theoretic Solution to the Problem of Phenomenal Consciousness”, Noûs, 39(2): 197–255. doi:10.1111/j.0029-4624.2005.00500.x
  • Aydede, Murat and Philip Robbins, 2001, “Are Frege Cases Exceptions to Intentional Generalizations?”, Canadian Journal of Philosophy, 31(1): 1–22. doi:10.1080/00455091.2001.10717558
  • Bach, Kent, 1987, “Review: Spreading the Word”, The Philosophical Review, 96(1): 120–123. doi:10.2307/2185336
  • Barsalou, Lawrence W., 1999, “Perceptual Symbol Systems”, Behavioral and Brain Sciences, 22(4): 577–660. doi:10.1017/S0140525X99002149
  • Bechtel, William and Adele Abrahamsen, 2002, Connectionism and the Mind: Parallel Processing, Dynamics and Evolution in Networks, second edition, Malden, MA: Blackwell.
  • Beck, Jacob, 2017, “Can Bootstrapping Explain Concept Learning?”, Cognition, 158: 110–121. doi:10.1016/j.cognition.2016.10.017
  • Bermúdez, José Luis, 2010, Cognitive Science: An Introduction to the Science of the Mind, Cambridge: Cambridge University Press.
  • Blackburn, Simon, 1984, Spreading the Word, Oxford: Oxford University Press.
  • Block, Ned, 1983, “Mental Pictures and Cognitive Science”, The Philosophical Review, 92(4): 499–541. doi:10.2307/2184879
  • –––, 1987, “Advertisement for a Semantics for Psychology”, in Midwest Studies in Philosophy, 10: 615–678. doi:10.1111/j.1475-4975.1987.tb00558.x
  • –––, 1990, “Can the Mind Change the World?”, in Meaning and Method: Essays in Honor of Hilary Putnam, George Boolos (ed.), Cambridge: Cambridge University Press.
  • Blumson, Ben, 2012, “Mental Maps”, Philosophy and Phenomenological Research, 85(2): 413–434. doi:10.1111/j.1933-1592.2011.00499.x
  • Braddon-Mitchell, David and John Fitzpatrick, 1990, “Explanation and the Language of Thought”, Synthese, 83(1): 3–29. doi: 10.1007/BF00413686
  • Braddon-Mitchell, David and Frank Jackson, 2007, Philosophy of Mind and Cognition, second edition, Cambridge: Blackwell.
  • Burge, Tyler, 2007, Foundations of Mind, (Philosophical Essays, 2), Oxford: Oxford University Press.
  • –––, 2010, Origins of Objectivity, Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780199581405.001.0001
  • –––, 2018, “Iconic Representation: Maps, Pictures, and Perception”, in The Map and the Territory: Exploring the Foundations of Science, Thought, and Reality, Shyam Wuppuluri and Francisco Antonio Doria (eds.), Cham: Springer International Publishing, 79–100. doi:10.1007/978-3-319-72478-2_5
  • Brentano, Franz, 1874 [1973], Psychology from an Empirical Standpoint (Psychologie vom empirischen Standpunkt, 1924 edition), Antos C. Rancurello, D.B. Terrell, and Linda McAlister (trans.), London: Routledge and Kegan Paul.
  • Camp, Elisabeth, 2009, “A Language of Baboon Thought?”, in Lurz 2009: 108–127. doi:10.1017/CBO9780511819001.007
  • –––, 2018, “Why Maps Are Not Propositional”, in Non-Propositional Intentionality, Alex Grzankowski and Michelle Montague (eds.), Oxford: Oxford University Press. doi:10.1093/oso/9780198732570.003.0002
  • Carey, Susan, 2009, The Origin of Concepts, Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780195367638.001.0001
  • –––, 2014, “On Learning New Primitives in the Language of Thought: Reply to Rey”, Mind and Language, 29(2): 133–166. doi:10.1111/mila.12045
  • Casati, Roberto and Achille C. Varzi, 1999, Parts and Places: The Structures of Spatial Representation, Cambridge, MA: MIT Press.
  • Chalmers, David J., 1990, “Syntactic Transformations on Distributed Representations”, Connection Science, 2(1–2): 53–62. doi:10.1080/09540099008915662
  • –––, 1993, “Connectionism and Compositionality: Why Fodor and Pylyshyn Were Wrong”, Philosophical Psychology, 6(3): 305–319. doi:10.1080/09515089308573094
  • –––, 2012, “The Varieties of Computation: A Reply”, Journal of Cognitive Science, 13(3): 211–248. doi:10.17791/jcs.2012.13.3.211
  • Chomsky, Noam, 1965, Aspects of the Theory of Syntax. Cambridge, MA: MIT Press.
  • Churchland, Patricia S., 1986, Neurophilosophy: Toward a Unified Science of Mind-Brain, Cambridge, MA: MIT Press.
  • Churchland, Patricia S. and Terrence J. Sejnowski, 1989, “Neural Representation and Neural Computation”, in Neural Connections, Neural Computation, Lynn Nadel, Lynn A. Cooper, Peter W. Culicover, and Robert M. Harnish (eds.), Cambridge, MA: MIT Press.
  • Churchland, Paul M., 1990, A Neurocomputational Perspective: The Nature of Mind and the Structure of Science, Cambridge, MA: MIT Press.
  • Churchland, Paul M., and Patricia S. Churchland, 1990, “Could a Machine Think?”, Scientific American, 262(1): 32–37. doi:10.1038/scientificamerican0190-32
  • Clark, Andy, 1991, “Systematicity, Structured Representations and Cognitive Architecture: A Reply to Fodor and Pylyshyn”, in Horgan and Tienson 1991: 198–218. doi:10.1007/978-94-011-3524-5_9
  • –––, 2014, Mindware: An Introduction to the Philosophy of Cognitive Science, second edition, Oxford: Oxford University Press.
  • Cowie, Fiona, 1999, What’s Within? Nativism Reconsidered, Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780195159783.001.0001
  • Cummins, Robert, 1989, Meaning and Mental Representation, Cambridge, MA: MIT Press.
  • Dennett, Daniel C., 1977 [1981], “Critical Notice: Review of The Language of Thought by Jerry Fodor”, Mind, 86(342): 265–280. Reprinted as “A Cure for the Common Code”, in Brainstorms: Philosophical Essays on Mind and Psychology, Cambridge, MA: MIT Press, 1981. doi:10.1093/mind/LXXXVI.342.265
  • –––, 1991, “Mother Nature Versus the Walking Encyclopedia: A Western Drama”, in Philosophy and Connectionist Theory, W. Ramsey, S. Stich, and D. Rumelhart (eds.), Hillsdale, NJ: Lawrence Erlbaum Associates. [available online]
  • Devitt, Michael, 1995, Coming to Our Senses: A Naturalistic Program for Semantic Localism, Cambridge: Cambridge University Press. doi:10.1017/CBO9780511609190
  • Dretske, Fred, 1981, Knowledge and the Flow of Information, Cambridge, MA: MIT Press.
  • –––, 1988. Explaining Behavior, Cambridge, MA: MIT Press.
  • Egan, Frances, 1992, “Individualism, Computation, and Perceptual Content”, Mind, 101(403): 443–459. doi:10.1093/mind/101.403.443
  • Elman, Jeffrey L., 1989, “Structured Representations and Connectionist Models”, in Proceedings of the Eleventh Annual Meeting of the Cognitive Science Society, Mahwah, NJ: Lawrence Erlbaum Associates.
  • Field, Hartry, 2001, Truth and the Absence of Fact, Oxford: Oxford University Press. doi:10.1093/0199242895.001.0001
  • Figdor, Carrie, 2009, “Semantic Externalism and the Mechanics of Thought”, Minds and Machines, 19(1): 1–24. doi:10.1007/s11023-008-9114-6
  • Fodor, Jerry A., 1975, The Language of Thought, New York: Thomas Y. Crowell.
  • –––, 1981, Representations, Cambridge, MA: MIT Press.
  • –––, 1987, Psychosemantics, Cambridge, MA: MIT Press.
  • –––, 1990, A Theory of Content and Other Essays, Cambridge, MA: MIT Press.
  • –––, 1991, “Replies”, in Meaning in Mind: Fodor and His Critics, Barry M. Loewer and Georges Rey (eds.), Cambridge, MA: MIT Press.
  • –––, 1994, The Elm and the Expert, Cambridge, MA: MIT Press.
  • –––, 1998, Concepts: Where Cognitive Science Went Wrong, Oxford: Oxford University Press. doi:10.1093/0198236360.001.0001
  • –––, 2001, “Doing without What’s within: Fiona Cowie’s Critique of Nativism”, Mind, 110(437): 99–148. doi:10.1093/mind/110.437.99
  • –––, 2003, Hume Variations, Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780199287338.001.0001
  • –––, 2008, LOT 2: The Language of Thought Revisited, Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780199548774.001.0001
  • –––, 2010, “Woof, Woof. Review of The Origin of Concepts by Susan Carey”, The Times Literary Supplement, October 8: pp. 7–8.
  • Fodor, Jerry and Brian P. McLaughlin, 1990, “Connectionism and the Problem of Systematicity: Why Smolensky’s Solution Doesn’t Work”, Cognition, 35(2): 183–204. doi:10.1016/0010-0277(90)90014-B
  • Fodor, Jerry A. and Zenon W. Pylyshyn, 1981, “How Direct Is Visual Perception?: Some Reflections on Gibson’s ‘Ecological Approach’”, Cognition, 9(2): 139–196. doi:10.1016/0010-0277(81)90009-3
  • –––, 1988, “Connectionism and Cognitive Architecture: A Critical Analysis”, Cognition, 28(1–2): 3–71. doi:10.1016/0010-0277(88)90031-5
  • –––, 2015, Minds Without Meanings, Cambridge, MA: MIT Press.
  • Frege, Gottlob, 1879 [1967], Begriffsschrift, eine der Arithmetischen Nachgebildete Formelsprache des Reinen Denkens. Translated as Concept Script, a Formal Language of Pure Thought Modeled upon that of Arithmetic in From Frege to Gödel: A Source Book in Mathematical Logic, 1879–1931, J. van Heijenoort (ed.), S. Bauer-Mengelberg (trans.), Cambridge, MA: Harvard University Press.
  • –––, 1892 [1997], “On Sinn and Bedeutung”. Reprinted in the The Frege Reader, M. Beaney (ed.), M. Black (trans.), Malden, MA: Blackwell.
  • –––, 1918 [1997], “Thought”. Reprinted in The Frege Reader, M. Beaney (ed.), P. Geach and R. Stoothof (trans.), Malden, MA: Blackwell.
  • Gallistel, Charles R., 1990, The Organization of Learning, Cambridge, MA: MIT Press.
  • Gallistel, Charles R. and Adam Philip King, 2009, Memory and the Computational Brain, Malden, MA: Wiley-Blackwell.
  • Gallistel, C.R. and Louis D. Matzel, 2013, “The Neuroscience of Learning: Beyond the Hebbian Synapse”, Annual Review of Psychology, 64(1): 169–200. doi:10.1146/annurev-psych-113011-143807
  • Gibson, James J., 1979, The Ecological Approach to Visual Perception, Boston, MA: Houghton Mifflin.
  • Greenberg, Gabriel, 2013, “Beyond Resemblance”, Philosophical Review, 122(2): 215–287. doi:10.1215/00318108-1963716
  • Greenberg, Mark, 2014, “Troubles for Content I”, in Metasemantics: New Essays on the Foundations of Meaning, Alexis Burgess and Brett Sherman (eds.), Oxford: Oxford University Press, 147–168. doi:10.1093/acprof:oso/9780199669592.003.0006
  • Harman, Gilbert, 1973, Thought, Princeton, NJ: Princeton University Press.
  • Harnad, Stevan, 1994, “Computation Is Just Interpretable Symbol Manipulation; Cognition Isn’t”, Minds and Machines, 4(4): 379–390. doi:10.1007/BF00974165
  • Harnish, Robert M., 2002, Minds, Brains, Computers: An Historical Introduction to the Foundations of Cognitive Science, Malden, MA: Blackwell.
  • Haugeland, John, 1985, Artificial Intelligence: The Very Idea, Cambridge, MA: MIT Press
  • Helmholtz, Hermann von, 1867 [1925], Treatise on Physiological Optics (Handbuch der physiologischen Optik), James P.C. Southall (trans.), Menasha, WI: George Banta Publishing Company.
  • Hinton, Geoffrey E., 1990, “Mapping Part-Whole Hierarchies into Connectionist Networks”, Artificial Intelligence, 46(1–2): 47–75.
  • Horgan, Terence and John Tienson (eds.), 1991, Connectionism and the Philosophy of Mind, (Studies in Cognitive Systems 9), Dordrecht: Springer Netherlands. doi:10.1007/978-94-011-3524-5
  • –––, 1996, Connectionism and the Philosophy of Psychology, Cambridge, MA: MIT Press.
  • Hume, David, 1739 [1978], A Treatise of Human Nature, second edition, P. H. Nidditch (ed.), Oxford: Clarendon Press.
  • Jacobs, Lucia F and Randolf Menzel, 2014, “Navigation Outside of the Box: What the Lab Can Learn from the Field and What the Field Can Learn from the Lab”, Movement Ecology, 2(1): 3. doi:10.1186/2051-3933-2-3
  • Johnson, Kent, 2004, “On the Systematicity of Language and Thought”, Journal of Philosophy, 101(3): 111–139. doi:10.5840/jphil2004101321
  • Johnson-Laird, Philip N., 2004, “The History of Mental Models”, in Psychology of Reasoning: Theoretical and Historical Perspectives, Ken Manktelow and Man Cheung Chung (eds.), New York: Psychology Press.
  • Kant, Immanuel, 1781 [1998], The Critique of Pure Reason, P. Guyer and A. Wood (eds), Cambridge: Cambridge University Press.
  • Kaplan, David, 1989, “Demonstratives”, in Themes from Kaplan, Joseph Almog, John Perry, and Howard Wettstein (eds.), New York: Oxford University Press.
  • Kazez, Jean R., 1994, “Computationalism and the Causal Role of Content”, Philosophical Studies, 75(3): 231–260. doi:10.1007/BF00989583
  • King, Peter, 2005, “William of Ockham: Summa Logicae”, in Central Works of Philosophy: Ancient and Medieval, volume 1: Ancient and Medieval Philosophy, John Shand (ed.), Montreal: McGill-Queen’s University Press, 242–270.
  • Knill, David C. and Whitman Richards (eds.), 1996, Perception as Bayesian Inference, Cambridge: Cambridge University Press. doi:10.1017/CBO9780511984037
  • Knowles, Jonathan, 1998, “The Language of Thought and Natural Language Understanding”, Analysis, 58(4): 264–272. doi: 10.1093/analys/58.4.264
  • Kosslyn, Stephen, 1980, Image and Mind, Cambridge, MA: Harvard University Press.
  • Kulvicki, John, 2015, “Maps, Pictures, and Predication”, Ergo: An Open Access Journal of Philosophy, 2(7): 149–174.
  • Laurence, Stephen and Eric Margolis, 1997, “Regress Arguments Against the Language of Thought”, Analysis, 57(1): 60–66.
  • Loar, Brian, 1982, Mind and Meaning, Cambridge: Cambridge University Press.
  • Loewer, Barry, 1997, “A Guide to Naturalizing Semantics”, in A Companion to the Philosophy of Language, Bob Hale and Crispin Wright (eds.), Oxford: Blackwell.
  • Lurz, Robert W. (ed.), 2009, The Philosophy of Animal Minds, Cambridge: Cambridge University Press. doi:10.1017/CBO9780511819001
  • Mackintosh, Nicholas John, 2002, “Do Not Ask Whether They Have a Cognitive Map, but How They Find Their Way about”, Psicológica, 23(1): 165–185. [Mackintosh 2002 available online]
  • Margolis, Eric, 1998, “How to Acquire a Concept”, Mind & Language, 13(3): 347–369. doi:10.1111/1468-0017.00081
  • Margolis, Eric and Stephen Laurence, 2011, “Learning Matters: The Role of Learning in Concept Acquisition”, Mind & Language, 26(5): 507–539. doi:10.1111/j.1468-0017.2011.01429.x
  • McDermott, Drew V., 2001, Mind and Mechanism, Cambridge, MA: MIT Press.
  • McLaughlin, B. P. and T. A. Warfield, 1994, “The Allure of Connectionism Reexamined”, Synthese, 101(3): 365–400. doi:10.1007/BF01063895
  • Marcus, Gary F., 2001, The Algebraic Mind, Cambridge, MA: MIT Press.
  • Millikan, Ruth Garrett, 1984, Language, Thought, and Other Biological Categories: New Foundations for Realism, Cambridge, MA: MIT Press.
  • –––, 1993, White Queen Psychology and Other Essays for Alice, Cambridge, MA: MIT Press.
  • Neander, Karen, 2017, A Mark of the Mental: In Defense of Informational Teleosemantics, Cambridge, MA: MIT Press.
  • Niklasson, Lars F. and Tim van Gelder, 1994, “On Being Systematically Connectionist”, Mind & Language, 9(3): 288–302. doi:10.1111/j.1468-0017.1994.tb00227.x
  • Normore, Calvin, 1990, “Ockham on Mental Language”, in The Historical Foundations of Cognitive Science, J. Smith (ed.), Dordrecht: Kluwer.
  • –––, 2003, “Burge, Descartes, and Us”, in Reflections and Replies: Essays on the Philosophy of Tyler Burge, Martin Hahn and Bjørn Ramberg, Cambridge, MA: MIT Press.
  • –––, 2009, “The End of Mental Language”, in Le Langage Mental du Moyen Âge à l’Âge Classique, J. Biard (ed.), Leuven: Peeters.
  • O’Brien, Gerard and Jon Opie, 2006, “How Do Connectionist Networks Compute?”, Cognitive Processing, 7(1): 30–41. doi:10.1007/s10339-005-0017-7
  • O’Keefe, John and Lynn Nadel, 1978, The Hippocampus as a Cognitive Map, Oxford: Clarendon Press.
  • Ockham, William of, c. 1323 [1957], Summa Logicae, Translated in his Philosophical Writings, A Selection, Philotheus Boehner (ed. and trans.), London: Nelson, 1957.
  • Panaccio, Claude, 1999 [2017], Mental Language: From Plato to William of Ockham (Discours intérieur), Joshua P. Hochschild and Meredith K. Ziebart (trans.), New York: Fordham University Press.
  • Papineau, David, 1987, Reality and Representation, Oxford: Basil Blackwell.
  • Peacocke, Christopher, 1992, A Study of Concepts, Cambridge, MA: MIT Press.
  • –––, 1994, “Content, Computation and Externalism”, Mind & Language, 9(3): 303–335. doi:10.1111/j.1468-0017.1994.tb00228.x
  • –––, 1999, “Computation as Involving Content: A Response to Egan”, Mind & Language, 14(2): 195–202. doi:10.1111/1468-0017.00109
  • Perry, John, 1998, “Broadening the Mind”, Philosophy and Phenomenological Research, 58(1): 223–231. doi:10.2307/2653644
  • Piccinini, Gualtiero, 2008, “Computation without Representation”, Philosophical Studies, 137(2): 205–241. doi:10.1007/s11098-005-5385-4
  • Pinker, Steven, 2005, “So How Does the Mind Work?”, Mind & Language, 20(1): 1–24. doi:10.1111/j.0268-1064.2005.00274.x
  • Pinker, Steven and Alan Prince, 1988, “On Language and Connectionism: Analysis of a Parallel Distributed Processing Model of Language Acquisition”, Cognition, 28(1–2): 73–193. doi:10.1016/0010-0277(88)90032-7
  • Polger, Thomas W., 2004, Natural Minds, Cambridge, MA: MIT Press.
  • Pollack, Jordan B., 1990, “Recursive Distributed Representations”, Artificial Intelligence, 46(1–2): 77–105. doi:10.1016/0004-3702(90)90005-K
  • Prinz, Jesse, 2002, Furnishing the Mind: Concepts and Their Perceptual Basis, Cambridge, MA: MIT Press.
  • –––, 2011, “Has Mentalese Earned Its Keep? On Jerry Fodor’s LOT 2”, Mind, 120(478): 485–501. doi:10.1093/mind/fzr025
  • Putnam, Hilary, 1967, “Psychophysical Predicates”, In Art, Mind, and Religion: Proceedings of the 1965 Oberlin Colloquium in Philosophy, W.H. Capitan and D.D. Merrill (eds), Pittsburgh, PA: University of Pittsburgh Press, 37–48.
  • –––, 1988, Representation and Reality, Cambridge, MA: MIT Press.
  • Pylyshyn, Zenon W., 1984, Computation and Cognition: Toward a Foundation for Cognitive Science, Cambridge, MA: MIT Press.
  • –––, 2003, Seeing and Visualizing: It’s Not What You Think, Cambridge, MA: MIT Press.
  • Quine, W. V., 1951 [1980], “Two Dogmas of Empiricism”, The Philosophical Review, 60(1): 20–43. Reprinted in his From a Logical Point of View, second edition, Cambridge, MA: Harvard University Press, 1980, 20–46. doi:10.2307/2181906
  • Ramsey, William M., 2007, Representation Reconsidered, Cambridge: Cambridge University Press. doi:10.1017/CBO9780511597954
  • Rescorla, Michael, 2009a, “Chrysippus’ Dog as a Case Study in Non-Linguistic Cognition”, in Lurz 2009: 52–71. doi:10.1017/CBO9780511819001.004
  • –––, 2009b, “Cognitive Maps and the Language of Thought”, The British Journal for the Philosophy of Science, 60(2): 377–407. doi:10.1093/bjps/axp012
  • –––, 2009c, “Predication and Cartographic Representation”, Synthese, 169(1): 175–200. doi:10.1007/s11229-008-9343-5
  • –––, 2012a, “Are Computational Transitions Sensitive to Semantics?”, Australasian Journal of Philosophy, 90(4): 703–721. doi:10.1080/00048402.2011.615333
  • –––, 2012b, “How to Integrate Representation into Computational Modeling, and Why We Should”, Journal of Cognitive Science, 13(1): 1–37. doi:10.17791/jcs.2012.13.1.1
  • –––, 2014a, “The Causal Relevance of Content to Computation”, Philosophy and Phenomenological Research, 88(1): 173–208. doi:10.1111/j.1933-1592.2012.00619.x
  • –––, 2014b, “A Theory of Computational Implementation”, Synthese, 191(6): 1277–1307. doi:10.1007/s11229-013-0324-y
  • –––, 2015, “Bayesian Perceptual Psychology”, in The Oxford Handbook of Philosophy of Perception, Mohan Matthen (ed.), Oxford: Oxford University Press. doi:10.1093/oxfordhb/9780199600472.013.010
  • –––, 2017a, “From Ockham to Turing—and Back Again”, in Philosophical Explorations of the Legacy of Alan Turing, Juliet Floyd and Alisa Bokulich (eds.), (Boston Studies in the Philosophy and History of Science 324), Cham: Springer International Publishing, 279–304. doi:10.1007/978-3-319-53280-6_12
  • –––, 2017b, “Maps in the Head?”, The Routledge Handbook of Philosophy of Animal Minds, Kristin Andrews and Jacob Beck (eds.), New York: Routledge.
  • –––, forthcoming, “Reifying Representations”, in What Are Mental Representations?, Tobias Schlicht, Krzysztof Doulega, and Joulia Smortchkova (eds.), Oxford: Oxford University Press.
  • Rey, Georges, 2014, “Innate and Learned: Carey, Mad Dog Nativism, and the Poverty of Stimuli and Analogies (Yet Again): Innate and Learned”, Mind & Language, 29(2): 109–132. doi:10.1111/mila.12044
  • Rumelhart, David and James L. McClelland, 1986, “PDP Models and General Issues in Cognitive Science”, in Rumelhart, et al. 1986: 110–146.
  • Rumelhart, David E., James L. McClelland, and the PDP Research Group, 1986, Parallel Distributed Processing, volume 1: Explorations in the Microstructure of Cognition: Foundations, Cambridge, MA: MIT Press.
  • Russell, Bertrand, 1918–1919 [1985], “The Philosophy of Logical Atomism: Lectures 1-2”, Monist, 28(4): 495–527, doi:10.5840/monist19182843, 29(1): 32–63, doi:10.5840/monist191929120, 29(2): 190–222, doi:10.5840/monist19192922, 29(3): 345–380, doi:10.5840/monist19192937. Reprinted in The Philosophy of Logical Atomism, David F. Pears (ed.), La Salle, IL: Open Court.
  • Rupert, Robert D., 2008, “Frege’s Puzzle and Frege Cases: Defending a Quasi-Syntactic Solution”, Cognitive Systems Research, 9(1–2): 76–91. doi:10.1016/j.cogsys.2007.07.003
  • Schiffer, Stephen, 1981, “Truth and the Theory of Content”, in Meaning and Understanding, Herman Parret and Jacques Bouveresse (eds.), Berlin: Walter de Gruyter, 204–222.
  • Schneider, Susan, 2005, “Direct Reference, Psychological Explanation, and Frege Cases”, Mind & Language, 20(4): 423–447. doi:10.1111/j.0268-1064.2005.00294.x
  • –––, 2011, The Language of Thought: A New Philosophical Direction, Cambridge, MA: MIT Press.
  • Sellars, Wilfrid, 1975, “The Structure of Knowledge”, in Action, Knowledge and Reality: Studies in Honor of Wilfrid Sellars, Hector-Neri Castañeda (ed.), Indianapolis, IN: Bobbs-Merrill, 295–347.
  • Shagrir, Oron, forthcoming, “In Defense of the Semantic View of Computation”, Synthese, First online: 11 October 2018. doi:10.1007/s11229-018-01921-z
  • Shea, Nicholas, 2016, “Representational Development Need Not Be Explicable-By-Content”, in Fundamental Issues of Artificial Intelligence, Vincent C. Müller (ed.), Cham: Springer International Publishing, 223–240. doi:10.1007/978-3-319-26485-1_14
  • Sloman, Aaron, 1978, The Computer Revolution in Philosophy: Philosophy, Science and Models of the Mind, Hassocks: The Harvester Press.
  • Smolensky, Paul, 1990, “Tensor Product Variable Binding and the Representation of Symbolic Structures in Connectionist Systems”, Artificial Intelligence, 46(1–2): 159–216. doi:10.1016/0004-3702(90)90007-M
  • –––, 1991, “Connectionism, Constituency, and the Language of Thought”, in Meaning in Mind: Fodor and His Critics, Barry M. Loewer and Georges Rey (eds), Cambridge, MA: Blackwell.
  • –––, 1995, “Constituent Structure and Explanation in an Integrated Connectionist/Symbolic Cognitive Architecture”, in Connectionism: Debates on Psychological Explanation, Cynthia Macdonald and Graham Macdonald (eds), Oxford: Basil Blackwell.
  • Stalnaker, Robert C., 1984, Inquiry, Cambridge, MA: MIT Press.
  • Stich, Stephen P., 1983, From Folk Psychology to Cognitive Science, Cambridge, MA: MIT Press.
  • Tarski, Alfred, 1933 [1983], “Pojecie prawdy w jezykach nauk dedukcyjnych”, Warsaw: Nakladem Towarzystwa Naukowego Warszawskiego. Translated into German (1935) by L. Blaustein as “Der Wahrheitsbegriff in den formalisierten Sprachen”, Studia Philosophica, 1: 261–405. Translated into English (1983) as “The Concept of Truth in Formalized Languages”, in Logic, Semantics, Metamathematics: Papers from 1923 to 1938, second edition, J.H. Woodger (trans.), John Corcoran (ed.), Indianapolis, IN: Hackett.
  • Tolman, Edward C., 1948, “Cognitive Maps in Rats and Men.”, Psychological Review, 55(4): 189–208. doi:10.1037/h0061626
  • Touretzky, David S., 1990, “BoltzCONS: Dynamic Symbol Structures in a Connectionist Network”, Artificial Intelligence, 46(1–2): 5–46. doi:10.1016/0004-3702(90)90003-I
  • Turing, Alan M., 1936, “On Computable Numbers, with an Application to the Entscheidungsproblem”, Proceedings of the London Mathematical Society, s2-42(1): 230–265. doi:10.1112/plms/s2-42.1.230
  • van Gelder, Timothy, 1991, “Classical Questions, Radical Answers: Connectionism and the Structure of Mental Representations”. In Horgan and Tienson 1991: 355–381, doi:10.1007/978-94-011-3524-5_16
  • Wakefield, Jerome C., 2002, “Broad versus Narrow Content in the Explanation of Action: Fodor on Frege Cases”, Philosophical Psychology, 15(2): 119–133. doi:10.1080/09515080220127099
  • Weiner, Jan, Sara Shettleworth, Verner P. Bingman, Ken Cheng, Susan Healy, Lucia F. Jacobs, Kathryn J. Jeffery, Hanspeter A. Mallot, Randolf Menzel, and Nora S. Newcombe, 2011, “Animal Navigation: A Synthesis”, in Animal Thinking, Randolf Menzel and Julia Fischer (eds), Cambridge, MA: MIT Press.
  • Wittgenstein, Ludwig, 1921 [1922], Logisch-Philosophische Abhandlung, in W. Ostwald (ed.), Annalen der Naturphilosophie, 14. Translated as Tractatus Logico-Philosophicus, C.K. Ogden (trans.), London: Kegan Paul, 1922.
  • –––, 1953, Philosophical Investigations, G.E.M. Anscombe (trans.), Oxford: Blackwell.

Related Entries

artificial intelligence | belief | Church-Turing Thesis | cognitive science | computation: in physical systems | concepts | connectionism | consciousness: representational theories of | folk psychology: as a theory | functionalism | intentionality | mental content: causal theories of | mental imagery | mental representation | mind: computational theory of | naturalism | physicalism | propositional attitude reports | qualia | reasoning: automated | Turing, Alan | Turing machines

Acknowledgments

I owe a profound debt to the Murat Aydede, author of the previous entry on the same topic. His exposition hugely influenced my work on the entry, figuring indispensably as a springboard, a reference, and a standard of excellence. Some of my formulations in the introduction and in sections 1.1, 2, 3, 4.3, 5, 6.1, and 7 closely track formulations from the previous entry. Section 5’s discussion of connectionism is directly based on the previous entry’s treatment. I also thank Calvin Normore, Melanie Schoenberg, and the Stanford Encyclopedia editors for helpful comments.

Copyright © 2019 by

Michael Rescorla <rescorla@ucla.edu>


The Language of Thought Hypothesis

First published Thu May 28, 1998; substantive revision Fri Sep 17, 2010

The Language of Thought Hypothesis (LOTH) postulates that thought and thinking take place in a mental language. This language consists of a system of representations that is physically realized in the brain of thinkers and has a combinatorial syntax (and semantics) such that operations on representations are causally sensitive only to the syntactic properties of representations. According to LOTH, thought is, roughly, the tokening of a representation that has a syntactic (constituent) structure with an appropriate semantics. Thinking thus consists in syntactic operations defined over such representations. Most of the arguments for LOTH derive their strength from their ability to explain certain empirical phenomena like productivity and systematicity of thought and thinking.


1. What is the Language of Thought Hypothesis?

LOTH is an empirical thesis about the nature of thought and thinking. According to LOTH, thought and thinking are done in a mental language, i.e., in a symbolic system physically realized in the brain of the relevant organisms. In formulating LOTH, philosophers have in mind primarily the variety of thoughts known as ‘propositional attitudes’. Propositional attitudes are the thoughts described by such sentence forms as ‘S believes that P’, ‘S hopes that P’, ‘S desires that P’, etc., where ‘S’ refers to the subject of the attitude, ‘P’ is any sentence, and ‘that P’ refers to the proposition that is the object of the attitude. If we let ‘A’ stand for such attitude verbs as ‘believe’, ‘desire’, ‘hope’, ‘intend’, ‘think’, etc., then the propositional attitude statements all have the form: S As that P.

LOTH can now be formulated more exactly as a hypothesis about the nature of propositional attitudes and the way we entertain them. It can be characterized as the conjunction of the following three theses (A), (B) and (C):

  (A) Representational Theory of Mind (RTM) (cf. Field 1978: 37, Fodor 1987: 17):
    (A1) Representational Theory of Thought: For each propositional attitude A, there is a unique and distinct (i.e., dedicated)[1] psychological relation R, and for all propositions P and subjects S, S As that P if and only if there is a mental representation #P# such that
      (a) S bears R to #P#, and
      (b) #P# means that P.
    (A2) Representational Theory of Thinking: Mental processes, thinking in particular, consist of causal sequences of tokenings of mental representations.
  (B) Mental representations, which, as per (A1), constitute the direct “objects” of propositional attitudes, belong to a representational or symbolic system which is such that (cf. Fodor and Pylyshyn 1988: 12–13)
    (B1) representations of the system have a combinatorial syntax and semantics: structurally complex (molecular) representations are systematically built up out of structurally simple (atomic) constituents, and the semantic content of a molecular representation is a function of the semantic content of its atomic constituents together with its syntactic/formal structure, and
    (B2) the operations on representations (constituting, as per (A2), the domain of mental processes, thinking) are causally sensitive to the syntactic/formal structure of representations defined by this combinatorial syntax.
  (C) Functionalist Materialism: Mental representations so characterized are, at some suitable level, functionally characterizable entities that are (possibly multiply) realized by the physical properties of the subject having propositional attitudes (if the subject is an organism, then the realizing properties are presumably the neurophysiological properties of the brain).

The relation R in (A1), when RTM is combined with (B), is meant to be understood as a computational/functional relation. The idea is that each attitude is identified with a characteristic computational/functional role played by the mental sentence that is the direct “object” of that kind of attitude. (Scare quotes are necessary because it is more appropriate to reserve ‘object’ for a proposition as we have done above, but as long as we keep this in mind, it is harmless to use it in this way for LOT sentences.) For instance, what makes a certain mental sentence an (occurrent) belief might be that it is characteristically the output of perceptual systems and input to an inferential system that interacts decision-theoretically with desires to produce further sentences or action commands. Or equivalently, we may think of belief sentences as those that are accessible only to certain sorts of computational operations appropriate for beliefs, but not to others. Similarly, desire-sentences (and sentences for other attitudes) may be characterized by a different set of operations that define a characteristic computational role for them. In the literature it is customary to use the metaphor of a “belief-box” (cf. Schiffer 1981) as a blanket term to cover whatever specific computational role belief sentences turn out to have in the mental economy of their possessors. (Similarly for “desire-box”, etc.)
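
To make the “box” metaphor a bit more concrete, here is a minimal sketch in Python. It is purely illustrative: the class, the tuple encoding of Mentalese sentences, and the single “deliberation” operation are all invented for the example rather than drawn from any actual proposal. The point is only that, on this picture, a token counts as a belief or a desire solely in virtue of the computational role it occupies, that is, which “box” holds it and which operations have access to it.

    # Toy illustration only: attitudes as computational roles ("boxes") over
    # Mentalese sentence tokens; the encoding and rule are invented for the example.
    class Agent:
        def __init__(self):
            self.belief_box = set()    # tokens playing the belief role
            self.desire_box = set()    # tokens playing the desire role
            self.commands = []         # action commands produced downstream

        def perceive(self, sentence):
            # perceptual systems characteristically deposit their output here
            self.belief_box.add(sentence)

        def deliberate(self):
            # a crude practical operation: if you desire G and believe that doing A
            # leads to G, issue the command to do A
            for goal in self.desire_box:
                for b in self.belief_box:
                    if isinstance(b, tuple) and b[0] == "leads-to" and b[2] == goal:
                        self.commands.append(b[1])

    agent = Agent()
    agent.desire_box.add("drink-juice")
    agent.perceive(("leads-to", "open-fridge", "drink-juice"))
    agent.deliberate()
    print(agent.commands)   # ['open-fridge']

Nothing intrinsic to a token makes it a belief rather than a desire in this sketch; only its place in the flow of operations does, which is the point of the “box” metaphor.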

The Language of Thought Hypothesis is so-called because of (B): token mental representations are like sentences in a language in that they have a syntactically and semantically regimented constituent structure. Put differently, the mental representations that are the direct “objects” of attitudes are structurally complex symbols whose complexity lends itself to a syntactic and semantic analysis. This is also why the LOT is sometimes called Mentalese.

It is (B2) that makes LOTH a species of the so-called Computational Theory of Mind (CTM). This is why LOTH is sometimes called the Computational/Representational Theory of Mind or Thought (CRTM/CRTT) (cf. Rey 1991, 1997). Indeed, LOTH seems to be the most natural product of combining RTM with a view that treats mental processes, or thinking, as computational, where computation is understood traditionally or classically (a relatively recent usage emphasizing the contrast with connectionist processing, which we will discuss later).

According to LOTH, when someone believes that P, there is a sense in which the immediate “object” of one's belief can be said to be a complex symbol, a sentence in one's LOT physically realized in the neurophysiology of one's brain, that has both syntactic structure and a semantic content, namely the proposition that P. So, contrary to the orthodox view that takes the belief relation as a dyadic relation between an agent and a proposition, LOTH takes it to be a triadic relation among an agent, a Mentalese sentence, and a proposition. The Mentalese sentence can then be said to have the proposition as its semantic/intentional content. Within the framework of LOTH, it is only in this sense that it can be said that what is believed is a proposition, and is thus the proper object of the attitude.

This triadic view seems to have several advantages over the orthodox dyadic view. It is a puzzle, on the dyadic view, how intentional organisms can stand in direct relation to abstract objects like propositions in such a way as to influence their causal powers. According to folk psychology (the ordinary commonsense psychology that we rely on daily in our dealings with others), it is because those states have the propositional content they do that they have the causal powers they do. LOTH makes this relatively non-mysterious by introducing a physical intermediary that is capable of having the relevant causal powers in virtue of its syntactic structure that encodes its semantic content. Another advantage is that thought processes can be causally guided by the syntactic forms of the sentences in a way that respects their semantic contents. This is the virtue of (B), to which we'll come back below. Mainly because of these features, LOTH is said to be poised to scientifically vindicate folk psychology if it turns out to be true.

2. Status of LOTH

LOTH has primarily been advanced as an empirical thesis (although some have argued for the truth of LOTH on a priori or conceptual grounds following the natural conceptual contours of folk psychology—see Davies 1989, 1991; Lycan 1993; Rey 1995; Jacob 1997; Markic 2001 argues against Jacob. Harman 1973 develops and defends LOTH on both empirical and conceptual grounds). It is not meant to be taken as an analysis of what the folk mean (or, for that matter, what the scientists ought to mean) when they talk about various propositional attitudes and their role in thinking. In this regard, LOT theorists typically view themselves as engaged in some sort of a proto-science, or at least in some empirical research program continuous with scientific psychology. Indeed, as we will see in more detail below, when Jerry Fodor first explicitly articulated and elaborated LOTH in some considerable detail in his (1975), he basically defended it on the ground that it was assumed by our best scientific theories or models in cognitive psychology and psycholinguistics. This empirical status generally accorded to LOTH should be kept firmly in mind when assessing its plausibility and especially its prospects in the light of new evidence and developments in scientific psychology. Nevertheless, it is more appropriate to see LOTH as a foundational thesis rather than as an ongoing research project guided by a set of concrete empirical methods, specific theses and principles. In this regard, LOTH stands to specific scientific theories of the (various aspects of the) mind somewhat as the “Atomic Hypothesis” stands to a whole range of specific scientific theories about the particulate nature of the world (some of which may be—and certainly, historically, have been—incompatible with each other).

When viewed this way, scientific theories advanced within the LOTH framework are not, strictly speaking, committed to preserving the folk taxonomy of the mental states in any very exact way. Notions like belief, desire, hope, fear, etc. are folk notions and, as such, it may not be plausible to expect (eliminativist arguments aside) that a scientific psychology will preserve the exact contours of these concepts. On the contrary, there is every reason to believe that scientific counterparts of these notions will carve the mental space somewhat differently. For instance, it has been noted that the folk notion of belief harbors many distinctions: it has both a dispositional and an occurrent sense. In the occurrent sense, it seems to mean something like consciously entertaining and accepting a thought (proposition) as true. There is quite a bit of literature and controversy on the dispositional sense.[2] Beliefs are also capable of being explicitly stored in long term memory as opposed to being merely dispositional or tacit. Compare, for instance: I believe that there was a big surprise party for my 24th birthday vs. I have always believed that lions don't eat their food with forks and knives, or that 13652/4=3413, even though until now these latter two thoughts had never occurred to me. There is furthermore the issue of degree of belief: while I may believe that George will come to dinner with his new girlfriend even though I wouldn't bet on it, you, thinking that you know him better than I do, may nevertheless go to the wall for it. It is unlikely that there will be one single construct of scientific psychology that will exactly correspond to the folk notion of belief in all these ways.

For LOTH to vindicate folk psychology it is sufficient that a scientific psychology with a LOT architecture come up with scientifically grounded psychological states that are recognizably like the propositional attitudes of folk psychology, and that play more or less similar roles in psychological explanations.[3]

3. Scope of LOTH

LOTH is an hypothesis about the nature of thought and thinking with propositional content. As such, it may or may not be applicable to other aspects of mental life. Officially, it is silent about the nature of some mental phenomena such as experience, qualia,[4] sensory processes, mental images, visual and auditory imagination, sensory memory, perceptual pattern-recognition capacities, dreaming, hallucinating, etc. To be sure, many LOT theorists hold views about these aspects of mental life that sometimes make it seem that they are also to be explained by something similar to LOTH.[5]

For instance, Fodor (1983) seems to think that many modular input systems have their own LOT to the extent to which they can be explained in representational and computational terms. Indeed, many contemporary psychological models treat perceptual input systems in just these terms.[6] There is some evidence that this kind of treatment might be appropriate for many perceptual processes. But it is to be kept in mind that a system may employ representations and be computational without necessarily satisfying either or both of the clauses in (B) above in any full-fledged way. Just think of finite automata theory, where there are plenty of examples of a computational process defined over states or symbols which lack full-blown syntactic and/or semantic structural complexity. (For a useful discussion of varieties of computational processes and their classification, see Piccinini 2008.) Whether sensory or perceptual processes are to be treated within the framework of full-blown LOTH is again an open empirical question. It might be that the answer to this question is affirmative. If so, there may be more than one LOT realized in different subsystems or mechanisms in the mind/brain. So LOTH is not committed to there being a single representational system realized in the brain, nor is it committed to the claim that all mental representations are complex or language-like, nor would it be falsified if it turns out that most aspects of mental life other than the ones involving propositional attitudes don't require a LOT.

Similarly, there is strong evidence that the mind also exploits an image-like representational medium for certain kinds of mental tasks.[7] LOTH is non-committal about the existence of an image-like representational system for many mental tasks other than the ones involving propositional attitudes. But it is committed to the claim that propositional thought and thinking cannot be successfully accounted for in its entirety in purely imagistic terms. It claims that a combinatorial sentential syntax is necessary for propositional attitudes and a purely imagistic medium is not adequate for capturing that.[8]

There are in fact some interesting and difficult issues surrounding these claims. The adequacy of an imagistic system seems to turn on the nature of syntax at the sentential level. For instance, Fodor, in Chapter 4 of his (1975) book, allows that many lexical items in one's LOT may be image-like; he introduces the notion of a mental image/picture under description to avoid some obvious inadequacies of pictures (e.g., what makes a picture a picture of an overweight woman rather than a pregnant one, or vice versa, etc.). This is an attempt to combine discursive and imagistic representational elements at the lexical level. There may even be a well defined sense in which pictures can be combined to produce structurally complex pictures (as in British Empiricism: image-like simple ideas are combined to produce complex ideas, e.g., the idea of a unicorn—see also Prinz 2002). But what is absolutely essential for LOTH, and what Fodor insists on, is the claim that there is no adequate way in which a purely image-like system can capture what is involved in making judgments, i.e., in judging propositions to be true. This seems to require a discursive syntactic approach at the sentential level. The general problem here is the inadequacy of pictures or image-like representations to express propositions. I can judge that the blue box is on top of the red one without judging that the red box is under the blue one. I can judge that Mary kisses John without judging that John kisses Mary, and so on for indefinitely many such cases. It is hard to see how images or pictures can do that without using any syntactic structure or discursive elements, to say nothing of judging, e.g., conditionals, disjunctive or negative propositions, quantifications, negative existentials, etc.[9]

Moreover, there are difficulties with imagistic representations arising from demands on processing representations. As we will see below, (B2) turns out to provide the foundations for one of the most important arguments for LOTH: it makes it possible to mechanize thinking understood as a semantically coherent thought process, which, as per (A2), consists of a causal sequence of tokenings of mental representations. It is not clear, however, how an equivalent of (B2) could be provided for images or pictures in order to accommodate operations defined over them, even if something like an equivalent of (B1) could be given. On the other hand, there are truly promising attempts to integrate discursive symbolic theorem-proving with reasoning with image-like symbols. They achieve impressive efficiency in theorem-proving or in any deductive process defined over the expressions of such an integrated system. Such attempts, if they prove to be generalizable to psychological theorizing, are by no means threats to LOTH; on the contrary, such systems have every feature to make them a species of a LOT system: they satisfy (B).[10]

4. Nativism and LOTH

In the book (1975) in which Fodor introduced the LOTH, he also argued that all concepts are innate. As a result, the connection between LOTH and an implausibly strong version of conceptual nativism looked very much internal. This historical coincidence has led some people to think that LOTH is essentially committed to a very strong form of nativism, so strong in fact that it seems to make a reductio of itself (see, for instance, P.S. Churchland 1986, H. Putnam 1988, A. Clark 1994). The gist of his argument was that since learning concepts is a form of hypothesis formation and confirmation, it requires a system of mental representations in which formation and confirmation of hypotheses are to be carried out, but then there is a non-trivial sense in which one already has (albeit potentially) the resources to express the extension of the concepts to be learned.

In his LOT 2 (2008), Fodor continues to claim that concepts cannot be learned and that the very idea of concept learning is “confused”:

Now, according to HF [the Hypothesis Formation and Confirmation model], the process by which one learns C must include the inductive evaluation of some such hypothesis as ‘The C things are the ones that are green or triangular’. But the inductive evaluation of that hypothesis itself requires (inter alia) bringing the property green or triangular before the mind as such. ... Quite generally, you can't represent anything as such and such unless you already have the concept such and such. All that being so, it follows, on pain of circularity, that ‘concept learning’ as HF understands it can't be a way of acquiring concept C. ... Conclusion: If concept learning is as HF understands it, there can be no such thing. This conclusion is entirely general; it doesn't matter whether the target concept is primitive (like GREEN) or complex (like GREEN OR TRIANGULAR). (LOT 2, 2008:139)

Note that this argument and the predecessors Fodor articulated in his previous writings and especially in his (1975) are entirely general, applicable to any hypothesis that identifies concepts with mental representations whether or not these representations belong to a LOT.

The crux of the issue seems to be that learning concepts is a rational process. There seem to be non-arbitrary semantic and epistemic liaisons between the target concept to be acquired and its “evidence” base. This evidence base needs to be represented and rationally tied to the target concept. The target concept also needs to be expressed in terms of representations one already possesses. Fodor thinks that any model of concept learning understood in this sense will have to be a form of hypothesis formation and confirmation. But not every form of concept acquisition is learning. There are non-rational ways of acquiring concepts whose explanation need not be at the cognitive level (e.g., brute triggering mechanisms that can be activated in various ways that can presumably be explained at the sub-cognitive or neurophysiological levels). If concepts cannot be learned, then they are either innate or non-rationally acquired. Whereas early Fodor used to think that concepts must therefore be innate (maybe he thought that forms of concept acquisition other than learning are limited to sensory or certain classes of perceptual concepts), he now thinks that they may be acquired but that the explanation of this is not the business of cognitive psychology.

Whatever one may think of the merits of Fodor's arguments for concept nativism or of his recent anti-learning stance, it should be emphasized that LOTH per se has very little to do with it. LOTH is not committed to such a strong version of nativism, especially about concepts. It also need not be committed to any anti-learning stance about concepts. It is certainly plausible to assume that LOTH will turn out to have some empirically (as well as theoretically/a priori) motivated nativist commitments about the structural organization and dynamic management of the entire representational system. But this much is to be expected, especially in the light of recent empirical findings and trends. This, however, does not constitute a reductio. It is an open empirical question how much nativism is true about concepts, and LOTH should be taken to be capable of accommodating whatever turns out to be true in this matter. LOTH, therefore, when properly conceived, is independent of any specific proposal about conceptual nativism.[11]

5. Naturalism and LOTH

One of the most attractive features of LOTH is that it is a central component of an ongoing research program in philosophy of psychology to naturalize the mind, that is, to give a theoretical framework in which the mind could naturally be seen as part of the physical world without postulating irreducibly psychic entities, events, processes or properties. Fodor, historically the most important defender of LOTH, once identified the major mysteries in philosophy of mind thus:

How could anything material have conscious states? How could anything material have semantical properties? How could anything material be rational? (where this means something like: how could the state transitions of a physical system preserve semantical properties?). (1991: 285, Reply to Devitt)

LOTH is a full-blown attempt to give a naturalist answer to the third question, an attempt to solve at least part of the problem underlying the second one, and is almost completely silent about the first.[12]

According to RTM, propositional attitudes are relations to meaningful mental representations whose causally sequenced tokenings constitute the process of thinking. This much can, in principle, be granted by an intentional realist who might nevertheless reject LOTH. Indeed, there are plenty of theorists who accept RTM in some suitable form (and also happily accept (C) in many cases) but reject LOTH either by explicitly rejecting (B) or simply by remaining neutral about it. Among the prominent philosophers who choose the former option are Searle (1984, 1990, 1992), Stalnaker (1984), Lewis (1972), and Barwise and Perry (1983).[13] Some who want to remain neutral include Loar (1982a, 1982b), Dretske (1981), Armstrong (1980), and many contemporary functionalists including some connectionists.[14]

But RTM per se doesn't so much propose a naturalistic solution to intentionality and the mechanization of thinking as assert a framework that emphasizes intentional realism and, with (C), at best declares a commitment to naturalism or physicalism. How, then, is the addition of (B) supposed to help? Let us first try to see in a bit more detail what the problem is supposed to be in the first place to which (B) is proposed as a solution. Let us start by reflecting on thinking and see what it is about thinking that makes it a mystery in Fodor's list. This will give rise to one of the most powerful (albeit still nondemonstrative) arguments for LOTH.

5.1 The Problem of Thinking

RTM's second clause (A2), in effect, says that thinking is at least the tokenings of states that are (a) intentional (i.e. have representational/propositional content) and (b) causally connected. But, surely, thinking is more. There could be a causally connected series of intentional states that makes no sense at all. Thinking, therefore, is causally proceeding from state to state in a way that makes semantic sense: the transitions among states must preserve some of their semantic properties to count as thinking. In the ideal case, this property would be the truth value of the states. But in most cases, any interesting intentional or epistemic property would do (e.g., warrantedness, degree of confirmation, semantic coherence given a certain practical context like satisfaction of goals in a specific context, etc.). In general, it is hard to spell out what this requirement of “making sense” comes to. The intuitive idea, however, should be clear. Thinking is not proceeding from thoughts to thoughts in arbitrary fashion: thoughts that are causally connected are in some fashion semantically (rationally, epistemically) connected too. If this were not so, there would be little point in thinking; thinking couldn't serve any useful purpose. Call this general phenomenon, then, the semantic coherence of causally connected thought processes. LOTH is offered as a solution to this puzzle: how is thinking, conceived this way, physically possible? This is the problem of thinking, and thus the problem of the mechanization of rationality in Fodor's version. How does LOTH propose to solve this problem and bring us one big step closer to the naturalization of the mind?

5.2 Syntactic Engine Driving a Semantic Engine: Computation

The two most important achievements of the 20th century that are at the foundations of LOTH, as well as of most of modern Artificial Intelligence (AI) research and most of the so-called information processing approaches to cognition, are (i) the developments in modern symbolic (formal) logic, and (ii) Alan Turing's idea of a Turing Machine and Turing computability. It is putting these two ideas together that gives LOTH its enormous explanatory power within a naturalistic framework. Modern logic showed that most of deductive reasoning can be formalized, i.e. most semantic relations among symbols can be entirely captured by the symbols' formal/syntactic properties and the relations among them. And Turing showed, roughly, that if a process has a formally specifiable character then it can be mechanized. So we can appreciate the implications of (i) and (ii) for the philosophy of psychology in this way: if thinking consists in processing representations physically realized in the brain (in the way the internal data structures are realized in a computer) and these representations form a formal system, i.e., a language with its proper combinatorial syntax (and semantics) and a set of derivation rules formally defined over the syntactic features of those representations (allowing for specific but powerful programs to be written in terms of them), then the problem of thinking, as described above, can in principle be solved in completely naturalistic terms, and thus the mystery surrounding how a physical device can ever have semantically coherent state transitions (processes) can be removed. Thus, given the commitment to naturalism, the hypothesis that the brain is a kind of computer trafficking in representations in virtue of their syntactic properties is the basic idea of LOTH (and the AI vision of cognition).

Computers are environments in which symbols are manipulated in virtue of their formal features, but what is thus preserved are their semantic properties, hence the semantic coherence of symbolic processes. Slightly paraphrasing Haugeland (cf. 1985: 106), who puts the same point nicely in the form of a motto:

The Formalist Motto:

If you take care of the syntax of a representational system, its semantics will take care of itself.

This is in virtue of the mimicry or mirroring relation between the semantic and formal properties of symbols. As Dennett once put it in describing LOTH, we can view the thinking brain as a syntactically driven engine preserving semantic properties of its processes, i.e. driving a semantic engine. What is so nice about this picture is that if LOTH is true we have a naturalistically adequate causal treatment of thinking that respects the semantic properties of the thoughts involved: thoughts cause one another in virtue of their physically coded syntactic/formal features, and it is precisely in virtue of those features that the coherence of their semantic properties is preserved.
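
A small sketch may help fix the idea (Python; the tuple encoding of sentences, the toy interpretation, and the choice of rule are all invented for illustration). The derivation rule below inspects only the shapes of the symbols, never what they mean, yet whatever it derives is true whenever its premises are: take care of the syntax, and the semantics takes care of itself.

    # Illustrative sketch: a purely formal rule over symbol structures, plus a
    # semantic check showing that the rule preserves truth under an interpretation.
    def modus_ponens(sentences):
        # derive Q whenever the shapes ("if", P, Q) and P are both present
        derived = set(sentences)
        for s in sentences:
            if isinstance(s, tuple) and s[0] == "if" and s[1] in sentences:
                derived.add(s[2])
        return derived

    def true_under(sentence, interpretation):
        # ("if", P, Q) is true unless P is true and Q is false; atoms are looked up
        if isinstance(sentence, tuple):
            return (not true_under(sentence[1], interpretation)) or true_under(sentence[2], interpretation)
        return interpretation[sentence]

    premises = {("if", "rain", "wet"), "rain"}
    world = {"rain": True, "wet": True}
    print(all(true_under(s, world) for s in modus_ponens(premises)))   # True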

Whether or not LOTH actually turns out to be empirically true in the details or in its entire vision of rational thinking, this picture of a syntactic engine driving a semantic one can at least be taken to be an important philosophical demonstration of how Descartes' challenge can be met (cf. Rey 1997: chp.8). Descartes claimed that rationality in the sense of having the power “to act in all the contingencies of life in the way in which our reason makes us act” cannot possibly be possessed by a purely physical device: “The rational soul … could not be in any way extracted from the power of matter … but must … be expressly created” (1637/1970: 117–18). Descartes was completely puzzled by just this rational character and semantic coherence of thought processes, so much so that he failed even to imagine a possible mechanistic explication of it. He was thus forced to appeal to Divine creation. But we can now see, or at least imagine, a possible mechanistic/naturalistic scenario.[15]

5.3 Intentionality and LOTH

But where do the semantic properties of the mental representations come from in the first place? How can they mean anything? This is Brentano's challenge to a naturalist. Brentano's bafflement was with the intentionality of the human mind, its apparently mysterious power to represent things, events, properties in the world. He thought that nothing physical can have this property: “The reference to something as an object is a distinguishing characteristic of all mental phenomena. No physical phenomenon exhibits anything similar” (Brentano 1874/1973: 97). This problem of intentionality is the second problem or mystery in Fodor's list quoted above. I said that LOTH officially offers only a partial solution to it and perhaps proposes a framework within which the remainder of the solution can be couched and elaborated in a naturalistically acceptable way.

Recall that RTM contains a clause (A1b) that says that the immediate “object” of a propositional attitude that P is a mental representation #P# that means that P. Again, (B1) attributes a compositional semantics to the syntactically complex symbols belonging to one's LOT that are, as per (C), realized by the physical properties of a thinking system. According to LOTH, the semantic content of propositional attitudes is inherited from the semantic content of the mental symbols. So Brentano's question for a LOT theorist becomes: how do the symbols in one's LOT get their meanings in the first place? There are two levels or stages at which this question can be raised and answered:

(1) At the level of atomic symbols (non-logical primitives): how do the atomic symbols represent what they do?

(2) At the level of molecular symbols (phrasal complexes or sentences): how do molecular symbols represent what they do?

There have been at least two major lines LOT theorists have taken regarding these questions. The one that is least committal might perhaps be usefully described as the official position regarding LOTH's treatment of intentionality. Most LOT theorists seem to have taken this line. The official line doesn't propose any theory about the first stage, but simply assumes that the first question can be answered in a naturalistically acceptable way. In other words, officially LOTH simply assumes that the atomic symbols/expressions in one's LOT have whatever meanings they have.[16]

But, the official line continues, LOTH has a lot to say about the second stage, the stage where the semantic contents are computed or assigned to complex (molecular) symbols on the basis of their combinatorial syntax or grammar together with whatever meanings atomic symbols are assumed to have in the first stage. This procedure is familiar from a Tarski-style[17] definition of truth conditions of sentences. The truth-values of complex sentences in propositional logic are completely determined by the truth-values of the atomic sentences they contain together with the rules fixed by the truth-tables of the connectives occurring in the complex sentences. Example: ‘P and Q’ is true just in case both ‘P’ and ‘Q’ are true, but false otherwise. This process is similar but more complex in first-order languages, and even more so for natural languages—in fact, we don't have a completely working compositional semantics for the latter at the moment. So, if we have a semantic interpretation of atomic symbols (if we have symbols whose reference and extension are fixed at the first stage by whatever naturalistic mechanism turns out to govern it), then the combinatorial syntax will take over and effectively determine the semantic interpretation (truth-conditions) of the complex sentences they are constituents of. So officially LOTH would only contribute to a complete naturalization project if there is a naturalistic story at the atomic level.
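
The compositional determination of truth-values can be illustrated with a few lines of Python (a sketch only; the tuple encoding and connective names are invented for the example, and nothing here is meant as a serious fragment of Mentalese). The truth-value of any molecular sentence is computed recursively from the values assigned to its atomic constituents, in the spirit of the clause-by-clause, Tarski-style procedure just described.

    # Illustrative sketch: truth-values of complex sentences fixed by the values of
    # their atomic constituents together with their syntactic structure.
    def truth_value(sentence, valuation):
        if isinstance(sentence, str):          # atomic: value fixed at the "first stage"
            return valuation[sentence]
        connective = sentence[0]
        if connective == "not":
            return not truth_value(sentence[1], valuation)
        if connective == "and":
            return truth_value(sentence[1], valuation) and truth_value(sentence[2], valuation)
        if connective == "or":
            return truth_value(sentence[1], valuation) or truth_value(sentence[2], valuation)
        raise ValueError("unknown connective")

    valuation = {"P": True, "Q": False}
    print(truth_value(("and", "P", ("not", "Q")), valuation))   # True: P and not-Q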

Early Fodor (1975, 1978, 1978a, 1980), for instance, envisaged a science of psychology which, among other things, would reasonably set for itself the goal of discovering the combinatorial syntactic principles of LOT and the computational rules governing its operations, without worrying much about semantic matters, especially about how to fix the semantics of atomic symbols (he probably thought that this was not a job for LOTH). Similarly, Field (1978) is very explicit about the combinatorial rules for assigning truth-conditions to the sentences of the internal code. In fact, Field's major argument for LOTH is that, given a naturalistic causal theory of reference for atomic symbols, about which he is optimistic (Field 1972), it is the only naturalistic theory that has a chance of solving Brentano's puzzle. For the moment, this is not much more than a hope, but, according to the LOT theorist, it is a well-founded hope based on a number of theoretical and empirical assumptions and data. Furthermore, it is a framework defining a naturalistic research program in which there have been promising successes.[18]

As I said, this official and, in a way, least committal line has been the more standard way of conceiving LOTH's role in the project of naturalizing intentionality. But some have gone beyond it and explored the ways in which the resources of LOTH can be exploited even in answering the first question (1) about the semantics of atomic symbols.

Now, there is a weak version of an answer to (1) on the part of LOTH and a strong version. On the weak version, LOTH may be untendentiously viewed as inevitably providing some of the resources for the ultimate naturalistic theory of the meaning of atomic symbols. The basic idea is that whatever naturalistic theory turns out to be true of atomic expressions, computation as conceived by LOTH will be part of it. For instance, it may be that, as with nomic covariation theories of meaning (Fodor 1987, 1990a; Dretske 1981), the meaning of an atomic predicate consists in its potential to get tokened in the presence of (or, in causal response to) something that instantiates the property the predicate is said to express. A natural way of explicating this potential may partly but ultimately rely on certain computational principles the symbol may be subjected to within a LOT framework, or principles that in some sense govern the “behavior” of the symbol. Insofar as computation is naturalistically understood in the way LOTH proposes, a complete answer to the first question about the semantics of atomic symbols may plausibly involve an explicatory appeal to computation within a system of symbols. This is the weak version because it doesn't see LOTH as proposing a complete solution to the first question (1) above, but only as helping with it.

A strong version would have it that LOTH provides a complete naturalistic solution to both questions: given the resources of LOTH we don't need to look any further to meet Brentano's challenge. The basic idea lies in so-called functional or conceptual role semantics, according to which a concept is the concept it is precisely in virtue of the particular causal/functional potential it has in interacting with other concepts. Each concept may be thought of as having a certain distinctive set of epistemic/semantic relations or liaisons to other concepts. We can conceive of this set as determining a certain “conceptual role” for each concept. We can then take these roles to determine the semantic identity of concepts: concepts are the concepts they are because they have the conceptual roles they have; that is to say, among other things, concepts represent whatever they do precisely in virtue of these roles. The idea then is to reduce each conceptual role to the causal/functional role of atomic symbols (now conceived as primitive terms in LOTH), and then use the resources of LOTH to reduce it in turn to a computational role. Since computation is naturalistically well-defined, the argument goes, and since causal interactions between thoughts and concepts can be understood completely in terms of computation, we can completely naturalize intentionality if we can successfully treat meanings as arising out of thoughts/concepts' internal interactions with each other. In other words, the strong version of LOTH would claim that atomic symbols in LOT have the content they do in virtue of their potential for causal interactions with other tokens, and cashing out this potential in mechanical/naturalistic terms is what, among other things, LOTH is for. LOTH then comes as a naturalistic rescuer for conceptual role semantics.

It is not clear whether anyone holds this strong version of LOTH in this rather naive form. But certainly some people have elaborated the basic idea in quite subtle ways, for which Cummins (1989: chp.8) is perhaps the best example. (But also see Block 1986 and Field 1978.) But even in the best hands, the proposal turns out to be very problematic and full of difficulties nobody seems to know how to straighten out. In fact, some of the most ardent critics of taking LOTH as incorporating a functional role semantics turn out to be some of the most ardent defenders of LOTH understood in the weak, non-committal sense we have explored above—see Fodor (1987: chp.3) and Fodor and Lepore (1991); Fodor's attack (1978b) on AI's way of doing procedural semantics is also relevant here. Haugeland (1981), Searle (1980, 1984), and Putnam (1988) quite explicitly take LOTH to involve a program for providing a complete semantic account of mental symbols, which they then attack accordingly.[19]

It is also possible, in fact, quite natural, to combine conceptual role semantics (internalist) with causal/informational psychosemantics (externalist). The result is sometimes known as two-factor theories. If this turns out to be the right way to naturalize intentionality, then, given what is said above about the potential resources of LOTH in contributing to both factors, it is easy to see why many theorists who worry about naturalizing intentionality are attracted to LOTH.

As indicated previously, LOTH is almost completely silent about consciousness and the problem of qualia, the third mystery in Fodor's list in the quote above. But the naturalist's hope is that this problem too will be solved, if not by LOTH, then by something else. On the other hand, it is important to emphasize that LOTH is neutral about the naturalizability of consciousness/qualia. If it turns out that qualia cannot be naturalized, this would by no means show that LOTH is false or defective in some way. In fact, there are people who seem to think that LOTH may well turn out to be true even though qualia can perhaps not be naturalized (e.g., Block 1980, Chalmers 1996, McGinn 1991).

Finally, it should be emphasized that LOTH has no particular commitment to every symbolic activity's being conscious. Conscious thoughts and thinking may be the tip of a computational iceberg. Nevertheless, there are ways in which LOTH can be helpful for an account of state consciousness that seeks to explain a thought's being conscious in terms of a higher order thought which is about the first order thought. So, to the extent to which thought and thinking are conscious, to that extent LOTH can perhaps be viewed as providing some of the necessary resources for a naturalistic account of state consciousness—for elaboration see Rosenthal (1997) and Lycan (1997).

6. Arguments for LOTH

We have already seen two major arguments, perhaps historically the most important ones, for LOTH: First, we have noted that if LOTH is true then all the essential features of the common sense conception of propositional attitudes will be explicated in a naturalistic framework which is likely to be co-opted by scientific cognitive psychology, thus vindicating folk psychology. Second, we have discussed that, if true, LOTH would solve one of the mysteries about thinking minds: how is thinking (as characterized above) possible? How is rationality mechanically possible? Then we have also seen a third argument that LOTH would partially contribute to the project of naturalizing intentionality by offering an account of how the semantic properties of whole attitudes are fixed on the basis of their atomic constituents. But there have been many other arguments for LOTH. In this section, I will describe only those arguments that have been historically more influential and controversial.

6.1 Argument from Contemporary Cognitive Psychology

When Fodor first formulated LOTH with significant elaboration in his (1975), he introduced his major argument for it along with its initial formulation in the first chapter. It was basically this: our best scientific theories and models of different aspects of higher cognition assume a framework that requires a computational/representational medium for them to be true. More specifically, he analyzed the basic form of the information processing models developed to account for three types of cognitive phenomena: perception as the fixation of perceptual beliefs, concept learning as hypothesis formation and confirmation, and decision making as a form of representing and evaluating the consequences of possible actions carried out in a situation with a preordered set of preferences. He rightly pointed out that all these psychological models treated mental processes as computational processes defined over representations. Then he drew what seems to be the obvious conclusion: if these models are right in at least treating mental processes as computational, even if not in detail, then there must be a LOT over which they are defined, hence LOTH.

In Fodor's (1975), the arguments for different aspects of LOTH are diffused and the emphasis, with the book's slogan “no computation without representation”, is put on the RTM rather than on (B) or (C). But all the elements are surely there.

6.2 Argument from the Productivity of Thought

People seem to be capable of entertaining an infinite number of thoughts, at least in principle, although they in fact entertain only a finite number of them. Indeed, adults who speak a natural language are capable of understanding sentences they have never heard uttered before. Here is one: there is a big lake of melted gold on the dark side of the moon. I bet that you have never heard this sentence before, and yet you have no difficulty in understanding it: it is one you in fact likely believe to be false. But this sentence was arbitrary; there are infinitely many such sentences I can in principle utter and you can in principle understand. But to understand a sentence is to entertain the thought/proposition it expresses. So there are in principle infinitely many thoughts you are capable of entertaining. This is sometimes expressed by saying that we have an unbounded competence in entertaining different thoughts, even though we have a bounded performance. But this unbounded capacity is to be achieved by finite means. For instance, storing an infinite number of representations in our heads is out of the question: we are finite beings. If human cognitive capacities (capacities to entertain an unbounded number of thoughts, or to have attitudes towards an unbounded number of propositions) are productive in this sense, how is this to be explained on the basis of finitary resources?

The explanation LOTH offers is straightforward: postulate a representational system that satisfies at least (B1). Indeed, recursion is the only known way to produce an infinite number of symbols from a finite base. In fact, given LOTH, productivity of thought as a competence mechanism seems to be guaranteed.[20]
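
A short sketch of the point (Python, with an invented toy lexicon): a finite vocabulary plus a recursive rule of combination already yields an unbounded supply of distinct well-formed representations, which is all that productivity as a competence claim requires.

    # Illustrative sketch: unboundedly many sentences from a finite base via recursion.
    names = ["john", "mary"]
    verbs = ["loves", "fears"]

    def sentences(depth):
        # embedding under "believes-that" can be iterated, so each extra level of
        # depth yields new sentences from the same finite base
        basic = [f"{a} {v} {b}" for a in names for v in verbs for b in names]
        if depth == 0:
            return basic
        embedded = [f"{a} believes-that ({s})" for a in names for s in sentences(depth - 1)]
        return basic + embedded

    print(len(sentences(0)), len(sentences(1)), len(sentences(2)))   # 8 24 56: grows without bound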

6.3 Argument from the Systematicity and Compositionality of Thought

Systematicity of thought consists in the empirical fact that the ability to entertain certain thoughts is intrinsically connected to the ability to entertain certain others. Which ones? Thoughts that are related in a certain way. In what way? There is a certain initial difficulty in answering such questions. I think, partly because of this, Fodor (1987) and Fodor and Pylyshyn (1988), who are the original defenders of this kind of argument, first argue for the systematicity of language production and understanding: the ability to produce/understand certain sentences is intrinsically connected to the ability to produce/understand certain others. Given that a mature speaker is able to produce/understand a certain sentence in her native language, by psychological law, there always appear to be a cluster of other sentences that she is able to produce/understand. For instance, we don't find speakers who know how to express in their native language the fact that John loves the girl but not the fact that the girl loves John. This is apparently so, moreover, for expressions of any n-place relation.

Fodor and Pylyshyn bring out the force of this psychological fact by comparing learning languages the way we actually do with learning a language by memorizing a huge phrase book. In the phrase book model, there is nothing to prevent someone learning how to say ‘John loves the girl’ without learning how to say ‘the girl loves John.’ In fact, that is exactly the way some information booklets prepared for tourists help them to cope with their new social environment. You might, for example, learn from a phrase book how to say ‘I'd like to have a cup of coffee with sugar and milk’ in Turkish without knowing how to say/understand absolutely anything else in Turkish. In other words, the phrase book model of learning a language allows arbitrarily punctate linguistic capabilities. In contrast, a speaker's knowledge of her native language is not punctate, it is systematic. Accordingly, we do not find, by nomological necessity, native speakers whose linguistic capacities are punctate.

Now, how is this empirical truth (in fact, a law-like generalization) to be explained? Obviously if this is a general nomological fact, then learning one's native language cannot be modeled on the phrase book model. What is the alternative? The alternative is well known. Native speakers master the grammar and vocabulary of their language. But this is just to say that sentences are not atomic, but have syntactic constituent structure. If you have a vocabulary, the grammar tells you how to combine systematically the words into sentences. Hence, in this way, if you know how to construct a particular sentence out of certain words, you automatically know how to construct many others. If you view all sentences as atomic, then, as Fodor and Pylyshyn say, the systematicity of language production/understanding is a mystery, but if you acknowledge that sentences have syntactic constituent structure, systematicity of linguistic capacities is what you automatically get; it is guaranteed. This is the orthodox explanation of linguistic systematicity.

From here, according to Fodor and Pylyshyn, establishing the systematicity of thought as a nomological fact is one step away. If it is a law that the ability to understand a sentence is systematically connected to the ability to understand many others, then it is similarly a law that the ability to think a thought is systematically connected to the ability to think many others. For to understand a sentence is just to think the thought/proposition it expresses. Since, according to RTM, to think a certain thought is just to token a representation in the head that expresses the relevant proposition, the ability to token certain representations is systematically connected to the ability to token certain others. But then, this fact needs an adequate explanation too. The classical explanation LOTH offers is to postulate a system of representations with combinatorial syntax exactly as in the case of the explanation of the linguistic systematicity. This is what (B1) offers.[21] This seems to be the only explanation that does not make the systematicity of thought a miracle, and thus argues for the LOT hypothesis.
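
The contrast with the phrase book model can be made vivid with a toy sketch (illustrative only; the lexicon and the single combination rule are invented). Because sentences are built by a general rule from recurring constituents, the capacity to construct one sentence automatically brings with it the capacity to construct its systematic variants; nothing analogous holds if each sentence is stored as an unstructured atom.

    # Illustrative sketch: one general rule of combination over constituents.
    lexicon = {"nouns": {"john", "the-girl"}, "verbs": {"loves"}}

    def construct(subject, verb, obj):
        # the rule applies to any constituents of the right categories, so the
        # ability to build one sentence brings the ability to build its variants
        assert subject in lexicon["nouns"] and obj in lexicon["nouns"]
        assert verb in lexicon["verbs"]
        return (subject, verb, obj)

    print(construct("john", "loves", "the-girl"))    # ('john', 'loves', 'the-girl')
    print(construct("the-girl", "loves", "john"))    # ('the-girl', 'loves', 'john')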

However, thought is not only systematic but also compositional: systematically connected thoughts are also always semantically related in such a way that the thoughts so related seem to be composed out of the same semantic elements. For instance, the ability to think ‘John loves the girl’ is connected to the ability to think ‘the girl loves John’ but not to, say, ‘protons are made up of quarks’ or to ‘2+2=4.’ Why is this so? The answer LOTH gives is to postulate a combinatorial semantics in addition to a combinatorial syntax, where an atomic constituent of a mental sentence makes (approximately) the same semantic contribution to any complex mental expression in which it occurs. This is what Fodor and Pylyshyn call ‘the principle of compositionality’.[22]

In brief, it is an argument for LOTH that it offers a cogent and principled solution to the systematicity and compositionality of cognitive capacities by postulating a system of representations that has a combinatorial syntax and semantics, i.e., a system of representations that satisfies at least (B1).

6.4 Argument from the Systematicity of Thinking (Inferential Coherence)

Systematicity of thought does not seem to be restricted solely to the systematic ability to entertain certain thoughts. If the system of mental representations does have a combinatorial syntax, then there is a set of rules, psychosyntactic formation rules, so to speak, that govern the construction of well-formed expressions in the system. It is this fact, (B1), that guarantees that if you can form a mental sentence on the basis of certain rules, then you can also form many others on the basis of the same rules. The rules of combinatorial syntax determine the syntactic or formal structure of complex mental representations. This is the formative (or, formational) aspect of systematicity. But inferential thought processes (i.e., thinking) seem to be systematic too: the ability to make certain inferences is intrinsically connected to the ability to make certain others. For instance, you do not find minds that can infer ‘A’ from ‘A&B’ but cannot infer ‘C’ from ‘A&B&C.’ It seems to be a psychological fact that inferential capacities come in clusters that are homogeneous in certain respects. How is this fact (i.e., the inferential or transformational systematicity) to be explained?

As we have seen, the explanation LOTH offers depends on the exploitation of the notion of logical form or syntactic structure determined by the combinatorial syntax postulated for the representational system. The combinatorial syntax not only gives us a criterion of well-formedness for mental expressions, but it also defines the logical form or syntactic structure for each well-formed expression. The classical solution to inferential systematicity is to make the mental operations on representations sensitive to their form or structure, i.e., to insist on (B2). Since, from a syntactic view point, similarly formed expressions will have similar forms, it is possible to define a single operation which will apply to only certain expressions that have a certain form, say, only to conjunctions, or conditionals. This allows the LOT theorist to give homogeneous explanations of what appear to be homogeneous classes of inferential capacities. This is one of the greatest virtues of LOTH, hence provides an argument for it.
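
A brief sketch of the idea (Python; the tuple encoding is invented for the example): a single operation defined over syntactic form, here conjunction elimination, applies to every sentence of the right shape regardless of its topic, which is why the corresponding inferential abilities come as a homogeneous cluster.

    # Illustrative sketch: one structure-sensitive operation covering a whole
    # cluster of inferences, whatever the conjuncts happen to be about.
    def simplify(sentence):
        # conjunction elimination: from ("and", P, Q, ...) infer each conjunct
        if isinstance(sentence, tuple) and sentence[0] == "and":
            return list(sentence[1:])
        return []

    print(simplify(("and", "A", "B")))        # ['A', 'B']
    print(simplify(("and", "A", "B", "C")))   # ['A', 'B', 'C'], same rule, no extra learning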

The solution LOTH offers for what I called the problem of thinking, above, is connected to the argument here because the two phenomena are connected in a deep way. Thinking requires that the logico-semantic properties of a particular thought process be somehow causally implicated in the process (say, inferring that John is happy from knowing that if John is at the beach then John is happy and coming to realize that John is indeed at the beach). The systematicity of inferential thought processes then is based on the observation that if the agent is capable of making that particular inference, then she is capable of making many other somehow similarly organized inferences. But the idea of similar organization in this context seems to demand some sort of classification of thoughts independently of their particular content. But what can the basis of such a classification be? The only basis seems to be the logico-syntactic properties of thoughts, their form. Although it may feel a little odd to talk about syntactic properties of thoughts common-sensically understood, it seems that they are forced upon us by the very attempt to understand their semantic properties: how, for instance, could we explain the semantic content of the thought that if John is at the beach then he is happy without somehow appealing to its being a conditional? This is the point of contact between the two phenomena. Especially when the demands of naturalism are added to this picture, inferring a LOT (= a representational system satisfying (B)) realized in the brain becomes almost irresistible. Indeed, Rey (1995) doesn't resist and claims that, given the above observations, LOTH can be established on the basis of arguments that are not “merely empirical”. I leave it to the reader to evaluate whether mere critical reflection on our concepts of thought and thinking (along with certain mundane empirical observations about them) can be sufficient to establish LOTH.[23]

7. Objections to LOTH

There have been numerous arguments against LOTH. Some of them are directed more specifically against the Representational Theory of Mind (A), some against functionalist materialism (C). Here I will concentrate only on those arguments specifically targeting (B)—the most controversial component of LOTH.

7.1 Regress Arguments against LOTH

These arguments rely on the explanations offered by LOTH defenders for certain aspects of natural languages. In particular, many LOT theorists advert to LOTH to explain (1) how natural languages are learned, (2) how natural languages are understood, or (3) how the utterances in such languages can be meaningful. For instance, according to Fodor (1975), natural languages are learned by forming and confirming hypotheses about the translation of natural language sentences into Mentalese such as: ‘Snow is white’ is true in English if and only if P, where ‘P’ is a sentence in one's LOT. But to be able to do that, one needs a representational medium in which to form and confirm hypotheses—at least to represent the truth-conditions of natural language sentences. The LOT is such a medium. Again, natural languages are understood because, roughly, such an understanding consists in translating their sentences into one's Mentalese. Similarly, natural language utterances are meaningful in virtue of the meanings of corresponding Mentalese sentences.

The basic complaint is that in each of these cases, either the explanations generate a regress because the same sort of explanations ought to be given for how the LOT is learned, understood or can be meaningful, or else they are gratuitous because if a successful explanation can be given for LOT that does not generate a regress then it could and ought to be given for the natural language phenomena without introducing a LOT (see, e.g., Blackburn 1984). Fodor's response in (1975) is (1) that LOT is not learned, it's innate; (2) that it's understood in a different sense than the sense involved in natural language comprehension; (3) that LOT sentences acquire their meanings not in virtue of another meaningful language but in a completely different way, perhaps by standing in some sort of causal relation to what they represent or by having certain computational profiles (see above, §5.3). For many who have a Wittgensteinian bent, these replies are not likely to be convincing. But here the issues tend to concern RTM rather than (B).

Laurence and Margolis (1997) point out that the regress arguments depend on the assumption that LOTH is introduced only to explain (1)–(3). If it can be shown that there are many other empirical phenomena for which LOTH provides good explanations, then the regress arguments fail, because LOTH would then not be gratuitous. In fact, as we have seen above, there are plenty of such phenomena. But it is still important to realize that the sort of explanations proposed for the understanding of one's LOT (computational use/activity of LOT sentences with certain meanings) and for how LOT sentences can be meaningful (computational roles and/or nomic relations with the world) cannot be given for (1)–(3): it is unclear, for example, what it would be like to give a computational-role and/or nomic-relation account of the meanings of natural language utterances. (See Knowles 1998 for a reply to Laurence and Margolis 1997; Margolis and Laurence 1999 counter-reply to Knowles.)

7.2 Propositional Attitudes without Explicit Representations

Dennett, in his review of Fodor (1975), raised the following objection (cf. Fodor 1987: 21–23 for a similar discussion):

In a recent conversation with the designer of a chess-playing program I heard the following criticism of a rival program: “it thinks it should get its queen out early.” This ascribes a propositional attitude to the program in a very useful and predictive way, for as the designer went on to say, one can usefully count on chasing that queen around the board. But for all the many levels of explicit representation to be found in that program, nowhere is anything roughly synonymous with “I should get my queen out early” explicitly tokened. The level of analysis to which the designer's remark belongs describes features of the program that are, in an entirely innocent way, emergent properties of the computational processes that have “engineering reality.” I see no reason to believe that the relation between belief-talk and psychological talk will be any more direct. (Dennett 1981: 107)

The objection, as Fodor (1987: 22) points out, isn't that the program has a merely dispositional, or potential, belief that it should get its queen out early. Rather, the program actually operates on this belief, even though it is nowhere explicitly represented. There appear to be many other examples: in reasoning, for instance, we often follow inference rules like modus ponens or disjunctive syllogism without necessarily representing them explicitly.

The standard reply to such objections is to draw a distinction between the rules on the basis of which Mentalese data-structures are manipulated and the data-structures themselves (intuitively, the program/data distinction). LOTH is not committed to every rule's being explicitly represented. In fact, as a matter of nomological fact, in a computational device not every rule can be explicitly represented: some have to be hard-wired and, thus, implicit in this sense. In other words, LOTH permits but doesn't require that rules be explicitly represented. Data-structures, on the other hand, have to be explicitly represented: it is these that are formally manipulated by the rules, and no causal manipulation is possible without their explicit tokening. According to Fodor, if a propositional attitude is an actual episode in one's reasoning that plays a causal role, then LOTH is committed to the explicit representation of its content, which, as per (A2) and (B2), is causally implicated in the physical process realizing that reasoning. Dispositional propositional attitudes can then be accounted for in terms of an appropriate principle of inferential closure over explicitly represented propositional attitudes (cf. Lycan 1986).
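The program/data distinction can be illustrated with a minimal sketch (again mine, and merely illustrative). The Mentalese sentences appear as explicitly tokened data structures in a “belief box”; the inference rule, by contrast, is nowhere stored as a data structure but is implicit in the hard-wired control code that manipulates the data.

    # Sentences are explicitly tokened as data; the rule is implicit in the code.
    belief_box = [
        ('if', 'john_at_beach', 'john_happy'),   # explicitly tokened sentence
        'john_at_beach',                         # explicitly tokened sentence
    ]

    def inference_step(beliefs):
        """Modus ponens lives only in this control code, not in the data."""
        derived = []
        for b in beliefs:
            if isinstance(b, tuple) and b[0] == 'if' and b[1] in beliefs:
                derived.append(b[2])             # detach the consequent
        return beliefs + [s for s in derived if s not in beliefs]

    print(inference_step(belief_box))
    # The conclusion 'john_happy' is now explicitly tokened, though the rule
    # that produced it never was.

On this picture, ascribing to the system the belief that if John is at the beach then John is happy is licensed by an explicit token; ascribing to it an explicitly represented rule of modus ponens is not.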

Dennett's chess program certainly involves explicit representations of the chess board, the pieces, etc., and perhaps of some of the rules. Which rules are implicit and which are explicit depends on the empirical details of the program. Pointing out that some rules may be emergent from the implementation of explicit rules and data-structures does not suffice to undermine LOTH.

7.3 Explicit Representations without Propositional Attitudes

In any sufficiently complex computational system, there are bound to be many symbol manipulations with no obviously corresponding description at the level of propositional attitudes. For instance, when a multiplication program is run on a standard conventional computer, the steps of the program are translated into the computer's machine language and executed there; but at this level the operations apply to 1's and 0's, with no obvious way to map them onto the original numbers to be multiplied or onto the multiplication operation. So it seems that at the levels that, according to Dennett, have engineering reality, there are plenty of explicit tokenings of symbols, with appropriate operations over them, that don't correspond to anything like the propositional attitudes of folk psychology. In other words, there is plenty of symbolic activity that it would be wrong to attribute to the person: it is carried out by the person's subpersonal computational components rather than by the person herself. How are such cases to be ruled out? (Cf. Fodor 1987: 23–26 for a similar discussion.)
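The gap between the two levels can be conveyed with a rough sketch (mine). At the person level there is a single operation, multiplying 6 by 7; at a lower level the work is done by bit-shifts and conditional additions whose intermediate states have no natural description in terms of the original numbers or the multiplication operation, let alone in terms of propositional attitudes.

    def multiply_shift_and_add(x: int, y: int) -> int:
        """Machine-style multiplication by shifting and adding bits."""
        result = 0
        while y:
            if y & 1:          # test the lowest bit of y
                result += x    # conditional add
            x <<= 1            # shift x left (double it)
            y >>= 1            # shift y right (drop the low bit)
        return result

    print(multiply_shift_and_add(6, 7))   # 42; the intermediate bit-level steps
                                          # have no folk-psychological description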

Such cases are ruled out by an appropriate reading of (A1) and (B1): (A1) says that the person herself must stand in an appropriate computational relation to a Mentalese sentence, which, as per (B1), has a suitable syntax and semantics. Only then will the sentence constitute the person's having a propositional attitude. Not all explicit symbols in one's LOT will satisfy this. In other words, not every computational routine will correspond to a process appropriately described as, e.g., storage in the “belief-box”. Furthermore, as Fodor (1987) points out, LOTH would vindicate the common-sense view of propositional attitudes if they turn out to be computational relations to Mentalese sentences; it need not further be required that every explicit representation correspond to a propositional attitude.

There have been many other objections to LOTH in recent years, raised especially by connectionists: that LOT systems cannot handle certain cognitive tasks like perceptual pattern recognition; that they are too brittle and not sufficiently damage-resistant, failing to degrade gracefully when physically damaged or when given noisy or degraded input; that they are too rigid and deterministic, and so not well suited for modeling humans' capacity to satisfy multiple soft constraints gracefully; that they are not biologically realistic; and so on. (For useful discussions of these and many similar objections, see Rumelhart, McClelland and the PDP Research Group (1986), Fodor and Pylyshyn (1988), Horgan and Tienson (1996), Horgan (1997), McLaughlin and Warfield (1994), Bechtel and Abrahamsen (2002), Marcus (2001).)

8. The Connectionism/Classicism Debate

When Jerry Fodor published his influential book, The Language of Thought, in 1975, he called LOTH “the only game in town.” As we have seen, it was the philosophical articulation of the assumptions that underlay the new developments in the “cognitive sciences” after the demise of behaviorism. Fodor argued for the truth of LOTH on the basis of the successes of the best scientific theories of the time. Indeed, most of the scientific work in cognitive psychology, psycholinguistics, and AI assumed the framework of LOTH.

In the early 1980s, however, Fodor's claim that LOTH was the only game in town began to be challenged by researchers working on so-called connectionist networks. They claimed that connectionism offered a new and radically different alternative to classicism in modeling cognitive phenomena. (The name ‘classicism’ has since come to be applied to the LOTH framework.) Many classicists like Fodor, on the other hand, thought that connectionism was nothing but a slightly more sophisticated revival of the old and long-dead associationism, whose roots could be traced back to the early British empiricists. In 1988 Fodor and Pylyshyn (F&P) published a long article, “Connectionism and Cognitive Architecture: A Critical Analysis”, in which they launched a formidable attack on connectionism; it largely set the terms for the ensuing debate between connectionists and classicists.

F&P's forceful criticism consists in posing a dilemma for connectionists: either they fail to explain law-like cognitive regularities such as systematicity and productivity in an adequate way, or their models are mere implementations of classical architectures and hence fail to provide the radically new paradigm connectionists claim. This conclusion was also meant as a challenge: explain the cognitive regularities in question without postulating a LOT architecture.

First, let me present F&P's argument against connectionism in a somewhat reconstructed fashion. It will be helpful to characterize the debate by locating the issues according to the reactions many connectionists had to the premises of the argument.

F&P's Argument against Connectionism in their (1988) article:

  (i) Cognition essentially involves representational states and causal operations whose domain and range are these states; consequently, any scientifically adequate account of cognition should acknowledge such states and processes.
  (ii) Higher cognition (specifically, thought and thinking with propositional content), conceived in this way, has certain empirically interesting properties: in particular, it is a law of nature that cognitive capacities are productive, systematic, and inferentially coherent.
  (iii) Accordingly, the architecture of any proposed cognitive model is scientifically adequate only if it guarantees that cognitive capacities are productive, systematic, etc. This would amount to explaining, in the scientifically relevant and required sense, how it could be a law that cognition has these properties.
  (iv) The only way (i.e., the necessary condition) for a cognitive architecture to guarantee systematicity (etc.) is for it to involve a representational system for which (B) is true (see above). (Classical architectures necessarily satisfy (B).)
  (v) Either the architecture of connectionist models satisfies (B), or it does not.
  (vi) If it does, then connectionist models are implementations of the classical LOT architecture and have little new to offer (i.e., they fail to compete with classicism, and thus connectionism does not constitute a radically new way of modeling cognition).
  (vii) If it does not, then (since connectionism does not then guarantee systematicity, etc., in the required sense) connectionism is empirically false as a theory of the cognitive architecture.
  (viii) Therefore, connectionism is either true as an implementation theory, or empirically false as a theory of cognitive architecture.

The notion of cognitive architecture assumes special importance in this debate. F&P's characterization of the notion goes as follows:

The architecture of the cognitive system consists of the set of basic operations, resources, functions, principles, etc. (generally the sorts of properties that would be described in a “user's manual” for that architecture if it were available on a computer) whose domain and range are the representational states of the organism. (1988: 10)

Also, note that (B1) and (B2) are meta-architectural properties in that they are themselves conditions upon any specific architecture's being classical. They define classicism per se, but not any particular way of being classical. Classicism as such simply claims that whatever the particular cognitive architecture of the brain might turn out to be (whatever the specific grammar of Mentalese turns out to be), (B) must be true of it. F&P claim that this is the only way an architecture can be said to guarantee the nomological necessity of cognitive regularities like systematicity, etc. This seems to be the relevant and required sense in which a scientific explanation of cognition is required to guarantee the regularities—hence the third premise in their argument.

Connectionist responses have fallen into four classes:

  1. Deny premise (i). The rejection of (i) commits connectionists to what is sometimes called radical or eliminativist connectionism. Premise (i), as F&P point out, draws a general line between eliminativism and representationalism (or, intentional realism). There has been some controversy as to whether connectionism constitutes a serious challenge to the fundamental tenets of folk psychology.[24] Although it may still be too early for assessment,[25] the connectionist research program has been overwhelmingly cognitivist: most connectionists do in fact advance their models as having causally efficacious representational states, and explicitly endorse F&P's first premise. So they seem to accept intentional realism.[26]
  2. Accept the conclusion. This group may be seen as more or less accepting the cogency of the entire argument, and characterizes itself as implementationalist: its members hold that connectionist networks will implement a classical architecture or language of thought. According to this group, the appropriate niche for neural networks is closer to neuroscience than to cognitive psychology. They seem to view the importance of the program in terms of its prospects of closing the gap between the neurosciences and high-level cognitive theorizing. In this, many seem content to accept premise (vi). (See Marcus 2001 for a discussion of the virtues of placing connectionist models closer to the implementational level.)
  3. Deny premise (ii) or (iv). Some connectionists reject (ii) or (iv),[27] holding that there are no law-like cognitive regularities such as systematicity (etc.) to be explained, or that such regularities do not require a (B)-like architecture for their explanation. Those who question (ii) often question the empirical evidence for systematicity (etc.) and tend to ignore the challenge put forward by F&P. Those who question (iv) also often question (ii), or they argue that there can be very different sorts of explanations for systematicity and the like (e.g., evolutionary explanations, see Braddon-Mitchell and Fitzpatrick 1990), or they question the very notion of explanation involved (e.g., Matthews 1994). There are indeed quite a number of different kinds of arguments in the literature against these premises.[28] For a sampling, see Aydede (1995) and McLaughlin (1993b), who partitions the debate similarly.
  4. Deny premise (vi). The group of connectionists who have taken F&P's challenge most seriously has tended to reject premise (vi) of the argument, while accepting, on the face of it, the previous five premises (sometimes with reservations about productivity). They think that it is possible for connectionist representations to be syntactically structured in some sense without being classical. Prominent in this group are Smolensky (1990a, 1990b, 1995), van Gelder (1989, 1990, 1991), and Chalmers (1990, 1993).[29] Connectionists whose models lend support to this line include Elman (1989), Hinton (1990), Touretzky (1990), Pollack (1990), Barnden and Srinivas (1991), Shastri and Ajjanagadde (1993), Plate (1998), Hummel et al. (2004), Van Der Velde and De Kamps (2006), Barrett et al. (2008), and Sanjeevi and Bhattacharyya (2010).

Much of the recent debate between connectionists and classicists has focused on this option. How is it possible to reject premise (vi), which seems true by the very definition of classicism? The connectionists' answer, roughly put, is that when you devise a representational system whose satisfaction of (B) relies on a non-concatenative realization of the structural/syntactic complexity of representations, you have a non-classical system. (See especially Smolensky 1990a and van Gelder 1990.) Interestingly, some classicists like Fodor and McLaughlin (1990) (F&M) seem to agree. F&M stipulate that a system is classical only if the syntactic complexity of its representations is realized concatenatively, or, as it is sometimes put, explicitly:

We … stipulate that for a pair of expression types E1, E2, the first is a Classical constituent of the second only if the first is tokened whenever the second is tokened. (F&M 1990: 186)

The issues about how connectionists propose to obtain constituent structure non-concatenatively tend to be complex and technical, but the proposals generally exploit so-called distributed representations in novel ways. The essential idea behind most of them is to use vector (and tensor) algebra (involving superimposition, multiplication, etc. of vectors) to compose and decompose connectionist representations, which consist of patterns of activity across neuron-like units and so can be modeled as vectors. Such techniques produce representations whose constituent structure is, in an interesting sense, largely implicit: the constituents are not tokened explicitly when the representations are tokened, but they can be recovered by further operations on them. The interested reader should consult some of the pioneering work by Elman (1989), Hinton (1990), Smolensky (1990a, 1990b, 1995), Touretzky (1990), and Pollack (1990).
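A minimal numerical sketch, in the spirit of (but much simpler than) Smolensky's tensor product proposal, may help convey the idea; the particular vectors and names are illustrative only. Role vectors are bound to filler vectors by outer products, and a whole structure is the superposition of its bindings; the fillers are not tokened as separable parts of the result, yet each can be recovered by a further operation.

    import numpy as np

    rng = np.random.default_rng(0)

    # Orthonormal role vectors (say, the agent and patient roles of a relation).
    agent, patient = np.eye(2)

    # Filler vectors for the constituents.
    john = rng.normal(size=4)
    mary = rng.normal(size=4)

    # The structured representation: a single tensor in which neither filler
    # occurs as an explicit, concatenated part.
    john_loves_mary = np.outer(agent, john) + np.outer(patient, mary)

    # The constituents are nonetheless recoverable ("unbinding" by the roles):
    print(np.allclose(agent @ john_loves_mary, john))     # True
    print(np.allclose(patient @ john_loves_mary, mary))   # True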

F&M's more specific criticism, however, is this: with such techniques connectionists satisfy (B1) only in some “extended sense”, and they are incapable of satisfying (B2), precisely because their way of satisfying (B1) is committed to a non-concatenative realization of syntactic structure.

Some connectionists disagree (e.g., Chalmers 1993, Niklasson and van Gelder 1994; see also Browne 1998 and Browne and Sun 2001 for discussion and an overview of models): they claim that you can have structure-sensitive transformations or operations defined over representations whose syntactic structure is non-concatenatively realized. So, given the apparent agreement that non-concatenative realization is what makes a system non-classical, these connectionists claim that their models satisfy (B) in its entirety without implementing classical models.
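What such a structure-sensitive operation over a non-concatenative representation might look like can again be illustrated with a toy sketch (mine, not Chalmers' actual construction). Continuing the tensor-product example above, swapping the agent and patient roles of the bound representation is a single linear map applied to the whole tensor; the constituents are never extracted or explicitly tokened along the way.

    import numpy as np

    rng = np.random.default_rng(0)
    agent, patient = np.eye(2)
    john, mary = rng.normal(size=4), rng.normal(size=4)

    john_loves_mary = np.outer(agent, john) + np.outer(patient, mary)

    # A structure-sensitive operation: permute the role dimension in one step.
    swap_roles = np.array([[0.0, 1.0],
                           [1.0, 0.0]])
    mary_loves_john = swap_roles @ john_loves_mary

    # The transformed structure has mary as agent and john as patient, even
    # though neither constituent was ever unbound during the transformation.
    print(np.allclose(agent @ mary_loves_john, mary))     # True
    print(np.allclose(patient @ mary_loves_john, john))   # True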

The debate continues, and there is a growing literature built around the many issues it raises. Aydede (1997a) offers an extensive analysis of the debate between classicists and this group of connectionists, with special attention to its conceptual underpinnings. (See also Roth 2005, who argues that to the extent that connectionist models can successfully transform representations according to an algorithmic function, they count as executing a program in the sense relevant to classical program execution.) Aydede argues that both parties are wrong in assuming that concatenative realization is relevant to the characterization of LOTH. Part of the argument is that a concatenative realization of (B) is just that—a realization. The attentive reader may have noticed that there is nothing in the characterization of (B) that requires concatenative realization. Indeed, none of the major arguments for LOTH that turn on the need for (B) requires concatenation or the explicit realization of syntactic structure. In fact, it borders on confusion to tie LOTH essentially to such an implementation-level issue. If anything, this class of connectionist networks, if successful and generalizable across all of higher cognition, contributes to our understanding of how radically differently a LOT architecture could be implemented in neural networks. Indeed, if these models prove adequate for explaining the full range of human cognitive capacities, they would show how syntactically structured representations and structure-sensitive processes could be implemented in a radically new way. So research programs in this niche are by no means trivial or insignificant. But we need to be clear and careful about what minimally needs to be the case for LOTH to be true, and why.

On the other hand, it is by no means clear that these connectionist models are successful and generalizable (scalable). They have all proved to have serious limitations, which seem to be tied to their particular ways of implementing variable binding (syntactic structure) and structure-sensitive processing. For critical discussion, see Marcus (2001), Hadley (2009), and Browne and Sun (2001). Marcus in particular makes a strong and largely empirical case for why classical symbol systems are needed to explain human capacities for variable binding and generalization, and why existing connectionist models aren't up to the job of matching those capacities while remaining non-classical. Indeed, the trend in the last fifteen years seems to be towards developing hybrid systems that combine connectionist and classical symbol-processing models—see, for instance, the articles in Wermter and Sun (2000).[30]

Bibliography

  • Aizawa, K. (1994). “Representations without Rules, Connectionism and the Syntactic Argument.” Synthese 101(3): 465–492.
  • –––. (1997a). “Explaining Systematicity.” Mind and Language 12(2): 115–136.
  • –––. (1997b). “Exhibiting versus Explaining Systematicity: A Reply to Hadley and Hayward.” Minds and Machines 7(1): 39–55.
  • –––. (2003). The Systematicity Arguments, Kluwer Academic Publishers.
  • Aydede, Murat. (1995). “Connectionism and Language of Thought”, CSLI Technical Report, Stanford, CSLI, 95–195. (This is an early version of Aydede 1997 but contains quite a lot of expository material not contained in 1997.)
  • –––. (1997a). “Language of Thought: The Connectionist Contribution,” Minds and Machines, Vol. 7, No. 1, pp. 57–101.
  • –––. (1997b). “Has Fodor Really Changed His Mind on Narrow Content?”, Mind and Language, 12(3–4): 422–458.
  • –––. (1998). “Fodor On Concepts and Frege Puzzles,” Pacific Philosophical Quarterly, 79(4): 289–294.
  • –––. (2000). “On the Type/Token Relation of Mental Representations,” Facta Philosophica: International Journal for Contemporary Philosophy, 2(1): 23–49.
  • –––. (2005). “Computation and Functionalism: Syntactic Theory of Mind Revisited” in Gürol Irzik and G. Güzeldere (eds.), Boston Studies in the History and Philosophy of Science, Dordrecht: Kluwer Academic Publishers.
  • Aydede, Murat, and Güven Güzeldere (2005). “Cognitive Architecture, Concepts, and Introspection: An Information-Theoretic Solution to the Problem of Phenomenal Consciousness”, Noûs, 39(2): 197–255.
  • Armstrong, D.M. (1973). Belief, Truth and Knowledge, Cambridge: Cambridge University Press.
  • –––. (1980). The Nature of Mind, Ithaca, NY: Cornell University Press.
  • Bader, S. and B. Hitzler (2005). “Dimensions of neural-symbolic integration—a structured survey” in We Will Show Them: Essays in Honour of Dov Gabbay, edited by S. Artemov and H. Barringer and A. S. d'Avila Garcez and L.C. Lamb and J. Woods, King's College Publications.
  • Barnden, J. and K. Srinivas (1991). “Encoding techniques for complex information structures in connectionist systems,” Connection Science, 3(3): 269–315.
  • Barrett, L., J. Feldman, and L. Mac Dermed (2008). “A (somewhat) new solution to the variable binding problem,” Neural Computation, Vol. 20, pp. 2361–2378.
  • Barsalou, L. W. (1993). “Flexibility, Structure, and Linguistic Vagary in Concepts: Manifestations of a Compositional System of Perceptual Symbols” in Theories of Memory, edited by A. Collins, S. Gathercole, M. Conway and P. Morris, Hillsdale, NJ: Lawrence Erlbaum Associates.
  • –––. (1999). “Perceptual Symbol Systems.” Behavioral and Brain Sciences 22(4).
  • Barsalou, L. W., W. Yeh, B. J. Luka, K. L. Olseth, K. S. Mix, and L.-L. Wu. (1993). “Concepts and Meaning”, Chicago Linguistics Society 29.
  • Barsalou, L. W., and J. J. Prinz. (1997). “Mundane Creativity in Perceptual Symbol Systems” in Creative Thought: An Investigation of Conceptual Structures and Processes, edited by T. B. Ward, S. M. Smith and J. Vaid, Washington, DC: American Psychological Association.
  • Barwise, Jon and John Etchemendy (1995). Hyperproof, Stanford, Palo Alto: CSLI Publications.
  • Barwise, J. and J. Perry (1983). Situations and Attitudes, Cambridge, Massachusetts: MIT Press.
  • Bechtel, W. and A. Abrahamsen (2002). Connectionism and the Mind: An Introduction to Parallel Processing in Networks, 2nd Edition, Oxford, UK: Basil Blackwell.
  • Blackburn, S. (1984). Spreading the Word, Oxford, UK: Oxford University Press.
  • Block, Ned. (1980). “Troubles with Functionalism” in Readings in Philosophy of Psychology, N. Block (ed.), Vol.1, Cambridge, Massachusetts: Harvard University Press, 1980. (Originally appeared in Perception and Cognition: Issues in the Foundations of Psychology, Minnesota Studies in the Philosophy of Science, C.W. Savage (ed.), Minneapolis: The University of Minnesota Press, 1978.)
  • –––. (ed.) (1981). Imagery. Cambridge, Massachusetts: MIT Press.
  • –––. (1983a). “Mental Pictures and Cognitive Science,” Philosophical Review 93: 499–542. (Reprinted in Mind and Cognition, W.G. Lycan (ed.), Oxford, UK: Basil Blackwell, 1990.)
  • –––. (1983b). “The Photographic Fallacy in the Debate about Mental Imagery”, Nous 17: 651–62.
  • ––– (1986). “Advertisement for a Semantics for Psychology” in Studies in the Philosophy of Mind: Midwest Studies in Philosophy, Vol.10, P. French, T. Euhling and H. Wettstein (eds.), Minneapolis: University of Minnesota Press.
  • Braddon-Mitchell, David and John Fitzpatrick (1990). “Explanation and the Language of Thought,” Synthese 83: 3–29.
  • Braddon-Mitchell, D. and F. Jackson (2007). Philosophy of Mind and Cognition: An Introduction, Blackwell.
  • Browne, A. (1998). “Performing a symbolic inference step on distributed representations”, Neurocomputing, 19(1–3): 23–34.
  • Browne, A., and R. Sun (1999). “Connectionist variable binding”, Expert Systems, 16(3): 189–207.
  • –––. (2001). “Connectionist inference models”, Neural Networks, 14(10): 1331–1355.
  • Brentano, Franz (1874/1973). Psychology from an Empirical Standpoint, A. Rancurello, D. Terrell and L. McAlister (trans.), London: Routledge and Kegan Paul.
  • Butler, Keith (1991). “Towards a Connectionist Cognitive Architecture,” Mind and Language, Vol. 6, No. 3, pp. 252–72.
  • Chalmers, David J. (1990). “Syntactic Transformations on Distributed Representations,” Connection Science, Vol. 2.
  • –––. (1993). “Connectionism and Compositionality: Why Fodor and Pylyshyn Were Wrong,” Philosophical Psychology 6: 305–319.
  • –––. (1996). The Conscious Mind: In Search of a Fundamental Theory, Oxford, UK: Oxford University Press.
  • Churchland, Patricia Smith (1986). Neurophilosophy: Toward a Unified Science of Mind-Brain, Cambridge, Massachusetts: MIT Press.
  • –––. (1987). “Epistemology in the Age of Neuroscience,” Journal of Philosophy, Vol. 84, No. 10, pp. 544–553.
  • Churchland, Patricia S. and Terrence J. Sejnowski (1989). “Neural Representation and Neural Computation” in Neural Connections, Neural Computation, L. Nadel, L.A. Cooper, P. Culicover and R.M. Harnish (eds.), Cambridge, Massachusetts: MIT Press, 1989.
  • Churchland, Paul M. (1990). A Neurocomputational Perspective: The Nature of Mind and the Structure of Science, Cambridge, Massachusetts: MIT Press.
  • –––. (1981). “Eliminative Materialism and the Propositional Attitudes,” Journal of Philosophy 78: 67–90.
  • Churchland, Paul M. and P.S. Churchland (1990). “Could a Machine Think?,” Scientific American, Vol. 262, No. 1, pp. 32–37.
  • Clark, Andy (1988). “Thoughts, Sentences and Cognitive Science,” Philosophical Psychology, Vol. 1, No. 3, pp. 263–278.
  • –––. (1989a). “Beyond Eliminativism,” Mind and Language, Vol. 4, No. 4, pp. 251–279.
  • –––. (1989b). Microcognition: Philosophy, Cognitive Science, and Parallel Distributed Processing, Cambridge, Massachusetts: MIT Press.
  • –––. (1990). “Connectionism, Competence, and Explanation,” British Journal for Philosophy of Science, 41: 195–222.
  • –––. (1991). “Systematicity, Structured Representations and Cognitive Architecture: A Reply to Fodor and Pylyshyn” in Connectionism and the Philosophy of Mind, Terence Horgan and John Tienson (eds.), Studies in Cognitive Systems (Volume 9), Dordrecht: Kluwer Academic Publishers, 1991.
  • –––. (1994). “Language of Thought (2)” in A Companion to the Philosophy of Mind edited by S. Guttenplan, Oxford, UK: Basil Blackwell, 1994.
  • Cowie, F. (1998). What's Within? Nativism Reconsidered. Oxford, UK, Oxford University Press.
  • Cummins, Robert. (1986). “Inexplicit Information” in The Representation of Knowledge and Belief, M. Brand and R.M. Harnish (eds.), Tucson, Arizona: Arizona University Press, 1986.
  • –––. (1989). Meaning and Mental Representation, Cambridge, Massachusetts: MIT Press.
  • –––. (1996). Representations, Targets, and Attitudes, Cambridge, Massachusetts: MIT Press.
  • Cummins, Robert and Georg Schwarz (1987). “Radical Connectionism,” The Southern Journal of Philosophy, Vol. XXVI, Supplement.
  • Davidson, Donald (1984). Inquiries into Truth and Interpretation, Oxford: Clarendon Press.
  • Davies, Martin (1989). “Connectionism, Modularity, and Tacit Knowledge,” British Journal for the Philosophy of Science 40: 541–555.
  • –––. (1991). “Concepts, Connectionism, and the Language of Thought,” in Philosophy and Connectionist Theory, W. Ramsey, S.P. Stich and D.E. Rumelhart (eds.), Hillsdale, NJ: Lawrence Erlbaum, 1991.
  • –––. (1995). “Two Notions of Implicit Rules,” Philosophical Perspectives 9: 153–83.
  • Dennett, D.C. (1978). “Two Approaches to Mental Images” in Brainstorms: Philosophical Essays on Mind and Psychology, Cambridge, Massachusetts: MIT Press, 1981.
  • –––. (1981). “Cure for the Common Code” in Brainstorms: Philosophical Essays on Mind and Psychology, Cambridge, Massachusetts: MIT Press, 1981. (Originally appeared in Mind, April 1977.)
  • –––. (1986). “The Logical Geography of Computational Approaches: A View from the East Pole” in The Representation of Knowledge and Belief, Myles Brand and Robert M. Harnish (eds.), Tucson: The University of Arizona Press, 1986.
  • –––. (1991a). “Real Patterns,” Journal of Philosophy, Vol. LXXXVIII, No. 1, pp. 27–51.
  • –––. (1991b). “Mother Nature Versus the Walking Encyclopedia: A Western Drama” in Philosophy and Connectionist Theory, W. Ramsey, S.P. Stich and D.E. Rumelhart (eds.), Lawrence Erlbaum Associates.
  • Descartes, R. (1637/1970). “Discourse on the Method” in The Philosophical Works of Descartes, Vol.I, E.S. Haldane and G.R.T. Ross (trans.), Cambridge, UK: Cambridge University Press.
  • Devitt, Michael (1990). “A Narrow Representational Theory of the Mind,” Mind and Cognition, W.G. Lycan (ed.), Oxford, UK: Basil Blackwell, 1990.
  • –––. (1996). Coming to our Senses: A Naturalistic Program for Semantic Localism, Cambridge, UK: Cambridge University Press.
  • Devitt, Michael and Sterelny, Kim (1987). Language and Reality: An Introduction to the Philosophy of Language, Cambridge, Massachusetts: MIT Press.
  • Dretske, Fred (1981). Knowledge and the Flow of Information, Cambridge, Massachusetts: MIT Press.
  • –––. (1988). Explaining Behavior, Cambridge, Massachusetts: MIT Press.
  • Elman, Jeffrey L. (1989). “Structured Representations and Connectionist Models”, Proceedings of the Eleventh Annual Meeting of the Cognitive Science Society, Ann Arbor, Michigan, pp.17–23.
  • Field, Hartry H. (1972). “Tarski's Theory of Truth”, Journal of Philosophy, 69: 347–75.
  • –––. (1978). “Mental Representation”, Erkenntnis 13, 1, pp.9–61. (Also in Mental Representation: A Reader, S.P. Stich and T.A. Warfield (eds.), Oxford, UK: Basil Blackwell, 1994. References in the text are to this edition.)
  • Fodor, Jerry A. (1975). The Language of Thought, Cambridge, Massachusetts: Harvard University Press.
  • –––. (1978). “Propositional Attitudes” in RePresentations: Philosophical Essays on the Foundations of Cognitive Science, J.A. Fodor, Cambridge, Massachusetts: MIT Press, 1981. (Originally appeared in The Monist 64, No.4, 1978.)
  • –––. (1978a). “Computation and Reduction” in RePresentations: Philosophical Essays on the Foundations of Cognitive Science, J.A. Fodor, Cambridge, MA: MIT Press. (Originally appeared in Minnesota Studies in the Philosophy of Science: Perception and Cognition, Vol. 9, W. Savage (ed.), 1978.)
  • –––. (1978b). “Tom Swift and His Procedural Grandmother,” Cognition, Vol. 6. (Also in RePresentations: Philosophical Essays on the Foundations of Cognitive Science, J.A. Fodor, Cambridge, Massachusetts: MIT Press, 1981.)
  • –––. (1980). “Methodological Solipsism Considered as a Research Strategy in Cognitive Psychology”, Behavioral and Brain Sciences 3, 1, 1980. (Also in RePresentations: Philosophical Essays on the Foundations of Cognitive Science, J.A. Fodor, Cambridge, MA: MIT Press, 1981. References in the text are to this edition.)
  • –––. (1981a). RePresentations: Philosophical Essays on the Foundations of Cognitive Science, Cambridge, Massachusetts: MIT Press.
  • –––. (1981b), “Introduction: Something on the State of the Art” in RePresentations: Philosophical Essays on the Foundations of Cognitive Science, J.A. Fodor, Cambridge, Massachusetts: MIT Press, 1981.
  • –––. (1983). The Modularity of Mind, Cambridge, Massachusetts: MIT Press.
  • –––. (1985). “Fodor's Guide to Mental Representation: The Intelligent Auntie's Vade-Mecum”, Mind 94, 1985, pp.76–100. (Also in A Theory of Content and Other Essays, J.A. Fodor, Cambridge, Massachusetts: MIT Press. References in the text are to this edition.)
  • –––. (1986). “Banish DisContent” in Language, Mind, and Logic, J. Butterfield (ed.), Cambridge, UK: Cambridge University Press, 1986. (Also in Mind and Cognition, William Lycan (ed.), Oxford, UK: Basil Blackwell, 1990.)
  • –––. (1987). Psychosemantics: The Problem of Meaning in the Philosophy of Mind, Cambridge, Massachusetts: MIT Press.
  • –––. (1989). “Substitution Arguments and the Individuation of Belief” in A Theory of Content and Other Essays, J. Fodor, Cambridge, Massachusetts: MIT Press, 1990. (Originally appeared in Method, Reason and Language, G. Boolos (ed.), Cambridge, UK: Cambridge University Press, 1989.)
  • –––. (1990). A Theory of Content and Other Essays, Cambridge, Massachusetts: MIT Press.
  • –––. (1991). “Replies” (Ch.15) in Meaning in Mind: Fodor and his Critics, B. Loewer and G. Rey (eds.), Oxford, UK: Basil Blackwell, 1991.
  • –––. (2001). “Doing without What's Within: Fiona Cowie's Critique of Nativism.” Mind: 110(437) 99–148.
  • –––. (2008). LOT 2: The Language of Thought Revisited, Oxford: Oxford University Press.
  • Fodor, Jerry A. and Ernest Lepore (1991). “Why Meaning (Probably) Isn't Conceptual Role?”, Mind and Language, Vol. 6, No. 4, pp. 328–43.
  • Fodor, Jerry A. and B. McLaughlin (1990). “Connectionism and the Problem of Systematicity: Why Smolensky's Solution Doesn't Work,” Cognition 35: 183–204.
  • Fodor, Jerry A. and Zenon W. Pylyshyn (1988). “Connectionism and Cognitive Architecture: A Critical Analysis” in S. Pinker and J. Mehler, eds., Connections and Symbols, Cambridge, Massachusetts: MIT Press (A Cognition Special Issue).
  • Grice, H.P. (1957). “Meaning”, Philosophical Review, 66: 377–88.
  • Hadley, R. F. (1995). “The ‘Explicit-Implicit’ Distinction.” Minds and Machines 5(2): 219–242.
  • –––. (1997). “Cognition, Systematicity and Nomic Necessity.” Mind and Language 12(2): 137–153.
  • –––. (1997). “Explaining Systematicity: A Reply to Kenneth Aizawa.” Minds and Machines 7(4): 571–579.
  • –––. (1999). “Connectionism and Novel Combinations of Skills: Implications for Cognitive Architecture.” Minds and Machines 9(2): 197–221.
  • –––. (2009). “The problem of rapid variable creation,” Neural Computation, 21: 510–32.
  • Hadley, R. F. and M. B. Hayward (1997). “Strong Semantic Systematicity from Hebbian Connectionist Learning.” Minds and Machines 7(1): 1–37.
  • Harman, Gilbert (1973). Thought, Princeton University Press.
  • Haugeland, John (1981). “The Nature and Plausibility of Cognitivism,” Behavioral and Brain Sciences I, 2: 215–60 (with peer commentary and replies).
  • –––. (1985). Artificial Intelligence: The Very Idea, Cambridge, Massachusetts: MIT Press.
  • Hinton, Geoffrey (1990). “Mapping Part-Whole Hierarchies into Connectionist Networks,” Artificial Intelligence, Vol. 46, Nos. 1–2, (Special Issue on Connectionist Symbol Processing).
  • Horgan, T. E. and J. Tienson (1996). Connectionism and the Philosophy of Psychology, Cambridge, Massachusetts: MIT Press.
  • Horgan, T. (1997). “Connectionism and the Philosophical Foundations of Cognitive Science.” Metaphilosophy 28(1–2): 1–30.
  • Hummel, J. E., Holyoak, K. J., Green, C., Doumas, L. A. A., Devnich, D., Kittur, A., & Kalar, D.J. (2004). A Solution to the Binding Problem for Compositional Connectionism. In S.D. Levy & R. Gayler: Compositional Connectionism in Cognitive Science: Papers from the AAAI Fall Symposium (pp. 31–34). Menlo Park, CA: AAAI Press.
  • Jacob, P. (1997). What Minds Can Do: Intentionality in a Non-Intentional World. Cambridge, UK, Cambridge University Press.
  • Kirsh, D. (1990). “When Is Information Explicitly Represented?” in Information, Language and Cognition. P. Hanson (ed.), University of British Columbia Press.
  • Knowles, J. (1998). “The Language of Thought and Natural Language Understanding.” Analysis 58(4): 264–272.
  • Kosslyn, S.M. (1980). Image and Mind. Cambridge, Massachusetts: Harvard University Press.
  • –––. (1981). “The Medium and the Message in Mental Imagery: A Theory” in Imagery, N. Block (ed.), Cambridge, Massachusetts: MIT Press, 1981.
  • –––. (1994). Image and Brain, Cambridge, Massachusetts: MIT Press.
  • Kulvicki, J. (2004). “Isomorphism in information-carrying systems”, Pacific Philosophical Quarterly 85(4): 380–395.
  • –––. (2006). On Images: Their Structure and Content, Oxford: Clarendon Press.
  • Laurence, Stephen and Eric Margolis (1997). “Regress Arguments Against the Language of Thought”, Analysis, Vol. 57, No. 1.
  • –––. (2002). “Radical Concept Nativism.” Cognition 86: 22–55.
  • Leeds, S. (2002). “Perception, Transparency, and the Language of Thought.” Noûs 36(1): 104–129.
  • Lewis, David (1972). “Psychophysical and Theoretical Identifications,” Australasian Journal of Philosophy, 50(3): 249–58. (Also in Readings in Philosophy of Psychology, Ned Block (ed.), Vol. 1, Cambridge, Massachusetts: Harvard University Press, 1980.)
  • –––. (1994). “Reduction of Mind” in A Companion to the Philosophy of Mind, edited by Samuel Guttenplan, Oxford: Blackwell, pp. 412–31.
  • Loar, Brian F. (1982a). Mind and Meaning, Cambridge, UK: Cambridge University Press.
  • –––. (1982b). “Must Beliefs Be Sentences?” in Proceedings of the Philosophy of Science Association for 1982, Asquith, P. and T. Nickles (eds.), East Lansing, Michigan, 1983.
  • Lycan, William G. (1981). “Toward a Homuncular Theory of Believing,” Cognition and Brain Theory 4(2): 139–159.
  • –––. (1986). “Tacit Belief” in Belief: Form, Content, and Function, R. Bogdan (ed.), Oxford, UK: Oxford University Press.
  • –––. (1993). “A Deductive Argument for the Representational Theory of Thinking,” Mind and Language, Vol. 8, No. 3, pp. 404–22.
  • –––. (1997). “Consciousness as Internal Monitoring” in The Nature of Consciousness: Philosophical Debates, edited by N. Block, O. Flanagan and G. Güzeldere, Cambridge, Massachusetts: MIT Press.
  • Marcus, G. F. (1998). “Can connectionism save constructivism?” Cognition 66: 153–182.
  • –––. (1998). “Rethinking Eliminative Connectionism.” Cognitive Psychology 37: 243–282.
  • –––. (2001). The Algebraic Mind: Integrating Connectionism and Cognitive Science. Cambridge, MA, MIT Press.
  • Margolis, Eric (1998). “How to Acquire a Concept?”, Mind and Language.
  • Margolis, E. and S. Laurence (1999). “Where the Regress Argument Still Goes Wrong: Reply to Knowles.” Analysis 59(4): 321–327.
  • –––. (2001). “The Poverty of the Stimulus Argument.” British Journal for the Philosophy of Science 52: 217–276.
  • ––– (forthcoming-a). “Learning Matters: The Role of Learning in Concept Acquisition.”
  • –––. (forthcoming-b). “The Nativist Manifesto.”
  • Markic, O. (2001). “Is Language of Thought a Conceptual Necessity?” Acta Analytica 16(26): 53–60.
  • Marr, David (1982). Vision, San Francisco: W. H. Freeman.
  • Martinez, F. and J. Ezquerro Martinez (1998). “Explicitness with Psychological Ground.” Minds and Machines 8(3): 353–374.
  • Matthews, Robert J. (1994). “Three-Concept Monte: Explanation, Implementation and Systematicity”, Synthese, Vol. 101, No. 3, pp. 347–63.
  • McGinn, Colin (1989). Mental Content, Oxford: Blackwell.
  • –––. (1991). The Problem of Consciousness, Oxford, UK: Basil Blackwell.
  • McLaughlin, B.P. (1993a). “The Connectionism/Classicism Battle to Win Souls,” Philosophical Studies 71: 163–90.
  • –––. (1993b). “Systematicity, Conceptual Truth, and Evolution,” in Philosophy and Cognitive Science, C. Hookway and D. Peterson (eds.), Royal Institute of Philosophy, Supplement No. 34.
  • McLaughlin, B.P. and Ted Warfield (1994). “The Allures of Connectionism Reexamined”, Synthese 101, pp. 365–400
  • Millikan, Ruth Garrett (1984). Language, Thought, and Other Biological Categories: New Foundations for Realism, Cambridge, Massachusetts: MIT Press.
  • –––. (1993). White Queen Psychology and Other Essays for Alice, Cambridge, Massachusetts: MIT Press.
  • Niklasson, L. and T. van Gelder (1994). “On Being Systematically Connectionist,” Mind and Language, 9(3): 288–302
  • Papineau, D. (1987). Reality and Representation, Oxford, UK: Basil Blackwell.
  • Perry, John and David Israel (1991). “Fodor and Psychological Explanations” in Meaning in Mind: Fodor and his Critics, B. Loewer and G. Rey (eds.), Oxford, UK: Basil Blackwell, 1991.
  • Phillips, S. (2002). “Does Classicism Explain Universality?” Minds and Machines 12(3): 423–434.
  • Piccinini, G. (2008). “Computers,” Pacific Philosophical Quarterly, 89:32 –73.
  • Pinker, S., and A. Prince (1988). “On language and connectionism: Analysis of a parallel distributed processing model of language acquisition,” Cognition (special issue on Connections and Symbols) 28: 73–193.
  • Plate, Tony A. (1998). “Structured operations with distributed vector representations” in Keith Holyoak, Dedre Gentner, and Boicho Kokinov (eds.), Advances in Analogy Research: Integration of Theory and Data from the Cognitive, Computational, and Neural Sciences, NBU Series in Cognitive Science, Sofia: New Bulgarian University.
  • Pollack, J.B. (1990). “Recursive Distributed Representations,” Artificial Intelligence, Vol.46, Nos.1–2, (Special Issue on Connectionist Symbol Processing).
  • Prinz, J. (2002). Furnishing the Mind: Concepts and Their Perceptual Basis. Cambridge, MA, MIT Press.
  • Putnam, Hilary (1988), Representation and Reality, Cambridge, Massachusetts: MIT Press.
  • Pylyshyn, Z.W. (1978). “Imagery and Artificial Intelligence” in Perception and Cognition. W. Savage (ed.), University of Minnesota Press. (Reprinted in Readings in the Philosophy of Psychology, N. Block (ed.), Cambridge, Massachusetts: MIT Press, 1980.)
  • Pylyshyn, Z. W. (1984). Computation and Cognition: Toward a Foundation for Cognitive Science, Cambridge, Massachusetts: MIT Press.
  • Ramsey, F.P. (1931). “General Propositions and Causality” in The Foundations of Mathematics, New York: Harcourt Brace, pp. 237–55.
  • Ramsey, W., S. Stich and J. Garon (1991). “Connectionism, Eliminativism and the Future of Folk Psychology,” in Philosophy and Connectionist Theory, W. Ramsey, D. Rumelhart and Stephen Stich (eds.), Hillsdale, NJ: Lawrence Erlbaum.
  • Rescorla, M. (2009a). “Cognitive maps and the language of thought,” The British Journal for the Philosophy of Science, 60 (2): 377–407.
  • –––. (2009b). “Predication and cartographic representation,” Synthese, 169:175–200.
  • Rey, Georges (1981). “What are Mental Images?” in Readings in the Philosophy of Psychology, N. Block (ed.), Vol. 2, Cambridge, Massachusetts: Harvard University Press, 1981.
  • –––. (1991). “An Explanatory Budget for Connectionism and Eliminativism” in Connectionism and the Philosophy of Mind, Terence Horgan and John Tienson (eds.), Studies in Cognitive Systems (Volume 9), Dordrecht: Kluwer Academic Publishers.
  • –––. (1992). “Sensational Sentences Switched”, Philosophical Studies 67: 73–103.
  • –––. (1993). “Sensational Sentences” in Consciousness, M. Davies and G. Humphrey (eds.), Oxford, UK: Basil Blackwell, pp. 240–57.
  • –––. (1995). “A Not ‘Merely Empirical’ Argument for a Language of Thought,” in Philosophical Perspectives 9, J. Tomberlin (ed.), pp. 201–222.
  • –––. (1997). Contemporary Philosophy of Mind: A Contentiously Classical Approach, Oxford, UK: Basil Blackwell.
  • Rosenthal, D.M. (1997). “A Theory of Consciousness” in The Nature of Consciousness: Philosophical Debates, edited by N. Block, O. Flanagan and G. Güzeldere, Cambridge, Massachusetts: MIT Press.
  • Roth, M. (2005). “Program Execution in Connectionist Networks,” Mind & Language, 20(4): 448–467.
  • Rumelhart, D.E. and J.L. McClelland (1986). “PDP Models and General Issues in Cognitive Science,” in Parallel Distributed Processing, Vol.1, D.E. Rumelhart, J.L. McClelland, and the PDP Research Group, Cambridge, Massachusetts: MIT Press, 1986.
  • Rumelhart, D.E., J.L. McClelland, and the PDP Research Group (1986). Parallel Distributed Processing, (Vols. 1&2), Cambridge, Massachusetts: MIT Press.
  • Rupert, R. D. (1999). “On the Relationship between Naturalistic Semantics and Individuation Criteria for Terms in a Language of Thought,” Synthese, 117: 95–131.
  • –––. (2008). “Frege's puzzle and Frege cases: Defending a quasi-syntactic solution,” Cognitive Systems Research, 9: 76–91.
  • Sanjeevi, S. and P. Bhattacharyya (2010). “Connectionist predicate logic model with parallel execution of rule chain” in Proceedings of the International Conference and Workshop on Emerging Trends in Technology (ICWET 2010) TCET, Mumbai, India (2010).
  • Schiffer, Stephen (1981). “Truth and the Theory of Content” in Meaning and Understanding, H. Parret and J. Bouveresse (eds.), Berlin: Walter de Gruyter, 1981.
  • Searle, John R. (1980). “Minds, Brains, and Programs” Behavioral and Brain Sciences III, 3: 417–24.
  • –––. (1984). Minds, Brains and Science, Cambridge, Massachusetts: Harvard University Press.
  • –––. (1990). “Is the Brain a Digital Computer?”, Proceedings and Addresses of the APA, Vol. 64, No. 3, November 1990.
  • –––. (1992). The Rediscovery of Mind, Cambridge, Massachusetts: MIT Press.
  • Sehon, S. (1998). “Connectionism and the Causal Theory of Action Explanation.” Philosophical Psychology 11(4): 511–532.
  • Shastri, L. (2006). “Comparing the neural blackboard and the temporal synchrony-based SHRUTI architecture,” Behavioral and Brain Science, 29: 84–86.
  • Shastri, L. and A. Ajjanagadde (1993). “From simple associations to systematic reasoning: A connectionist representation of rules, variables and dynamic bindings using temporal synchrony,” Behavioral and Brain Sciences, Vol. 16, pp. 417–94
  • Shepard, R. and Cooper, L. (1982). Mental Images and their Transformations. Cambridge, Massachusetts: MIT Press.
  • Smolensky, Paul (1988). “On the Proper Treatment of Connectionism,” Behavioral and Brain Sciences 11: 1–23.
  • –––. (1990a). “Connectionism, Constituency, and the Language of Thought” in Meaning in Mind: Fodor and His Critics, B. Loewer and G. Rey (eds.), Oxford, UK: Basil Blackwell, 1991.
  • –––. (1990b). “Tensor Product Variable Binding and the Representation of Symbolic Structures in Connectionist Systems,” Artificial Intelligence, Vol. 46, Nos. 1–2, (Special Issue on Connectionist Symbol Processing), November 1990.
  • –––. (1995). “Constituent Structure and Explanation in an Integrated Connectionist/Symbolic Cognitive Architecture” in Connectionism: Debates on Psychological Explanation, C. Macdonald and G. Macdonald (eds.), Oxford, UK: Basil Blackwell, 1995.
  • Schneider, S. (2009). “The Nature of Symbols in the Language of Thought,” Mind and Language, 24(5): 523–553.
  • Stalnaker, Robert C. (1984). Inquiry, Cambridge, Massachusetts: MIT Press.
  • Sterelny, K. (1986). “The Imagery Debate”, Philosophy of Science 53: 560–83. (Reprinted in Mind and Cognition, W. Lycan (ed.), Oxford, UK: Basil Blackwell, 1990.)
  • –––. (1990). The Representational Theory of Mind, Cambridge, Massachusetts: MIT Press.
  • Stich, Stephen (1983). From Folk Psychology to Cognitive Science: The Case against Belief, Cambridge, Massachusetts: MIT Press.
  • Tarski, Alfred (1956). “The Concept of Truth in Formalized Languages” in Logic, Semantics and Metamathematics, J. Woodger (trans.), Oxford, UK: Oxford University Press.
  • Touretzky, D.S. (1990). “BoltzCONS: Dynamic Symbol Structures in a Connectionist Network,” Artificial Intelligence, Vol. 46, Nos. 1–2, (Special Issue on Connectionist Symbol Processing).
  • Tye, M. (1984). “The Debate about Mental Imagery”, Journal of Philosophy 81: 678–91.
  • –––. (1991). The Imagery Debate, Cambridge, Massachusetts: MIT Press.
  • Van Der Velde, F. and Marc De Kamps (2006). “Neural blackboard architectures of combinatorial structures in cognition,” Behavioral and Brain Sciences, Vol. 29 (01), pp. 37–70.
  • van Gelder, Timothy (1989). “Compositionality and the Explanation of Cognitive Processes”, Proceedings of the Eleventh Annual Meeting of the Cognitive Science Society, Ann Arbor, Michigan, pp. 34–41.
  • –––. (1990). “Compositionality: A Connectionist Variation on a Classical Theme,” Cognitive Science, Vol. 14.
  • –––. (1991). “Classical Questions, Radical Answers: Connectionism and the Structure of Mental Representations” in Connectionism and the Philosophy of Mind, Terence Horgan and John Tienson (eds.), Studies in Cognitive Systems (Volume 9), Dordrecht: Kluwer Academic Publishers.
  • Vinueza, A. (2000). “Sensations and the Language of Thought.” Philosophical Psychology 13(3): 373–392.
  • Wermter, S. and Ron Sun (eds.) (2000). Hybrid Neural Systems, Heidelberg: Springer.

Related Entries

artificial intelligence | belief | Church-Turing Thesis | cognitive science | computation: in physical systems | concepts | connectionism | consciousness: representational theories of | folk psychology: as a theory | functionalism | intentionality | mental content: causal theories of | mental imagery | mental representation | mind: computational theory of | naturalism | physicalism | propositional attitude reports | qualia | reasoning: automated | Turing, Alan | Turing machines

Copyright © 2010 by

Murat Aydede

This is a file in the archives of the Stanford Encyclopedia of Philosophy.

Stanford Center for the Study of Language and Information

The Stanford Encyclopedia of Philosophy is copyright © 2016 by The Metaphysics Research Lab, Center for the Study of Language and Information (CSLI), Stanford University

Library of Congress Catalog Data: ISSN 1095-5054
