Original Articles

Representation and mental representation

Pages 204-225 | Received 04 May 2018, Accepted 04 May 2018, Published online: 02 Jul 2018
 

Abstract

This paper engages critically with anti-representationalist arguments pressed by prominent enactivists and their allies. The arguments in question are meant to show that the “as-such” and “job-description” problems constitute insurmountable challenges to causal-informational theories of mental content. In response to these challenges, a positive account of what makes a physical or computational structure a mental representation is proposed; the positive account is inspired partly by Dretske’s views about content and partly by the role of mental representations in contemporary cognitive scientific modeling.

Acknowledgements

Many thanks to Bill Ramsey, Heather Demarest, Bob Pasnau, Raul Saucedo, and an anonymous reviewer for comments on earlier drafts of this paper; to Frances Egan, Tobias Schlicht, Krzysztof Dołęga, and Paul Schweizer for discussion of the issues addressed herein; and to an audience at the University of Edinburgh for helpful feedback.

Disclosure statement

No potential conflict of interest was reported by the author.

Notes on contributor

Robert D. Rupert writes about issues in philosophy of mind, the philosophical foundations of cognitive science, and related areas of metaphysics, philosophy of science, and epistemology. His book, Cognitive Systems and the Extended Mind, was published by Oxford University Press, and his work has appeared in Journal of Philosophy, Noûs, Mind, Philosophy and Phenomenological Research, Mind & Language, British Journal for the Philosophy of Science, Philosophical Studies, Synthese, and many other journals and volumes. He has held visiting positions at University of Edinburgh, the Australian National University, and Ruhr University, Bochum, among other institutions. He is currently an Associate Editor at the British Journal for the Philosophy of Science.

Notes

1 And, to be clear, I do not here argue that detector-style representations play a definite causal-explanatory role in cognitive science. My brief is more modest, partly because space is limited. My goal is to neutralize the sort of in-principle roadblocks to causal-informational theories of mental content that Hutto and Myin and Ramsey take themselves to have identified. I do so partly by outlining, at a fairly abstract level, a causal-informational approach not subject to Hutto and Myin’s and Ramsey’s concerns. But, there may well be other roadblocks to the development of a causal-informational theory of mental content; that is, there may be distinct reasons to think that naturalized, detector-like representations (those subject to a causal-informational theory and lacking internal cognitive structure) cannot or will not play a causal role in a science of human cognition. I take no stand on such matters here. Partly for this reason, I make only passing reference to various aspects of cognitive-scientific modeling and do not focus on any particular research program or family of research programs to try to show that representations do, in fact, play a causal-explanatory role in cognitive-scientific modeling. My strategy takes, as a starting point, the rampant use of “representation,” especially in cognitive neuroscience, to refer to cognitively unstructured detectors (bits of cortex that “light up” in response to certain sorts of stimuli, for instance); I then ask whether Hutto and Myin’s and Ramsey’s criticisms show that such uses of “representation” are inherently confused or mistaken.

2 In much of Fodor’s writing (e.g., 1975, 1981, chapter 10, 1983, 1998), he engages directly with empirical results in cognitive science (in fact, he has himself published experimental work – Fodor et al. 1980) and trends in those results that are of theoretical import from the standpoint of cognitive science itself. So, in one respect, what is written in the main text might seem to misrepresent Fodor’s work vis-à-vis cognitive science. Such a reaction would, however, reflect a misunderstanding of Fodor’s work on mental content (1987, 1990), that is, his work as what I below call a “first-generation” naturalistic theorist of content. In that work, his primary goal is to provide a naturalistic semantics for folk-psychological states that is respectable from the standpoint of cognitive science. More generally, in much of Fodor’s work, his explicitly stated goal is to find, in cognitive science, a vindication of folk psychology (he uses the language of “vindication” throughout chapter 1 of Psychosemantics), a situation that will help to confirm the very science in question, given a wealth of independent reasons (both empirical and, loosely speaking, a priori) for accepting folk psychology. The structure of this aspect of Fodor’s thinking emerges at least as early as “Propositional Attitudes” (Fodor 1978):

“What are the propositional attitudes?” … One way to elucidate this situation is to examine theories that cognitive psychologists endorse, with an eye to explicating the account of propositional attitudes that the theories presuppose. That was my strategy in Fodor (1975). In this paper, however, I’ll take another tack. I want to outline a number of a priori conditions, which, on my view, a theory of propositional attitudes (PA) ought to meet. I’ll argue that, considered together, these conditions pretty clearly demand a treatment of PAs as relations between organisms and internal representations; precisely the view that the psychologists have independently arrived at. I’ll thus be arguing that we have good reason to endorse the psychologists’ theory even aside from the empirical exigencies that drove them to it. I take it this convergence between what’s plausible a priori and what’s demanded ex post facto is itself a reason for believing that the theory is probably true. (501)

More to the present point, Fodor describes the goal of his first major work on content, Psychosemantics, in the following way: “This book is mostly a defense of belief/desire psychology” (Fodor 1987, xii). And, at greater length:

The main thesis of this book can now be put as follows: We have no reason to doubt – indeed, we have substantial reason to believe – that it is possible to have a scientific psychology that vindicates commonsense belief/desire explanation [i.e., folk psychology]. But though that is my thesis, I don’t propose to argue the case in quite so abstract a form. For there is already in the field a (more or less) empirical theory that is, in my view, reasonably construed as ontologically committed to the attitudes and that – again, in my view – is quite probably approximately true. If I’m right about this theory, it is a vindication of the attitudes. (Fodor 1987, 16)

In fact, Fodor spends the first ten pages of Psychosemantics defending the depth, the success, and what he claims is the indispensability of folk psychology (appealing, inter alia, to our ability to understand the plot twists in Shakespeare’s plays and the inferences drawn by Sherlock Holmes, as well as our patent ability to coordinate everyday social interactions). It should come as no surprise that Fodor’s development of a theory of content was driven largely by the quest to vindicate folk psychology. One of the most lively and high-profile debates in philosophy of mind in the 1980s, which Fodor was at the center of, concerned the questions (a) whether folk psychology is legitimate, (b) whether its being legitimate requires that it mesh with, or be vindicated by, cognitive science, (c) what kind of cognitive science, if any, would be required to vindicate it, and (d) whether that kind of cognitive science is in the offing (Churchland 1981; Dennett 1987; Fodor 1987; Stich 1983).

To muddy the waters a bit, Fodor recognizes that our best cognitive-scientific models include subpersonal-level representations distinct from, and in addition to, those that play a direct role in vindicating folk psychology (that is, distinct from the computational symbols that encode propositions that are the contents of beliefs and desires and that play the distinctive causal roles of beliefs and desires) (1987, 25–26); nevertheless, he does not develop a theory of content specially aimed at such additional representations, presumably because their semantics is tangential to his project in Psychosemantics, of vindicating commonsense belief–desire psychology.

3 To be clear, naturalists can stipulate just about anything to be a potentially interesting relation, and then let the stipulation run the empirical gauntlet; but they cannot simply stipulate just anything to be mental content or to be a mental representation. There has to be some principled connection to the pretheoretical use of “content” or “representation,” more on which below.

4 Contrast this approach with the shape of Fodor’s approach in the section “The Essence of the Attitudes” (1987, 10–16), which is to identify, a priori, the essential properties of the folk-psychological constructs of belief and desire and then to look for scientifically respectable states having those properties.

5 The comments in the main text oversimplify the history in some respects. All of the first-generation authors address or incorporate, in some way, work in cognitive science (see note 2’s relevant remarks about Fodor’s particularly complicated situation in this regard); but they tend to focus on a cognitive science the purpose of which is to illuminate or ground folk categories, for example, “thought and experience” (Dretske 1997, 4). In contrast, second-generation theorists are, to a significantly greater extent, concerned to develop a naturalistic theory of content that suits the needs of working cognitive science, as the endeavor to model the mechanisms that produce the relevant measurable, replicable data – leaving the status of folk psychology to fall where it may. But, they do not focus exclusively on this; Ryder, for example, would like his neurosemantic theory to account directly for explananda of both scientific and folk psychology (2004, 212, 232).

6 Ramsey sometimes seems to endorse the naturalistic perspective associated in the main text with Cummins’s work and the work of second-generation naturalistic theorists of content (Ramsey 2007, 65–66). Nevertheless, throughout his discussion of Dretske’s approach, Ramsey repeatedly makes a critical appeal to such claims as that information is supposed to “tell” (2007, 135, 136, 138, 139) someone something or “inform” (ibid., 141, 148) an agent of something, which flies in the face of the methodological point in the main text. Dretske gets to stipulate what the information-relation is, as he does in painstaking detail (1981, chapters 1–3); when the relation, so stipulated, holds, the relevant relatum carries such-and-such information. Of course, the question might then be asked whether that relation, as stipulated, does the explanatory work Dretske wants it to do, as the naturalistic basis for, for example, the everyday use of belief talk to explain behavior.

7 Compare Matthen’s claim that “[C]ontent is a system property. It is a property of states in a system that treats information-carrying states in a certain way” (2014, 125). Also see Markman and Dietrich (2000, 144), for the claim that Dretskean information-bearing mediating states become representations only when they play a specific kind of role in a cognitive system (allowing the system to satisfy its goals, for example).

8 To be clear, then, the purpose of the preceding section was not to try to show that Dretske gets things right. Rather, it was to show that Ramsey’s and Hutto and Myin’s criticisms of Dretske’s view fail to engage fully with it and that a careful look at the structure of Dretske’s view orients naturalistic theorists of mental content in the right direction, that is, it reveals what a more promising naturalistic semantics for detector-style representations might look like. In other words, although I do not think Dretske has the details right, and am thus not out to defend the details of his proposal, his theory instantiates a structure that others working on naturalistic theories of content can fruitfully extract and flesh out differently.

9 Dretske distances himself from cognitive-scientific explanation, holding that the value of his approach lies in the domain of folk psychology (Dretske 1988, 81 n1).

10 On the importance of the explanandum, see Dretske 1988, 69; on the distinction between representation and mental representation, see Dretske 1997, 19.

11 My particular interest is in the vindication of causal-informational theories as they might apply to detectors, indicators, or otherwise unstructured cognitive units. This is not meant to marginalize what Ramsey calls “S-representations” (2007, chapter 3), that is, complex structures with map-like properties or, more generally, structure that mirrors or simulates the structure of the space of the real-world problem to be solved (cf. Cummins 1996). There is ample evidence of the brain’s use of such structures. See Ramsey (2016) for some discussion of how S-representation and the detector-based notion of representation might be wedded productively.

12 In this vicinity, one may find a realist response to Frances Egan’s skepticism about mental representation, expressed within the framework of computational cognitive science (Egan 2014; also see Schweizer 2017). Egan claims that computational cognitive science is nonrepresentational partly because, I think, she ignores the relational nature of the data and focuses on the mathematical theory of computation. But, cognitive science is, in some clear sense, an applied science – a science of relationally individuated data. In this applied context, the characterization of computational units as representations is no more a mere gloss than is the relational characterization of the data, that is, not a mere gloss at all.

So far as I can tell, Egan’s tendency to view the data as nonrelational results partly from a mistaken view about the role of modal commitments in natural science. It is not the case that naturalistic explanation requires that, in order for (process, phenomenon, property, state) X to explain (process, phenomenon, property, state) Y, there is no possible world in which the apparent relations between X and Y are, relative to the actual world, shuffled around. There is no evidence that scientific explanation is driven (with regard to evidence and justification, as opposed to brainstorming) by such modal concerns. In which case, the sorts of examples Egan uses to motivate a merely-gloss-based approach to representation – the Visua example, for instance (ibid., 126–127) – seem beside the point, for they rest on claims about which internal states (processes, etc.) could be paired with which external states (processes, etc.) in distant possible worlds.

The application of the present line of thinking to Egan’s work demands much more attention than this, but given limitations of space and the extent to which Egan’s concerns are removed from those of prominent enactivists, more extensive discussion must be postponed.

13 It is largely because the theory of mental representation builds content from representation-related, representation-like, or vaguely representational ingredients that it is not infelicitous to use the term “mental representation.” But, the use of that term in no way implies that wherever one finds representation-like ingredients, one has found a representation simpliciter. It might be that there is no unified genus representation. Perhaps instead there are only some basic natural ingredients that loosely fit some of our vague intuitions about representation, which ingredients then, when put to use in a particular applied science x, become x representations. On this view, intuitions concerning what counts as playing a representational role might provide sufficient unity to the use of “representation” across these various specific contexts, without there being any naturalistically relevant kind representation neat that plays a causal-explanatory role in any science at all. Thus, one can be in a position to offer a compelling naturalistic account of mental representation without committing oneself to there being a definite, useful, or genuine kind, representation.

14 One might reinterpret Dretske’s folk-psychology-oriented strategy in this light, by treating reinforcement learning – with its array of motivational states and strengthened associations between indicators and motor commands – as the distinctive operation.

15 Compare the way in which Ramsey runs his head-to-head comparison of detectors and icon-like S-representations (2007, 194 ff.). Although Ramsey considers the possibility that the detectors in question acquire a function, he makes little mention of the sort of reinforcement learning or the appearance of structuring causes of importance to Dretske. Moreover, Ramsey’s central example – a car with sensors at its periphery – is not a cognitive system, that is, one that displays a wide variety of forms of intelligent behavior. Thus, Ramsey’s head-to-head comparison has no clear bearing on matters at hand, that is, on questions about mental (or cognitive) representation.

16 In Explaining Behavior (58–59), Dretske states that he does not want to presuppose the detailed account of information spelled out in Knowledge and the Flow of Information (1981). But, he explicitly equates information and indication; what technical bits he does say about indication in (1988) reflect what he says in (1981) about information; and thus, there is no reason to think he has rejected the more detailed analysis. To the contrary.

17 After discussing some intuitively natural ways to think about the “information channel,” Dretske writes,

From a theoretical point of view, however, the communication channel may be thought of as simply the set of [statistically defined] dependency relations between s and r. If the statistical relations defining equivocation and noise between s and r are appropriate, then there is a channel between these two points, and information passes between them, even if there is no direct physical link joining s with r. (1981, 38)

18 Here is a bit more about Ramsey’s disjunctive premise – that information is nomic dependence or is wholly distinct from nomic dependence – which provides the backdrop for his dilemma argument. According to Dretske, the carrying of information amounts to the obtaining of the conditional probabilities in question relative to the state of the receiver (1981, 68), so long as those probabilities hold in virtue of natural law (1981, 76–77). Thus, even if the pattern of conditional probabilities central to the definition of information holds, if it does not hold because of nomic regularities, then the receiver states in question do not carry the information they would otherwise. Thus, information is neither identical to nomic dependence nor wholly distinct from it; rather, the correlations constitutive of information are “a symptom of lawful connections … information inherits its intentional feature from the nomic regularities on which it depends” (ibid., 77). Information is the obtaining of a pattern of conditional probabilities because of nomic regularity. And, that is precisely what is exploited in the use of shade, within the broader structural and temporal context that brings a mental representation into existence.

19 Ramsey sometimes seems to take an appropriately flexible, pragmatist approach to the understanding of scientific processes. For instance, he views the nature of cognitive science as something that – beyond a general conception of what cognitive science is out to explain – should be left largely open, to be negotiated as the project proceeds (2017, 4208–4209). In my view, that approach applies equally to the scientific conception of representation, as something that should not be rigidly characterized in advance, beyond perhaps some very general platitudes largely to do with what is to be explained.

20 Outside of philosophy, Markman and Dietrich (2000) offer a sophisticated take on the job description of mental representations (as part of a discussion that is strongly informed by, and that significantly contributes to, the project of naturalizing mental representation as a theoretical posit in cognitive science).

21 This is not to say that descriptions never drive the evolution of the use of scientific terms; see, e.g., Ian Hacking’s (1983, 87–90) account of how the reference of “meson” was fixed.

22 Although I have explicitly identified myself as a second-generation naturalistic theorist of mental content, I might seem now to have “slid back” into the game played by first-generation theorists. The dialectic is a bit different, though. I certainly do think our naturalistic theory of content should be responsive, in the first instance, to the needs of causal-explanatory modeling of measurable data that produces replicable results, letting folk psychology fall where it may. Nevertheless, once one has identified such units and assigned to them privileged relations to other individuals, properties, or kinds (a relation one takes to be content-fixing), the further questions can be asked, “Are those units really representations?” and “Do they play a representational role?” At that point, even the second-generation naturalistic theorist of mental content must engage with the question whether those units satisfy enough of the intuitions associated with the everyday term “representation” to be rightly called “mental representations,” so as to avoid misleading stipulation; she must come to grips with at least some of the everyday intuitions about the use or application of “representation” and show that the cognitive-scientific models in question do enough of the right sort of causal-explanatory work to count as representations. That is the task at hand in the main text; the reasoning of this section does not assume that cognitive science’s goal is to vindicate folk psychology.

23 On this topic, Hutto and Satne (2015, 523) may slightly misrepresent the history. On their story, Fodor rejected senses only late in his career. But as early as 1990, Fodor wrote “The older I get, the more I am inclined to think that there is nothing at all to meaning except denotation … ” (1990, 161).

24 The problem of intensionality is often associated with the human ability to think about nonexistent things or kinds. One can think about unicorns, even though unicorns do not exist, and thus, it is often thought, there must be a component of meaning that is neither syntactic nor referential; this is commonly taken to be a sense or an intension. One naturalistic strategy is to treat such concepts as composed: when one thinks of unicorns, one is thinking of horses with horns, at a first approximation (Fodor 1990). This strategy takes on additional plausibility in the current context, in which emphasis is placed on a cognitive-scientific notion of representational content, the primary role of which is to account for largely relational data. After all, since there are no unicorns, there will be no data concerning interactions with unicorns for cognitive science to puzzle over. What is left is the production of “unicorn”-related sentences (reports of intuitions about unicorns, etc.), which, though relational in a sense (they involve the production of such things as sound waves beyond the boundary of the body), appear to be a much more manageable target for the proposed approach to intensionality described in the text – combined perhaps with the appeal to compositionality.
