Philosophical Explorations
An International Journal for the Philosophy of Mind and Action
Volume 15, 2012 - Issue 3

The utility of conscious thinking on higher-order theory

Pages 303-316 | Published online: 19 Jul 2012

Abstract

Higher-order theories of consciousness posit that a mental state is conscious by virtue of being represented by another mental state, which is therefore a higher-order representation (HOR). Whether HORs are construed as thoughts or experiences, higher-order theorists have generally contested whether such metarepresentations have any significant cognitive function. In this paper, I argue that they do, focusing on the value of conscious thinking, as distinguished from conscious perceiving, conscious feeling, and other forms of conscious mentality. A thinking process is constituted by propositional-attitude states, and during conscious thinking some or all of these states would be targeted by HORs. Since cases of nonconscious thinking are widely accepted, the question arises as to the use of representing one's thoughts during thinking. Contra the views of Armstrong and Rolls, I argue that HORs do not facilitate first-order thinking. Rather, I propose that such representations enable reasoning about one's act of thinking, and I give various examples of this sort of metacognition in support of the theory. I further argue that the general correlation between complex thinking and its being conscious is merely due to the fact that assessing one's mental act is particularly useful during such thinking, not because consciousness somehow facilitates first-order inference-making, as folk psychology implies. My view is thus consistent with recent empirical evidence that complex thinking sometimes yields better results when nonconscious.

Acknowledgements

The author wishes to thank Michael Levin and Barbara Gail Montero for helpful comments.

Notes

Rosenthal (1997) distinguishes state consciousness as that property by virtue of which a mental state, as opposed to a creature, is conscious. For a creature to be conscious is simply for it to be awake and responsive to stimuli.

As Rosenthal (1997) has claimed, second-order states are only conscious in the relatively rare cases when we introspect, via third-order states (p. 742).

Or being disposed to have, for Carruthers.

The theoretical possibility, admitted by Rosenthal (2005), that HOTs may be “empty” (lacking actual target states) or misrepresent their targets is not a problem for the functional role I am advocating, as it is surely rare that such HOTs would occur and lead us to reason about states we are not actually in.

Fodor (1975) in fact allows that analog representations can be syntactic and play a computational role in a “language of thought”. See his Chapter 4.

Carruthers (2000) argues that if higher-order experiences serve “in underpinning and providing content for higher-order thoughts (HOTs)” then the faculty of inner sense would be “redundant” (Section 5.2). But, that would not be the case if higher-order percepts (HOPs) and HOTs serve different roles: suppose only HOPs can implement our awareness of our own mental states (i.e. the arguments of the inner-sense theorists are successful), while only HOTs can be deployed in inferential metacognition (IM).

This of course assumes a limit on the number of states that can be engaged at once, and indeed working memory – the ability to “hold in mind” information for the purpose of reasoning and learning – is thought to have a limited capacity.

This judgment is part of his Theory of Apparent Mental Causation, which also includes judgments of priority (that the volition occurs just before the act) and exclusivity (that there is no other cause of the act) for any act one feels control over. See his Chapter 3. Note that while priority judgment does require representing the volition, surely this judgment occurs at the time of the act; at that point, the volition is represented as having occurred just before the act. And if it is not represented as a present state, it will not become conscious.

This, I think, is the result of introspection that is not “theory laden”: if one believes that every voluntary act is consciously willed, one must be careful not to let that belief inform the introspective report.

Presumably the representation has that tendency when it occurs in the pre-supplementary motor area.

Whether the mental act occurs will of course also depend on one's cognitive abilities, just as whether the bodily movement occurs does not depend solely on the volition to move, but also on one's physiological abilities.

I do not deny that solutions to complex problems can sometimes “pop into” one's mind, or result from “sleeping on a problem”. But, in these cases of nonconscious ratiocination, we have done some conscious work on the problem beforehand. At least, there has been conscious volition to solve the problem prior to the nonconscious processing. An entirely nonconscious solving process, where we are first exposed to the complex problem and the next thing that enters consciousness is the solution, would surely be quite rare. Normally, we are conscious of several reasoning steps or exploratory thoughts, along with a sense of volition, in the course of arriving at a solution or decision.

Carruthers (1996) calls this function “reflexive thinking”, though I prefer the term “inferential metacognition” as a description of the use of state consciousness. “Reflexive thinking” may describe simply having a HOR – say, a belief that one is in a certain mental state – without deploying that representation in reasoning. And, such reflexive thoughts are not, in and of themselves, useful to have.

I am not suggesting that all HORs that transpire during thinking or perception are put to use in IM; some may occur due to mere habituation. My claim is that the actualist is not saddled with a great many HORs that go unused.

Now, it may be that a metacognitive “progress assessment” leads one to postpone a complex ratiocination to a later time when (for whatever reason) it proceeds with more success. But, that is not a case of consciousness improving one's rational ability by (directly) causing better inferences to be made.

Some relatively simple inferences may be involved here: “If the car has an airbag, I'll be safer. Car A has an airbag…”, etc.

It also suggests that exclusive focus at the first-order level is not needed for data-integration tasks.
