Target Article

MCI theory: a critical discussion

Benjamin Grant Purzycki & Aiyana K. Willard
Pages 207-248 | Published online: 09 Apr 2015
 

Abstract

In this paper, we critically review MCI theory and the evidence supporting it. MCI theory typically posits that religious concepts violate what we call deep inferences (intuitions stemming from our evolved cognitive architecture), as opposed to shallow inferences (specific, flexible informational units also used for inference-making). We point to serious problems facing the approach, and propose a few corrective measures, avenues for further research, and an alternative view.

Acknowledgements

The authors thank Joel Daniels, Natalie Emmons, Stewart Guthrie, Chris Kavanagh, Jordan Kiper, Eric Margolis, Michaela Porubanova, John Shaver, Wesley Wildman, and the anonymous reviewers and editors for their very helpful and valuable comments.

Disclosure statement

No potential conflict of interest was reported by the authors.

Notes

1. There has been considerable debate and revision regarding the degree to which modularity characterizes the workings of the mind (Anderson, 2007; Barrett & Kurzban, 2006; Callebaut, 2005; Callebaut & Rasskin-Gutman, 2005; Chomsky, 1980, 2000, pp. 117–119; Fodor, 1983, 2005; Karmiloff-Smith, 1992; Pinker, 2005a, 2005b; Samuels et al., 1999; Segal, 1996; Sperber, 1996, 2002). We restrict our discussion to the foundational works that informed the genesis of MCI theory and its current state, and therefore avoid discussion of more current views of the mind and the neurophysiological foundations of cognition. Also note that Fodor's views on the innateness of cognitive faculties are not the same as his argument for the innateness of concepts (see Carey, 2014; Fodor, 1981; Rey, 2014).

2. Cultural evolutionary modelers Boyd and Richerson variously define “culture” as “the information capable of affecting individuals’ behavior that they acquire from other members of their species through teaching, imitation, and other forms of social transmission” (Richerson & Boyd, 2005, p. 5), “the transmission from one generation to the next, via teaching and imitation, of knowledge, values, and other factors that influence behavior” (Boyd & Richerson, 1985, p. 2), and “the information capable of affecting individuals’ phenotypes which they acquire from other conspecifics by teaching or imitation” (Boyd & Richerson, 1985, p. 33). These are akin to ideational concepts of “culture” such as Goodenough's (1957/1964, p. 36):

a society's culture consists of whatever it is one has to know or believe in order to operate in a manner acceptable to its members, and do so in any role that they accept for any one of themselves. Culture, being what people have to learn as distinct from their biological heritage, must consist of the end product of learning: knowledge, in a most general, if relative, sense of the term … It is the forms of things that people have in mind, their models for perceiving, relating, and otherwise interpreting them. (cited in Duranti, 1997, p. 27)

In all such definitions, culture is explicit and socially transmitted information and its organization rather than *inferred* information generated from exposure to social stimuli.

3. Crucially, “minds” are in no way limited to humans but are readily attributed to anything from geometric shapes to animal puppets that display mentalistic cues (Guthrie, 1993; Johnson, 2000, 2003). Although we have intuitions that we apply to all animals or people, our core cognitive functions may not have neatly defined ANIMAL and PEOPLE categories, but rather broader intuitions about biological function, animate movement, and minds.

4. The “corest” of domains remains unclear. Pyysiäinen (2004) details the inconsistencies in Boyer's use of basic, core categories, but a wider look at the literature reveals a few more. For instance, Atran (2002, p. 96) and others (Boyer, 1994; Sperber, 1996) suggest that LIVING KIND and STUFF are “basic ontological categories” (Atran, 2002, p. 96). Elsewhere, Atran (2002, p. 98) identifies the “conceptually primary ontological categories … [as] PERSON, ANIMAL, PLANT, ARTIFACT, SUBSTANCE.” Nevertheless, in terms of MCI as originally conceived, ontological templates partly consist of the differential convergence of inferences provided by modules. For instance, things in the STUFF and ARTIFACT categories may have the same inferences provided by naive physics systems, ARTIFACTs have the additional inference of essentialized function, and so on. Barrett's updated model (see section 3.2) explicitly avoided this confusion by focusing on the inference systems, which – he acknowledges – are also debatable.

5. Note that we do not need lexical equivalents for concepts (e.g., “plant”) or their categories (e.g., PLANT). For instance, Berlin (1981, p. 95) demonstrates that while the Aguaruna do not have a word for the category “plant,” the conceptual category nevertheless exists, as fungi are not “considered to fall within the domain” of related plants. This suggests that the category has associated attributes and that a word for “plant” is not necessary to have a category. It also suggests that there can be ontological placeholders that do not have conscious representations attributed to them. In other words, conceptual clusters can form around domain concepts without a corresponding lexical marker for the domain.

6. Boyer (1998) states that intuitive ontologies:

are part of the evolved cognitive equipment typical of the species. This does not entail that they are necessarily ‘innate’ or modular in terms of neural architecture. All that is necessary for the present argument is that intuitive ontologies are the normal outcome of early cognitive development. (Boyer, 1998, p. 879)

While it is unclear from the context of the source, the operative phrase here may be “neural architecture” rather than “mental architecture.”

7. This model is purposely anachronistic; these “inference systems” are from Barrett (2008a), which we discuss in section 3.2.

8. A middling inference or assumption might be a generalization set such as “dogs breathe, grow, and die by virtue of their being animals” (see section 3.3).

9. A plethora of concept types were soon (re)introduced – intuitive statements (INT), minimally counterintuitive statements (MCI), maximally counterintuitive statements (MAX-CI), and bizarre (BIZ) – in order to search for the cognitively optimal type of concept, which, MCI theory predicts, are the minimally counterintuitive ones (Boyer, 1994, p. 287). BIZ statements are “counter-schematic” (Johnson et al., 2010). Depending on the source, “maximally counterintuitive concepts” are often defined as “concepts that violate two ontological expectations such as a squinting wilting brick” (Upal et al., 2007) or “a chattering nauseating cat” (Upal, 2005, p. 2224). While some claim that MCIs are those that violate one or two deep inferences, it is entirely unclear how such statements violate any “ontological expectations” at all, as defined above. Is “having eyes” a deep inference about anything? Does “wilting” actually violate “modular expectations” about ARTIFACTs? While Norenzayan and Atran (2004) suggest that “a giant gorilla in an opera house” is a “bizarre” concept, when appealing to cognitive architecture this counts as a counter-schematic concept (a rather enjoyable one, at that; see section 2.3.1). Intuitive statements – statements that are consistent with modular inferences – blend into statements that are merely conceptually consistent. So, “a cat that fell out of a tree” – one that explicitly follows inferences generated by folk-physical systems – is just as intuitive as “a cat eating cat food” – something that consists of a schematically consistent relationship.

10. Cohen (2007, p. 117), for instance, explicitly acknowledges her use of MCI theory to characterize spirit concepts in Afro-Brazilian spirit-possession cults: “The concept of spirit may be created by taking an ordinary concept, such as man, and adding one or two counterintuitive features, such as intangibility and invisibility.” On why this may not be the case, see note 26 on intuitive dualism. While her ethnography does not focus on MCI theory, we question the theory's utility as an interpretive framework for the same reasons that Sperber (1996, p. 34) resists interpretivism; namely, “An interpretation is a representation of a representation with a similar content.” The theory would be far more useful if directly brought to bear on data to determine whether or not it sufficiently explains religious concepts and their ubiquity.

11. In contemporary parlance, schemas are typically the stuff of “reflective beliefs” as distinct from “intuitive beliefs” (Barrett, 2004, p. 7; McCauley, 2011; Slone, 2004; Sperber, 1996, 1997). Note that Boyer (1994, pp. 70–71), drawing from Atran (1990, p. 215), who drew from Kant (1790/1928, §59, p. 222), made the crucial distinction between “schemas” and “nonschematic” assumptions (Atran uses “quasi-schematized”) to lay the groundwork for what became MCI theory. Kant characterizes schemata as “pure concepts of the understanding” as distinct from symbols insofar as “[schemata] contain direct, symbols indirect, presentations of the concept.” In other words, we can draw upon schemas to make sense of incoming information, whereas symbols require a little more work. We greatly simplify things insofar as interpreting symbols is, indeed, also a schematic process; the cognitive requirements drawn upon to make sense of symbols access schematic information. Boyer (1994, p. 83) uses “schemas” differently, explicitly stating: “Counterintuitive assumptions, obviously, are nonschematic; they appear counterintuitive precisely because there is no causal nexus from which they could be inferred.” Here Boyer emphasizes the causal and explanatory aspects of schemas; explaining or making sense of something is a schematic process when one may easily draw upon explicit information to explain it. He states that “nonschematic” information is along the lines of “congressmen from this or that party are particularly likely to be corrupt, or to be liberal in issues of private morality,” and so on (Boyer, 1994, p. 70). In our use of the term, this is schematic information too, just of a more specific character. Note, too, that he foresees one problem discussed in the present paper:

To provide a satisfactory account of any given concept, we must be able to give an answer to two series of questions. First, we must have a precise account of the mechanisms whereby nonschematic assumptions are added to the conceptual schemata, and of the processes whereby they are made intuitively plausible or natural. Second, we must evaluate the relative contributions of schematic and nonschematic assumptions in constraining inferences about a given domain of reality. (Boyer, 1994, p. 73)

MCI research has yet to consistently and satisfactorily address this. See note 12.

12. While important, this timeless debate is beyond our present concerns. Still, connectionism entails a greater emphasis on the gradual acquisition of “knowledge through exposure to a variety of specific examples and repeated correction of inferences about those instances” (Strauss & Quinn, 1997, p. 57). On the surface, this portrait bears a striking resemblance to Sperber's aforementioned view. Connectionists, however, largely argue that the source of such inferences is the interaction between the informational units themselves rather than innate, domain-specific modules. In other words, there are emergent patterns inherent in the units of the stimulus and these units do the processing work themselves. Nativists often hold that there are biologically endowed cognitive systems that differentially handle stimuli.

13. To be clear, we do not wish to equate “mental organs” with “innate information” or specific locations of the brain. Rather, they are functionally distinct properties of the brain that interact with incoming stimuli and ultimately stabilize to optimally function within local contexts.

14. Boyer (1994) states that:

nonschematic assumptions can vary in salience, that is, in the probability that they will be activated, given a certain situation. Schematic assumptions, by contrast, are automatically activated whenever the conceptual structure is relevant to the situation at hand, whereas nonschematic assumptions are not invariably activated. (Boyer, 1994, p. 74, emphasis in original)

This appears to be completely backward from the present discussion, but again, Boyer uses “nonschematic” to characterize latent information, not just deep information. Nevertheless, we would suggest that the veracity of these claims about nonschematic information depends on the source of the information; schematic information does vary in salience depending on the domain activated. See note 9.

15. But the problem is more basic than this. So, “a statue that thinks” would be MSTATUE. Why not “AGENT made of stone”? No direct and obvious deep inferential system is being violated here, and therefore it is probably not an MCI. While it may be argued that agents made of stone are incapable of self-propulsion, we still do not know, beyond interpretation, whether inferences of “self-propulsion” are violated by statues that think. Again, what is crucial here is the distinction between storage and active cognitive systems.

16. This point relates to the notion of “theological” or “cultural correctness,” which is doctrinal information that people are supposed to say or believe (Barrett, 1999; Barrett & Keil, 1996; Purzycki, 2013a; Purzycki et al., 2012; Slone, 2004). The “correct” part of theology or other ideologies is a matter of how one's explicit, schematic cognitive models correspond to authoritative models; they are “correct” when they correspond to the majority or to an authoritative source such as the Bible or a religious leader. Like political correctness, it is a matter of how we are supposed to talk (and presumably think; e.g., “God is everywhere”). Theological incorrectness or inconsistency is often a matter of deep or shallow inferential processes running counter to authoritative or cultural consensus models of what people are supposed to say (e.g., saying “God came down from heaven” presumably suggests that deeper inferences about humans’ localized physicality are at work, whereas “God doesn’t like it when you chew gum” applies novel schematic information to models of what God cares about).

17. Russell and Gobet (2013, p. 743) “regard counterintuition as a highly semantic phenomenon” that is “unique to the individual.” This assessment stems from their problems with “innatist assumptions” and their reluctance to embrace conceptual modularity. As we have discussed in the present work and elsewhere, we also emphasize the distinction between cognitive faculties’ operations and the content of human thought. However, we question this alleged “uniqueness,” as religious concepts are likely consistent across individuals; Russell and Gobet question this consistency's source. Two immediate challenges for defending claims that the “counterintuition” discussed by MCI theorists is “highly semantic” and “unique to the individual” are determining what “highly” means and determining whether or not only shallow inferences are at work upon initial exposure to MCIs. Even if religious concepts are entirely things of schematic content, in our view, the best MCI theory can do is characterize types of schematic concepts based on their deeper processual analogues. This would be an important contribution, but as we have detailed, the theory has yet to accomplish this.

18. Barrett also avoids the nature/nurture and cognitive architectural issues by appealing to McCauley's (2011) distinction between “maturational” and “practiced” naturalness. While this distinction serves to reformulate how we talk about cognitive processes, it does not solve the problem of the distinction between counter-schematic and counterintuitive, and as such tells us very little about how to sufficiently determine what constitutes an MCI. So, while we might use McCauley's distinction to point to MCIs and characterize the violated inferences’ ontogenetic status (e.g., Barrett, 2008a; Barrett & Lanman, 2008), it neither tests nor solves the problem of what distinguishes an MCI from any other weird idea unless we determine a way of empirically distinguishing between deep and shallow inferences as well as between “maturationally” natural and practiced habits (or “reflective” versus “nonreflective” beliefs). At least in the case of determining the relative “naturalness” of religion and science, McCauley (2013, p. 166) acknowledges that his typology is comparative and remains beyond our ability to measure. Likewise, with relative ease we might characterize deep inferences as “maturationally natural” and shallow inferences as “practically natural,” but determining what kinds of stimuli violate these cognitive levels is an empirical question.

19. There is variation in how these faculties operate with predictable effects on memory. In line with this idea, Willard et al. (n.d.) find that the more people show a general tendency to apply human-like mental state reasoning to such things as nature, animals, and machines (i.e., anthropomorphize), the less likely they are to show a memory bias for MCI content that violates “mentality” systems. These results suggest that it is not necessarily variation in schematic content that predicts a concept's counterintuitiveness; it is rather the variation in the functions of deep inferential processes. Further, once these schematic concepts exist, they impact the memorability of that type of counterintuitive content. Simply put, anthropomorphic ideas are not as distinctive if you are on the higher end of agency attribution.

20. Take the case of metaphor. We use terms like “babbling brook” to describe intuitive states of the world, and it is often good to let wine “breathe” a little before you drink it. “Arguing cars” can clearly have metaphorical value, or be understood as arguing about cars (akin to “talking shop”), and for both authors, “limping newspaper” conjured up a wet newspaper rather than one with legs. In none of these cases, however, is there an explicit violation of default inferences generated by the aforementioned intuitive systems. From a connectionist standpoint, one might say that the connection weights between “limping” and “newspaper” are low compared to, say, “wet” and “newspaper.” It also may be the case that such concepts confound our language processing; we may have ignored the “-ing” to think of a “limp newspaper,” which then conjured up a wet newspaper. We would suggest that metaphorical cognition requires at least the schematic representation of what the metaphor means, and the metarepresentational ability to know that one is thinking about something different from the actual input (see Atran, 1990, p. 219; Upal, 2007). How we make sense of and create metaphors is a complex mixture of deeper inferences and schematic models at work. Detecting speakers’ intentions can be part of the process, yet as Lakoff and Johnson (1980) argue, metaphor is so much a part of our thinking that this is not necessarily always or even mostly the case. Often, religious people do not appreciate the metaphorical value of religious postulates and likely “turn off” their metarepresentational ability, or at least explicitly deny that religious and mythical concepts are metaphors (see Steadman & Palmer, 2008).

21. Note here that even if “crying mailboxes” and “crying statues” are equally counterintuitive, the fact that the former seems weirder to us than the latter suggests that schemas are at work (i.e., we hear about the latter more often than the former). But it may also be the case that we assume the statue is of a person and people cry, so such a thing is less strange than a “crying mailbox.” “A crying stone” seems quite different altogether, even though it is technically supposed to be the intuitive equivalent of the other two (and practically the same as the latter).

22. Note, however, that a mouse that knows your every move is still not an MCI in the strict sense. According to this model, a flower that thinks is a less likely candidate for a religious idea than a flower that knows you stole someone's bike, by virtue of salience and relevance to individuals. Referring back to mentalizing, consider that we have a mental schema of “socially strategic knowledge” (or modules devoted to detecting moral defectors and morality; Cosmides & Tooby, 1989; Sugiyama et al., 2002). Cross-culturally, whatever constitutes such a domain is likely to vary, and it is also likely to vary situationally (i.e., models of socially strategic information in a classroom might be different from those at a synagogue), particularly when it comes to behaviors about which gods care. This opens the question of why gods might vary in their concern with universally recognized socially strategic information, locally specific domains of socially strategic information, or other domains (see Purzycki, 2011b, 2013a; Purzycki & Sosis, 2011).

23. Researchers have demonstrated that people view MCI content as more religious or supernatural than non-MCI content (Norenzayan et al., 2006; Pyysiäinen et al., 2003), but it has yet to be shown that the wide range of religious or supernatural concepts found in the world are consistently, or even frequently, transmitted and retained MCIs.

24. Note that when advertisers use things that approximate to MCIs, it is the products and their ultimate purchase that are more important to the message, not the dazzling and attractive imagery, jingles, jargon, and acronyms designed to manipulate consumers into buying these otherwise mundane things. The analogy might be quite informative here insofar as MCIs might be glittery devices useful for getting people to engage in religiously justified behaviors. So, “walking on water” is not the object of religious devotion and is likely quite peripheral to religious traditions devoted to Jesus. It might give people a justification for claiming Jesus’ divinity, it might be easier to remember than the Sermon on the Mount, it might violate deep inferences of folk physics (even though the basilisk or “Jesus Lizard” can run across water), and it might have a lot of metaphorical value (e.g., with faith you can do the impossible), but it does not help us explain much at all about “religion,” let alone Christian mythology.

25. Perhaps it is the case that because religious concepts are introduced early in a child's development, their initial salience increases the chances of further elaboration of religious thought. These are ontogenetic questions ripe for empirical attention.

26. It is often stated that many or most ostensible MCI violations involve psychology (Atran, 2002; Atran & Norenzayan, 2004; Boyer, 2001; Cohen, 2007). Gods, ghosts, and spirits are agents (Guthrie, 2008, pp. 241–244). They are minds that deal in socially strategic information. If this is the case, then what exactly constitutes an MCI “mind” needs to be addressed. Bloom's (2005) work on dualism suggests that the intuitive view of minds is that they are separate from, and not reliant on, the physical body. Evidence for the potential universality of this view comes from studies demonstrating this phenomenon cross-culturally (Chudek, McNamara, Birch, Bloom, & Henrich, n.d.) and in pre-221 BCE Chinese texts (Slingerland & Chudek, 2011). If we intuitively think that minds are not part of bodies, and are not necessarily attached to bodies, then ghosts and spirits are logically more consistent with our intuitions than the scientific belief that the love you feel for your family and friends is nothing more than hormones and electrical signals in your brain.

Additional information

Funding

During the preparation of this manuscript, Purzycki and Willard were financially supported by the Cultural Evolution of Religion Research Consortium (CERC), which is funded by SSHRC and the John Templeton Foundation.
