ORIGINAL ARTICLE

On explaining and understanding cognitive behaviour

M. R. Bennett & P. M. S. Hacker
Pages 241-250 | Received 26 Jun 2014, Accepted 05 Nov 2014, Published online: 20 Nov 2020

Abstract

Objective

To show that experimental psychology can correlate physiological events and processes with behavioural manifestations, avowals and reports of thought and experience.

Method

Connective analysis.

Results

It is not necessary to establish a computational theory about the cognitive functions of human beings preparatory to an understanding of the functioning of the brain, for the latter is not dependent on the former. A theory of intellectual powers is not, then, required before making significant contributions to understanding the character of the brain activity that is correlated with, and perhaps a necessary condition of, the exercise of these powers. We consider these propositions in the context of the work of the cognitive psychologist Max Coltheart on theories of language comprehension. It is suggested that the different theories of reading he offers are anodyne descriptions of reading under laboratory conditions. Generalizations to which this work leads, such as ‘The reading system has direct access to the mental lexicon’, do not constitute theories, and such model building, presented as an explanation, constitutes only a redescription of the problem under consideration.

Conclusion

Human beings are living creatures with perceptual, volitional and affective powers informed by reason, behaving purposively and pursuing goals against a backdrop of social norms.

Introduction

It has been a peculiarity of Western thought that in our efforts to render ourselves intelligible to ourselves, we attempt to explain ourselves on the model of our most sophisticated technology. Plato conceived of the soul on the model of a charioteer (Reason) endeavouring to control two horses (Passion and Appetite). In the seventeenth century, Descartes conceived of the human organism on the model of the sophisticated automata that so fascinated him and his contemporaries. Unlike the Aristotelians, Descartes thought that all forms of life can be explained in purely mechanical, physical terms—just like the behaviour of automata. The only exception, he held, was the domain of the mental—which he restricted to man, a being composed of two distinct types of substance, mind and body, in mutual interaction. But here too, his model for the comprehension of the relation between mind and body was taken from up-to-date technology. For, in his view, the mind controls the flow of animal spirits to the muscles via the pineal gland, just as by means of levers the water engineer controls the flow of water from the tanks to the fountains and cascades for the elaborate water displays in seventeenth-century royal gardens. In the first half of the twentieth century, it was common to explain the relationship between mind and body by analogy with the central telephone exchange, the mind being the telephone operator. It is hardly surprising that today we should be mesmerized by computers and try to understand ourselves on the model of our latest sophisticated technology.

It might be suggested that someone who knew nothing about computers might be puzzled at the appearance of the letter A on the screen in response to pressing the A-key on the keyboard (Coltheart, 2012). If he asked a programmer to explain this to him, the programmer would explain how discrete chunks of code severally perform discrete information-processing tasks. These, he would explain, are called ‘modules’ (e.g., detecting which key has been depressed, looking up in a lexicon of stored letter forms which stored description is of the letter on the key, etc.). The entire programme can be represented diagrammatically as a flow chart, with individual modules represented by boxes and the pathways between them by arrows. This software description is to be contrasted with the engineering hardware description. Cognitive scientists often say that the mind is the software of the brain (Block, 1995), and that the performance by a human being of any cognitive task could be described at these two different levels (Coltheart, 2012, p. 12):

‘One could propose what the information‐processing steps are that link the input to the output when the cognitive task is being performed. That would be a description at the mental or cognitive (“software”) level, and it could be expressed explicitly in the form of a box‐and‐arrow flowchart.’
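Coltheart's computer analogy is easy to make concrete. The sketch below is a minimal Python rendering of the modular decomposition just quoted; every name in it (detect_key, look_up_form, KEY_CODES, and so on) is our invention, standing in for a box in such a flowchart rather than for any real system's code.

```python
# A minimal sketch of a modular, box-and-arrow decomposition: three invented
# 'modules', each performing one discrete information-processing task.

KEY_CODES = {65: "A", 66: "B"}                    # scan code -> key identity
LETTER_FORMS = {"A": "glyph-A", "B": "glyph-B"}   # stored letter-form 'lexicon'

def detect_key(scan_code: int) -> str:
    """Module 1: detect which key has been depressed."""
    return KEY_CODES[scan_code]

def look_up_form(letter: str) -> str:
    """Module 2: look up the stored description of the letter's form."""
    return LETTER_FORMS[letter]

def render(form: str) -> None:
    """Module 3: display the retrieved form on the screen."""
    print(f"displaying {form}")

# The 'arrows' of the flowchart: each module's output feeds the next.
render(look_up_form(detect_key(65)))              # displaying glyph-A
```

The software/hardware contrast is then the contrast between this listing and a description of the transistors that execute it; the dispute pursued below is whether anything analogous holds of readers and speakers.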

Indeed, it is claimed that cognitive psychology has provided detailed explicit theories, expressible as information-processing flowcharts, of how many cognitive activities are accomplished. This is an inescapable step that has to be completed before one can hope to proceed to the second-level hardware description that is the province of neuroscience (Coltheart, 2012). Why so? Because, Coltheart explains,

‘one needs to begin with a theory of how some cognitive activity is performed, a theory expressed at the cognitive level, because only when such a theory is available can one seek to investigate how the components of that theory are implemented at the neural level’ (loc. cit.).

We suggest that such modular theories in cognitive psychology are misconceived, presenting as an explanation what is actually no more than a re-description of the problem. After considering some general methodological issues, we indicate why computational models are not explanations and why theories of information processing, as conceived by Shannon (1948) for machines like computers, have been unsuccessful when applied in psychology. Next, as an example of the ‘cognitive level of explanation’, we consider a model for reading and understanding speech (Coltheart, 2012; Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001), and show that it does not offer explanations of the activity involved. Finally, we emphasize that our criticisms of the ‘cognitive level of explanation’ must not be seen as rejecting the various efficacious training procedures for improving the psychological capacities of those with disabilities. Indeed, it is in such work that psychologists most profitably exercise their ingenuity and skills.

General Methodological Considerations

Any analogy between the brain and the computer should be viewed with as much suspicion as the past analogies between the brain and the central telephone exchange, or the brain and a water control system. As we have argued (Bennett & Hacker, 2003, 2006), and shall elaborate below, it is an analogy that has no warrant in similarity of the phenomena. The analogy between the mind and the software of a computer is equally suspect, for there is no useful sense in which the mind is akin to the program of a computer.

Cognitive psychology originated in reaction to the crude behaviourism that beset psychology in the interwar years and beyond. The faults of behaviourism are now well known, but it is important not to overlook its merits. Behaviourists were right to emphasize that language learning is based on training, and that it presupposes shared behavioural reactions and responses. They were right to see language acquisition as learning new forms of behaviour—learning how to do things with words in human intercourse (not learning how to operate a truth‐conditional calculus). Logical behaviourists were right to see an internal relation between mental attributes and behaviour, properly construed. For the criteria for ascribing mental predicates to other people consist of what they do and say. But the logical behaviourists were mistaken in conceiving the relationship reductively, supposing the mental to be reducible to the behavioural. One can simulate and pretend, and one can think or feel without showing what one thinks or feels. So it makes sense, in certain circumstances, to describe someone as manifesting such‐and‐such behaviour but to deny that the person is in the corresponding mental state, and it makes sense to ascribe thoughts and feelings to a person even though they are not exhibited in behaviour.

The non‐verbal and the verbal expressive behaviour that manifest psychological attributes constitute criteria, i.e., constitutive evidence, for appropriate psychological ascriptions. The criterial link is not inductive but conceptual. Nevertheless, it is looser than entailment—the behaviour is necessarily good evidence for the presence of the psychological attribute. It is defeasible, but if not defeated, it normally suffices for certainty. It makes sense to ascribe a psychological attribute to another being, truly or falsely, only if it is possible for that being to display such behaviour as would count as good evidence for the ascription of the psychological attribute, i.e., the appropriate forms of behaviour must be in the creature's behavioural repertoire. Hence, the limits of thought and experience are the limits of the possible behavioural expression of thought and experience. What the experimental psychologist can do is correlate physiological events and processes with these behavioural manifestations, avowals, and reports of thought and experience.

Cognitive psychology began in the 1960s as a laudable endeavour to reintroduce into psychology attention to the human mind, to mental abilities, states, processes, and activities. But as one of its founding fathers, Jerome Bruner, was to admit many years later, all it did was to introduce putative non-mental computation into psychology, rather than introducing understanding, reasoning, thinking, believing, supposing, let alone knowing, realizing, or being conscious of things. What the Cognitive Revolution did, according to Bruner (1990), was to abandon intentionality (Brentano's ‘mark of the mental’; Brentano, 1874) or ‘meaning making’ as its central concern, ‘opting for “information processing” and computation instead’. But these are just computer analogies, the validity of which should be subjected to scrutiny.

The claim that to understand the functioning of the brain, one needs to begin with a computational theory about the cognitive functions of human beings rather than with painstaking descriptions of human powers and their exercise is not correct. It is not obvious that the cognitive functions of human beings, in particular speech and the understanding of speech, thought and its expression, are executed by computational means. To be sure, we need a reliable description of the relevant human abilities and of the human behaviour that manifests them. But to suggest that we need a computational theory before we even begin our investigation is to demonstrate a commitment to a form of explanation that may well be inappropriate to the phenomena that we seek to understand.

Indeed, to talk of ‘inputs’ and ‘outputs’ is more problematic than it appears, precisely because it is based on a questionable computer analogy. What precisely is the ‘input’ to a reader reading a text? Is it what he sees? And what does he see? Is the ‘input’ what impacts upon his retina, i.e., light waves or photons? These are not visibilia at all. Is the input black linear shapes? No reader of a text can see a text in his native tongue as mere black shapes. Or is the input letters? Or words? Or words comprehended? And what is the ‘output’?—Tongue and laryngeal movements? Or meaningless sounds? Or words read? Or words comprehended? Or sentences comprehended? Or speech-acts performed?

It is important to bear in mind that we see other human beings not as embodied minds or animated bodies, but as living creatures with perceptual, volitional, and affective powers informed by reason and acting for reasons, behaving purposively and pursuing goals against a backdrop of social norms. We naturally see their behaviour as suffused with intentions and with intentionality, not as ‘bare bodily movements’. That we do so is no part of any theory (a ‘theory of mind’, as some psychologists urge), any more than our psychological vocabulary is part of a theoretical vocabulary (of ‘folk psychology’ as some folk would have it).

Computational Models

Why computational models are not explanations

Computational models may be used to relate empirical observations with a theory and in that way connect, for example, some law to observations of phenomena. One category of computational models is connectionist models, used most famously by Rumelhart and McClelland (1986) in their investigation ‘On Learning the Past Tenses of English Verbs’, in which they comment:

‘We have shown that our simple learning model shows, to a remarkable degree, the characteristics of young children learning the morphology of the past tense in English. We have shown how our model generates the so‐called U‐shaped learning curve for irregular verbs and that it exhibits a tendency to overgeneralize that is quite similar to the pattern exhibited by young children’.

It is doubtful whether this provides an ‘explanation’ of how children learn the past tense, given the Oxford English Dictionary definition of this word, namely ‘a statement or account that makes something clear’, or of the verb ‘explain’, ‘to make (an idea or situation) clear to someone by describing it in more detail or revealing relevant facts’. Certainly, the past tense model may open up a range of new psychological investigations that had not been thought of before, but it does not provide an explanation of the phenomenon. That depends on an empirical investigation of the developing brain in relation to the task of learning the past tense, no mean feat, as well as a later mechanistic interpretation of the findings. Psychological observations of children's innate imitative tendencies in response to training, and of the development of their grasp of implicit syntactical rules in response to teaching, can yield generalizations about child learning. But the only explanations they yield are explanations in terms of behavioural tendencies and dispositions. The discovery of such regularities, of course, is of great importance for teaching and learning. Their explanatory power, however, is small (roughly speaking: A does V in circumstances C, because A's tend to do V in circumstances C′). They describe the norm (what is normal) in the mastery of normative (rule-governed) behaviour. This makes prediction possible but does not explain the regularities described.
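To make concrete what such a connectionist simulation involves, here is a drastically simplified sketch in the spirit of, though not faithful to, Rumelhart and McClelland's model: a single layer of weights trained by the delta rule on invented binary ‘feature’ vectors that stand in for their Wickelfeature encodings of verb stems and past-tense forms.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented stand-ins for phonological feature encodings: four verb stems
# (inputs) and their past-tense forms (targets) as random binary vectors.
stems = rng.integers(0, 2, size=(4, 16)).astype(float)
pasts = rng.integers(0, 2, size=(4, 16)).astype(float)

W = np.zeros((16, 16))                      # connection weights, input -> output
lr = 0.5                                    # learning rate

def forward(x):
    return (x @ W > 0.5).astype(float)      # thresholded output units

for _ in range(200):                        # delta-rule learning
    for x, t in zip(stems, pasts):
        W += lr * np.outer(x, t - forward(x))   # nudge weights toward the target

# With these settings the network typically reproduces the trained mappings,
# and it responds to novel stems purely in virtue of feature overlap, which is
# the source of the overgeneralization behaviour the quoted passage reports.
hits = sum((forward(x) == t).all() for x, t in zip(stems, pasts))
print(f"{hits} of {len(stems)} trained mappings reproduced")
```

The trained weights describe and predict a mapping; on the argument above, nothing in them says how a child learns.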

Zipser and Andersen (1988) used a connectionist model to show how neurons in the posterior parietal cortex of primates that respond both to the retinal location of a visual stimulus and to the position of the eyes might provide the link between retinal and head-centred coordinate systems. This model showed that units with precisely the properties of the parietal neurons could provide the transform of the coordinates required, even though such properties were not originally specified in the model. In order to achieve this, they used a ‘back-propagation’ algorithm in which the ‘hidden units’, taken as the parietal neurons, develop their properties after successive iterations of a specific input and output to the model. This work did not provide an explanation for the properties of the parietal neurons, for it is entirely unknown how these are connected, nor is the back-propagation algorithm realizable in biological neuronal networks. The famous Hodgkin–Huxley model of the action potential consists of a set of differential equations that describe the behaviour of a specific group of voltage-dependent conductances in an electrical circuit taken to represent the membrane of a neuron. This is without doubt the most influential model in the history of neuroscience: at the phenomenological level, it has shown that many of the characteristics of the action potential can be predicted by the model, and at the mechanistic level, it has spurred on research to find the molecules that have the characteristics of the voltage-dependent conductances. Yet the model itself does not provide such mechanisms and is heuristic, giving a highly accurate description of the action potential and of phenomena associated with it rather than being genuinely explanatory. One must then distinguish between models that explain and those that are descriptive/predictive of phenomena (Kaplan, 2011).
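How little the Hodgkin–Huxley model commits itself to mechanism can be seen from how little code it takes to reproduce an action potential. The sketch below integrates the standard squid-axon equations; the parameter values are the textbook ones, while the crude Euler scheme, time step, and stimulus are our choices for illustration.

```python
import numpy as np

# Hodgkin-Huxley membrane model (standard parameters, -65 mV resting convention).
C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3            # uF/cm^2 and mS/cm^2
ENa, EK, EL = 50.0, -77.0, -54.4                  # reversal potentials, mV

def rates(V):
    """Voltage-dependent opening/closing rates of the gating variables."""
    am = 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
    bm = 4.0 * np.exp(-(V + 65) / 18)
    ah = 0.07 * np.exp(-(V + 65) / 20)
    bh = 1.0 / (1 + np.exp(-(V + 35) / 10))
    an = 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
    bn = 0.125 * np.exp(-(V + 65) / 80)
    return am, bm, ah, bh, an, bn

dt, T = 0.01, 50.0                                 # ms
V, m, h, n = -65.0, 0.05, 0.6, 0.32                # resting state
trace = []
for step in range(int(T / dt)):
    I = 10.0 if 5.0 <= step * dt <= 6.0 else 0.0   # brief current pulse, uA/cm^2
    am, bm, ah, bh, an, bn = rates(V)
    m += dt * (am * (1 - m) - bm * m)              # gating kinetics
    h += dt * (ah * (1 - h) - bh * h)
    n += dt * (an * (1 - n) - bn * n)
    I_ion = gNa * m**3 * h * (V - ENa) + gK * n**4 * (V - EK) + gL * (V - EL)
    V += dt * (I - I_ion) / C                      # membrane equation
    trace.append(V)

print(f"peak membrane potential: {max(trace):.1f} mV")   # a spike near +40 mV
```

The equations fit and predict the waveform superbly, yet nothing in them names a channel protein; that is the sense in which the model is descriptive/predictive rather than mechanistic.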

Computational models in the cognitive sciences are not explanatory, according to the above analysis. These models, like the Dual Route Cascaded Model of Coltheart et al. (2001), involve algorithms specifically designed to describe and predict some form of behaviour, with their success measured by the extent to which they do so. These models do not distinguish an activity that is merely in accordance with a rule from an activity that is genuinely normative, i.e., that intentionally follows a rule (Piccinini, 2007). As there is sparse empirical evidence for the conjectured elements and their function in the model (Bechtel & Abrahamsen, 2010), the claim that they offer a mechanism for a behaviour and are explanatory is incorrect. In order to claim that one has illuminated the workings of a mechanism responsible for a particular phenomenon, it is necessary to decompose the phenomenon into component parts and operations and then show how these function together, in such a way as to lead directly into further research on the details of the mechanisms pertinent to the phenomenon (Bechtel, 2008). Behaviour per se, as used by cognitive scientists in order to develop an algorithm for a particular behaviour, does not offer guidance as to the mechanisms involved. Furthermore, the use of a computational model of the suggested algorithm does not advance the claim that it is explanatory.

Information and Information Processing

Two different senses of ‘processing information’

In the shaky sense in which human beings may be said to process information, computers do not. In the sense in which computers process what may, in a technical sense, be deemed to be information, human beings do not. For in the information‐theoretic sense in which computers process information, neither understanding nor knowledge is involved. Moreover, the quantity of information involved in any processing is completely independent of sense or meaning. The concept of information introduced by Claude Shannon in his revolutionary 1948 paper has nothing to do with sense or meaning (as he readily admitted) but only with the relative frequencies of signals. Understanding the speech of another and responding to his speech act intelligently is not an information‐processing task in Shannon's sense. Nor is reading a text with understanding. On the other hand, one may (perhaps) say of an accountant or statistician that they process information in the ordinary sense of the word ‘information’. One may (perhaps) say of the retinae and the optic nerves that they process information in the information‐theoretic sense of the word. However, it is of capital importance not to confuse or conflate the two different senses of ‘information’.

In what sense the brain can be said to process information

The normative use of the word ‘information’ is, according to the Oxford English Dictionary, derived from the Latin informare, meaning to instruct, to give form to the mind, to form an idea. This normative use of the word we will call N-information. Shannon (1948), in his mathematical theory of communication, gave the word ‘information’ an entirely different definition. He asked: if x is a discrete random variable, how much ‘information’ is received when a particular value of x is detected following transmission in an electronic system? Shannon postulated that if a highly probable value of x is detected, little ‘information’ is gained, whereas much is gained from a value of low probability, so that the probability distribution p(x) of x determines the quantity of ‘information’, h(x), obtained. This is the Shannon use of the word, which we will call S-information, defined by h(x) = −log₂ p(x). The conflation of N-information with S-information is often used, perhaps unintentionally, to lend a theoretic sophistication to N-information on the one hand and to introduce N-information into theoretic considerations on the other, both leading to confusion. Attempts have been made, for example, to apply S-information to the analysis of activity in large numbers of neurons in primate visual cortex that are activated in response to looking at different faces, with the aim of showing that the experimental observations can plausibly be interpreted as suggesting that temporal synchrony of neuronal activity is less important than combining neurons that are active in response to different features of the faces (Rolls & Treves, 2011). While interesting from a theoretic point of view, this is a far cry from N-information, from giving ‘form to the mind’.
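The independence of S-information from meaning can be exhibited in a few lines. In the sketch below the alphabet and its probabilities are invented; the point is that h(x), and the average per-symbol quantity (the source entropy), depend on the probabilities alone.

```python
import math

# S-information: the quantity attaching to an outcome depends only on its
# probability, h(x) = -log2 p(x), never on what, if anything, the symbol means.
p = {"e": 0.5, "t": 0.25, "q": 0.125, "z": 0.125}   # an invented source alphabet

for symbol, prob in p.items():
    print(f"h({symbol!r}) = {-math.log2(prob):.2f} bits")

# Average S-information per symbol (the entropy of the source):
H = -sum(prob * math.log2(prob) for prob in p.values())
print(f"H = {H:.3f} bits/symbol")                    # 1.750 for these values

# Relabelling 'e', 't', 'q', 'z' as any other symbols leaves every number
# unchanged -- S-information is blind to sense, which is the point at issue.
```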

When it comes to animal behaviour, attempts have been made to identify the ‘input’ to organisms and then analyse their function or behaviour. For example, simulation models have been used to examine the extent to which the movement of crabs can be predicted from chemotaxis, involving simultaneous spatial comparisons of chemical signals, and rheotaxis, involving odour-stimulated upstream movement, with both treated as important ‘sources of information’ (Weissburg & Dusenbery, 2002). Indeed, such research led to a treatise on ‘How Organisms Acquire and Respond to Information’ (Dusenbery, 1992). Here ‘information’ is neither N-information nor S-information.
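A toy version of such a simulation, with every quantity invented for illustration, shows what ‘sources of information’ amounts to here: two movement rules reading two physical signals, with nothing resembling N-information in play.

```python
import numpy as np

rng = np.random.default_rng(1)

def concentration(pos):
    """Invented odour plume: a smooth peak at the source (the origin)."""
    return float(np.exp(-np.linalg.norm(pos) / 4.0))

pos = np.array([8.0, 3.0])               # start downstream of the source
upstream = np.array([-1.0, 0.0])         # flow runs along +x

for _ in range(150):
    # Chemotaxis: compare two spatially separated samples, steer to the higher.
    left = concentration(pos + np.array([0.0, 0.3]))
    right = concentration(pos + np.array([0.0, -0.3]))
    chemo = np.array([0.0, 1.0]) if left > right else np.array([0.0, -1.0])
    # Rheotaxis: odour-gated upstream movement.
    rheo = upstream if concentration(pos) > 0.05 else rng.normal(size=2)
    heading = rheo + 0.5 * chemo + 0.2 * rng.normal(size=2)
    pos = pos + 0.1 * heading / np.linalg.norm(heading)

# The agent works upstream through the plume while steering toward its axis:
print(f"final position: ({pos[0]:.1f}, {pos[1]:.1f})")
```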

Attempts at using S-information in psychology have not been illuminating, as indicated in Laming's (2001) detailed critique of almost all attempts to apply S-information theory. In his review ‘Whatever Happened to Information Theory in Psychology?’, Luce (2003) asks the question ‘Why is information theory not very applicable to psychological problems despite apparent similarities of concepts?’ He answers this with the comment, echoing the present work, that ‘the most important answer lies in the following incompatibility between psychology and information theory (S-information). The elements of choice in information theory are absolutely neutral and lack any internal structure: the probabilities are on a pure, unstructured set where elements are functionally interchangeable’. Or, as recently stated by Vigo (2011) in his attempt to overcome this objection, ‘the aim of Shannon's theory of communication and of Shannon's information measure (S-information) was to characterize information in terms of transmission and readability of data from the standpoint of electronic systems and not human cognizers’.

Descriptions of abilities contrasted with theories of abilities

Coltheart (2012) suggests that before we can make any headway with understanding the character of brain activity that is correlated with, and perhaps a necessary condition of, the exercise of human intellectual powers, we need a theory of those powers. This is tantamount to the claim that one cannot figure out how some complex device works without a theory of what it does. This is unlikely to be correct. A human being, to be sure, is not a device, nor is the human nervous system or brain. The neural vehicles of human abilities are not devices either. The vehicle of an ability is the underlying material structure that makes the ability possible. In general, the vehicle of our cognitive abilities might be said to be the sense organs and cortex. The vehicle of our visual powers is the eye, thalamus, and visual cortex, and perhaps whatever parts of the cortex are involved in our unavoidably subsuming what we see under concepts. But, of course, our eyesight is distinct from its vehicle, just as the horsepower of a car is distinct from its pistons and cylinders. To work out how the vehicle of a human ability enables a human being to do what it can do does indeed require a meticulous description of what it can do, but a description is not a theory.

According to Coltheart (2012), the required theory of a cognitive ability should be cast in modular form akin to the box diagrams employed in describing a computer programme. No grounds are given for this suggestion. In particular, it is unclear why a non-theoretic, but nevertheless scientific, description of the exercise of human intellectual powers is not what is needed. Coltheart (2012) quotes Gallistel (1999, p. 843) with approval: ‘An analysis at the behavioural level lays the foundation for an analysis at the neural level. Without this foundation, there can be no meaningful contribution from the neural level.’ This, in our view, is wholly unobjectionable. What is objectionable is the extra step Coltheart (2012) advocates, namely that the behavioural level of description should take the form of a modularised computer programme. What is the warrant for this claim?

It is also unclear why Coltheart (2012) believes that a box diagram employed in explaining the information processing of a computer is a theory of anything, as opposed to a description of the separate functions of the machine. A computer programmer who explains the program of a computer is not constructing a theory of the computer that has to be confirmed or infirmed in experiments. He is offering a descriptive breakdown of the tasks the computer is programmed to execute mechanically. To be sure, such a description may also function as a blueprint. But a blueprint is not a theory either.

Moreover, the history of neuroscience amply demonstrates that one can make the most important discoveries concerning the workings of the nervous system in general and of the vehicles of our cognitive powers without a theory of what is done. The great discoveries of Sherrington and Eccles about the function and workings of the motor, somato‐sensory, and supplementary motor cortex provided profound insights into locomotion without any antecedent modular theories. Similarly, the great discoveries of Hubel and Wiesel concerning the workings of the visual cortex and its role in endowing us with our visual powers involved no cognitive theories.

Reading and Understanding Speech on the Cognitive Model

Analytic decomposition of abilities

As Coltheart (2012) observes, we agree that numerous human abilities can be decomposed into constituent abilities. In order to be able to read a text with understanding, we must be able to recognize the words of the text and discern the letters of which they are composed, we must know what the words mean, and we must understand what is said by the use of the sentence of which they are constituents. It is unwise to redescribe this in the terms Coltheart chooses, namely: ‘the system we use to read can be decomposed into a letter recognition sub-system, a word-recognition sub-system, and a semantic sub-system’. It is unwise since (1) we do not use any system to read, and (2) a constituent ability is misleadingly described as a ‘sub-system’ of the ‘reading system’. This need not lead to confusion, but can easily do so.

Having noted our acceptance of the decomposition of abilities, Coltheart (2012) remarks that ‘It is therefore strange to find them [Bennett and Hacker] asserting that the claim that language consists of modules is unclear. The sub-systems of the reading system that I have listed are modules: What is unclear about that? … as I have noted computer programs are said to consist of modules in exactly this sense’ (p. 13). There is nothing strange about what we said, for language does not consist of modules. If anything here consists of modules, it is the ability to understand or read a language, and whether the ability to understand English speech or to read English texts consists of modules in exactly the same sense as computer programs do is precisely what is under dispute.

Descriptive models are not explanatory

We argued that the so-called models advanced by cognitive psychologists were not explanatory models at all, but merely question-begging re-descriptions of the phenomena to be explained. As we shall explain below, there is, and could be, no such thing as a mental lexicon to which speakers and readers have access when comprehending speech or a text. What there is, we wrote, ‘is the human being's mastery of a vocabulary, his ability correctly to use the words of any language he knows. This is a skill, but not a skill in looking up anything in a mental lexicon.’ To have mastered a language is not to have access to anything, nor is it to retrieve anything from storage. These contentions are rejected by Coltheart (2012) on the grounds that they betray ‘a fundamental lack of knowledge of the past few decades of research on the experimental psychology of language’. But this is no argument. Coltheart confuses ignorance (which he has not demonstrated) with conceptual objections to computational psychology. After all, what we are saying has remarkable similarities to what Jerome Bruner said, and he cannot be accused of ignorance of computational psychology (Bruner, 1990).

The argument Coltheart offers is that were the so-called models of cognitive linguists mere re-descriptions of the phenomena to be explained, there could not be conflicting theories about how lexical access is achieved, nor could there be the possibility of adjudicating between such theories by empirical tests (p. 13f). But this begs the question. For there may well be conflicting descriptions.

We shall confront his only argument. Examples of conflicting theories, Coltheart (2012) contends, abound.

‘In the 1970s there were two conflicting accounts of how readers gained access to the representations of words in their mental lexicons: that this involved a direct-access process as specified by the logogen model, or that it involved a process of serial search, as proposed by Forster (1976). Coltheart, Davelaar, Jonasson, and Besner (1977) derived conflicting predictions from these two different theories of how lexical access operates, and carried out a lexical decision experiment to test these predictions. The results were inconsistent with the predictions from both theories (how could that have happened if these theories were mere re-descriptions of the behavioural phenomena?), but it was easy to offer a slightly modified direct-access theory that was consistent with the result of the experiment. … The point is that if theories of lexical access were mere re-description of the data, there could not be conflicting theories intended to explain the data, nor could such theories correctly predict the results of not-yet-done experiments’. (p. 14)

However,

  1. there is no such thing as a representation of words in a mental lexicon;

  2. there is no such thing as a mental lexicon;

  3. there is no such thing as gaining access to a mental lexicon.

This is not a matter of fact (like: there is no such animal as a unicorn), but a matter of logic (like: there is no such thing as a four‐sided triangle). Being a matter of logic, the assertion that there is no such thing as a representation of words in a mental lexicon means that this very phrase lacks sense, that it is a senseless concatenation of words. Consequently, there can no more be an intelligible theory of how words are represented in a mental lexicon than there could be an intelligible theory about the location of the East Pole or about how to square circles.

Mental dictionaries and lexicons

The expression ‘mental dictionary’ was first introduced into psychology by Anne Treisman (1961) in her doctoral thesis, as was the notion of a ‘concept store’ in the brain. Coltheart et al. (2001) ascribe to Treisman the detection of four different modules involved in speech and linguistic understanding: (1) a store of concepts (meanings); (2) a store of ‘sound pictures’ used to recognize spoken words; (3) a store of representations of spoken words that is used for producing spoken words, rather than for recognizing them; (4) a store of object representations used to recognize seen objects. But this is misconceived. For all Treisman noted is that in order to be able to speak and comprehend speech, one must know what the words in question mean (and, by the way, a concept is not the meaning of a word), one must be able to identify spoken words, be able to produce correctly pronounced words, and be able to recognize objects around one about which one might speak. That much is true, but hardly a new theoretical insight. The rest, as we shall show below, does not make sense. The talk of ‘stores’ is at best a metaphor for an ability; at worst it reifies abilities and confuses an ability with its vehicle. The reification is altogether non-trivial, for it leads Coltheart et al. (2001) to ascribe actions to abilities. But abilities do not, and cannot, do anything. Representing these mythological ability ‘stores’ by means of box diagrams is not a model of anything; it is merely a misleading description in diagrammatic form of abilities exercised in speech and comprehension of speech.

It should be obvious that ‘mental dictionary’ and ‘mental lexicon’, if they mean anything, mean no more than knowing the meanings of words in one's language. For a dictionary is a book of normative (rule-governed) correlations of words with other words that explain their meanings. One has recourse to a dictionary when one does not know what a given word means. There are, and could be, no dictionaries in the mind or in the brain. Nor does it make any sense to speak of there being rules in the brain. But normal speakers do know what a host of words in their language mean—that is, they know how to use words correctly and can explain what the words mean in a sentence that they understand.

It is obscure what is meant by ‘gaining access to the representation of words in one's mental lexicon’. One might suppose that a ‘representation of a word’ is a written word. A representation of the word ‘red’ is ‘/r/–/e/–/d/’. But one cannot ‘store’ script in one's brain or in one's mind—only remember how a given word is spelled and pronounced. If a representation of a word is some neural configuration that is systematically correlated with using, seeing, or hearing a word—then such representations have yet to be discovered, no one ‘has access’ to them, and they are not in a mental lexicon. Might one not postulate a mental lexicon, and postulate access to it? Not until one has given a cogent explanation of what the expression ‘mental lexicon’ means, and a further explanation of what ‘access’ to it means. And that has not been done. We all know what a dictionary is, and we know what is meant by having access to a dictionary. No one knows what is meant by ‘having a dictionary in the brain’, nor does anyone know what it would be to take the dictionary in the brain off the bookshelf in the brain, or to page through it searching for the entry. But even if we could make sense of these phrases, we (or our brain, or our ‘reading system’) would still have to be able to read the dictionary in the brain to find out what is written in it! So the problem we were trying to solve turns up again in the proposed solution! And does it make sense to suppose that brains can read? Or that reading systems can read? And if, per impossibile, they could, how would that help the person who is reading?

The idea that knowing what a word means consists in associating or linking the word with a concept of which it is the name does not make sense. Words are not names of concepts. Concepts do not have names. Some words can, with qualifications, be said to name the things to which they apply, i.e., the things that fall under the concepts severally expressed by the words. The word ‘red’ is not the name of the concept of redness (and ‘rot’, ‘rouge’, and ‘rosso’ are not alternative names). In so far as we classify the adjective ‘red’ as a name, it is the name of a colour, not of a concept. Concepts are not objects of any kind and are no more storables than shadows. In so far as they can be said to be anythings, concepts are techniques of applying words. (That can be misleading, since concepts are expressed by words, and techniques of use are not. Nevertheless, to have mastered a concept is indeed to have mastered the technique of using the word that expresses that concept.) But there is no such thing as storing a technique of use.

Putting aside the picturesque descriptions derived from lexicography, there is a fundamental question to be faced. Is anything more being said in these theories of mental or neural lexicons than the following: it is a plausible hypothesis that there are cortical structures that endow us with the abilities we have to recognize letters and words, to pronounce words, to remember what they mean, and to understand the sentences in which they occur? At the moment we have no idea what they are or how they function. That seems undeniable, but it is no theory of anything. One might want to add the claim that in order to understand the sentences we read, we must compute the sense of the sentence from the meaning of its constituent words and their mode of combination. But that is no computational psychological theory, let alone a neuro-linguistic hypothesis. It is a philosophical doctrine, with its roots in Wittgenstein's Tractatus Logico-Philosophicus (1922) and in subsequent truth-conditional semantics. It was adopted as a cornerstone of Chomsky's theoretical linguistics (Chomsky, 1957), and the computational process was attributed to something called ‘the mind-brain’. That does not make it a scientific claim, only an eminently questionable philosophical dogma.

Two accounts of reading

Were there really two different theories of reading in the 1970s? We suggest that there were anodyne descriptions of reading behaviour under laboratory conditions (including the reading of non-words, partially concealed words, etc.). Such descriptions were then translated into the modular language of computer programs. These were duly generalized into such sentences as ‘The reading system has direct access to the mental lexicon’ or ‘The reading system has serial access to the mental lexicon’—both of which are of questionable intelligibility. That does not make such statements into theories. Not every conjecture is a theory, and conjectures cast in the language of computer programming, involving such notions as ‘concept-modules’, ‘concept-stores’, and ‘representations of words in mental lexicons’, make no sense. All that can be said is that a previous set of experiments suggested that readers either do so-and-so in given circumstances or do such-and-such, but further investigation showed that both suggestions were mistaken. To explain the error in terms of ‘direct access’, ‘serial access’ or ‘modified direct access’ is to explain failures, difficulties, and deficiencies in reading, or differences in speed of reading, by reference to mythological activities and incoherent descriptions. For human beings do not, and could not, have access to mental dictionaries or concept stores, no matter whether the ‘access’ is direct, serial, or ‘modified direct’. Nor do brains. (What would a brain do with a dictionary?) Nor does anything called ‘a reading system’ have access to concept stores and mental lexicons, since there is no such thing as a reading system, let alone any such thing as a reading system's having access to concept stores or lexicons, mental or otherwise.
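For readers unfamiliar with the computational idiom, the contrast on which the 1970s debate traded is easily stated in code. The sketch below uses an invented four-word lexicon; ‘serial search’ and ‘direct access’ are here just two lookup strategies, and nothing in the sketch bears on whether readers can intelligibly be said to employ either.

```python
# Two lookup strategies over an invented toy lexicon.
LEXICON = ["the", "of", "beech", "reach"]        # ordered list (e.g., by frequency)

def serial_search(s: str) -> bool:
    """Forster-style serial search: scan the entries one by one."""
    for entry in LEXICON:                        # time grows with list position
        if entry == s:
            return True
    return False

DIRECT = set(LEXICON)                            # hash-indexed store

def direct_access(s: str) -> bool:
    """Logogen-style 'direct access': one indexed lookup, position-independent."""
    return s in DIRECT                           # constant time on average

print(serial_search("beech"), direct_access("beech"))   # True True
```

The two functions do make different predictions, for instance about how decision time varies with an item's place in the ordering; that such predictions diverge is not in dispute here, only whether either function describes anything a reader does.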

Professor Coltheart remonstrates that if a flow diagram were merely a redescription of the phenomenon to be explained, there could not be different flow diagrams yielding different predictions. He acknowledges that there could be two distinct ‘computational models’, both equally complete and sufficient, and both capturing ‘all the effects known to characterize human performance in the relevant cognitive domain’. But, he adds (Coltheart et al., 2001, p. 204), ‘Nothing ever guarantees, of course, that any theory in any branch of science is correct’, and so one can rest satisfied with any theory one has that is both complete and sufficient. But it is not true that nothing ever guarantees the correctness of a scientific theory. Conclusive verification does. If it were true that we can never be sure of the truth of a scientific theory, we should not know that the sun is at the centre of the solar system, that Harvey's theory of the circulation of the blood is correct, or that puerperal fever is caused by infection. And so on and so forth. But we do. What is true is that we do not know which of two different modular theories of reading is correct. But that, we suggest, is precisely because they are not genuinely explanatory theories at all.

Explanation

Varieties of explanation

Coltheart (2012) declares that according to our view, the only acceptable form of explanation of behavioural phenomena is at the neural level (p. 14). But that is untrue, and the dichotomy he presents is mistaken. There are numerous forms of explanation of human behaviour that are neither neural nor computational. Human behaviour is explained in terms of instinctual responses and natural behavioural dispositions; it is explained in terms of personal or cultural habits, natural or acquired tendencies, and acquired skills; it is explained by reference to dispositions of mind and of character; it is explained by reference to reasons, goals, and purposes; it is explained in terms of motives; and so on. These are explanations of human behaviour (including intentionalist explanations), in contrast to the neural explanations of how movements (or indeed mere muscular contractions) occur that were considered above under ‘Why computational models are not explanations’. Of course, normal human behaviour could not occur at all without the normal functioning of the neural system. The so-called cognitive level of explanation, championed by Coltheart and his colleagues in cognitive psychology, contributes neither to the normative nor to the mechanistic forms of explanation. Indeed, it is not even cognitive. For according to cognitive psychologists, neither the subject nor anyone else knows anything about the postulated processes of accessing the concept store module in order to find the concept of which a given word is the name, or of accessing non-conscious mental representations of phonemes.

Ignorance and mystery

Coltheart (2012) contends that since we deny that the explanations of certain reading phenomena offered by cognitive psychology make sense, and since we agree that there is at the moment no neural explanation of the phenomena, these aspects of language behaviour therefore ‘remain mysterious’ and are ‘completely inexplicable’. All that is true is that we offered no alternative explanations of the phenomena that Coltheart and his colleagues are trying to explain. That does not mean that we think that the phenomena are mysterious. Ignorance does not imply mystery. Nor does it mean that in our view the phenomena are inexplicable—only that Coltheart and his colleagues have not explained them. We did not set ourselves up to answer questions in psychology—only to demonstrate that the answers being offered by computational psychologists do not make sense. They lack sense because the phrases ‘mental lexicon’, ‘lexical access to the representations of words in the mental lexicon’, and ‘concept module’ are, we suggest, without meaning.

Explanations of reading

In his paper, Coltheart (2012) describes three interesting experiments concerning reading. His purpose is to show that cognitive level explanations are genuine explanations. The experiments involve measuring the relative speeds at which people read pronounceable non-words such as freps (five phonemes) and feech (three phonemes). Experiments showed that it takes longer to read the latter than the former. The explanation of the difference that is offered in the case of the first experiment is that the way

‘the cognitive procedure we use for reading aloud non-words works is by translating serially and left to right, letters into phonemes. When it has reached the second letter in feech it is translating the string fe, and so is generating the phoneme /e/ (as in ‘fed’) as the second phoneme for the response. But when this procedure gets to the third letter and is translating the string fee, it generates the phoneme /i:/ as the second phoneme in response to the letters ee. So now, there are two competing candidates for the second phoneme position, /e/ and /i:/. There is mutual inhibition between phonemes competing for the same position, and the inhibition exerted by the wrong phoneme /e/ on the correct phoneme /i:/ slows the rate at which the correct phoneme arises’ (p. 15).

The explanation continues, but this preliminary observation suffices for present purposes.
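Read purely as a description of a computer procedure, the quoted passage amounts to something like the following sketch. The rule table and the bookkeeping are invented stand-ins for the DRC machinery; the sketch shows only why, on such a serial left-to-right procedure, feech yields rival phoneme candidates at positions where freps yields one.

```python
# Toy serial, left-to-right grapheme-phoneme translation. Longest graphemes
# are preferred, so re-parsing a growing prefix can revise earlier phonemes.
RULES = {"f": "/f/", "r": "/r/", "e": "/e/", "ee": "/i:/", "p": "/p/",
         "s": "/s/", "c": "/k/", "ch": "/tS/", "h": "/h/"}

def translate(word):
    candidates = {}                        # position -> phonemes proposed so far
    for i in range(1, len(word) + 1):
        prefix, phonemes, j = word[:i], [], 0
        while j < len(prefix):             # greedy parse, two-letter rules first
            two = prefix[j:j + 2]
            if two in RULES and j + 2 <= len(prefix):
                phonemes.append(RULES[two]); j += 2
            else:
                phonemes.append(RULES[prefix[j]]); j += 1
        for pos, ph in enumerate(phonemes):
            candidates.setdefault(pos, set()).add(ph)
    return candidates

print(translate("freps"))   # one candidate per position: no 'competition'
print(translate("feech"))   # positions 1 and 2 each collect two candidates
```

Whether the resulting talk of candidates ‘competing’ and ‘inhibiting’ one another describes anything a reader does is exactly what the following remarks dispute.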

It is true that we read English from left to right. It is doubtful whether that can be deemed to be a ‘cognitive procedure’. It is even more doubtful whether reading aloud involves translating letters into phonemes. It involves uttering phonemes in response to seeing sequences of letters. (A pianist does not translate the score into finger movements!) It is not the ‘cognitive procedure’ that utters the phonemes, it is the reader. If the reader is a small child learning to read, the child will very likely read ‘/f/–/e/–/e/’ and then pause to say ‘fee’. A competent adult does no such thing, not even sotto voce. Nor does something called ‘his cognitive procedure’. Indeed, nor does something psychologists call ‘his reading system’. Moreover, the so-called competing phonemes /e/ and /i:/ are not the sorts of things that could compete with, let alone inhibit, each other. For these are pronunciation possibilities, not actualities; and possibilities do not compete with each other.

Three points are noteworthy. First, Coltheart (2012) explains the three experimental results he discusses in his paper without any recourse to box diagram models. Irrespective of the acceptability or unacceptability of his explanations, this suggests that any work attributed to the models is purely decorative. Second, whatever the correct explanation may be, it would be question-begging to suppose that one can derive conclusions about reading ordinary English words from experiments that involve reading non-ordinary English non-words. Third, embellishing the explanation with box diagrams (software design) does not explain anything. The data to be explained are that it takes a fraction of a second longer for readers to read a five-letter non-word with three phonemes than it takes to read a five-letter non-word with five phonemes. To explain this by reference to a flow chart beginning with ‘visual analysis’ leading to ‘grapheme-phoneme conversion’, which is then momentarily blocked by a ‘buffer’, and so forth, is no more than to redescribe the question in the guise of an answer.

It is striking that cognitive psychologists appear to labour under the misapprehension that if young children read phoneme by phoneme, it follows that mature readers who have mastered the skills of reading also do so, only very much more quickly. But that is like assuming that because a child has to learn to walk before it can run, running must be a form of walking, only very much quicker, so that anyone who runs simultaneously walks. Similarly, it begs the question to suppose that a phenomenon characteristic of reading English non-words is to be explained by reference to hypothesized mechanisms that are also at work when reading English words. That we all pause a few milliseconds when reading ‘feech’ does not imply that the same fictitious mechanisms are at work when we read ‘beech’ without any hiatus, only very much faster. It should be possible to suggest some intelligible and testable hypotheses about why it takes a few milliseconds longer to read unfamiliar English non-words in which a single phoneme is spelled with more than one letter. Indeed, for all we know, Coltheart's conjecture that it takes longer to read ‘feech’ than to read ‘freps’ because it takes longer to grasp a digraph in a non-word than to grasp a simple vowel may be true. But that gives no support to the claim that Coltheart's (2012) box diagrams provide explanatory models.

Conclusion

Our criticisms of Coltheart's conception of ‘cognitive level explanation’ should not be thought to imply rejection of the various efficacious training procedures that use letter–sound rules (phonics) to improve the reading skills of those who have difficulties in learning to read words. A recent Cochrane Collaboration meta-analysis of the efficacy of phonics in 11 studies of 736 participants showed that phonics training is effective for improving some reading skills (McArthur et al., 2012). A senior author on this article is from Coltheart's department, but the 11 studies on which the meta-analysis is based do not indicate in their titles that they depend on any form of the ‘cognitive level of explanation’ advanced by Coltheart (2012).

Similarly, we do not want to leave the reader with the impression that one cannot in a worthwhile way hypothesize that some part of the brain is malfunctioning in, say, dyslexia, and then try to determine if there are deficits in that part of the brain (or other parts), using psycho-physical techniques (Ramus et al., 2003). Such studies provide solid experimental data, which we do not believe are provided by the ‘cognitive level of explanation’. It is the design and implementation of experiments that gather such data that might most profitably engage the ingenuity and creativity of cognitive psychologists.

References

  • Bechtel, W. (2008). Mechanisms in cognitive psychology: What are the operations? Philosophy of Science, 75, 983–994. Part II: Symposia.
  • Bechtel, W., & Abrahamsen, A. (2010). Dynamic mechanistic explanation: Computational modeling of circadian rhythms as an exemplar for cognitive science. Studies in History and Philosophy of Science, 41, 321–333.
  • Bennett, M. R., & Hacker, P. M. S. (2003). Philosophical foundations of neuroscience. Oxford: Blackwell Publishing.
  • Bennett, M. R., & Hacker, P. M. S. (2006). Language and cortical function: Conceptual developments. Progress in Neurobiology, 80, 20–52.
  • Block, N. (1995). The mind as the software of the brain. In D. Osherson, L. Gleitman, S. Kosslyn, E. Smith, & S. Sternberg (Eds.), An invitation to cognitive science. Cambridge, MA: MIT Press.
  • Brentano, F. (1874). Psychologie vom empirischen Standpunkt. Leipzig: Verlag von Duncker & Humblot.
  • Bruner, J. S. (1990). Acts of meaning. Cambridge, MA: Harvard University Press.
  • Chomsky, N. (1957). Syntactic structures. The Hague: Mouton.
  • Coltheart, M. (2012). The cognitive level of explanation. Australian Journal of Psychology, 64, 11–18.
  • Coltheart, M., Davelaar, E., Jonasson, J. T., & Besner, D. (1977). Access to the internal lexicon. In S. Dornic (Ed.), Attention and performance VI (pp. 535–555). Hillsdale, NJ: Lawrence Erlbaum Associates.
  • Coltheart, M., Rastle, K., Perry, C., Langdon, R., & Ziegler, J. (2001). DRC: A dual route cascaded model of visual word recognition and reading aloud. Psychological Review, 108, 204–256.
  • Dusenbery, D. B. (1992). Sensory ecology: How organisms acquire and respond to information. New York: W. H. Freeman & Co.
  • Forster, K. L. (1976). Accessing the mental lexicon. In R. J. Wales & E. Walker (Eds.), New approaches to language mechanisms (pp. 257–287). Amsterdam: North‐Holland.
  • Gallistel, C. R. (1999). Themes of thought and thinking [Review of The nature of cognition, ed. by R. J. Sternberg]. Science, 285, 842–843.
  • Kaplan, D. M. (2011). Explanation and description in computational neuroscience. Synthese, 183, 339–373.
  • Laming, D. (2001). Statistical information, uncertainty, and Bayes' theorem: Some applications in experimental psychology. In S. Benferhat & P. Besnard (Eds.), Symbolic and quantitative approaches to reasoning with uncertainty (pp. 635–646). Berlin: Springer‐Verlag.
  • Luce, R. D. (2003). Whatever happened to information theory in psychology? Review of General Psychology, 7, 183–188.
  • McArthur, G., Eve, P. M., Jones, K., Banales, E., Kohnen, S., Anandakumar, T., … Castles, A. (2012). Phonics training for English-speaking poor readers. Cochrane Database of Systematic Reviews, (12), Art. No. CD009115.
  • Piccinini, G. (2007). Computational modeling versus computational explanation: Is everything a Turing machine, and does it matter to the philosophy of mind? Australian Journal of Philosophy, 85, 93–115.
  • Ramus, F., Rosen, S., Dakin, S. C., Day, B. L., Castellote, J. M., White, S., & Frith, U. (2003). Theories of developmental dyslexia: Insights from a multiple case study of dyslexic adults. Brain: A Journal of Neurology, 126, 841–865.
  • Rolls, E. T., & Treves, A. (2011). The neuronal encoding of information in the brain. Progress in Neurobiology, 95, 448–490.
  • Rumelhart, D. E., & McClelland, J. L. (1986). On learning the past tenses of English verbs. In D. E. Rumelhart, J. L. McClelland, & the PDP Research Group (Eds.), Parallel distributed processing: Explorations in the microstructure of cognition. Vol. 2: Psychological and biological models (Ch. 18). Cambridge, MA: The MIT Press.
  • Shannon, C. E. (1948). A mathematical theory of communication. The Bell System Technical Journal, 27, 379–423, 623–656.
  • Treisman, A. M. (1961). Attention and speech. PhD dissertation, Oxford University, unpublished.
  • Vigo, R. (2011). Representational information: A new general notion and measure of information. Information Sciences, 181, 4847–4859.
  • Weissburg, M. J., & Dusenbery, D. B. (2002). Behavioral observations and computer simulations of blue crab movement to a chemical source in a controlled turbulent flow. The Journal of Experimental Biology, 205, 3387–3398.
  • Wittgenstein, L. (1922). Tractatus logico-philosophicus (C. K. Ogden, Trans.). London: Routledge & Kegan Paul.
  • Zipser, D., & Andersen, R. A. (1988). A back-propagation programmed network that simulates response properties of a subset of posterior parietal neurons. Nature, 331, 679–684.
