SPECIAL ISSUE

How to operationalise consciousness

Pages 390-410 | Received 13 Apr 2019, Accepted 20 Jun 2019, Published online: 20 Nov 2020

Abstract

Objective

To review the way consciousness is operationalised in contemporary research, discuss strengths and weaknesses of current approaches and propose new measures.

Method

We first reviewed the literature pertaining to the phenomenal character of visual and self‐consciousness as well as awareness of visual stimuli. We also reviewed more problematic cases of dreams and animal consciousness, specifically that of octopuses.

Results

Despite controversies, work in visual and self‐consciousness is highly developed and there are notable successes. Cases where experiences are not induced, such as dreams, and where no verbal report is possible, such as when we study purported experiences of octopuses, are more challenging. It is difficult to be confident about the reliability and validity of operationalisations of dreams. Although this is a general concern about the measurement of consciousness, it is not severe enough to completely undermine the work reviewed on vision and self‐consciousness. It is more difficult to see how the good work on human psychology can be applied to non‐human animals, especially those with radically different nervous systems, such as octopuses. Given the limitations of report‐based operationalisations of consciousness, it is desirable to develop non‐report‐based measures, particularly for phenomenal qualities. We examine a number of possibilities and offer two possible approaches of varying degrees of practicality, the first based on combining quality‐space descriptions of phenomenal qualities with the notion of a “neural activation space” inherited from connectionist A.I., the second being a novel match‐to‐target approach.

Conclusion

Consciousness is a multi‐faceted phenomenon and requires a variety of operationalisations to be studied.

WHAT IS ALREADY KNOWN?

  • Reports are used to operationalise visual and self‐consciousness.

  • It is difficult to apply this work to dreams.

  • It is unclear how consciousness may be operationalised in animals very different to humans, such as octopuses.

WHAT THIS ADDS?

  • Report‐based measures can be rigorous, but are not always applicable.

  • It may be possible to combine report‐based measures with future work on neural activation spaces.

  • A novel match‐to‐target approach is offered.

INTRODUCTION

Here, we review some of the more successful techniques for operationalising consciousness. We will examine what has been achieved through these approaches, while also covering some of the limitations each faces. Unfortunately, we are unable to offer solutions to all the problems currently discussed in the literature, but we do offer some ways forward. We will not be able to discuss every aspect of consciousness, nor would it be particularly meaningful to do so, because “consciousness” is multiply ambiguous. Of interest here are two, possibly different, phenomena to be explained (Kriegel, 2005; Nagel, 1974; Rosenthal, 2002). What we are focused on is “consciousness” referring to experiences. We consider two senses of “consciousness” as experience. The first is “awareness.” This is similar to what has been called “access consciousness” (e.g., Block, 1995), although we find the precise definition of access consciousness too theoretically loaded to be useful here. In western cultures, awareness is often understood using container metaphors. Indeed, many of us are so comfortable with these metaphors that it is easy to lose sight of the fact that they are metaphors, and not universal ones at that (cf. Barrett, 2004). We say some of our mental states are “in” awareness and some are not. For example, in a subliminal priming word‐stem completion task, visual representations of the word stem, the keyboard, the computer screen and the like are processed “in” awareness, whereas the masked prime is not. Instead, the masked prime is processed “in” the unconscious mind. More literally, subjects are aware of, for example, the computer screen, but not aware of the masked prime. One of the questions of this paper is: how can we tell when subjects are aware of something and when they are not?

The second sense of “consciousness” that we are interested in is the “phenomenal qualities” of consciousness. This is the famous “what it is like” of consciousness (Nagel, 1974). This goes beyond the subject, say, seeing words on a computer screen, to the qualities of that experience: the whiteness of the letters, the blackness of the screen, the shape of the letters, the subtle changes of lightness from a shadow over the screen, and the like. To operationalise phenomenal qualities we need to measure not whether a particular mental representation is processed in or out of awareness, but instead what it is like for the subject. We will leave aside controversies about whether awareness and phenomenal qualities are co‐extensive. Most of the operationalisations considered here will apply regardless of whether only mental states in awareness have phenomenal qualities.

Different techniques have different ambitions when it comes to measuring consciousness. Where some focus on determining whether a subject is aware of a stimulus, others attempt to describe what experiences are like through the measurement of quality spaces. Many seek, more simply, to determine the representational content of experiences. We will begin with a consideration of the techniques used to operationalise the phenomenal qualities of visual experiences and our awareness of visual stimuli. This is followed by the techniques used to operationalise aspects of self‐consciousness (understood as both awareness and phenomenal qualities) in both laboratory and clinical contexts. Work in these areas is highly developed and there are notable successes in operationalising both awareness and phenomenal qualities, as well as ongoing, theory‐dependent controversies regarding whether measures can exclusively and exhaustively operationalise consciousness. Following this, we turn to the more challenging case of dreams. It is very difficult to be confident about the reliability and validity of operationalisations of dreams. Although this is a general concern about the measurement of consciousness, it is not severe enough to completely undermine the work reviewed on vision and self‐consciousness. It is, however, more difficult to see how the good work on human psychology can be applied to non‐human animals, especially those with radically different nervous systems, such as octopuses. Given the limitations of report‐based operationalisations of consciousness, it is desirable to develop non‐report‐based measures, particularly for phenomenal qualities (some such measures exist for awareness, discussed in the vision section). We consider “no‐report” paradigms (Tsuchiya, Wilke, Frässle, & Lamme, 2015) used to examine the neural correlates of consciousness and offer two possible approaches of varying degrees of practicality, the first based on combining quality‐space descriptions of phenomenal qualities with the notion of a “neural activation space” inherited from connectionist A.I., the second being a novel match‐to‐target approach.

The point of the paper, if achieving such a goal is even possible in a review, is to encourage methodological pluralism. As we will see, no one methodology is perfect and every method has its strengths and weaknesses. The hope underlying pluralism is that the weaknesses of different methods will not coincide, so that they “cancel” each other out and enable the triangulation of the phenomenon of interest. We hope that this survey of the strengths and weaknesses of various operationalisations of consciousness will advance the attempt to home in on consciousness.

HOW IS CONSCIOUSNESS OPERATIONALISED IN LAB CONDITIONS?

Even today some researchers would have us believe that to operationalise consciousness is an impossible dream:

“The ‘observational hard problem’ is fundamental in the same way [as Chalmers' purported Hard problem of consciousness], as there seems no way to observe subjective experience by way of objective methods.” (Overgaard, 2015)

Overgaard's concerns are hugely overstated. As one of us has argued (Schier, 2016), if we accept that there is no way to observe subjective experiences by objective means, that there are facts about experiences that are locked inside the subject, then dualism follows (see also Weisberg, 2011). Even if the possibility of dualism were still a live option, we should not entertain methodological assumptions that presuppose its truth. In fact, we will see that the diversity of measures of awareness, and the different reports of the content of experiences that these different measures provide, suggest that the subject's access to their experiences is no more direct or indirect than the observer's.

Phenomenal qualities of visual experiences

The study of visual experience has focused on what is purportedly the hardest aspect of phenomenology: colour experiences. Much of this work has examined the structure of colours, that is, the relationships between different colour experiences, by asking subjects to judge the similarity of stimuli. This produces “quality spaces,” that is, multidimensional ordered lists of phenomenal qualities (except where otherwise stated this treatment is derived from Clark, 1993). These list phenomena in order of their relative similarity: items that are more similar appear closer together and more dissimilar items are more distant. As phenomenal qualities typically vary along more than one dimension, that is, there is more than one way in which subjects might rate them as similar or different, quality spaces are typically multidimensional. Probably the most famous examples of quality spaces are the various colour solids. These spaces represent some of the colour experiences subjects can have by assigning each colour a point in a three‐dimensional (3D) space. Dimensions correspond to ways in which a colour experience can appear similar to or distinct from other colour experiences. There are Euclidean dimensions corresponding to lightness/brightness and saturation as well as a radial dimension corresponding to hue.

Quality spaces are derived from subjects' judgements of the (dis)similarity of stimuli (see below for various methodologies). These judgements are then subjected to multidimensional scaling (MDS) in order to derive a similarity space. The result is a description of the perceived similarity of the stimuli and the number of dimensions of the space. The dimensions represent the ways in which subjects judge the stimuli to be similar to each other. However, MDS itself provides only the number of dimensions, not their interpretation, so additional analysis is required. Nonetheless, it is possible to discern the nature of the dimensions. For example, when asking subjects to judge matte coloured chips on a neutral background, we find dimensions corresponding to hue, saturation and lightness (see Clark, 1993 for an accessible introduction).
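
To make the analysis pipeline concrete, the following is a minimal sketch, in Python, of how (dis)similarity judgements might be submitted to MDS. The dissimilarity values and the choice of the scikit-learn library are illustrative assumptions, not a description of any particular study.

```python
# Hypothetical averaged dissimilarity ratings for four colour chips
# (0 = indiscriminable); values are invented for illustration.
import numpy as np
from sklearn.manifold import MDS

dissimilarity = np.array([
    [0.0, 2.0, 6.0, 9.0],
    [2.0, 0.0, 5.0, 8.0],
    [6.0, 5.0, 0.0, 3.0],
    [9.0, 8.0, 3.0, 0.0],
])

# Fit MDS solutions of increasing dimensionality; the "elbow" in the
# stress (misfit) values suggests how many dimensions the quality space
# needs. Labelling those dimensions (hue, saturation, lightness, ...)
# remains a further, separate analytic step.
for k in (1, 2, 3):
    mds = MDS(n_components=k, dissimilarity="precomputed", random_state=0)
    coords = mds.fit_transform(dissimilarity)
    print(f"{k}D solution, stress = {mds.stress_:.3f}")
```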

There are a range of different ways of gathering (dis)similarity judgements that can be analysed using MDS. Some methods enable the attribution of a numerical value to large differences. The first of these is called “direct proximity judgements.” In these tasks, subjects assign a numerical rating to describe the relative similarity of pairs of stimuli. For example, they are first given a sample dissimilarity pair (e.g., yellow and blue) with an assigned value, for example, 100. Pairs that cannot be discriminated are given a value of 0. The subject's task is to compare the dissimilarity of two targets to the sample pair and assign an appropriate value (Tokunaga & Logvinenko, 2010). This can be tricky for subjects and so other tasks are often used. One alternative is the “odd‐one‐out” methodology, which we will return to in the self‐consciousness section below.

Other techniques only enable the attribution of numerical difference values to small differences in stimuli. For example, in the “just noticeable differences” (JND) approach subjects are presented with two stimuli that vary at most slightly on the feature of interest. Subjects are asked to decide if they are the same or different. Researchers collate these judgements, plotting the proportion of “different” responses on the y‐axis against the size of the difference in the stimulus attribute (e.g., surface spectral reflectance, or wavelength of emitted light) on the x‐axis. The stimulus difference at which the line crosses 50% is called the “just noticeable difference” (Palmer, 1999). In other words, the JND is the stimulus difference at which subjects distinguish the stimuli at better than chance. This is easier for subjects as they only have to judge whether or not there is a difference in the stimuli, rather than assign a value to the size of the difference. However, it is most useful for looking at small differences and faces limitations in that it does not address the question of whether it is possible to add up small differences in order to get a measure of larger differences (Boynton, 1978).
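
As a hedged illustration of the procedure just described, the sketch below fits a cumulative‐Gaussian psychometric function to hypothetical same/different data and reads off the 50% point. The data, the Gaussian form and the SciPy fitting routine are all assumptions made for the example.

```python
# Estimate a JND: fit a psychometric function to the proportion of
# "different" responses and find where it crosses 50%. Data are invented.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

diffs = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])          # stimulus difference
p_diff = np.array([0.08, 0.20, 0.45, 0.70, 0.88, 0.97])   # proportion "different"

def psychometric(x, mu, sigma):
    # Cumulative Gaussian: mu is the 50% point, sigma governs the slope.
    return norm.cdf(x, loc=mu, scale=sigma)

(mu, sigma), _ = curve_fit(psychometric, diffs, p_diff, p0=[1.5, 0.5])
print(f"JND (50% crossing) ≈ {mu:.2f} units")
```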

Other measures ask subjects to make changes to stimuli in order to produce pairs of stimuli that are indiscriminable. One approach to this is to match a target to a sample by changing the stimulus properties of the target, for example, by adjusting the wavelength of a light. This may not be the best approach as subjects often report not being able to get a precise match no matter what adjustments they make to either stimulus. As this may be due to low confidence in the ability to create a match, rather than because the stimuli do not appear to match, another approach called the “minimally distinct border” (Boynton, 1978) is used. In this kind of study two coloured patches are presented next to each other. If they appear to be different, a border between the two stimuli is identifiable. However, if they appear to be the same colour, it will look like a uniform surface. Subjects can make the stimuli indiscriminable by adjusting them until no border appears.

These approaches have produced detailed descriptions of colour experiences, although only a small number of colour experiences have been examined in this way. It has long been known that the appearance of a surface can change depending on the viewing conditions, and so these types of studies have focused on standardised viewing conditions. The desire to get a detailed map of phenomenal colours led researchers to examine the structure of colour space in very controlled viewing conditions, usually with the goal of ensuring the adequate replication of colours, for example, in printing or on screen. In the last two decades this approach has been extended to begin mapping experiences of colours on backgrounds of varied colour (Ekroll, Faul, & Niederee, 2004; Ekroll, Faul, Niederee, & Richter, 2002), in more complex stimulus configurations (Ekroll et al., 2004), containing shadows (Logvinenko, 2015a) and with 3D objects (Xiao, Hurst, MacIntyre, & Brainard, 2012). It might be the case that more than three dimensions are needed to map the colour experiences elicited by complex stimuli (Logvinenko, 2015a, 2015b; Tokunaga & Logvinenko, 2010).

Visual awareness

In a very general sense, there is a unanimous answer to the question “how should we operationalise visual awareness?” Ask the subject. However, questions about the possibility of subliminal visual perception have led to a range of methodological and theoretical refinements to this general methodology.

In vision, issues surrounding the operationalisation of awareness arise in the debate about whether there is subliminal perception, that is, whether there are visual representations which can influence behaviour in the absence of awareness of the content of those representations. The general methodology is dissociative: to show that the information that is represented in the cognitive system (via its influence on behaviour) is greater than what the subject is aware of being represented (Erdelyi, 1985, 1986). The methodological refinements examined below come out of the attempt to answer sceptical challenges to various purported demonstrations of subliminal perception. Many solutions to the problem have been proposed and ultimately rejected. The focus here will be on those that are relevant to operationalising awareness.

In the oldest studies, subjects were presented with stimuli that they could not discriminate and were asked to make a guess about the nature of the stimuli (see Adams, 1957 for a review). In an early non‐visual example, Peirce and Jastrow (1884) applied pressure to a finger followed by a second slightly greater or lesser pressure and asked subjects to determine which one was heavier and to say how confident they were in the judgement. Even when they reported the “absence of any preference for one answer over its opposite, so that it seemed nonsensical to answer at all” (Peirce & Jastrow, 1884, p. 78), they were nevertheless significantly above chance in guessing which touch was heavier. Representations of the difference in pressure were available to guide guesses even though subjects reported no awareness of them.

As argued by Eriksen (1960), the problem with this methodology is that subjective reports can be influenced not only by what the subject is aware of but also by other factors, such as individual differences in the level of certainty subjects require before reporting their experiences. For example, as Irvine (2012) emphasises, if subjects report the presence of the word “shit” less often than the word “shot” under identical masking conditions, this is likely because they are less willing to report a somewhat taboo word under uncertainty, not because the word “shit” is harder to perceive. Experimental factors such as the weight of rewards and punishments for false positives and negatives also shift an individual's response criterion. A mathematical response to this problem was developed in the form of signal detection theory, which models decision making under conditions of uncertainty and allows the separation of sensitivity (the ability to detect the stimulus) from response bias (the tendency to prefer one response over another independently of the input) (Swets, 1964).
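
To illustrate how signal detection theory separates these two quantities, here is a minimal sketch using the standard equal‐variance Gaussian model; the detection counts are invented for the example.

```python
# Compute sensitivity (d') and response bias (criterion c) from
# hypothetical yes/no detection counts.
from scipy.stats import norm

hits, misses = 42, 18               # stimulus present: "yes" / "no"
false_alarms, correct_rej = 12, 48  # stimulus absent:  "yes" / "no"

hit_rate = hits / (hits + misses)
fa_rate = false_alarms / (false_alarms + correct_rej)

d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)             # sensitivity
criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))  # response bias
print(f"d' = {d_prime:.2f}, c = {criterion:.2f}")
```

Two subjects with the same sensitivity but different willingness to say “yes” will produce the same d' but different values of c, which is exactly the separation the dissociation literature needs.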

However, this does not address the broader concern that self‐report methodologies underestimate the content of a subject's awareness (Eriksen, 1960): that they are not exhaustive measures of awareness (Reingold & Merikle, 1988, 1990). Because of this exhaustivity concern, it was suggested that subjective report be abandoned in favour of more objective measures. Merikle and Cheesman distinguished between the:

subjective threshold, the level of discriminative responding at which observers claim not to be able to detect perceptual information at better than a chance level of performance, and the objective threshold, the level of discriminative responding corresponding to chance‐level performance (Merikle & Cheesman, 1986, p. 42).

While they argued that consciousness is subjective and therefore that subjective measures are to be preferred, others encourage the use of objective measures, because if the subject cannot even correctly guess whether, for example, the stimulus was present or absent, then claims that there is residual awareness of the stimulus seem misplaced.

Objective measures might deal with the exhaustivity problem, but they are problematic in that they may not exclusively measure the contents of awareness (Reingold, 2004, p. 884; see also Reingold & Merikle, 1988, 1990). Perhaps the best evidence of the inappropriate strength of objective measures is blindsight. Patients with blindsight have suffered damage to part of the primary visual cortex, such that they are partially cortically blind. They report not being able to see in the part of the visual field the damaged cortex was responsive to. Nonetheless, they do substantially better than chance in forced‐choice paradigms guessing features of stimuli presented to the apparently blind region. In other words, blindsight patients seem to be subjectively unaware of stimuli presented in their blindfield and yet are objectively aware (although see Overgaard, Fehl, Mouridsen, Bergholt, & Cleeremans, 2008 for evidence that there is not a dissociation in performance between objective and subjective awareness). If these reports of no visual awareness are veridical, then this suggests that objective measures are not exclusive, that is, they measure both supraliminal and subliminal processes. After all, what could it mean to claim that subjects are aware of stimuli that they deny seeing?

The general problem is that if the dissociation method is to provide one definitive experiment for the existence of subliminal perception, we need a measure of awareness that is both exclusive and exhaustive. Subjective measures seem better suited to ensuring exclusivity and objective measures to exhaustivity. Given that the stimulus values for objective and subjective thresholds are quite different, there will be problems integrating these distinct tasks into a single task that is both exclusive and exhaustive. Hence, much current work focuses on understanding the nature of the subjective and objective thresholds and the relations between them (Breitmeyer, 2015; Dubois & Faivre, 2014; Gelbard‐Sagiv, Faivre, Mudrik, & Koch, 2016; Knotts, Lau, & Peters, 2018; Overgaard & Mogensen, 2017; Persuh, 2018; Peters, Kentridge, Phillips, & Block, 2017; Rausch, Müller, & Zehetleitner, 2015; Rausch & Zehetleitner, 2014, 2016; Sandberg, Del Pin, Bibby, & Overgaard, 2014; Timmermans & Cleeremans, 2015; Zehetleitner & Rausch, 2013). Although this seems reasonable given the foregoing considerations, it turns out to make assumptions about the nature of awareness that are not universally agreed upon. Specifically, it seems to assume that a mental state's “entering awareness” (in line with the container metaphors introduced above) changes the content of that state. However, this assumption is controversial.

On some theories of consciousness, the exclusivity criterion will never be met. Recall that exclusivity requires that operationalisations of awareness measure only what is in awareness. However, given the apparent dependence of consciousness on unconscious processing, due to the hierarchical nature of perceptual processing, this seems implausible (Snodgrass, 2004; Snodgrass, Bernat, & Shevrin, 2004). Consider, for example, the use of inter‐aural timing differences to determine the spatial location of a sound source. This difference is not explicitly represented in auditory experiences. The experience is not about (does not bear a content‐grounding relation to) the difference in the time that the signal reaches each ear. What it is about is where the sound source is located in space, and this content is constructed in part on the basis of inter‐aural timing differences. The experience contains information about inter‐aural timing differences even though it is not about them. It cannot be that experience contains no information about unconscious processing because, as this example shows, it always will. The exclusivity criterion needs to be made more precise if it is not to be implausible. To show a dissociation, the content of consciousness cannot be the same as the content of unconsciousness. This means that the dissociation methodology requires the assumption that awareness changes the content of the representations.

This assumption is implausible on at least some accounts of awareness. Consider what we could call, following Dennett (1995), fame accounts of awareness. Many of the most prominent accounts say that awareness is the set of processes by which information is made available to the broader cognitive system, whether by its being available to the global workspace (Baars, 2002; Dehaene & Changeux, 2003; Dehaene & Naccache, 2001), there being a higher‐order thought about it (Rosenthal, 1997) or it becoming the dominant draft (Dennett, 1991). On these accounts any discrimination only needs to be made once: it is not made in the unconscious mind and then again in awareness. Here the difference between aware and unaware perceptual processes is not that further discriminations are made in awareness, but that awareness is the process that makes the discriminations made pre‐awareness available to the wider system. That is, becoming aware of (or better “with”) a representation need not change the content of that representation. On such accounts, exclusivity can never be met, because the information in consciousness is the same as the information in the unconscious. Yet according to Reingold and Merikle (1990), and seemingly following from some considerations above, if exclusivity is not met then a failure to find a dissociation cannot be evidence that there is no perception without awareness, because the failure of dissociation might occur because, for example, both measures index both conscious and unconscious information/content. Given that exclusivity cannot be met on fame accounts of awareness, the dissociation methodology cannot provide evidence for the presence of perception without awareness on these accounts.

In contrast, the exclusivity assumption is more plausible on binding accounts of awareness, such as Prinz's (2000, 2012). According to Prinz, awareness is identical with object‐level attention, the process which binds together the disparate streams of visual processing into a coherent whole. On this account the content of visual states before awareness is different to that after awareness: before awareness there are a range of representations of, for example, colour, form and motion, but they are not bound together into a representation of a particular object (e.g., a coloured form moving in a particular direction). On a binding account of awareness, the content of aware and unaware states is different. On such accounts there could be an exclusive measure of awareness.

This suggests that to the extent that the dissociation methodology requires the exclusivity criterion to be met, it might not be the best way to demonstrate subliminal processing. Whether or not one believes an exclusive measure of consciousness is possible depends on whether one accepts or rejects a fame account of awareness. For the purposes of this review, the point is that the way subliminal perception has been examined empirically makes theoretical assumptions about the nature of awareness. If the exclusivity assumption is made, then we are assuming the falsity of fame accounts. This is an uncomfortable situation given that accounts such as the Global Workspace Theory were developed in large part to explain the difference between subliminal and supraliminal perception. The onus here is on advocates of fame accounts of awareness, like the Global Workspace Theory, to develop more specific testable predictions about the relation between aware and unaware visual processing, so that the nature of subliminal processing can be examined empirically.

Another perennial problem with the dissociation methodology arises from the nature of the most prevalent test for significance—null hypothesis significance testing (NHST). The basic logic of this methodology is to ask whether the data are sufficiently improbable under a particular hypothesis, the null hypothesis, to justify rejecting it. If a pre‐defined threshold is passed, usually a 5% probability of falsely rejecting a true null hypothesis, then there is “sufficient” evidence against the null hypothesis. If this threshold is not passed, then there is not enough evidence to reject the null hypothesis. A general problem with this methodology is that a failure to reject the null hypothesis is inherently ambiguous—it might mean that the null hypothesis is true or it might mean that it is false but the experiment did not have sufficient power to demonstrate this (Dienes, 2014). This is relevant here because the hypothesis that there is processing that affects behaviour without awareness is analysed as the null hypothesis. In order to show that subjects can be affected by primes that they are not aware of, there needs to be evidence that subjects are not aware of the primes; but this is treated as the null hypothesis, and so it is accepted merely because it fails to be rejected. Using NHST this is done by showing that there is insufficient evidence that subjects can discriminate the presence of a prime from its absence in a forced‐choice task at above‐chance rates. This seems like a strict criterion: it uses an “objective” threshold, which is less subject to bias than “subjective” report—namely, that subjects perform no differently from chance at guessing whether the prime is present. However, failing to reject the null hypothesis is not the same as showing that the null hypothesis is true.

As sceptical responses to purported examples of subliminal priming demonstrate, any demonstration that there was no awareness (i.e., a failure to reject the null hypothesis that discrimination of prime‐present versus prime‐absent trials is at chance) is ambiguous between there actually being no awareness and the experiment being too underpowered to find evidence of awareness. This general problem is even more acute when dealing with the small differences found in subliminal perception studies, because as the effect size shrinks the number of trials required to have sufficient power increases:

For example, with a sample size of 21 subjects with a mean true performance of 52%, one would need approximately 570 prime classification trials to bring the probability of wrongly accepting the null hypothesis down below 5%. With a true performance of 51%, approximately 2,300 trials would be needed (Finkbeiner & Coltheart, 2014, p. 27).
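
The scale of the problem can be illustrated with a generic power calculation. The sketch below uses a simple normal approximation for a single pooled binomial test; it is not Finkbeiner and Coltheart's procedure, so its numbers differ from theirs, but it shows the same explosive growth in trial counts as true performance approaches chance.

```python
# Trials needed for a one-sided binomial test against chance (alpha = .05)
# to reach 95% power, via the normal approximation. Illustrative only.
from math import sqrt, ceil
from scipy.stats import norm

def trials_needed(p_true, alpha=0.05, power=0.95, p_null=0.5):
    z_a, z_b = norm.ppf(1 - alpha), norm.ppf(power)
    num = z_a * sqrt(p_null * (1 - p_null)) + z_b * sqrt(p_true * (1 - p_true))
    return ceil((num / (p_true - p_null)) ** 2)

for p_true in (0.55, 0.52, 0.51):
    print(f"true performance {p_true:.2f}: {trials_needed(p_true)} trials")
```

Halving the distance from chance roughly quadruples the number of trials required.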

Because of this problem, several researchers have advocated Bayesian analyses of the data, using Bayes factors (see Dienes, 2014). The upside of Bayesian models is that they give you a measure of which hypothesis best fits the data (if any), such that you can distinguish between the null hypothesis being true, the null hypothesis being false, and the data being ambiguous. In contrast, standard NHST only enables you to distinguish between the null hypothesis being (probably) false and not being able to reject the null hypothesis (which could be either because it is true or because the results are ambiguous) (Dienes, 2015; Rouder, Speckman, Sun, Morey, & Iverson, 2009). The downside is that they require you to state precisely the predictions of your theory, for example, the expected level of performance on the task if the information was aware (Dienes, 2015). However, as Bayesian advocates urge, you cannot test a theory unless you know what it would predict.
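
As a toy illustration of what stating predictions buys you, the sketch below computes a Bayes factor for prime‐discrimination data, comparing a chance model (p = .5) against an alternative that spells out its prediction as a prior over performance. The counts and the uniform prior on .5 to .75 are assumptions made for the example, not a recommended specification.

```python
# Bayes factor for discrimination performance: chance model (p = .5)
# versus an alternative predicting performance uniformly on [.5, .75].
from scipy.stats import binom
from scipy.integrate import quad

k, n = 260, 500  # invented: correct prime classifications out of n trials

def alt_integrand(p):
    # Likelihood weighted by the alternative's prior density (1 / 0.25).
    return binom.pmf(k, n, p) / 0.25

m_alt, _ = quad(alt_integrand, 0.5, 0.75)  # marginal likelihood, alternative
m_null = binom.pmf(k, n, 0.5)              # likelihood under chance
print(f"BF (alternative over null) = {m_alt / m_null:.2f}")
```

A Bayes factor near 1 flags the data as ambiguous, which is exactly the verdict NHST cannot deliver.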

There have also been studies using the “mass at chance” Bayesian model of subliminal perception (Rouder, Morey, Speckman, & Pratte, 2007; Rouder & Morey, 2009; Rouder et al., 2009). Unlike standard SDT models, it assumes that there are thresholds in perception which are imposed on top of the underlying continuous perceptual data. The advantage of this model is that the dividing line between subliminal and supraliminal perception is proposed to exist in the system (something is below threshold if all stimulus values lead to the same outcome; for example, if at stimulus values above zero your response is the same as if the value were zero) rather than being imposed from the outside. This makes more sense of there being a limen than SDT does, since SDT assumes that the underlying perceptual information and the decision‐making process are continuous. Studies of subliminal perception using the mass‐at‐chance model have provided moderately strong evidence in favour of the existence of subliminal perception (e.g., Finkbeiner, 2011; Rouder et al., 2007).

There remains considerable controversy around distinguishing aware visual processing from wholly unaware processing. This may seem surprising, and it may turn out to have theoretical implications (cf. Carruthers, 2019). Nonetheless, the turn from NHST to Bayesian statistics is an important development. Until these questions have been answered, the possibility of answering the question “how can we operationalise visual awareness?” with “see what is visually represented (because all vision is aware)” remains open.

Self‐consciousness

As with visual experience, self‐consciousness tends to be operationalised in laboratory conditions using self‐report. A common technique is to elicit an experience and then have subjects report their experiences. Quantifiable questionnaires are the tool most frequently used to gather reports, as these introduce control questions that allow for quantitative comparison. This aims to ensure that subjects are reporting what they experience, and not what they think the experimenters want them to say.

One aspect of self‐consciousness that is studied in this way is the feeling of embodiment, that is, the feeling that one's body is one's own body (as opposed to someone else's) or that a particular body part is a part of one's own body. We know of no way to reliably eliminate the feeling of embodiment in healthy subjects (although hypnosis has been tried; Rahmanovic, Barnier, Cox, Langdon, & Coltheart, 2012), so the focus has been on using reliable manipulations that change the experience.

A prominent way of manipulating the feeling of embodiment uses the rubber hand illusion (RHI). The RHI is a complex experience in which subjects experience touch as coming from an artificial body part, and experience that artificial body part as if it were their own (Carruthers, 2014). It is this experience of the artificial body part as one's own, the “feeling of embodiment,” which is our focus. Our concern here is less with the methods that produce the illusion and more with how the feeling of embodiment is operationalised.

When studying the feeling of embodiment via the RHI, self‐consciousness is operationalised via verbal report and by non‐verbal behavioural changes. Botvinick and Cohen (1998) introduced a 9‐item questionnaire for subjects to report their experience of the RHI. On the questionnaire, subjects are asked to affirm or deny statements on a Likert or visual analogue scale, allowing for quantitative comparison of scores in different conditions and between items. Some items on the questionnaire ask subjects to report experiences that are a part of the RHI, namely (i) “It seemed as if I were feeling the touch of the paintbrush in the location where I saw the rubber hand”; (ii) “It seemed as though the touch I felt was caused by the paintbrush touching the rubber hand”; and (iii) “I felt as if the rubber hand were my hand.” Other items, such as “it felt as if my hands were turning ‘rubbery’” are not consistently agreed to by subjects and are treated as controls for suggestibility (Ehrsson et al., 2008). When subjects agree with items (i–iii) more than the control items they are taken to be reporting an experience of the RHI.
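
A minimal sketch of how such questionnaire data might be scored is given below; the ratings, the 7‐point scale and the choice of a Wilcoxon test are illustrative assumptions, not Botvinick and Cohen's analysis.

```python
# Compare endorsement of illusion items (i-iii) with control items,
# using invented ratings on a -3 (strongly disagree) to +3 (strongly
# agree) Likert scale; one row per subject.
import numpy as np
from scipy.stats import wilcoxon

illusion_items = np.array([[2, 3, 2], [1, 2, 2], [3, 3, 2], [0, 1, 1], [2, 2, 3]])
control_items = np.array([[-2, -3], [-1, -2], [-3, -2], [0, -1], [-2, -2]])

illusion_score = illusion_items.mean(axis=1)
control_score = control_items.mean(axis=1)

# Subjects are taken to report the illusion when illusion items are
# endorsed above the suggestibility controls.
stat, p = wilcoxon(illusion_score, control_score)
print(illusion_score - control_score, p)
```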

Longo and colleagues (2008) have extended this questionnaire approach to operationalising the RHI. They used principal components analysis to analyse an extended 27‐item questionnaire and discovered four significant groupings of responses. Component (1) they called “embodiment of rubber hand,” which included three subcomponents: (1a) “ownership,” which contained items referring to experiences such as the feeling of the rubber hand being a part of oneself, that is, what we are calling the feeling of embodiment; (1b) “location,” which contained items referring to the apparent location of the hand and the touch; and (1c) “agency,” which contained items referring to the experience of being able to move or control the rubber hand. Component (2) was named “loss of own hand,” and included items referring to the disappearance of one's real hand as well as being unable to move it. Component (3), named “movement,” included items referring to apparent motion of the real or rubber hand; and component (4), “affect,” contained items referring to the interestingness or enjoyableness of the experience. Following asynchronous stroking a further component, (5) “deafference,” was found, which contained items referring to sensations like numbness.

Some studies have used only non‐report‐based behavioural measures (e.g., Tsakiris, Prabhu, & Haggard, 2006). Proprioceptive drift, introduced by Botvinick and Cohen, operationalises the RHI by the perceived location of the hand. Subjects are asked to point (usually by sliding their finger under the desk) to where they perceive their hand (or finger) to be after the RHI is induced and after asynchronous control stimulation. Proprioceptive drift refers to the difference between these two locations, and this has been taken to index the strength of the illusion. A variation on this approach asks subjects to verbally report the apparent location of their hand relative to a randomly offset ruler. Although proprioceptive drift remains a popular measure of the RHI, there is clear evidence that it is not a good measure of the feeling of embodiment. Specifically, drift occurs when there is no alteration in the feeling of embodiment (Holmes, Snijders, & Spence, 2006). Further, the feeling of embodiment can be reduced without reducing the size of the drift by using flat images or smaller images of hands (Ijsselsteijn, de Kort, & Haans, 2006; Pavani & Zampini, 2007). Proprioceptive drift has also been observed at the same magnitude during control conditions as in the illusion condition. This occurs when the stimulus is applied for a short time with repeated sampling, suggesting that asynchronous stroking reduces drift rather than synchronous stroking increasing it (Rohde, Luca, & Ernst, 2011).
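
For concreteness, drift might be computed as in the sketch below; the positions and units are invented for illustration.

```python
# Proprioceptive drift: difference between perceived hand position after
# synchronous (illusion) and asynchronous (control) stroking, in cm
# towards the rubber hand; one value per subject, data invented.
import numpy as np

sync_pos = np.array([3.1, 2.4, 4.0, 1.8])
async_pos = np.array([0.9, 0.5, 1.2, 0.4])

drift = sync_pos - async_pos  # taken to index the strength of the illusion
print(f"mean drift = {drift.mean():.2f} cm")
```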

Given these problems with using proprioceptive drift as a non‐verbal measure of the RHI and feeling of embodiment, other non‐report measures have been developed. A promising approach comes from the use of what we might call “on versus off model” reaction times. Subjects are typically faster at responding to stimuli presented on their hands than in peri‐personal space (Hari & Jousmaki, 1996). This same difference has been observed for artificial or virtual hands during the RHI, but not for hand‐shaped objects during control conditions (Short & Ward, 2009; Whiteley, Spence, & Haggard, 2008).

Zopf and colleagues (2013) have applied the crossmodal congruency task to the RHI. This task requires subjects to respond as fast as possible to stimuli presented in one modality, such as touch, while distractors are presented in another modality, such as vision. Subjects are slower at responding during incongruent trials (when the signals do not match) than during congruent trials. When this task was combined with the RHI, there was a difference in reaction times between the congruent and incongruent conditions during the illusion, but no difference, or a significantly smaller difference, during the control condition.
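
The following sketch shows the arithmetic of the crossmodal congruency effect (CCE) on invented reaction time data; it is an illustration of the measure, not Zopf and colleagues' analysis.

```python
# Crossmodal congruency effect: mean incongruent RT minus mean congruent
# RT, in milliseconds; all trial data below are invented.
import numpy as np

def cce(congruent_rts, incongruent_rts):
    return np.mean(incongruent_rts) - np.mean(congruent_rts)

rhi_cce = cce([512, 498, 530, 505], [590, 610, 602, 598])      # synchronous stroking
control_cce = cce([505, 515, 498, 509], [521, 528, 541, 517])  # asynchronous control

# A larger CCE for distractors on the rubber hand during the illusion
# than during the control condition is the pattern taken to indicate
# embodiment of the hand.
print(f"RHI CCE = {rhi_cce:.0f} ms, control CCE = {control_cce:.0f} ms")
```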

Finally, the RHI has been shown to affect judgements of visual similarity expressed through button presses. Carruthers et al. (2017) used odd‐one‐out judgements for triads composed of 14 images of hands plus an image of a prosthetic hand to infer a 3D quality space for hands. During the control condition, the image of the prosthetic hand fell relatively far from the centre of the space, because it was often selected as the odd one out. After induction of the illusion, the image of the prosthetic hand fell significantly closer to the centre of the space because it was selected as the odd one out less often. This is evidence that the prosthetic hand was perceived as more hand‐like during the RHI than during the control condition.
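
One common way to turn such odd‐one‐out judgements into a quality space is to convert them into a dissimilarity matrix and then apply MDS, as in the colour example earlier. The sketch below uses one simple heuristic (the proportion of co‐appearances in which a pair was split by an odd‐one‐out choice) on invented trials; it is not a description of Carruthers et al.'s analysis.

```python
# Build a pairwise dissimilarity matrix from odd-one-out triad judgements.
import numpy as np
from itertools import combinations

n_items = 4
split_counts = np.zeros((n_items, n_items))  # pair separated by an odd choice
pair_counts = np.zeros((n_items, n_items))   # pair appeared in the same triad

# Each invented trial: a triad of item indices and the chosen odd one out.
trials = [((0, 1, 2), 2), ((0, 1, 3), 3), ((1, 2, 3), 3), ((0, 2, 3), 0)]

for triad, odd in trials:
    for i, j in combinations(triad, 2):
        pair_counts[i, j] += 1
        pair_counts[j, i] += 1
        if odd in (i, j):  # the pair was judged dissimilar on this trial
            split_counts[i, j] += 1
            split_counts[j, i] += 1

dissimilarity = np.divide(split_counts, pair_counts,
                          out=np.zeros_like(split_counts),
                          where=pair_counts > 0)
print(dissimilarity)  # feed into MDS as in the earlier sketch
```

An item that is usually picked as the odd one out, as the prosthetic hand was in the control condition, ends up highly dissimilar from the rest and so falls far from the centre of the recovered space.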

Not all self‐experiences can be easily assessed using questionnaires or actions like those considered above. When assessing self‐consciousness in clinical populations we, as researchers, are not attempting to operationalise the occurrence of an experience that we would expect to occur given our experimental setup. Instead, we are attempting to understand an experience that may be radically different from any we ourselves have had, and which patients have great difficulty in describing. There is no external object or common reference point which a patient can use to help us understand what they experience. As with dreams (Rosen, 2013) and non‐clinical hallucinations (Carruthers, 2018), we face the problem of distinguishing what in patients' reports is veridical and what is confabulated in an attempt to make sense of and communicate a bizarre experience.

A common approach to understanding reports from patients about their unusual self‐experiences is to look for common themes in reports. This then justifies the hypothesis (but only the hypothesis) that these themes reflect what is most likely to be a shared experience of those suffering the symptom in question. For example, we get reports like these, from many patients in many contexts:

I felt like an automaton, guided by a female spirit who had entered me during it [an arm movement].

I thought you [the experimenter] were varying the movements with your thoughts.

I could feel God guiding me [during an arm movement] (Spence, 2001, p. 165).

When I reach my hand for the comb it is my hand and arm which move, and my fingers pick up the pen, but I don't control them… I sit there watching them move, and they are quite independent, what they do is nothing to do with me… I am just a puppet who is manipulated by cosmic strings. When the strings are pulled my body moves and I cannot prevent it (Mellor, 1970, p. 18).

That justifies the claim that people who suffer from delusions of alien control have a deficient or otherwise unusual feeling of agency. Although each report is different, they cluster around the theme of the subject not being in control of their own actions.

Self‐consciousness then is operationalised in three ways. In laboratory conditions, we can use quantifiable self‐report, using rating scales or preferably multi‐item questionnaires, and quantifiable behavioural changes. In naturalistic studies of anomalous experiences, we use qualitative narrative reports from patients, which we then analyse for common themes.

We can now see that while consciousness is often operationalised in lab conditions by asking the subject, it would be a mistake to think that there are not important differences in how this is done. We can ask the subject to detect the stimulus (visual awareness), to compare the experiences elicited by different stimuli (colour phenomenology), to give a rating of how much their experience matches various statements (the feeling of embodiment) or to simply describe their experiences. More importantly, sometimes consciousness is operationalised indirectly, by showing the influence of some content on behaviour. In the RHI study of embodiment, this is done by looking for behaviour towards the rubber hand under the illusion that matches behaviour towards one's own hand, a match that is absent when the illusion is not induced.

DIFFICULT CASES

So far, we have seen some productive approaches to operationalising consciousness, and some difficulties with non‐verbal operationalisations. Many of these approaches rely heavily on verbal or written reports from subjects. In this section, we turn to some limitations with this approach, and suggest ways that these limitations can be mitigated. We begin with specific problems around dreams and non‐verbal animals, before turning to other more general theoretical concerns.

Operationalising dreams

Dreams are commonly operationalised via narrative self‐report, after either natural or forced awakenings. There is little agreement on what the experience of dreaming is like from a first‐person perspective. Theorists debate whether dreams are imaginations (Ichikawa, 2009, 2016; Sosa & Ichikawa, 2009), illusions (Windt, 2015, 2018), hallucinations (Hobson, Hong, & Friston, 2014; Metzinger, 2014; Revonsuo, Tuominen, & Valli, 2015a) or an amalgam of these states (Rosen, 2018a). Problems with describing the nature of dream experiences bring corresponding problems for operationalising dreams. Much of this disagreement seems to rest on distrust of our ability to accurately report a state after time has passed or after coming out of that state, although there are also concerns about the dream state in particular.

We are prone to forgetting and confabulating our memories. Memory reports are more difficult to control and verify than reports about a current stimulus or our behavioural responses to it. Dream memory, however, is generally much less accurate (Hobson, 2005), and dream‐induced “stupidity” complicates matters further (Kahan & Sullivan, 2012). Our cognitive capacities are usually lowered while dreaming, although this may not be across the board in all dream states (such as lucid dreams; LaBerge & DeGracia, 2000; Voss, Holzmann, Tuin, & Hobson, 2009). In general, dreams are lacking in metacognition, rationality, binding (that is, the normal bringing together of multiple elements of experience), concentration, focus, and the ability to notice bizarre features. We are usually shut off from waking memories within dreams, having forgotten falling asleep, or even who we are (Rosen & Sutton, 2013). Our waking selves thus have very limited access to memories of what occurred in the dream. We have multiple rapid eye movement (REM) sleep sessions per night, and REM sleep awakenings in lab settings elicit dream reports around 80–90% of the time (Domhoff, 2003), yet we rarely report more than one dream, if any at all, after a full night's sleep. To further complicate matters, unlike waking memory, dream memory cannot be compared to the initial experience, since the dreamer is unresponsive during the dream and cannot make a report until waking; dream reports are thus difficult, if not impossible, to verify.

Poor memory combined with these other cognitive lapses may also lead to confabulated reports (Rosen, 2013). Because of this, dream reports are generally less trustworthy than waking reports. That said, it is unlikely that they are so untrustworthy as to justify an anti‐experience thesis denying that dreams occur at all (contra Malcolm, 1967), or to warrant rejecting dream reports as evidence of dream content altogether; certain conditions can increase the likelihood of accurate reporting (Windt & Metzinger, 2007), although our ability to control these conditions is severely limited.

Scientists are limited in their ability to interfere with dreams and then test the outcome of such interferences. The dream environment itself cannot be controlled in the same way as waking experience (Foulkes, 1985; Revonsuo, Tuominen, & Valli, 2015b). Stimuli applied to the sleeping body can “infiltrate” the dream, altering the dream experience, with the most reliable method being pressure applied to the arm (e.g., see Nielsen, Ouellet, & Zadra, 1995). Experimenters can therefore have some influence over the dream of a research subject, but this influence is difficult to predict, and even successful stimulus infiltration does not guarantee that a specific experience will occur; a pressure stimulus could, for example, be interpreted as pain or swelling. The closest we can get to producing specific dreams is to teach subjects lucid control dreaming, in which the dreamer both realises they are dreaming and has control over the environment. Once lucidity is attained, the dreamer can carry out certain pre‐arranged tasks, as has been achieved in some interesting recent experiments (Erlacher, Schädlich, Stumbrys, & Schredl, 2013; Kahan & LaBerge, 1994; Schädlich, Erlacher, & Schredl, 2017). However, we cannot rely on lucid dreaming research for a complete understanding of the dream state, as such dreams are not representative of all types of dreaming (Voss et al., 2009).

It is also unclear how best to collect dream reports. One key disagreement is whether to collect dreams at home or in the lab. Some argue that the home is to be preferred because the dream lab setting itself can affect dream content, especially regarding bizarreness; lab dreams are more mundane than home dreams (Antrobus, Fein, Jordan, Ellman, & Arkin, 1991; Hobson, Pace‐Schott, & Stickgold, 2000). Home dream reports, however, tend to be morning reports rather than reports following REM awakenings, and these are accepted as less accurate reflections of actual dream experiences due to the time that has passed between the dream and the report (Foulkes, 1996, 1999). Techniques such as wearable tech or apps can be somewhat reliably used to induce REM sleep awakenings in the home (Ko et al., 2015). In the lab, the dreamer can be reliably awakened and a variety of technologies can be used to track bodily and brain activity, such as eye movements via electrooculogram (EOG), heart rate via electrocardiogram (ECG) and bodily twitches via electromyogram (EMG). Brain activity data can be collected by electroencephalogram (EEG), functional magnetic resonance imaging (fMRI), positron emission tomography (PET) and others, each with their own benefits and limitations. One limitation of all forms of brain imaging is that it is difficult to ascertain the timing of a particular dream and to correlate particular brain activity with specific elements of that dream.

Attempts to correlate bodily changes with dream content began with the “scanning hypothesis”: that real eye movements during REM sleep track eye movements as perceived in dreams (Arnulf, 2011; Dement & Kleitman, 1957b; Leclair‐Visonneau, Oudiette, Gaymard, Leu‐Semenescu, & Arnulf, 2010). Dement and Kleitman (1957a, 1957b) initially proposed this hypothesis to explain rapid eye movements during sleep, suggesting that the real eye movements track the dream scene. Evidence for this in early experiments included examples of vertical eye movements correlating with reports of throwing basketballs at a net or looking up at a tall cliff (Dement & Kleitman, 1957b). Not all eye movements correlate with dream reports, however, and vice versa. Dreams can occur during NREM sleep where there are no eye movements, and some eye movements are nothing like waking eye movements. So, do real eye movements sometimes track dream eye movements?

REM sleep eye movements can at times be at saccade speed (the fastest speed at which the human eye can move) and at others at pursuit speed, which is much slower and used for tracking objects moving in the environment. Sometimes these movements fall within a “forbidden zone” (Arnulf, 2010): too fast to be a pursuit, too slow to be a saccade, and thus not at a speed that would be used for exploration of the visual field when awake (Bridgeman, 2012). Yet regular eye‐tracking cannot be used when the eyelids are closed, and the technology required to get specific information about how the eyes move under the eyelids is overly invasive and can only be used on non‐human subjects (Bridgeman, 2012). Even if we were to gain this information, once again, making correlations between bodily movements and dream reports is fraught. Can we determine with any certainty, for example, that the basketball dream occurred at the same time as the vertical eye movements? It is likely that the eyes track the dream scene at times, but certainly not always (Rosen, 2019). It is also possible that eye movements can be generated automatically by ponto‐geniculo‐occipital (PGO) waves which then inform the dream eyes of movement, causing changes to the hallucinated scene (Miyauchi, Misaki, Kan, Fukunaga, & Koike, 2009; Ogawa, Nittono, & Hori, 2005; Peigneux et al., 2001). This would mean that the causal direction between visual stimulus and movement is the reverse of what we would assume, leaving out the possibility of intentional movement altogether. However, this explanation seems implausible when considering lucid dream eye‐movement signalling.

Lucid dreamers can carry out pre‐arranged eye‐flicks to indicate that they are dreaming and even indicate some of the content of the dream as it occurs (Appel, Pipa, & Dresler, 2017; Dresler, Erlacher, Czisch, & Spoormaker, 2016; LaBerge, 2000; Voss & Hobson, 2014). This contradicts the idea that we never make agentive, intentional eye movements while dreaming. However, rather than disproving the reverse‐causation hypothesis, it simply shows that reverse causation cannot be the explanation for all dream eye movements—they may be agentive at times and automatic at others. Lucid dreams may be a special case that allows for increased agency (Voss et al., 2009).

Despite the limitations of studying dream consciousness, highly creative, novel methods continue to yield interesting discoveries. A good future direction for this research would be to draw on techniques from “microphenomenological” research to improve dreamers' ability to pay attention to their experiences and so improve report accuracy. Microphenomenology involves a researcher guiding the subject's attention to a singular experience within a report in order to attain in‐depth details about that part of their experience (Bitbol & Petitmengin, 2017). This could be used after REM awakenings in the dream lab, but could also be modified so that lucid dreamers carry out similar investigations on themselves while dreaming and after waking. Dream reports are often ambiguous, so paying more attention to particular features could make reports more specific and clear. Eye‐flick techniques could be put to better use: for example, lucid dreamers could be taught to indicate more detailed information about the dream experience, perhaps through a simplified Morse‐like code (Rosen, 2013). Hand signalling has been tried for these purposes but, unfortunately, did not work owing to the faintness and unreliability of bodily twitches (Kahan & LaBerge, 1994).

Recently it has been suggested that fMRI can be used to “read off” dreams or “predict” dream reports (Horikawa, Tamaki, Miyawaki, & Kamitani, 2013). However, this is only possible after 50 or so trials in which the dreamer is woken up repeatedly to give a report, correlating the neural images with the specifics of the report each time. This would not be a good technique for collecting and verifying many dream reports, owing to the time required and the coarse grain of the information attained.

A less disturbing and less resource‐intensive method is to carry out relatively controlled home studies that use wearable technologies to gather personal data and wake participants to collect dream reports at specific sleep stages. We can use smart watches to gather movement, heart rate and skin conductance data, which can, to a reasonable degree, determine sleep stages (Ko et al., 2015). We currently have the technology to carry out such research, but improved methodologies need to be developed.

Operationalising consciousness of octopuses

Consciousness research is not limited to the study of humans. While research on animal subjects has largely been concerned with vertebrates, mainly mammals and birds, the field is expanding to cover invertebrates as well, such as insects (Barron & Klein, 2016) and octopuses (Godfrey‐Smith, 2016). The latter cases compound the problems of operationalising consciousness in non‐human species, as the usual starting points used for attributing consciousness to their vertebrate counterparts may not always be applicable. For example, if a particular behaviour is used as an index of consciousness in, say, dogs, then when we find the same type of behaviour in octopuses, with their radically distributed nervous system, are we equally able to attribute consciousness to the octopus? If not, should we reconsider the attribution of consciousness to dogs based on such behaviour? As such, methodologies for studying consciousness in creatures such as octopuses—as well as attempts to establish that they are indeed conscious—will undoubtedly be plagued with interpretative issues, for example, anthropomorphism, over‐inflationary explanations (see Heyes, 1994), or incongruence with the species' natural behavioural repertoire (see Povinelli & Cant, 1995).

Octopuses are a particularly interesting case in the study of consciousness. Features of the octopus nervous system (Hochner, Citation2004) as well as similarities of their behavioural repertoire to those of vertebrates (Godfrey‐Smith, Citation2016) have motivated discussions on whether they are conscious (Low, Citation2012).

The ≈500 million neurons of the octopus are organised into an anatomically and functionally decentralised nervous system whose components are highly autonomous. These components—the central brain (≈45 million neurons), the periphery consisting of the pair of optic lobes (120–180 million neurons), and the nervous system of the arms (≈350 million neurons)—are highly specialised, with the peripheral components carrying out their respective functions with little involvement of the brain. For instance, many of an octopus's motor‐control routines are localised within the peripheral arm nervous system, whereas their vertebrate counterparts have their substrates in the brain (Sumbre, Gutfreund, Fiorito, Flash, & Hochner, Citation2001). Additionally, octopus arms are highly sensitive: each is equipped with millions of sensory receptors that register tactile, chemical, and kinetic stimuli, which are extensively processed in the arm nervous system. Such an organisation suggests that if octopuses are conscious, the substrates of octopus consciousness may be distributed across the nervous system (Carls‐Diamante, Citation2017). Nevertheless, investigating the structure of consciousness in octopuses inevitably requires putting the empirical cart before the conceptual horse: doing so presupposes that there are reliable indicators of the presence of consciousness in octopuses. Such indicators can only be provided if consciousness is operationalised.

Finding ways to operationalise consciousness in octopuses could also help address the ambiguities that naturally arise from behaviour‐based methods of consciousness attribution. With clear criteria specifying the conditions to be met in order for consciousness to be judged present, identifying species‐specific behavioural manifestations of consciousness will have one less moving part. To begin with, behaviours that are regarded as suggestive of consciousness in vertebrates (e.g., guiding a limb as it moves along an unusual trajectory) may not be so in octopuses. Due to the organisation of the octopus nervous system and its possible influence on the structure of consciousness that may arise from it (Carls‐Diamante, Citation2017), octopus consciousness (if present) may not contribute to behaviour in the ways it is presumed to in vertebrates. For instance, because octopus arms are extremely flexible and have hydrostatic muscles, that is, muscle groups in which the shortening or lengthening of one group results in dynamic compensatory adjustments in the others (Levy, Nesher, Zullo, & Hochner, Citation2017), the motor trajectory of an arm can be updated by the physical activity of the muscles themselves (Richter, Hochner, & Kuba, Citation2015). As such, positing a role for consciousness in motor tasks involving such a control schema may be superfluous. This stands in contrast to the notion that in vertebrates, motor trajectories—especially those responsible for novel or atypical movements—often involve conscious monitoring in order to keep them on the right track.

Another type of octopus behaviour that could shed light on the possibility of consciousness in octopuses is "crypsis," the umbrella term for behaviours used by an octopus to disguise itself. In octopuses, "camouflage," or changing the skin's colour and texture to match a background, does not obviously depend on conscious control of behaviour, as it depends in large part on the activity of low‐level light‐sensitive receptors on the animal's skin (Ramirez & Oakley, Citation2015). Nonetheless, other, more complex forms of crypsis are more suggestive. Some species of octopuses mimic other animals or objects by copying colour and texture patterns, rearranging their arms to approximate the target's body outline, and changing their locomotion techniques to imitate those of the target. Commonly imitated targets are lionfish, flounders, and even seaweed (Finn, Tregenza, & Norman, Citation2009; Hanlon & Messenger, Citation1996). The complexity of this mimicry suggests that the octopus is aware of what its body looks like to a third‐person observer: in order to match its body so precisely to a target, it seems likely that the octopus would need to be aware of how it looks from the outside. However, it is possible to formulate plausible deflationary explanations of crypsis by mimicry that do not reference consciousness; perhaps it is not awareness, but mere representation of body shape and colour that the octopus relies on. It is difficult to know what evidence could rule these possibilities in or out. We cannot just ask the octopus.

Octopuses can be used as a case study—or even a cautionary tale—in operationalising consciousness, due to the challenges to theory and methodology that they pose. Species‐specific features must be considered when designing experiments to measure consciousness. Caution must be exercised when applying tests or theories of consciousness originally designed around vertebrate—especially human—models to octopuses, as they may not be transferable, and hence the conclusions they generate may not be equivalent. The same point extends to research on human consciousness, where the structure of consciousness in non‐neurotypical subjects may differ, to varying degrees, from that of neurotypical individuals. That is to say, neurophysiology must never be overlooked when designing tests for consciousness.

Can we trust self‐reports anyway?

As we saw in the sections on vision, self‐consciousness, and dreams, verbal self‐report and button pressing are common measures of consciousness. The intuitive assumption is that these so‐called “subjective measures” are reasonably reliable because of the subject's supposedly “privileged access” to their conscious states. However, we have seen that this assumption is problematic. In addition, there are reasons to worry that some of the concerns raised for dreams may generalise to other experiences.

Next, we consider an empirical thesis against the reliability of subjective measures. Following this, we consider a conceptual problem with measuring consciousness. These problems have led some scholars to conclude that we should abandon the scientific study of consciousness and opt instead for the study of individual cognitive capacities associated with consciousness (Irvine, Citation2012). However, we suggest a more optimistic, if careful, stance toward subjective measures specifically and consciousness studies generally: although these concerns are real, they do not completely undermine the work reviewed above, provided that we make the research's background theory of consciousness explicit, so as to avoid post‐hoc assumptions and incoherence, and that we critically evaluate self‐report and other measures of consciousness.

Granting that accurate measures of consciousness through self‐report are possible does not imply that it is easy to design a reliable or valid operationalisation. We have already seen difficulties with doing so above. In this section, we first review a sceptical position concerning self‐report: empirical fallibilism about introspection of consciousness.

Advocates of this position argue that introspection of consciousness is highly unreliable and so reports based on introspection are also unreliable. We focus on the fallibilist position developed by Schwitzgebel (Citation2008). He argues that the unreliability of naïve introspection of our current or recently past conscious experiences is (i) wide‐scope (encompassing all modalities); (ii) gross (not pertaining to only finer or minor details, but also stable and coarse features of the perceptual objects); and (iii) prevalent (it exists not just when one is in a compromised or pathological state, but also when one is calmly and carefully reflecting).

A common example is that of apparently coloured peripheral vision, which demonstrates the errors we make about very gross features of our perceptual objects while using our (arguably) best‐developed sensory modality under favourable conditions. Our ability to see things clearly, in vivid colours, shapes, etc., does not extend beyond one to two degrees of arc from the fixation point. However, normal, naive introspectors make false judgments about these obvious features and report a much larger area of visual clarity (Hurlburt & Schwitzgebel, Citation2011).

Schwitzgebel also presents other empirical evidence for his fallibilist position, such as the limited cases of successful introspection research relative to the considerable number of failures. We saw above that there has been great success and replicability in using reports to operationalise colour experiences and self‐consciousness. Research also suggests we are reliable in reporting our problem‐solving processes (Ericsson & Simon, Citation1993). Yet other evidence points to our failures: we do not seem to be very good at knowing our own emotional states, or even our own pain and pleasure (Haybron, Citation2007). There is also pervasive disagreement about some aspects of our conscious experiences, so at least some of us must be grossly wrong about them. For instance, people do not agree on whether there is a phenomenology of thought distinct from the accompanying (visual, auditory, etc.) imagery, or on whether we have conscious experience of an object when we are not attending to it (Hurlburt & Schwitzgebel, Citation2011; Schwitzgebel, Citation2008). On the whole, we seem no better at introspecting our experiences than we are at perceiving the external world.

Despite the fallibilist position, introspection's reputation as a problematic methodology may be undeserved. Problems like the imageless thought controversy should not be taken as decisive evidence against using introspection in general. A common narrative is that, in the 20th century, psychology moved from introspectionism, which could never overcome the problem of the privacy of experience, to a notionally more objective behaviourism, which studied only overt behaviour, and then to cognitivism, which admitted the existence of mental states, but only in order to account for behavioural, not introspective, evidence (for an overview see Costall, Citation2006; Brock, Citation2013). Like all textbook histories this is overly simplistic and misleading, not least because the "introspectionist" father of psychology, Wundt, was a psychophysicist who readily admitted that mere self‐reflection was unscientific because it failed to support a distinction between the subject and object of study. For Wundt, "The process of introspection succeeds only in destroying, or, at best, grossly distorting its object" (Danziger, Citation1980, p. 245). By getting immediate reports of experiences, rather than relying on memory, and by tightly controlling the stimuli so that the same experience could be elicited and studied many times, Wundt hoped to make the study of the mind scientifically rigorous. What is important for our purposes is to appreciate that modern psychophysics uses the same basic method of asking the subject how things appear. The fact that much work tries to make these reports reliable by careful control of the stimuli that elicit the experiences being reported does not change the fact that subjects are providing a report of their experiences. For example, when asked which line appears longer in the study of the Müller‐Lyer illusion, the subject is being asked about their experiences, not about the line itself. The point is that introspection, when used carefully, is and will continue to be a core method in psychology, even if the particular ways in which it was done in the late 19th and early 20th centuries fell out of favour (see Beenfeldt, Citation2013; Boring, Citation1953; Brock, Citation1991, Citation2013; Danziger, Citation1980).

The fallibilist position suggests that subjective measures are highly unreliable: The source of unreliability can come from all sorts of biases generated by experimental settings, task designs, subject's motivation, and so on, such as the suggestibility biases discussed in the self‐consciousness section above. We have also suggested ways to improve the reliability of subjective measures. However, trying to reach a bias‐free subjective measure, as some scholars argue, may reveal a more radical, conceptual challenge. For example, Irvine (Citation2012, Citation2017) argues that it may be unlikely that any scientific consensus concerning what constitutes bias‐free subjective measures will emerge.

Subjective measures involve, at minimum, the processes of detecting conscious states and making decisions about reporting. Decision processes are often modelled as the accumulation of information generated through detection relative to a variable decision threshold which, when crossed, triggers a report (Gold & Shadlen, Citation2007). Returning to an example mentioned above, subjects more readily report the presence of the word "shot" than "shit" under identical masking conditions because subjects want to be more confident before reporting a taboo word. Because the threshold is context‐dependent and adjustable, it can introduce positive or negative biases (Irvine, Citation2017); the simulation sketched below makes this threshold‐dependence concrete.
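The following is a minimal sketch assuming an elementary accumulation‐to‐threshold model with illustrative parameter values; it is not a fitted model of any particular experiment.

```python
# Sketch of the threshold-crossing picture of report (cf. Gold & Shadlen, 2007):
# noisy evidence accumulates over time; a "seen" report is issued if it crosses
# a criterion. Raising the criterion (as a subject might for a taboo word)
# lowers the report rate at identical stimulus strength. Parameters are toy values.
import numpy as np

def report_rate(criterion, drift=0.05, noise=1.0, n_steps=200, n_trials=5000, seed=2):
    rng = np.random.default_rng(seed)
    increments = drift + noise * rng.normal(size=(n_trials, n_steps))
    evidence = np.cumsum(increments, axis=1)              # accumulated evidence
    return np.mean(evidence.max(axis=1) >= criterion)     # proportion reporting "seen"

print(report_rate(criterion=8.0))    # neutral word: lower criterion, more reports
print(report_rate(criterion=12.0))   # taboo word: higher criterion, fewer reports
```

On this toy model, identical stimulation yields fewer "seen" reports under the higher criterion, which is precisely the response bias at issue.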

If conscious mental states are independent of the detection and decision processes (an assumption that is itself controversial), there should be ways to determine a bias‐free decision threshold for veridical subjective measures. In practice, however, there is currently no such threshold, and one may argue that a scientific consensus on a bias‐free standard is unlikely ever to emerge. We can illustrate this problem with a trilemma.

First, we can attempt to determine the bias‐free threshold by inferring it from a more general background theory of consciousness held by members of the consciousness studies community. The problem with this approach is that there is currently no consensus on the correct background theory of consciousness from which we could infer a useful and unbiased measurement. Indeed, Irvine holds that there is a "permanent presence" of response bias in subjective measures (Irvine, Citation2012, pp. 16–26). As argued above, this problem seems to depend on assumptions about the difference between awareness and subliminal processing.

Second, the problem of decision bias could be eliminated if we had other unbiased and uncontroversial measures of consciousness with which to "calibrate" subjective measures. However, if all measures of consciousness face similar methodological problems, this is not an option (Irvine, Citation2012). Objective measures, such as d' or behaviour, depend on assumptions that can be controversial (Irvine, Citation2017). One objective measure of discriminatory capacity, d', is determined by analysing the subject's responses using Signal Detection Theory. As discussed above, there is evidence that subjects can detect stimuli that they cannot report. Hence, it is disputable whether such measures track consciousness or just some pre‐conscious information processing. In other words, to rely on objective measures, one needs to make additional assumptions, such as that discrimination is sufficient for consciousness.
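For concreteness, d' can be computed from detection responses as in the following sketch; the trial counts are invented for illustration.

```python
# Sketch: computing the objective sensitivity measure d' from detection responses.
# d' = z(hit rate) - z(false-alarm rate), where z is the inverse normal CDF.
from scipy.stats import norm

hits, misses = 70, 30             # responses on stimulus-present trials
false_alarms, corr_rej = 20, 80   # responses on stimulus-absent trials

hit_rate = hits / (hits + misses)
fa_rate = false_alarms / (false_alarms + corr_rej)
d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
print(f"d' = {d_prime:.2f}")  # above-zero d' shows discrimination, but does not,
                              # by itself, show conscious awareness
```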

Finally, we may hope that there will be a convergence of different measures (e.g., subjective and other behavioural or neurophysiological measures) that identifies the most appropriate measures in various settings, even if none of them is uncontroversial by itself. Irvine has argued that this is also unlikely to work, because there is currently no interesting convergence among different measures of consciousness (Irvine, Citation2012). This raises a further concern that different measures may be tracking different phenomena instead of a single and unitary phenomenon of consciousness. However, it could still be that measures do tend to converge except in fringe cases (like those discussed above), and that evidence of dissociations is evidence that awareness is constituted by multiple functions (Carruthers, Citation2019), which does nothing more than ask us to revise our naive container metaphors of awareness. Again, the point is that explicit theorising, rather than intuitions about measures of consciousness, is needed to decide these issues.

Despite these worries, then, we may take a more optimistic attitude toward self‐report. First, given that fallibilism about introspection is empirically supported, scientific studies of consciousness need to treat self‐reports with care and cannot take them at face value. For example, we can employ methods of improving their reliability, such as introspective training and beeper technology (Hurlburt & Schwitzgebel, Citation2011). There are also different kinds of controls, introduced in the vision and self‐consciousness sections of this article, which increase the validity of measures.

Second, because the methodological problems raised above are all linked to the lack of consensus on crucial assumptions about consciousness, one way to move forward in consciousness studies is to always make one's assumptions explicit. As we saw above, operationalisations of self‐consciousness assume that suggestibility bias can be controlled for using questionnaire items asking about experiences that are not expected to occur in a particular experiment. Work on pathological self‐experiences assumes that reports are more likely to be veridical if they contain themes commonly reported by other patients. Debates around the existence of subliminal perception could be advanced by more precise theory‐based predictions about the difference between subliminal and supraliminal performance. This is crucial because one's assumptions determine the measurement, which in turn could be used to generate data that either confirm or falsify a theory of consciousness. We can aim at constructing a set of optimally elegant theories that have minimal post‐hoc‐ness and maximal internal coherence in their assumptions, and the highest external coherence with empirical data collected using methods and standards that, given certain assumptions, continue to improve—something that is both valuable and nontrivial as a scientific endeavour.

ALTERNATIVE MEASURES OF EXPERIENCE WITHOUT REPORT

Given the concerns raised about report‐based measures, it is worth considering the possibility of operationalising consciousness without verbal report. We have seen some attempts to do this in examining behavioural correlates of the RHI, but are there techniques which can be generalised? In this section, we make two proposals as to how experiences can be operationalised without verbal report. The first approach offers an “in principle” proposal which depends on the development of more advanced neuroscientific techniques. The second proposal is an adaptation of an approach already used in cognitive psychology. It should be noted that we do not want to suggest that these methodologies are superior to those surveyed above. Rather the suggestion is that they are complementary and can help with the task of triangulating the phenomena of interest.

One challenge facing studies of the neural correlates of consciousness is separating the neural basis of consciousness from the neural basis of other aspects of the task, for example, reporting the nature of the experience. The challenge is to track the neural changes associated with changes in the experience. One way to isolate NCCs is to utilise phenomena like binocular rivalry (where different images are presented to each eye), in which the input is kept the same but the experience changes (subjects' experience tends to alternate between the two stimuli). In standard binocular rivalry experiments the change in a subject's experiences is tracked via report. However, no‐report paradigms are also available (Tsuchiya et al., Citation2015). They operationalise consciousness via correlated physiological changes, such as changes in the direction of eye movements associated with changes in the direction of the perceived stimulus during binocular rivalry. By removing the report requirement we can operationalise consciousness in such a way as to enable it to be studied in non‐verbal subjects and to separate the neural changes that underlie report from those that underlie awareness.

When it comes to operationalising phenomenal qualities, propositional descriptions of parts or kinds of experience, like "green‐ness", are far too vague. Far better descriptions of phenomenal qualities are given by the quality spaces introduced above. Because of the limitations of using verbal report to operationalise consciousness, we want a way to determine whether experience is present in the absence of a report. Using quality spaces, the question becomes: how could we determine whether such quality spaces are present when not being reported? Schier (Citation2009) suggests that we look for evidence of identity between phenomenal qualities and neural activation. To see how, we need to introduce another tool—that of a neural activation space.

The notion of activation space is inherited from the study of artificial neural networks (Churchland, Citation1995; Gärdenfors, Citation2000; O'Brien & Opie, Citation2004). These spaces describe patterns of activity across layers of units in such networks in terms of the similarity between each pattern. To generate such a space, each unit is assigned a dimension in the space such that activation of that unit (its firing rate) corresponds to a point on the dimension. Combining the dimensions for each unit, each pattern of activity can be represented as a vector, that is, a point in the space defined by those dimensions. The distance between each point represents a relationship of similarity between the activations represented by those points. The closer two points are, the more similar the corresponding patterns of activation are. It should be possible, in principle, to develop such spaces for the activity of real neural networks by assigning a dimension to the activation (firing rate) of particular neurons.
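As a minimal sketch of the construction, with arbitrary example patterns standing in for recorded firing rates, an activation space's similarity structure can be computed as a matrix of pairwise distances.

```python
# Sketch: an activation space for a small network layer. Each unit's firing rate
# is one dimension; each activity pattern is a point; distance between points
# stands in for (dis)similarity between patterns. Patterns here are invented.
import numpy as np
from scipy.spatial.distance import pdist, squareform

# rows = activity patterns, columns = units (firing rates)
patterns = np.array([
    [0.9, 0.1, 0.2],   # pattern evoked by stimulus A
    [0.8, 0.2, 0.3],   # pattern evoked by stimulus B (similar to A)
    [0.1, 0.9, 0.8],   # pattern evoked by stimulus C (dissimilar to both)
])
distances = squareform(pdist(patterns))  # pairwise distances in activation space
print(distances.round(2))  # A and B lie close together; C lies far from both
```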

Using these tools of quality spaces and activation spaces, we can propose an explanatory hypothesis. One thing to be explained about phenomenal qualities is why they bear the similarity relationships they do, that is, why they are describable in quality spaces. This could be explained if the brain states they are properties of (neural activation patterns) bear the same set of similarity relationships. How do we describe the similarity relationships between patterns of neural activity? They are represented by distances between points representing those patterns of activity in activation space. If a neural activation space and a particular quality space bear the same similarity relations, then for every point in the quality space, there will be a corresponding point in the activation space and for every relationship in the quality space, there will be a corresponding relationship in the activation space. In other words, the spaces will be isomorphic (O'Brien & Opie, Citation2004; Schier, Citation2009, p. 218). From this, we end up with the prediction that each quality space will have an isomorphic neural activation space. While this prediction is not currently testable, it is testable in principle (see Schier, Citation2009 for more details). Should sufficient evidence for this be acquired, then an examination of consciousness in the absence of report via neural activation spaces would be justified.
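To make the in‐principle test concrete, the following sketch compares the distance structures of a quality space and an activation space using a second‐order, representational‐similarity‐style correlation. All coordinates are invented, standing in for data we cannot yet collect.

```python
# Sketch: testing the predicted isomorphism between a quality space and a neural
# activation space by correlating their distance structures. Hypothetical
# coordinates: five stimuli located in a 2-D quality space (e.g., from MDS on
# similarity judgements) and in a 4-unit activation space for the same stimuli.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

quality_space = np.array([
    [0.0, 0.0], [1.0, 0.2], [2.1, 0.1], [0.1, 1.0], [1.2, 1.1],
])
activation_space = np.array([
    [0.1, 0.0, 0.2, 0.1], [0.5, 0.1, 0.3, 0.2], [0.9, 0.2, 0.4, 0.2],
    [0.1, 0.5, 0.2, 0.5], [0.5, 0.6, 0.4, 0.6],
])

# Second-order comparison: do the two spaces share their similarity structure?
rho, _ = spearmanr(pdist(quality_space), pdist(activation_space))
print(f"distance-structure correlation: rho = {rho:.2f}")
# A high correlation is evidence that the spaces share similarity relations,
# as the identity hypothesis predicts; a low one counts against it.
```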

Another methodology

This proposal makes use of masked priming, a dominant research paradigm used to investigate visual experience and its absence. In particular, the idea is to make use of the fact that attention can occur without awareness and that attention can alter the appearance of a stimulus. In other words, it is possible to operationalise qualitative character in terms of the behavioural effects of attention, not awareness.

Carrasco, Fuller, and Ling (Citation2008) provide a nice demonstration of how the conscious appearance of stimuli can be altered by attention. In their study, subjects stared at a fixation point while two Gabor patches were presented simultaneously to the left and right of fixation. In the test condition, subjects were presented with a black or white cue at fixation or to the left or right of fixation (67 ms), which then disappeared leaving only the fixation mark (53 ms), followed by the two Gabor patches appearing to the left and right of fixation (40 ms). The position of the cue was randomly determined and thus not informative of the location or orientation of the target patch. Although briefly presented, the patches were visible (Carrasco et al., Citation2008, p. 1155). The subjects' task was then to indicate the orientation (tilted left or right from vertical) of the patch of higher contrast (Carrasco et al., Citation2008, p. 1155). This was a forced choice task and one patch had to be selected on every trial. The task thus depends on how contrasted the patches consciously appear to the subject without asking the subject to report on the contrast; that is, subjects are asked to report orientation, not contrast.

Of interest here are the conditions under which subjects are equally likely to report the orientation of either the test or the standard patch. That is, conditions under which either the standard or the test patch was selected as the patch of "higher contrast" with equal frequency. When this occurs, the subject is randomly selecting a patch (to meet the forced choice instruction), thus indicating that the contrasts of the test and standard patches are indiscriminable. This is referred to as the "point of subjective equality"; a sketch of how it can be estimated follows.
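The following sketch illustrates, with invented data, how the point of subjective equality can be estimated: fit a logistic psychometric function to the proportion of trials on which the test patch was chosen as higher contrast, then read off the test contrast at which that proportion is 50%.

```python
# Sketch: estimating the point of subjective equality (PSE). The choice
# proportions below are illustrative, not Carrasco et al.'s data.
import numpy as np
from scipy.optimize import curve_fit

test_contrast = np.array([6, 11, 16, 21, 28, 37, 48])          # percent contrast
p_choose_test = np.array([0.05, 0.15, 0.30, 0.50, 0.72, 0.88, 0.97])

def logistic(x, pse, slope):
    # psychometric function; the PSE is the contrast at which p = 0.5
    return 1.0 / (1.0 + np.exp(-(x - pse) / slope))

(pse, slope), _ = curve_fit(logistic, test_contrast, p_choose_test, p0=[21, 5])
print(f"PSE = {pse:.1f}% contrast")  # equals the 21% standard when uncued;
                                     # shifts below it when the test side is cued
```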

For trials in which the cue appeared at the fixation point, subjects were equally likely to report the orientation of the test and standard patch only when the two patches had the same contrast of 21% (Carrasco et al., Citation2008, p. 1156). However, on trials where attention was drawn to the side of the test patch by the cue, subjects were equally likely to choose the test or standard patch only when the contrast of the test patch was, in fact, lower than the contrast of the standard patch (Carrasco et al., Citation2008, p. 1156). In other words, drawing attention to the test patch appears to have increased its apparent contrast. Similarly, on trials where the side of the standard patch was cued, the point of subjective equality occurred when the test patch was, in fact, of higher contrast than the standard patch (Carrasco et al., Citation2008, p. 1156). Again, drawing attention to the standard patch seems to have increased its apparent contrast.

The upshot of this is that attention alters the conscious appearance of stimuli. This finding has been extended to other dimensions of appearance including saturation and flicker rate (see Carrasco, Citation2009 for review).

It has also been shown that attention can be directed to stimuli that subjects are not aware of. Norman, Heywood, and Kentridge (Citation2013) have provided a nice demonstration of this. They argue that subjects can visually attend to objects, namely two‐dimensional shapes, even when they cannot consciously see those objects. They presented on a screen an array of Gabor patches whose orientation rapidly alternated between vertical and horizontal. Within the array, rectangles were defined by Gabor patches flickering out of phase with the remainder of the array (Norman et al., Citation2013, p. 838). When the background patches were vertical, those defining the rectangle were horizontal, and vice versa. Observing the array, subjects reported seeing flickering Gabor patches but were unable to see the rectangles. Indeed, subjects were no better than chance when asked to guess whether or not such flickering displays contained rectangles (i.e., d' was not significantly different from zero) (Norman et al., Citation2013, p. 840). Despite the invisibility of the shapes, there was a facilitation effect in the colour discrimination task characteristic of attention being directed at the shapes. That is, subjects were faster at responding to targets which appeared in the same shape as the cue than to targets which appeared the same distance from the cue but in a different shape (Norman et al., Citation2013, p. 839). In this study, we see an effect characteristic of attention being directed at an object despite the object being invisible.

To test if appearances can be altered by attention in the absence of awareness or cognitive access, we suggest combining Carrasco and colleagues' paradigm with masking. Subjects could be asked to perform the same forced choice task as above, but with the Gabor patches masked so as to be invisible to the subject, where invisibility is measured both by subjective report and by forced choice guessing as to whether or not a patch is present, analysed with Bayesian statistics rather than NHST. If mental states can be conscious in the sense of having a phenomenal quality independently of their cognitive function, then the same pattern of results found by Carrasco and colleagues for visible stimuli should be found for invisible stimuli. If not, then subjects will respond randomly because, given their inability to detect the presence of the masked Gabor patches, they will have no basis on which to select one or the other. The underlying inference is this: given that attention can alter the appearance of a stimulus when the subject is aware of it, if the same effect of attention on appearance can be shown in the absence of awareness, then this provides some evidence that there are appearances in the absence of awareness. Although the theoretical significance of this is beyond the scope of this paper, what we see here is the possibility of changing, and operationalising the change to, an appearance that cannot be reported.
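As a sketch of the proposed Bayesian analysis, the following computes a JZS Bayes factor for a one‐sample t test in the style of Rouder, Speckman, Sun, Morey, and Iverson (Citation2009). The t value and sample size are invented; a BF01 above a conventional threshold (e.g., 3) would support the claim that detection of the masked patches is genuinely at chance, rather than merely non‐significant.

```python
# Sketch: JZS Bayes factor for a one-sample t test (cf. Rouder et al., 2009),
# quantifying evidence that masked-patch detection is at chance (H0) versus
# above chance (H1, with effect size delta ~ Cauchy(0, r)).
import numpy as np
from scipy import stats, integrate

def jzs_bf10(t, n, r=0.707):
    """BF10: marginal likelihood of t under H1 relative to H0."""
    df = n - 1
    def integrand(delta):
        # noncentral t density of the observed t, weighted by the prior on delta
        return stats.nct.pdf(t, df, delta * np.sqrt(n)) * stats.cauchy.pdf(delta, 0, r)
    m1, _ = integrate.quad(integrand, -np.inf, np.inf)  # evidence for H1
    m0 = stats.t.pdf(t, df)                             # evidence for H0
    return m1 / m0

bf10 = jzs_bf10(t=0.4, n=30)   # hypothetical near-chance detection performance
print(f"BF10 = {bf10:.2f}, BF01 = {1 / bf10:.2f}")  # BF01 > 3 favours invisibility
```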

CONCLUSION

In this article, we have examined the ways in which consciousness can be operationalised. We have focused on ways to operationalise consciousness in the sense of awareness and in the sense of phenomenal qualities or the "what it is like" of experiences. Awareness is commonly operationalised through reports of some kind, usually verbal or button presses. Similar techniques are used for phenomenal qualities; however, the judgements of subjects should be combined (using MDS) to produce quality spaces that adequately describe experiences. It is problematic to apply these approaches to dreaming and to non‐human animals. There are also theoretical and empirical reasons to question subjects' capacity to consistently and accurately report their experiences. Combined, these concerns suggest that non‐report‐based measures of consciousness are desirable. We have suggested a possible, but currently impractical, technique for operationalising experiences via neural activation spaces, as well as a difficult but feasible performance‐based measure.

Notes

1. Lightness is a property of surfaces; brightness is a property of lights.

REFERENCES

  • Adams, J. K. (1957). Laboratory studies of behavior without awareness. Psychological Bulletin, 54(5), 383–405. https://doi.org/10.1037/h0043350
  • Antrobus, J. S., Fein, G., Jordan, L., Ellman, S. J., & Arkin, A. M. (1991). Measurement and design in research on sleep reports. In S. J. Ellman & J. S. Antrobus (Eds.), Wiley series on personality processes. The mind in sleep: Psychology and psychophysiology (pp. 83–121). Oxford, England: John Wiley & Sons.
  • Appel, K., Pipa, G., & Dresler, M. (2017). Investigating consciousness in the sleep laboratory—An interdisciplinary perspective on lucid dreaming. Interdisciplinary Science Reviews, 43, 1–16. https://doi.org/10.1080/03080188.2017.1380468
  • Arnulf, I. (2010). REM sleep behavior disorder: an overt access to motor and cognitive control during sleep. Revue neurologique, 166(10), 785–792.
  • Arnulf, I. (2011). The ‘scanning hypothesis' of rapid eye movements during REM sleep: A review of the evidence. Archives Italiennes de Biologie, 149(4), 367–382.
  • Baars, B. J. (2002). The conscious access hypothesis: Origins and recent evidence. TRENDS in Cognitive Sciences, 6(1), 47–52.
  • Barrett, R. J. (2004). Kurt Schneider in Borneo: Do first rank symptoms apply to the Iban? In J. H. Jenkins & R. J. Barrett (Eds.), Schizophrenia, culture and subjectivity: The edge of experience. Cambridge, England: Cambridge University Press.
  • Barron, A. B., & Klein, C. (2016). What can insects tell us about the origins of consciousness. Proceedings of the National Academy of Sciences of the United States of America, 113(18), 4900–4908.
  • Beenfeldt, C. (2013). The philosophical background and scientific legacy of E. B. Titchener's psychology: Understanding introspectionism. Dordrecht, the Netherlands: Springer Science & Business Media.
  • Bitbol, M., & Petitmengin, C. (2017). Neurophenomenology and the microphenomenological interview. In The Blackwell companion to consciousness (2nd ed., pp. 726–739). West Sussex, England: Wiley & Sons.
  • Block, N. (1995). On a confusion about a function of consciousness. Behavioural and Brain Sciences, 18(2), 227–247.
  • Boring, E. G. (1953). A history of introspection. Psychological Bulletin, 50(3), 169.
  • Botvinick, M., & Cohen, J. (1998). Rubber hands “feel” touch that eyes see. Nature, 391, 756.
  • Boynton, R. (1978). Ten years of research with the minimally distinct border. In J. Armington, J. Krauskopf, & B. R. Wooten (Eds.), Visual psychophysics and physiology: A volume dedicated to Lorrin Riggs. New York, NY: Academic Press.
  • Breitmeyer, B. G. (2015). Psychophysical “blinding” methods reveal a functional hierarchy of unconscious visual processing. Consciousness and Cognition, 35, 234–250. https://doi.org/10.1016/j.concog.2015.01.012
  • Bridgeman, B. (2012). Eye Movements. Encyclopedia of Human Behavior, 2, 160–166.
  • Brock, A. (1991). Imageless thought or stimulus error? The social construction of private experience. In W. R. Woodward & R. S. Cohen (Eds.), World views and scientific discipline formation (pp. 97–106). Dordrecht, the Netherlands: Springer.
  • Brock, A. C. (2013). The history of introspection revisited. In J. W. Clegg (Ed.), History and theory of psychology. Self‐observation in the social sciences (pp. 25–43). Piscataway, NJ: Transaction Publishers.
  • Carls‐Diamante, S. (2017). The octopus and the unity of consciousness. Biology and Philosophy, 32(6), 1269–1287.
  • Carrasco, M. (2009). Attention psychophysical approaches. In B. Tim, C. Axel, & W. Patrick (Eds.), The Oxford companion to consciousness. Oxford, England: Oxford University Press.
  • Carrasco, M., Fuller, S., & Ling, S. (2008). Transient attention does increase perceived contrast of suprathreshold stimuli: A reply to Prinzmetal, Long, and Leonhardt (2008). Perception & Psychophysics, 70(7), 1151–1164. https://doi.org/10.3758/PP.70.7.1151
  • Carruthers, G. (2014). What makes us conscious of our own agency? And why the conscious versus unconscious representation distinction matters. Frontiers in Human Neuroscience, 8, 434. https://doi.org/10.3389/fnhum.2014.00434
  • Carruthers, G. (2018). Confabulation or experience? Implications of out‐of‐body experiences for theories of consciousness. Theory & Psychology, 28(1), 122–140.
  • Carruthers, G. (2019). The feeling of embodiment: A case study in explaining consciousness. Cham, Switzerland: Palgrave Macmillan.
  • Churchland, P. M. (1995). The engine of reason, the seat of the soul: A philosophical journey into the brain. Cambridge, MA: MIT Press.
  • Clark, A. (1993). Sensory qualities. Oxford, England: Clarendon Library of Logic and Philosophy.
  • Costall, A. (2006). ‘Introspectionism’ and the mythical origins of scientific psychology. Consciousness and Cognition, 15(4), 634–654.
  • Danziger, K. (1980). The history of introspection reconsidered. Journal of the History of the Behavioral Sciences, 16(3), 241–262. https://doi.org/10.1002/1520‐6696(198007)16:3<241::AID‐JHBS2300160306>3.0.CO;2‐O.
  • Dehaene, S., & Naccache, L. (2001). Towards a cognitive neuroscience of consciousness: Basic evidence and a workspace framework. Cognition, 79, 1–37.
  • Dehaene, S., & Changeux, J.‐P. (2003). Neural mechanisms for access to consciousness. In M. S. Gazzaniga (Ed.), The cognitive neurosciences (pp. 1145–1157). Cambridge, MA: MIT Press.
  • Dement, W., & Kleitman, N. (1957a). Cyclic variations in EEG during sleep and their relation to eye movements, body motility, and dreaming. Electroencephalography and Clinical Neurophysiology, 9(4), 673–690.
  • Dement, W., & Kleitman, N. (1957b). The relation of eye movements during sleep to dream activity: An objective method for the study of dreaming. Journal of Experimental Psychology, 53(5), 339–346.
  • Dennett, D. C. (1991). Consciousness explained. New York, NY: Penguin Books.
  • Dennett, D. C. (1995). Consciousness: More like fame than television. In Munich conference volume Retrieved from http://pp.kpnet.fi/seirioa/cdenn/concfame.htm
  • Dienes, Z. (2014). Using Bayes to get the most out of non‐significant results. Frontiers in Psychology, 5. https://doi.org/10.3389/fpsyg.2014.00781
  • Dienes, Z. (2015). How Bayesian statistics are needed to determine whether mental states are unconscious. In Overgaard, M. (ed) Behavioural methods in consciousness research. Oxford, England: Oxford University Press
  • Domhoff, G. W. (2003). The scientific study of dreams: Neural networks, cognitive development, and content (1st ed.). Washington, DC: American Psychological Association.
  • Dresler, M., Erlacher, D., Czisch, M., & Spoormaker, V. I. (2016). Lucid dreaming. In M. Kryger & T. Roth (Eds.), Principles and practice of sleep medicine (pp. 539–545). Amsterdam, the Netherlands: Elsevier.
  • Dubois, J., & Faivre, N. (2014). Invisible, but how? The depth of unconscious processing as inferred from different suppression techniques. Frontiers in Psychology, 5. https://doi.org/10.3389/fpsyg.2014.01117
  • Ehrsson, H. H., Rosen, B., Stockselius, A., Ragno, C., Kohler, P., & Lundborg, G. (2008). Upper limb amputees can be induced to experience a rubber hand as their own. Brain, 131, 3443–3452.
  • Ekroll, V., Faul, F., & Niederee, R. (2004). The peculiar nature of simultaneous colour contrast in uniform surrounds. Vision Research, 44, 1765–1786.
  • Ekroll, V., Faul, F., Niederee, R., & Richter, E. (2002). The natural Center of Chromaticity Space is not always achromatic: A new look at color induction. Proceedings of the National Academy of Science of the United States of America, 99(20), 13352–13356.
  • Erdelyi, M. H. (1985). Psychoanalysis: Freud's cognitive psychology. New York, NY: W H Freeman/Times Books/ Henry Holt & Co.
  • Erdelyi, M. H. (1986). Experimental indeterminacies in the dissociation paradigm of subliminal perception. Behavioral and Brain Sciences, 9(1), 30–31. https://doi.org/10.1017/S0140525X00021348
  • Ericsson, K. A., & Simon, H. A. (1993). Protocol analysis: Verbal reports as Data. Cambridge, MA: MIT Press.
  • Eriksen, C. W. (1960). Discrimination and learning without awareness: A methodological survey and evaluation. Psychological Review, 67(5), 279–300.
  • Erlacher, D., Schädlich, M., Stumbrys, T., & Schredl, M. (2013). Time for actions in lucid dreams: Effects of task modality, length, and complexity. Frontiers in Psychology, 4.
  • Finkbeiner, M. (2011). Subliminal priming with nearly perfect performance in the prime‐classification task. Attention, Perception, and Psychophysics, 73(4), 1255–1265.
  • Finkbeiner, M., & Coltheart, M. (2014). Dismissing subliminal perception because of its famous problems is classic “baby with the bathwater.”. Behavioral and Brain Sciences, 37(1), 27–27. https://doi.org/10.1017/S0140525X13000708
  • Finn, J. K., Tregenza, T., & Norman, M. D. (2009). Defensive tool use in a coconut‐carrying octopus. Current Biology, 19(23), R1069–R1070.
  • Foulkes, D. (1985). Dreaming: A cognitive‐psychological analysis. Hillsdale, NJ: L. Erlbaum Associates.
  • Foulkes, D. (1996). Dream research: 1953–1993. Sleep, 19(8), 609–624. https://doi.org/10.1093/sleep/19.8.609
  • Foulkes, D. (1999). Children's dreaming and the development of consciousness. Cambridge, MA: Harvard University Press.
  • Gärdenfors, P. (2000). Conceptual spaces: The geometry of thought. Cambridge, MA: MIT Press.
  • Gelbard‐Sagiv, H., Faivre, N., Mudrik, L., & Koch, C. (2016). Low‐level awareness accompanies "unconscious" high‐level processing during continuous flash suppression. Journal of Vision, 16(1), 3. https://doi.org/10.1167/16.1.3
  • Godfrey‐Smith, P. (2016). Other minds. New York, NY: Farrar, Straus, and Giroux.
  • Gold, J. I., & Shadlen, M. N. (2007). The neural basis of decision making. Annual Review of Neuroscience, 30, 535–574.
  • Hanlon, R., & Messenger, J. B. (1996). Cephalopod behaviour. New York, NY: Cambridge University Press.
  • Hari, R., & Jousmaki, V. (1996). Preference of personal to extrapersonal space in a visuomotor task. Journal of Cognitive Neuroscience, 8(3), 305–307.
  • Haybron, D. M. (2007). Do we know how happy we are? On some limits of affective introspection and recall. Nous, 41(3), 394–428.
  • Heyes, C. M. (1994). Reflections on self‐recognition in primates. Animal Behaviour, 47(4), 909–919.
  • Hobson, J. A. (2005). In bed with Mark Solms? What a nightmare! A reply to Domhoff (2005). Dreaming, 15(1), 21–29. https://doi.org/10.1037/1053-0797.15.1.21
  • Hobson, J. A., Hong, C. C. H., & Friston, K. J. (2014). Virtual reality and consciousness inference in dreaming. Frontiers in Psychology, 5, 1133. https://doi.org/10.3389/fpsyg.2014.01133
  • Hobson, J. A., Pace‐Schott, E. F., & Stickgold, R. (2000). Dreaming and the brain: Toward a cognitive neuroscience of conscious states. Behavioral and Brain Sciences, 23(6), 793–842.
  • Hochner, B. (2004). Octopus nervous system. In G. Adelman & B. H. Smith (Eds.), Encyclopedia of neuroscience (3rd ed.). Amsterdam, the Netherlands: Elsevier B. V..
  • Holmes, N. P., Snijders, H. J., & Spence, C. (2006). Reaching with alien limbs: Visual exposure to prosthetic hands in a mirror biases proprioception without accompanying illusions of ownership. Perception & Psychophysics, 68(4), 685–701.
  • Horikawa, T., Tamaki, M., Miyawaki, Y., & Kamitani, Y. (2013). Neural decoding of visual imagery during sleep. Science, 340(6132), 639–642.
  • Hurlburt, R., & Schwitzgebel, E. (2011). Describing inner experience? Proponent meets skeptic (1st ed.). Cambridge, MA: The MIT Press.
  • Ichikawa, J. (2009). Dreaming and imagination. Mind and Language, 24(1), 103–121.
  • Ichikawa, J. (2016). Imagination, dreaming, and hallucination. Routledge Handbook of the Philosophy of Imagination, 149–162.
  • Ijsselsteijn, W. A., de Kort, Y. A. W., & Haans, A. (2006). Is this my hand I see before me? The rubber hand illusion in reality, virtual reality and mixed reality. Presence, 15(4), 455–464.
  • Irvine, E. (2012). Consciousness as a scientific concept: A philosophy of science perspective. Dordrecht, the Netherlands: Springer Science & Business Media.
  • Irvine, E. (2017). Explaining what? Topoi, 36(1), 95–106. https://doi.org/10.1007/s11245-014-9273-4
  • Kahan, T. L., & LaBerge, S. (1994). Lucid dreaming as metacognition: Implications for cognitive science. Consciousness and Cognition, 3(2), 246–264. https://doi.org/10.1006/ccog.1994.1014
  • Kahan, T. L., & Sullivan, K. T. (2012). Assessing metacognitive skills in waking and sleep: A psychometric analysis of the metacognitive, affective, cognitive experience (MACE) questionnaire. Consciousness and Cognition, 21(1), 340–352. https://doi.org/10.1016/j.concog.2011.11.005
  • Knotts, J. D., Lau, H., & Peters, M. A. K. (2018). Continuous flash suppression and monocular pattern masking impact subjective awareness similarly. Attention, Perception, & Psychophysics, 80(8), 1974–1987. https://doi.org/10.3758/s13414-018-1578-8
  • Ko, P.‐R. T., Kientz, J. A., Choe, E. K., Kay, M., Landis, C. A., & Watson, N. F. (2015). Consumer sleep technologies: A review of the landscape. Journal of Clinical Sleep Medicine: JCSM: Official Publication of the American Academy of Sleep Medicine, 11(12), 1455–1461.
  • Kriegel, U. (2005). Naturalizing subjective character. Philosophy and Phenomenological Research, 71(1), 23–57.
  • LaBerge, S. (2000). Lucid dreaming: Evidence and methodology. Behavioral and Brain Sciences, 23(6), 962–964.
  • LaBerge, S., & DeGracia, D. J. (2000). Varieties of lucid dreaming experience. In R. G. Kunzendorf & B. Wallace (Eds.), Individual differences in conscious experience (Vol. 20, p. 269). Philadelphia, PA: John Benjamins Publishing Company.
  • Leclair‐Visonneau, L., Oudiette, D., Gaymard, B., Leu‐Semenescu, S., & Arnulf, I. (2010). Do the eyes scan dream images during rapid eye movement sleep? Evidence from the rapid eye movement sleep behaviour disorder model. Brain, 133(6), 1737–1746. https://doi.org/10.1093/brain/awq110
  • Levy, G., Nesher, N., Zullo, L., & Hochner, B. (2017). Motor control in soft‐bodied animals. In J.H. Byrne (Ed.), The Oxford Handbook of Invertebrate Neurobiology. Oxford, England: Oxford University Press. https://doi.org/10.1093/oxfordhb/9780190456757.013.36
  • Logvinenko, A. D. (2015a). The achromatic object‐colour manifold is three‐dimensional. Perception, 44(3), 243–268. https://doi.org/10.1068/p7912
  • Logvinenko, A. D. (2015b). The geometric structure of color. Journal of Vision, 15(1), 16–16. https://doi.org/10.1167/15.1.16
  • Low, P. (2012). The Cambridge declaration on consciousness. In Francis Crick Memorial Conference on Consciousness in Human and Non‐Human Animals. Cambridge, England: University of Cambridge. http://fcmconference.org/img/CambridgeDeclarationOnConsciousness.pdf
  • Malcolm, N. (1967). Dreaming (4th impression). London, England: Routledge & Kegan Paul; New York, NY: Humanities Press.
  • Mellor, C. S. (1970). First rank symptoms of schizophrenia. British Journal of Psychiatry, 117, 15–23.
  • Merikle, P. M., & Cheesman, J. (1986). Consciousness is a “subjective” state. Behavioral and Brain Sciences, 9(1), 42–42. https://doi.org/10.1017/S0140525X00021452
  • Metzinger, T. (2014). What is the specific significance of dream research for philosophy of mind? In N. Tranquillo (Ed.), Dream consciousness: Allan Hobson's new approach to the brain and its mind (pp. 161–166). Cham, Switzerland: Springer International Publishing.
  • Miyauchi, S., Misaki, M., Kan, S., Fukunaga, T., & Koike, T. (2009). Human brain activity time‐locked to rapid eye movements during REM sleep. Experimental Brain Research, 192(4), 657–667.
  • Nagel, T. (1974). What is it like to be a bat? Philosophical Review, 83, 435–450.
  • Nielsen, T. A., Ouellet, L., & Zadra, A. L. (1995). Pressure stimulation during REM sleep alters dream limb activity and body bizarreness. Sleep Res, 24, 134.
  • Norman, L. J., Heywood, C. A., & Kentridge, R. W. (2013). Object‐based attention without awareness. Psychological Science, 24(6), 836–843. https://doi.org/10.1177/0956797612461449
  • O'Brien, G., & Opie, J. (2004). Notes toward a structuralist theory of representation. In H. Clapin, P. Staines, & P. Slezak (Eds.), Representation in mind: New approaches to mental representation (pp. 1–20). Oxford, England: Greenwood Publishers.
  • Ogawa, K., Nittono, H., & Hori, T. (2005). Brain potentials before and after rapid eye movements: An electrophysiological approach to dreaming in REM sleep. Sleep, 28(9), 1077–1082.
  • Overgaard, M. (2015). Consciousness research methods: The empirical “hard problem”. Oxford, England: Oxford University Press.
  • Overgaard, M., Fehl, K., Mouridsen, K., Bergholt, B., & Cleeremans, A. (2008). Seeing without seeing? Degraded conscious vision in a Blindsight patient. PLOS One, 3(8), e3028. https://doi.org/10.1371/journal.pone.0003028
  • Overgaard, M., & Mogensen, J. (2017). An integrative view on consciousness and introspection. Review of Philosophy and Psychology, 8(1), 129–141.
  • Palmer, S. E. (1999). Vision science: Photons to phenomenology. Cambridge, MA: MIT Press
  • Pavani, F., & Zampini, M. (2007). The role of hand size in the fake‐hand illusion paradigm. Perception, 36, 1547–1644.
  • Peigneux, P., Laureys, S., Fuchs, S., Delbeuck, X., Degueldre, C., Aerts, J., … Maquet, P. (2001). Generation of rapid eye movements during paradoxical sleep in humans. Neuroimage, 14(3), 701–708.
  • Peirce, C. S., & Jastrow, J. (1884). On small differences in sensation. Memoirs of the National Academy of Sciences, 3, 73–83.
  • Persuh, M. (2018). Measuring perceptual consciousness. Frontiers in Psychology, 8. https://doi.org/10.3389/fpsyg.2017.02320
  • Peters, M. A. K., Kentridge, R. W., Phillips, I., & Block, N. (2017). Does unconscious perception really exist? Continuing the ASSC20 debate. Neuroscience of Consciousness, 2017(1), 1–11. https://doi.org/10.1093/nc/nix015
  • Povinelli, D. J., & Cant, J. G. H. (1995). Arboreal clambering and the evolution of Self‐Conception. The Quarterly Review of Biology, 70(4), 393–421.
  • Prinz, J. (2000). A neurofunctional theory of visual consciousness. Consciousness and Cognition, 9(2), 243–259.
  • Prinz, J. J. (2012). The conscious brain: How attention engenders experience. Oxford, England: Oxford University Press.
  • Rahmanovic, A., Barnier, A. J., Cox, R. E., Langdon, R. A., & Coltheart, M. (2012). “That's not my arm”: A hypnotic analogue of somatoparaphrenia. Cognitive Neuropsychiatry, 17(1), 36–63. https://doi.org/10.1080/13546805.2011.564925
  • Ramirez, M. D., & Oakley, T. H. (2015). Eye‐independent, light‐activated chromatophore expansion (LACE) and expression of phototransduction genes in the skin of Octopus bimaculoides. The Journal of Experimental Biology, 218, 1513–1520.
  • Rausch, M., Müller, H. J., & Zehetleitner, M. (2015). Metacognitive sensitivity of subjective reports of decisional confidence and visual experience. Consciousness and Cognition, 35, 192–205. https://doi.org/10.1016/j.concog.2015.02.011
  • Rausch, M., & Zehetleitner, M. (2014). A comparison between a visual analogue scale and a four point scale as measures of conscious experience of motion. Consciousness and Cognition, 28, 126–140. https://doi.org/10.1016/j.concog.2014.06.012
  • Rausch, M., & Zehetleitner, M. (2016). Visibility is not equivalent to confidence in a Low contrast orientation discrimination task. Frontiers in Psychology, 7. https://doi.org/10.3389/fpsyg.2016.00591
  • Reingold, E. M. (2004). Unconscious perception and the classic dissociation paradigm: A new angle? Perception & Psychophysics, 66(5), 882–887. https://doi.org/10.3758/BF03194981
  • Reingold, E. M., & Merikle, P. M. (1988). Using direct and indirect measures to study perception without awareness. Perception & Psychophysics, 44(6), 563–575. https://doi.org/10.3758/BF03207490
  • Reingold, E. M., & Merikle, P. M. (1990). On the inter‐relatedness of theory and measurement in the study of unconscious processes. Mind & Language, 5(1), 9–28. https://doi.org/10.1111/j.1468-0017.1990.tb00150.x
  • Revonsuo, A., Tuominen, J., & Valli, K. (2015a). The avatars in the machine: Dreaming as a simulation of social reality. In Open MIND. Frankfurt am Main, Germany: MIND Group. https://doi.org/10.15502/9783958570375
  • Revonsuo, A., Tuominen, J., & Valli, K. (2015b). The simulation theories of dreaming: How to make theoretical progress in dream science. In Open MIND. Frankfurt am Main, Germany: MIND Group.
  • Richter, J. N., Hochner, B., & Kuba, M. J. (2015). Octopus arm movements under constrained conditions: Adaptation, modification, and plasticity of motor primitives. Journal of Experimental Biology, 218, 1069–1076.
  • Rohde, M., Luca, M. D., & Ernst, M. O. (2011). The rubber hand illusion: feeling of ownership and proprioceptive drift do not go hand in hand. PLOS ONE, 6(6), e21659. https://doi.org/10.1371/journal.pone.0021659
  • Rosen, M. G. (2013). What I make up when I wake up: Anti‐experience views and narrative fabrication of dreams. Frontiers in Psychology, 4. https://doi.org/10.3389/fpsyg.2013.00514
  • Rosen, M. G. (2018a). How bizarre? A pluralist approach to dream content. Consciousness and Cognition, 62, 148–162.
  • Rosen, M. (2019). Dreaming of a stable world: Vision and action in sleep. Synthese. In press. https://doi.org/10.1007/s11229-019-02149-1
  • Rosen, M. G., & Sutton, J. (2013). Self‐representation and perspectives in dreams. Philosophy Compass, 8(11), 1041–1053. https://doi.org/10.1111/phc3.12082
  • Rosenthal, D. (1997). A theory of consciousness. In N. Block, O. Flanagan, & G. Guzeldere (Eds.), The nature of consciousness philosophical debates. Cambridge, MA: MIT Press.
  • Rosenthal, D. (2002). Explaining consciousness. In D. J. Chalmers (Ed.), Philosophy of mind: Classical and contemporary readings. Oxford, England: Oxford.
  • Rouder, J. N., & Morey, R. D. (2009). The nature of psychological thresholds. Psychological Review, 116(3), 655–660. https://doi.org/10.1037/a0016413
  • Rouder, J. N., Morey, R. D., Speckman, P. L., & Pratte, M. S. (2007). Detecting chance: A solution to the null sensitivity problem in subliminal priming. Psychonomic Bulletin & Review, 14(4), 597–605. https://doi.org/10.3758/BF03196808
  • Rouder, J. N., Speckman, P. L., Sun, D., Morey, R. D., & Iverson, G. (2009). Bayesian t tests for accepting and rejecting the null hypothesis. Psychonomic Bulletin & Review, 16(2), 225–237. https://doi.org/10.3758/PBR.16.2.225
  • Sandberg, K., Del Pin, S. H., Bibby, B. M., & Overgaard, M. (2014). Evidence of weak conscious experiences in the exclusion task. Frontiers in Psychology, 5. https://doi.org/10.3389/fpsyg.2014.01080
  • Schädlich, M., Erlacher, D., & Schredl, M. (2017). Improvement of darts performance following lucid dream practice depends on the number of distractions while rehearsing within the dream—A sleep laboratory pilot study. Journal of Sports Sciences, 35(23), 2365–2372. https://doi.org/10.1080/02640414.2016.1267387
  • Schier, E. (2009). Identifying phenomenal consciousness. Consciousness and Cognition, 18, 216–222. https://doi.org/10.1016/j.concog.2008.04.001
  • Schier, E. (2016). Subjectivity, multiple drafts and the inconceivability of zombies and the inverted Spectrum in this world. Topoi, 1–9. https://doi.org/10.1007/s11245-016-9446-4
  • Schwitzgebel, E. (2008). The unreliability of naive introspection. Philosophical Review, 117(2), 245–273.
  • Short, F., & Ward, R. (2009). Virtual limbs and body space: Critical features for the distinction between body space and near‐body space. Journal of Experimental Psychology: Human Perception and Performance, 35(4), 1092–1103.
  • Snodgrass, M. (2004). The dissociation paradigm and its discontents: How can unconscious perception or memory be inferred? Consciousness and Cognition, 13(1), 107–116. https://doi.org/10.1016/j.concog.2003.11.001
  • Snodgrass, M., Bernat, E., & Shevrin, H. (2004). Unconscious perception: A model‐based approach to method and evidence. Perception & Psychophysics, 66(5), 846–867. https://doi.org/10.3758/BF03194978
  • Sosa, E., & Ichikawa, J. (2009). Dreaming, philosophical issues. In T. Bayne, P. Wilken, & A. Cleeremans (Eds.), Oxford companion to consciousness. Oxford, England: Oxford University Press.
  • Spence, S. A. (2001). Alien control: From phenomenology to cognitive neurobiology. Philosophy, Psychiatry, & Psychology, 8(2–3).
  • Sumbre, G., Gutfreund, Y., Fiorito, G., Flash, T., & Hochner, B. (2001). Control of octopus arm extension by a peripheral motor program. Science, 293(5536), 1845–1848.
  • Swets, J. A. (1964). Signal detection and recognition in human observers: Contemporary readings. New York, NY: John Wiley and Sons.
  • Timmermans, B., & Cleeremans, A. (2015). How can we measure awareness? An overview of current methods. In M. Overgaard (Ed.), Behavioural methods in consciousness research, Chapter 3 (pp. 21–46). Oxford, England: Oxford University Press.
  • Tokunaga, R., & Logvinenko, A. D. (2010). Hue manifold. Journal of the Optical Society of America A, 27(12), 2551. https://doi.org/10.1364/JOSAA.27.002551
  • Tsakiris, M., Prabhu, G., & Haggard, P. (2006). Having a body versus moving your body: How agency structures body‐ownership. Consciousness and Cognition, 15, 423–432.
  • Tsuchiya, N., Wilke, M., Frässle, S., & Lamme, V. A. F. (2015). No‐report paradigms: Extracting the true neural correlates of consciousness. Trends in Cognitive Sciences, 19(12), 757–770. https://doi.org/10.1016/j.tics.2015.10.002
  • Voss, U., & Hobson, J. A. (2014). What is the state of the art on lucid dreaming? Recent advances and questions for future research. In Open MIND. Frankfurt am Main, Germany: MIND Group.
  • Voss, U., Holzmann, R., Tuin, I., & Hobson, A. J. (2009). Lucid dreaming: A state of consciousness with features of both waking and non‐lucid dreaming. Sleep, 32(9), 1191–1200.
  • Weisberg, J. (2011). The zombie's cogito: Meditations on type‐Q materialism. Philosophical Psychology, 24(5), 585–605. https://doi.org/10.1080/09515089.2011.562646
  • Whiteley, L., Spence, C., & Haggard, P. (2008). Visual processing and the bodily self. Acta Psychologica, 127(1), 129–136. https://doi.org/10.1016/j.actpsy.2007.03.005
  • Windt, J. M. (2015). Dreaming : A conceptual framework for philosophy of mind and empirical research. Cambridge, MA: MIT Press.
  • Windt, J. M. (2018). Predictive brains, dreaming selves, sleeping bodies: How the analysis of dream movement can inform a theory of self‐and world‐simulation in dreams. Synthese, 6, 2577–2625.
  • Windt, J. M., & Metzinger, T. (2007). The philosophy of dreaming and self‐consciousness: What happens to the experiential subject during the dream state? In D. Barrett & P. Mcnamara (Eds.), Praeger perspectives. The new science of dreaming: Vol. 3. Cultural and theoretical perspectives (pp. 193–247). Westport, CT: Praeger Publishers/Greenwood Publishing Group.
  • Xiao, B., Hurst, B., Macintyre, L., & Brainard, D. H. (2012). The color Constancy of three‐dimensional objects. Journal of Vision, 12(4), 6. https://doi.org/10.1167/12.4.6
  • Zehetleitner, M., & Rausch, M. (2013). Being confident without seeing: What subjective measures of visual consciousness are about. Attention, Perception, & Psychophysics, 75(7), 1406–1426. https://doi.org/10.3758/s13414-013-0505-2
