Research Article

Revisiting embodiment for brain–computer interfaces

Received 02 Jun 2021, Accepted 13 Jan 2023, Published online: 13 Feb 2023

ABSTRACT

Researchers increasingly explore deploying brain–computer interfaces (BCIs) for able-bodied users, with the motivation of accessing mental states more directly than existing body-mediated interaction allows. This motivation seems to contradict the long-standing HCI emphasis on embodiment, namely the general claim that the body is crucial for cognition. This paper addresses this apparent contradiction through a review of insights from embodied cognition and interaction. We first critically examine the recent interest in BCIs and identify the extent to which cognition in the brain is integrated with the wider body as a central concern for research. We then define the implications of an integrated view of cognition for interface design and evaluation. A counterintuitive conclusion we draw is that embodiment per se should not imply a preference for body-mediated interaction over BCIs. It can instead guide research by 1) providing body-grounded explanations for BCI performance, 2) proposing evaluation considerations that are neglected in modular views of cognition, and 3) transferring its design insights directly to BCIs. We finally reflect on HCI’s understanding of embodiment and identify the neural dimension of embodiment as hitherto overlooked.

1. Introduction

Embodiment, the general claim that cognition is deeply dependent upon the physical structure of the body, has influenced how HCI researchers theorize about and design user interfaces. For interface design, the concept of embodiment brought about an increased sensitivity toward the physical form of interfaces and the particular body movements performed by users. Researchers utilized the concept to underpin their research on physical gestures and on tangible and ubiquitous interfaces, and called for involving a more diverse set of physical forms and body gestures than required in traditional GUI interaction (Dourish, 2004; Hornecker & Buur, 2006; Ishii & Ullmer, 1997; Klemmer et al., 2006). Distinctions between physical and virtual, and between tangible and desktop computing, have so far remained at the foreground of HCI discussions on embodiment.

At the same time, HCI is currently witnessing the emergence of brain–computer interfaces (BCIs). BCIs enable users to interact with computers without motor execution, namely without having to carry out body movements such as moving hands or fingers. Users with limited or no muscle control, such as those with impaired neural pathways due to brain injury, brainstem stroke, multiple sclerosis or any other condition, are thus an obvious target for BCI applications. Yet researchers increasingly develop BCIs for use by the general population. There are multiple motivations behind this recent interest, such as the promise of brain signals as high-bandwidth communication channels (Schalk, 2008) or as accurate representations of mental content (Spapé et al., 2015).

The potential use of BCIs by the general population implies a future in which body movements could at least be partially replaced by brain signals, which represents a transformation that is at least as significant as the ongoing shift into mobile, ubiquitous and full-body interaction. Unlike these domains, however, BCIs have received relatively little attention within the HCI discussions of embodiment. The elimination of body movements, at first sight, seems to go against the research program of embodied interaction, but some qualities that are attributed to ubiquitous or tangible interfaces, such as their foundation on familiar, everyday human skills and abilities (Abowd & Mynatt, 2000; Dourish, 2004; Jacob et al., 2008), also motivate the research on BCIs (e.g., Fairclough, 2011; Schalk, 2008). We believe the time is ripe to revisit the concept of embodiment and its implications.

This paper thus examines the implications of embodiment for BCIs and uses the questions raised by BCIs to reflect on HCI’s understanding of embodiment. We target the following audiences:

The first audience we target is researchers who study interaction modalities. Despite the recent interest, HCI has so far lacked an in-depth discussion on how BCIs would fit into the larger landscape of human–computer interaction. As researchers working on BCIs and other input modalities, we are preoccupied with questions such as: How will BCIs compare to existing input methods? What will be the relevance of the body and body movements for human–computer interaction in an era of viable BCIs? Should BCIs replace or complement body movements? These are essentially empirical questions for which we do not claim to have definitive answers. We instead provide methodological guidance to inform future work.

Our second target audience is HCI researchers working on BCIs. The expansion of research scope into the general population requires paying attention to how cognition in the brain is facilitated by and reflected in the wider body—an area of study that falls within the scope of embodied cognition and has so far received little attention within BCI and brain imaging research (Gramann et al., 2014). We address this knowledge gap by presenting evidence from embodied cognition and identifying different ways embodiment can guide research. The approach in this paper can thus be described as top-down. Instead of departing from the existing body of BCI work, we depart from the methodological and empirical insights of embodiment to identify research implications. We refer to BCI examples along the way for illustration.

Finally, the paper is intended for researchers with a general interest in the questions of embodiment and embodied interaction. Embodied interaction literature in HCI has drawn from diverse intellectual traditions and contains tensions concerning the design implications of embodiment and attitudes toward mental phenomena. The emergence of BCIs provides an opportunity for HCI to address these tensions and sharpen its own understanding of the concept.

Our contributions can be summarized as follows.

  • We propose novel research directions for BCIs based on insights from embodiment and embodied cognition. We describe how embodied views of cognition lead to different research questions and desiderata than those elicited by modular views of cognition. We present these research directions with example study designs and identify methodological implications.

  • We use the conceptual challenges posed by BCIs to further the understanding of embodiment in HCI and derive research implications for HCI at large. We first evaluate different claims within embodiment regarding their relevance for BCI research. We then reflect on the conceptual tensions within HCI and identify various ways an expanded understanding of embodiment can inform future research.

1.1. Overview and organization

Section 2 introduces the concept of “research vision” in HCI and identifies the provision of more direct access to mental states than existing input devices enable as a motivation for BCIs. It then formulates the concept of directness in terms of information transmission criteria to assess the promise of BCIs.

Section 3 investigates whether cognitivism provides a more explicit basis for the motivation of more direct access to mental states. We compare the commitments of cognitivism and embodiment regarding 1) the status of mental representations, 2) the relationship between meaning and interaction, 3) the neural basis of cognition, and 4) the extent to which cognition is integrated. We identify the fourth discussion point, the extent to which cognition is integrated, as the most relevant for the motivation of providing more direct access to mental states.

Section 4 proposes novel research directions for BCIs based on an integrated view of cognition. We present evidence related to embodied cognition and illustrate potential research directions by providing example study designs. We list a number of methodological implications that stem from an integrated view of cognition.

Section 5 reflects on HCI’s understanding of embodiment. Taking cues from parallel discussions in embodied cognition, we argue that BCIs require defining embodiment in a way that does not limit its scope to body movements. We identify the wider implications of an expanded view of embodiment for HCI research.

Section 6 summarizes the main takeaways of the paper.

2. BCIs and a new research vision

Put simply, a brain–computer interface is a “communication system that does not depend on the brain’s normal output pathways of peripheral nerves and muscles” (Wolpaw et al., 2000, p. 165). The current technical landscape of BCIs can be described as a plethora of different sensing methods (He et al., 2013) that are used in a diverse set of interactive applications ranging from real-time control to longitudinal user monitoring. Mental imagery-based BCI applications require users to control an interface through specific mental tasks. BCIs in this group, for instance, can ask participants to imagine left and right movements and use the time/frequency transformation of the EEG to predict the imagined movements, resulting in a joystick-like interaction element (Williamson et al., 2009). BCIs based on event-related potentials (ERPs) or steady-state visual-evoked potentials (SSVEPs), on the other hand, work by gathering brain activity while a user is subjected to visual or other sensory stimuli. For example, in P300 BCI spellers (Farwell & Donchin, 1988) users are tasked to focus on letter flashes; the enhanced evoked activity is then used to predict when a letter has been focused on, causing the system to spell out that letter. Another group of BCIs uses brain signals that occur as part of another activity. An example is the use of specific frequencies of the EEG spectrum to infer the “mental workload” of users and make various system adaptations in the background (Afergan et al., 2014; Yuksel et al., 2016).

Current BCI applications are generally targeted at enabling communication for people who have physical disabilities that restrict other types of input (e.g., Townsend et al., 2010; Williamson et al., 2009). In these use cases, BCIs are easily justified as they can be the only means for input. At the same time, the potential utility of BCIs as an everyday input for the wider population is increasingly being investigated in academic (e.g., Allison et al., 2007; Blankertz et al., 2010, 2016; van Erp et al., 2012), military (DARPA, 2018) and commercial contexts (Levy, 2017). In contrast to disability use cases, the deployment of BCIs to the wider population implies replacing the manual input devices that currently dominate HCI, a shift that presupposes certain advantages for BCIs. These advantages are not yet demonstrated; the performance of state-of-the-art BCIs falls short of established input methods for elementary tasks such as text entry or pointing (Pandarinath et al., 2017). The current research interest instead seems to be driven by expectations of future improvements in performance, which points to a theoretical potential that exceeds what is possible with state-of-the-art techniques. It is then necessary to identify how this expectation is formulated and what constitutes the research vision for BCIs.

2.1. A research vision of directly accessing mental states

Research visions can be defined as principles that guide a research enterprise beyond the limitations of current-day technological capabilities (Ishii et al., 2012). As such, research visions themselves are drivers of technological innovation and help organize individual research contributions as part of a coherent program. Among well-known HCI research visions are Bush’s concept of Memex as an “enlarged intimate supplement to memory” (1945), Weiser’s vision of ubiquitous computing where computers “vanish into the background” (1991) and Ishii and colleagues’ vision of three-dimensional interfaces that are “as reconfigurable as pixels on a screen” (2012). At a given time, HCI is driven by a multitude of research visions and it is possible to examine the same interface or technology through the lens of different visions. Two research visions, however, can also lead to conflicting design outcomes; for example, direct manipulation interfaces and intelligent agents presented competing visions for what makes an ideal user interface, which led to debates about their respective merits (Maes et al., 1997). The current interest in BCIs similarly seems to conflict with some other HCI visions, such as the vision of tangible computing that emphasizes a rich repertoire of physical gestures. And as with other technologies, there is more than a single way to frame the future utility of BCIs. Here, we find one potential framing particularly relevant for embodiment. We identify this as the vision of providing more direct access to users’ mental states than allowed by existing input devices that require users to perform body movements.

We observe that while many BCI researchers (e.g., Farwell & Donchin, 1988; Spapé et al., 2015; Tan & Nijholt, 2010; Wolpaw et al., 2000) caution against the hype of “wire-tapping” or “mind reading” interfaces, the claims of directness still feature strongly in some future visions of BCIs. Schalk identifies the conventional interface between humans and computers as an impediment to future human-machine symbiosis and argues for the “theoretical and practical possibility that direct communication between the brain and the computer can be used to overcome this impediment by improving or augmenting conventional forms of human communication” (2008). Along the same lines, Ebrahimi and colleagues identify conventional interfaces as the “weak link in communication” and ask “Are there ways to altogether bypass the natural interfaces of a human user such as his muscles and yet establish a meaningful communication between man and machine?” (2003). This is echoed by Krepki and colleagues: “Since all these information streams pass its own interface (hand/skin, eye, ear, nose, muscles) yet indirectly converge or emerge in the brain, the investigation of a direct communication channel between the application and the human brain should be of high interest to multimedia researchers” (2007). A related promise is direct communication between multiple brains through EEG and TMS (transcranial magnetic stimulation): “Until recently, the exchange of communication between minds or brains of different individuals has been supported and constrained by the sensorial and motor arsenals of our body. However, there is now the possibility of a new era in which brains will dialogue in a more direct way” (Grau et al., 2014).

Our aim in presenting these statements is not to generalize over BCI research or to suggest that bypassing the body exclusively accounts for the potential benefits of BCIs. We rather find the statements positively thought-provoking as they point to an actual challenge faced by HCI, namely determining the relevance of the wider body for human–computer interaction in a potential era of viable BCIs. We also note that statements such as those cited above reflect an understanding of the human body that has been influential in HCI long before the advent of BCIs, namely as a limited capacity motor channel between the mind and the computer (Card et al., 1983). Once the body beyond the brain is conceived as a communicational bottleneck, the promise of bypassing it has immediate appeal (Figure 1). For us, it is this intellectual parallel that makes the research vision worthy of further examination.

Figure 1. A vastly simplified model of interaction based on an understanding of cognition and motor execution as a staged process (after Card et al., 1983). In comparison to motor input (left), BCIs (right) allow bypassing body movements. Note that directness can come in different degrees (e.g., directly accessing thoughts or sensing motor imagery).


Many shortcomings of this conception of the body have since been highlighted. Before moving into criticism, however, it is useful to formulate a strong version of the research vision by laying down a more qualified understanding of directness. Relevant concepts can be found in HCI work that treats human–computer interaction in terms of information transmission between the mind and an interactive system. We particularly turn to the “expressiveness” and “effectiveness” criteria (Card et al., 1991) that provide a comprehensive list of considerations for input device evaluation.

2.2. Directness as information transmission criteria

Directness can first be understood in terms of more “expressive” (Card et al., 1991) interfaces whose semantic space better matches the prior mental content. Interface evaluations often make the idealized assumption that the semantics of an application program adequately represent the mental content to be communicated. Yet one can also make the case that existing input devices provide a restricted vocabulary for expression. One motivation for BCI research has been the perception that brain signals provide a more accurate representation of mental content than allowed by other input methods (Spapé et al., 2015). As others noted (Fairclough & Gilleade, 2014), however, the current state-of-the-art falls short of this ideal; except for a limited number of generative interfaces (e.g., Kangassalo et al., 2020), BCIs often reduce rich brain signal data into a limited set of input actions. An interface should additionally be able to communicate only the intended meaning and nothing else (Card et al., 1991). Challenges might emerge for future BCIs when brain activity – so far reserved for thinking – is used for interaction. In eye tracking research a similar challenge has long been known as the “Midas Touch” problem (Jacob, 1990); a user can gaze at a certain location on the interface to perceive information, but the use of eye movements as an input can cause unintended commands. There seems to be considerable overlap between the neural activation patterns of imagination, observation, and execution of motor actions (Decety, 1996; Grèzes & Decety, 2001). While motor execution can still be differentiated from its imagination, this overlap raises the challenge, for able-bodied users, of producing brain signals for interaction without carrying out the actual motor execution.

Assuming that a message can be expressed through the interface, there remains the challenge of doing so effectively, which can be understood in terms of task completion time, ergonomics or any other criteria (Card et al., 1991). Completion time, a metric that input devices are often evaluated against, is a combination of multiple factors such as latency and bandwidth. In HCI, the time needed for acquiring an input device, such as by reaching for a mouse, approximates latency. When the body is conceptualized as a communication channel, the scope of latency extends to the lag within the body. Among other factors, this is determined by the conduction velocity of the peripheral nerves, which can be calculated by observing the delay between cortical stimulation in the brain and the electrical activity of muscle tissue (Rothwell et al., 1991; Stetson et al., 1992). One motivation for using BCIs has thus been the elimination of this inherent lag in peripheral motor control. For example, a BCI can detect when a car driver is about to reach for the brake pedal in an emergency, and brake the car before the driver’s foot actuates the pedal (Haufe et al., 2011).
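The conduction-velocity estimate mentioned above rests on simple arithmetic: stimulating the same nerve at two sites and subtracting the two response latencies cancels fixed delays (such as the neuromuscular junction), leaving distance over time. A minimal sketch, with illustrative values rather than measurements from the cited studies:

```python
def conduction_velocity(d1_m, t1_s, d2_m, t2_s):
    """Estimate peripheral nerve conduction velocity (m/s) from the
    response latencies of two stimulation sites at distances d1 < d2
    from the recording electrode. Subtracting the two latencies cancels
    the fixed delays common to both measurements."""
    return (d2_m - d1_m) / (t2_s - t1_s)

# Illustrative values: stimulation sites 0.25 m apart with latencies
# 5 ms apart yield 50 m/s, in the typical range reported for human
# peripheral motor nerves.
v = conduction_velocity(d1_m=0.30, t1_s=0.006, d2_m=0.55, t2_s=0.011)
```

On this view, the tens of milliseconds spent in peripheral conduction are part of the channel's latency, which is exactly the lag the research vision proposes to bypass.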

Another effectiveness metric is the bandwidth of a channel. Yet modeling body parts in terms of bandwidth has been non-trivial. Unlike the communication bandwidth of machines, which can be specified by design, abstraction of the human body in terms of bandwidth relies on the throughput that is observed in natural or experimental settings. There exist several metrics for measuring BCI throughput. Information transfer rate (Wolpaw et al., 2000) combines the classification accuracy of brain signals with timing information, while metrics such as efficiency (Bianchi et al., 2007) and utility (Dal Seno et al., 2010) also take error recovery into account. Task-specific metrics such as text entry speed enable direct comparison between BCIs and manual input devices. Text entry speeds of everyday typists on a physical keyboard range between 34–79 words per minute (wpm) (Feit et al., 2016). Intracortical BCIs inserted near the brain’s motor cortex can reach close to 8 wpm using a single pointer for character selection (Pandarinath et al., 2017). Recently, linguistic content was communicated over electrocorticogram (ECoG) signals that were recorded while subjects were speaking during an experiment (Makin et al., 2020). This resulted in high throughput but came with the major pitfall of requiring the subjects to read the sentences aloud. The post hoc analysis accordingly showed that the brain regions that contributed most to the prediction were those associated with speech production. Such intracortical BCIs are invasive and thus reserved for disabled users. For noninvasive BCIs, signal quality and typing performance have so far been lower (Townsend et al., 2010). Many BCIs accordingly compensate for the limited throughput by using time-multiplexing, for instance by using event-related signals to select letters from a matrix while rows and columns are sequentially highlighted (Farwell & Donchin, 1988).
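The information transfer rate of Wolpaw et al. (2000) mentioned above combines the number of possible targets N, the classification accuracy P, and the selection rate. A sketch of the standard formula (the example parameter values are illustrative, not drawn from a particular study):

```python
import math

def bits_per_selection(n, p):
    """Information transferred per selection (Wolpaw et al., 2000):
    log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1)),
    assuming N equiprobable targets and errors spread uniformly
    over the N-1 wrong targets."""
    if p == 1.0:
        return math.log2(n)
    return (math.log2(n)
            + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))

def itr_bits_per_minute(n, p, selections_per_minute):
    """Scale the per-selection information by the selection rate."""
    return bits_per_selection(n, p) * selections_per_minute

# A 36-target speller classified at 90% accuracy carries about
# 4.19 bits per selection; at 10 selections per minute that is
# roughly 41.9 bits/min.
rate = itr_bits_per_minute(n=36, p=0.9, selections_per_minute=10)
```

The formula makes the bandwidth framing concrete: accuracy and speed trade off inside a single throughput figure, which is why slower-but-accurate and faster-but-noisier BCIs can be compared on one scale.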

We have so far treated the selection of input techniques as an exclusive choice. However, the parallel use of multiple sensing devices can increase the overall throughput provided that the data streams provide independent information. Information that is gathered from different modalities is redundant when BCI signals simply depend on muscle activity; decoding text from ECoG while speaking (Makin et al., 2020) might not provide additional information in comparison to what can be sensed using a microphone. The long-term goal of using BCIs for healthy individuals is thus to sense what is not easily observable through the wider body (Allison et al., 2007; Ebrahimi et al., 2003; Fairclough, 2011) and to filter out artifacts that result from synchronous muscle activity, either by restricting body movements or by processing data after the experiment (Fatourechi et al., 2007; Wolpaw et al., 2000).

2.3. Summary

The treatment of interaction as information transmission between the mind and the system, and the related conception of the body as a communication channel, lead to questions such as:

  • How expressively does an interface communicate prior mental content?

  • How can unintended interface commands be avoided?

  • What is the latency and throughput of different input methods?

  • How much effort/training do different input methods require?

  • What are the interferences and redundancies between different input methods?

This formulation of interaction is often a necessary simplification to deal with the complexity of human behavior, and information transmission criteria indeed provide a list of considerations that is useful for understanding the potential benefits and practical challenges ahead. Current technological limitations also point to alternative use cases in which BCIs do not completely replace body movements but are used alongside them, for instance, to infer mental load during manual interaction (Afergan et al., 2014; Yuksel et al., 2016) or detect user errors (Vi et al., 2014). Some argued that such “passive” BCIs, which use brain activity that occurs as part of another activity, could more readily benefit able-bodied users when compared to real-time control interfaces (van Erp et al., 2012).

A more fundamental problem is the conception of the human motor system as an information channel, which has been beset by difficulties since the early applications of information theory to the human body.Footnote1 Consider Fitts’ law, which conceptualizes pointing performance in terms of the information capacity of the human motor system. While the law serves well as a statistical tool for predicting pointing performance, defining it in the information-theoretic terms of source, encoder and channel has been problematic (Gori et al., 2018; Seow, 2005). Some (e.g., Gori et al., 2018; Zhai et al., 2012) specify the user intention as the source and motor mapping as the encoder but avoid grounding these constructs in actual physical entities in the human body. A difficulty facing this task is defining an appropriate boundary between a putative source and an encoder, which would require researcher-defined functional constructs to have directly corresponding entities within the body. The evidence from embodied cognition has shown some of the problems with this endeavor. The next section summarizes the main discussion points.
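As a statistical tool, Fitts’ law in its common “Shannon” formulation predicts movement time from a distance-and-width “index of difficulty,” by analogy to channel capacity. A sketch with illustrative regression coefficients (the values of a and b below are placeholders; in practice they are fit per device and user):

```python
import math

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts' index of difficulty, in bits:
    ID = log2(D/W + 1)."""
    return math.log2(distance / width + 1)

def movement_time(distance, width, a=0.05, b=0.12):
    """Fitts' law MT = a + b * ID. The intercept a (seconds) and slope
    b (seconds/bit) are illustrative values; they are normally obtained
    by linear regression over observed pointing trials."""
    return a + b * index_of_difficulty(distance, width)

# A target whose width equals its distance carries exactly 1 bit of
# difficulty; shrinking the target or moving it away adds bits and,
# via the slope b, movement time.
id_easy = index_of_difficulty(distance=64, width=64)
```

Note that the sketch illustrates exactly the statistical use of the law that the paragraph above distinguishes from its information-theoretic interpretation: a and b are regression coefficients, not properties of an identified physical source or encoder.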

3. Cognition and the body

This section presents the main claims of embodiment concerning cognition and the body by contrasting them to what has later been referred to as cognitivism.Footnote2 While a thorough review is beyond our scope, a quick overview is necessary to lay the groundwork for the present discussion. In what follows, we summarize the embodied claims regarding 1) the status of mental representations, 2) the relationship between meaning and interaction, 3) the neural basis of cognition, and 4) the extent to which cognition is integrated with the body and the wider environment. We discuss the relevance of these claims for the present discussion and identify a modular understanding of cognition, related to the fourth discussion point, as the commitment that is most relevant to the vision of providing more direct access to mental states.

3.1. The status of mental representations

For cognitivism, cognition was primarily the transformation of mental representations (Fodor, 1975). The precise meaning of the concept “representation” is debated (for a discussion see Haselager et al., 2003), but in most cognitive science the term has come to mean distinct inner states that stand for external or imaginary things such as physical entities, plans, memories or counterfactuals. Mental representations allow thinking in an off-line fashion, namely in the absence of activity-relevant interaction with the environment. When deployed to explain human problem-solving, the human ability for such off-line cognition led some to conceptualize mental representations as analogous to computer programs in that they adequately specify the sequence of operations needed for task completion (Miller et al., 1960) or as adequate descriptions of a desired end state, which consequently motivated the long line of HCI research on plan and intent recognition (e.g., Allen, 1999; Horvitz et al., 1998).

Embodied approaches scrutinize the assumption of well-formulated mental representations that underlie human behavior. Human behavior can instead be explained in terms of habitual responses to environmental stimuli that bring an agent closer to a situation (Dreyfus, 2002), and relies on the presence of many external conditions that have no mental counterpart at all (Suchman, 1987)—even though it is always possible to attribute intentions or plans to an agent in retrospect. The criticism against the need for explicit plans or goals is not an all-around rejection of mental representations, but it significantly limits the extent to which they account for human competence. From an embodied viewpoint, mental representations can be more generally characterized as being in coordination with the environment as opposed to representing its current or preferred future state.

Granting the vagueness of mental representations, the promise of interfaces that allow “mind-reading” or “wire-tapping” to intentions may lose some of its appeal, as the main challenge in interaction does not always lie in the execution of a well-formulated goal. Does this insight completely diminish the value of more directly accessing mental states? Two reasons make us argue against this. First, the motivation for accessing mental states does not necessarily imply the instant execution of an end goal; it can instead be formulated in terms of facilitating step-by-step interaction in a better way. For instance, a BCI does not need to “mind-read” and transcribe a text phrase at once; it can allow character-by-character entry to facilitate word suggestion, but in a way that is ideally more efficient than manual input. Second, mental states do not necessarily correspond to intentions but can rather stand for any data that can be used by a system to generate an appropriate response. A case in point is interfaces that target the implicit use of brain signals. A search interface, for instance, can use brain signals to infer user interests (Eugster et al., 2016). In doing so, it can aim to decrease the mental effort of query formulation rather than the physical effort of text entry. We thus find this commitment only partially relevant to the research vision of directly accessing mental states.

3.2. The relationship between meaning and interaction

The emphasis on mental representations also supported the methodological choice of conceptualizing the value of an interactive process, that is its meaning, in terms of users’ prior intentions. A practical challenge for this choice is that users’ mental states are not accessible to researchers in the way computer states are. In both cognitive science and HCI, researchers addressed the challenge by formulating activity in terms of tasks that stand as a proxy for user intentions and assuming rational behavior on users’ side (Card et al., 1983; Simon & Newell, 1971)—in a way defining the meaning of user actions prior to and independent of interaction. The researcher’s privileged position in framing the activity in terms of tasks and setting the metrics for evaluation is foremost a methodological preference, but also one that is reinforced by the decision to treat human–computer interaction as a “microworld” (Newell & Card, 1985) that is studied in isolation.

For agents situated in daily contexts, however, the significance of interaction often depends on self-defined criteria that are intelligible to the agents themselves. Thus, some in HCI expanded the scope of embodiment to cover a methodological commitment toward identifying how meaning is created by users as part of their ongoing interaction (e.g., Dourish, 2004; Robertson, 2002). A methodological implication of this commitment has been to acquire thick descriptions of a situation instead of abstracting activities a priori into tasks and metrics (Dourish, 2004; Harrison et al., 2007; Klemmer et al., 2006). In parallel, the scope of research expanded beyond the micro-world of human–computer interaction to include externalities such as organizational behavior and social cognition. Aspects of public availability and interpretation came to the fore, one typical example being collocated interactions where users’ arm and head movements become visible to each other (Robertson, 2002).

Future BCIs can reduce the information available to other agents if they eliminate the need for body movements.Footnote3 Yet they can also create new opportunities for social interaction if brain signals themselves are made available to others (e.g., Hassib et al., Citation2017). It should, however, be kept in mind that different approaches to meaning result in diverging implications for research. A thorough comparison has been made by Boehner and colleagues in the context of physiological computing and emotions (Citation2007); whether meaning is treated as prior to or emergent during interaction can lead to different design decisions concerning the extent physiological data is abstracted into predefined categories by the system or left to users’ own interpretation. In a task-based approach, a message composed by a BCI or a manual typing interface can be evaluated solely on external measures of performance. In the expanded embodied view, however, the comparison of different interfaces is not independent of how their qualities are interpreted by other users. A BCI-composed message can, for example, be interpreted as more authentic, lazy, or ambiguous depending on the receiver.

3.3. The neural basis of cognition

Early models in cognitive science also conceived cognition as a generalized and amodal processor, whose functioning rules are independent of the physiology of the body (Fodor, Citation1983; Vera & Simon, Citation1993). Cognition has instead been modeled after the computer architectures of the time, based on symbol-manipulation (Fodor, Citation1975; Simon & Newell, Citation1971), sequential processing, and the separation between processing and memory functions (Atkinson & Shiffrin, Citation1968; Sternberg, Citation1969)—conventions that are followed by the Model Human Processor in HCI (Card et al., Citation1983). The early expectation was that neurophysiology would eventually catch up with the abstract, symbolic models (Simon & Newell, Citation1971, p. 158).

Needless to say, the progress has also been in the other direction; researchers developed biologically-informed models that do not separate memory and processing (e.g., connectionist models (Rumelhart & McClelland, Citation1986)) to explain lower-level human abilities such as visual recognition. The disciplinary tensions can be observed within the “imagery debates” concerning whether mental phenomena should be modeled as “propositional” (i.e., amodal and symbolic) or “depictive” (i.e., modal and preserving the spatial properties of the sensorimotor objects through the topographic organization of the brain) (Kosslyn, Citation1994). The latter view ultimately gained prevalence through multiple lines of evidence (M. Wilson, Citation2002).

What is the relevance of the neural basis of cognition for the present discussion? It can be argued that when the brain is interfaced, its physiology and the localization of neural activity come to the fore in a way they do not with the abstracted models of cognition. In this sense, the motivation for more direct access to mental states does not explicitly assume an abstract model of cognition. At the same time, while not assumed, whether cognition is treated as abstract or embodied has implications for what we understand as mental states. For instance, the word “decoding” is often used in the context of BCIs (e.g., Rao et al., Citation2014; Schalk, Citation2008), but its operationalization depends on one’s research commitment. For abstract models of cognition, which assume the independence of mental states from the physical structures of the brain, decoding involves reading out an already symbolic structure in the mind. In embodied accounts, however, mental states refer to individual and modality-specific neural activities whose decoding into information depends on the particular method employed.

3.4. The extent cognition is integrated

Closely related to the neural basis of cognition is the extent cognition is integrated with sensorimotor processes and the environment. What exact properties define integration has been the subject of analysis within embodied cognition literature (Clark & Chalmers, Citation1998; Heersmink, Citation2015; Kirsh, Citation2019) and some relevant properties are “coupling”, “persistence” and “individualization”. Relatively loose coupling (i.e., consisting of unidirectional and low-throughput interactions) and limited individualization between cognition and sensorimotor processing characterize cognitivist models; rich sensorimotor data is abstracted into amodal and symbolic information before cognitive processing and vice versa (Fodor, Citation1983). This leads to a more or less modular system in which cognition is sharply separated from sensorimotor processing. The sharp boundaries between cognition and sensorimotor processing, by extension, exclude interaction with the environment from the scope of cognition, leading to a strong internal–external distinction.

Modality-preserving models of neural representation in embodied views, by definition, assume individualized and highly coupled relationships (i.e., consisting of bidirectional, high-throughput interactions) between different physical structures that come into play during cognition. Yet there have been multiple claims regarding the scope of integration.

The first, internalist, claim is the integration between sensorimotor processes and thinking inside the brain. This partly refers to the real-time effect of sensorimotor variables (e.g., moving one’s hand toward or away from the body) on seemingly abstract activities such as language cognition, memory retrieval or problem-solving (Barsalou et al., Citation2003; Dijkstra et al., Citation2007; Glenberg & Kaschak, Citation2002; N. L. Wilson & Gibbs, Citation2007; Thomas & Lleras, Citation2009). Furthermore, body and tool use seems to have causal-historical roles in individualizing brain structures as seen in the metaphorical mappings between abstract concepts and physical experiences in language (Lakoff & Johnson, Citation2008). In HCI, some work directly builds on this line of research and utilizes concepts such as image schemas (Hurtienne & Israel, Citation2007) and embodied metaphors (Antle et al., Citation2009) to design interface controls in a way that preserves their control variables’ mappings to the body. The representation of the body shape and posture in the brain, the “body schema” (Berlucchi & Aglioti, Citation1997; Maravita & Iriki, Citation2004), is another example of individualization as is the phenomenon of “tool extension”, namely the changes to the body schema through the use of tools (Berti & Frassinetti, Citation2000; Iriki et al., Citation1996).

The second, externalist, claim expands cognition’s unit of analysis to encompass what happens outside the brain. Already in the 1970s, the ecological approach argued that the world itself provides an abundance of information for action, namely affordances, which obviates the need for extensive mental representations (Gibson, Citation1977). Extended views of cognition echo this view and call for recognizing the external world as part of a larger cognitive whole (Clark & Chalmers, Citation1998; Clark, Citation2007). To the HCI community, this position is best known through the research program of “distributed cognition” (Hollan et al., Citation2000). Distributed cognition attributes some of the cognitive science constructs such as information, memory, and representation—that are traditionally reserved for mental phenomena—to the environment, in a way blurring the previously sharp boundaries between internal processing and external interactions (Hutchins, Citation1995). Thinking, eye movements, and other motor actions instead become interchangeable methods for information processing. People can eliminate mental rotation or translation activities by manipulating the objects in the environment (Kirsh & Maglio, Citation1994), ease perception by hiding or cueing certain features, or use hands as placeholders to decrease mental memory load (Kirsh, Citation1995). As such, there is less emphasis on the communicative bottleneck between the mind and the environment, and an increased sensibility toward how cognitive processes are offloaded onto the environment. This consequently leads to a less persistent model of cognition that is partly composed of temporary interactions with external structures.

Whether the scope of cognition should be delimited to the mind, to the body, or even to the body-plus-environment is a matter of definition and the epistemic trade-offs that come with different units of analysis have been discussed elsewhere (M. Wilson, Citation2002). The degree of modularity or integration between different entities, however, is an empirical question with practical consequences. Here, we argue that the strict separation that modularity entails is central to the motivation of bypassing body movements; when central cognition is conceived as prior to and independent from body movements, the motor system can ideally be bypassed for direct access. We thus view the extent cognition is integrated as the central concern for research.

3.5. Summary

We outlined the different commitments of cognitivist and embodied approaches and assessed which commitment provides a more explicit conceptual basis to the research vision of providing more direct access to mental states. The extent mental representations are adequate descriptions for action is only partially relevant to the present discussion as the benefits of BCIs do not depend on prior intentions providing the satisfaction conditions of interaction. The conception of cognition as abstract information processing is also not explicitly assumed by the research vision, but what is understood as mental states (i.e., the extent they are modality- and physiology-specific) has implications for what it means to decode them. The relationship between meaning and interaction is less specific to the present discussion but has general methodological relevance. A modular view of cognition is the commitment that we argued to be the most relevant for the motivation of bypassing the body and providing more direct access to mental states.

4. Implications of an integrated view

The utility of an integrated conception ultimately relies on its ability to guide research in the form of generating research questions, explanations, predictions, and interface solutions that are not easily conceivable from a modular understanding of cognition. In this section, we show how embodiment can lead to future research directions by referring to previous studies, which we selected to showcase the integration between sensorimotor and other cognitive processes in a variety of domains. We then discuss the more general implications for methodology.

4.1. Functional relationship between body representations, motor imagery, and BCI performance

It has been shown that motor imagery in the brain can be accompanied by electromyographic (EMG) signals on the arm (Guillot et al., Citation2007) and face (e.g., during internal verbalization of speech; see Kapur et al., Citation2018; Nalborczyk et al., Citation2020), one possible explanation being the incomplete inhibition of peripheral neural activity during motor imagery (Guillot et al., Citation2007). These peripheral correlates to motor imagery should be distinguished from the more radical embodiment claim of tight interconnections between sensorimotor processes and conceptual knowledge, but they support the claim of shared neural substrates between motor imagery, action observation and execution (Grèzes & Decety, Citation2001; Jeannerod, Citation1994; Rizzolatti et al., Citation1996). Other observations point to situations in which the actual execution of body movements both reflects and facilitates internal cognitive processes. For example, studies documented spatial overlaps between gaze patterns and the layout of an imagined visual scene (Brandt & Stark, Citation1997; X. Wang et al., Citation2019), a finding that has been explained through eye movements’ role in retrieving and processing visual information in the absence of sensory stimuli (Ferreira et al., Citation2008). There is also strong evidence for the memory-facilitating role of hand gesturing during speech, in addition to its communicative function (for a review see Pouw et al., Citation2014). Interestingly, such gesturing rarely occurs during thinking, hinting at gestures’ role in mitigating the effect of increased cognitive effort expended during speech production (Hostetter & Alibali, Citation2008).

The overlaps between motor planning and execution in the brain would also predict BCI performance to be partly dependent on motor skills. In line with this expectation, users’ fine motor manipulation skills (Hammer et al., Citation2012), handedness (Zapała et al., Citation2020), and daily amount of hand and arm movements (Randolph et al., Citation2010) have been shown to account for some of the individual variance in motor imagery-based BCI performance. Along the same lines, BCI performance should be lower in the absence of motor ability. Patients in complete locked-in state indeed show degraded BCI performance, but this finding does not hold for partially paralyzed patients (Kübler & Birbaumer, Citation2008), highlighting the need for more research to uncover the mechanisms that underlie both BCI and motor performance. Besides motor imagery, motor skills seem to affect brain activation during language comprehension. Hockey players have been reported to show greater activation in the left premotor cortex compared to non-hockey players when presented with sentences that describe hockey actions (Beilock et al., Citation2008).

4.1.1. Research direction: Measuring the effects of motor variables on BCI performance

Numerous studies have recorded brain signal data while experimental subjects perform motor activities such as lifting/grasping (Agashe et al., Citation2015; Luciw et al., Citation2014) or walking (Gwin et al., Citation2010; Kline et al., Citation2015). Part of the motivation for this line of work is improving the usability of brain recordings in naturalistic settings (e.g., BCI use when walking) by identifying and removing signal noise related to body movements (Gwin et al., Citation2010; Kline et al., Citation2015). Yet little work has measured the effect of task-relevant motor variables, a research gap noted within the brain imaging community (Gramann et al., Citation2014). Thus, a potential research direction is evaluating the effects of task-relevant motor variables on task and BCI control performance. Researchers can, for instance, observe whether gesturing can improve BCI performance through its effect on mental imagery, in the same way it aids speech production (Pouw et al., Citation2014).

4.1.2. Research direction: Testing the relationship between motor skill and BCI performance

HCI research can also extend the previous work (e.g., Randolph et al., Citation2010) that aims to find correlates between motor skill in using input devices and BCI performance. While previous work has investigated the role of naturalistic motor skills, HCI researchers can study the effect of skills related to input device use, for instance, by testing if expert gamers have a performance advantage for certain types of BCIs. For motor imagery-based interfaces, researchers can measure the motor skills of experimental participants when using different input devices (e.g., using a stylus for drawing shapes or moving objects on a touchscreen) and test if these correlate with performance when operating BCIs that use similar motor imagery.
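To make such an analysis concrete, the sketch below computes a Pearson correlation between hypothetical per-participant stylus-skill scores and motor imagery BCI accuracies; all variable names and values are illustrative, and the correlation is implemented by hand to keep the sketch self-contained:

```python
import math

def pearson_r(xs, ys):
    """Sample Pearson correlation between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-participant scores (illustrative values only):
# stylus_skill: normalized accuracy in a stylus shape-drawing task
# bci_accuracy: classification accuracy in a motor imagery BCI session
stylus_skill = [0.62, 0.71, 0.55, 0.80, 0.68, 0.74, 0.59, 0.77]
bci_accuracy = [0.58, 0.69, 0.54, 0.78, 0.63, 0.70, 0.57, 0.75]

print(f"r = {pearson_r(stylus_skill, bci_accuracy):.2f}")
```

A real study would, of course, require an adequately powered sample and a significance test rather than a single point estimate.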

Other experimental designs can test whether motor skill advantage extends into reactive BCIs that use event-related potentials (ERPs). ERP-based BCIs often employ symbols (such as letters) for eliciting brain signal responses. Employing more complex visual stimuli that refer to motor capabilities can enable investigating correlations between motor skill and ERP-based BCI performance. For example, we referred to hockey players’ increased brain activation when shown language stimuli (Beilock et al., Citation2008). This observation would predict better performance for hockey players when shown visual stimuli such as a hockey stick.

Finally, researchers can test whether BCI performance can be improved by training users with non-BCI interfaces that rely on similar mental imagery, by observing performance differences between the trained and untrained groups.

4.2. Control mapping and compatibility in perception and action

The integration within the cognitive system would also predict BCI performance to positively correlate with motor variables that are congruent with the control task. Performance studies have long observed that control–display similarity, in terms of spatial and nonspatial factors, results in higher performance for visuoperceptual motor tasks (Fitts & Seeger, Citation1953), a finding that also holds for BCIs (Thurlings et al., Citation2012). The phenomenon has previously been explained through information processing models and the formal analysis of input devices (Card et al., Citation1991; Fitts & Seeger, Citation1953). Recent studies aim to find neural correlates to performance differences and demonstrate the effect of more complex sensorimotor variables. Congruent tactile stimulation on subjects’ hands during motor imagery tasks seems to improve BCI accuracy (Shu et al., Citation2017). Studies on tennis (Bisio et al., Citation2014; Mizuguchi et al., Citation2015) and badminton (Z. Wang et al., Citation2014) players show that holding a racket improves motor imagery, with effects being dependent on player experience and hand posture (Mizuguchi et al., Citation2015).

4.2.1. Research direction: Tangible implements to support motor imagery

The observations related to the role of manual artifacts in supporting motor imagery also point to potential benefits of incorporating tangibles into BCI use. Extending previous work in motor imagery research (Bisio et al., Citation2014; Mizuguchi et al., Citation2015; Z. Wang et al., Citation2014), the field of HCI can draw on its design and fabrication capabilities to create physical artifacts that support BCI performance. For motor imagery-based BCIs, such artifacts can aid users in assuming the posture related to an imagined movement; researchers can then compare performance against a control condition in which the tangible artifact is not present. An example experimental setup can instruct participants to imagine hand grasping movements in the presence or absence of a tangible that facilitates a grasping posture, and compare the BCI performance of the two conditions.
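To illustrate how the two conditions might be compared, the following sketch computes the mean within-subject gain and a paired-samples Cohen's d from hypothetical accuracies; the values and the choice of effect size are ours, not taken from any cited study:

```python
import math

# Hypothetical within-subject BCI accuracies (illustrative values only):
# one score per participant, with and without the grasp-affording tangible
with_tangible    = [0.74, 0.68, 0.81, 0.70, 0.77, 0.72]
without_tangible = [0.69, 0.66, 0.75, 0.67, 0.73, 0.70]

# Paired differences, their mean, and Cohen's d for repeated measures
diffs = [a - b for a, b in zip(with_tangible, without_tangible)]
n = len(diffs)
mean_d = sum(diffs) / n
sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
cohens_d = mean_d / sd_d
print(f"mean gain = {mean_d:.3f}, Cohen's d = {cohens_d:.2f}")
```

In an actual evaluation, the point estimate would be accompanied by a paired significance test and a confidence interval.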

4.2.2. Research direction: Embodied constraints for the brain

Another research direction that builds on control mapping congruence is the application of “embodied constraints” to BCIs. In HCI, embodied constraints refer to task-specific constraints that are encoded directly into the environment as a way to decrease the cognitive burden on users (Dijk, Citation2009; Hornecker & Buur, Citation2006). Consider the example of a tangible urban-planning interface (Underkoffler & Ishii, Citation1999): the impossibility of interpenetrating the tangible tokens that represent buildings is a desired constraint, as it mirrors the impossibility of the actual task. One advantage of having such task-specific information inside the environment, as opposed to the brain, is the convenient configurability of physical structures. Unlike mental representations that form through practice, physical structures can be easily replaced depending on the task at hand. At the same time, recent work shows how such embodied constraints can move closer to the body with the help of electrical muscle stimulation, for example, by constraining users’ approach to tangible objects before any physical contact (Lopes et al., Citation2015).

A BCI equivalent of embodied constraints would similarly impose task-specific constraints within the brain based on the task at hand. While cortical interventions might not be technically possible or safe in the near future, the interdependence between sensorimotor processes and brain activity points to alternative methods for achieving similar effects. For example, we noted the effect of sensorimotor variables such as hand posture (Mizuguchi et al., Citation2015) or tactile stimulation (Shu et al., Citation2017) on motor imagery performance. Consider how these observations can be synthesized with the design insight of embodied constraints: based on the requirements of a task, an interface can reinforce or suppress certain mental imagery by changing visual, tactile, or somatosensory stimuli (e.g., by modifying users’ hand or body posture using an exoskeleton). To illustrate, a system can give tactile feedback on the right-hand side of the body when navigating left is not possible in a virtual environment (to match the actual spatial constraint of the environment). No such interactive system exists to our knowledge, but one can recognize how it represents an alternative to the ideal of direct mental access; with embodied constraints, the design focus shifts from the accurate sensing of prior mental content to reinforcing task-related neural activity through stimuli.

4.2.3. Research direction: Body-based metaphorical mappings

Finally, the metaphorical mappings between the body and abstract concepts would predict that interface controls congruent with these mappings outperform design alternatives that are incongruent. For example, the phrases “feeling up” and “feeling down” describe sentiments as well as physical directions (Barsalou et al., Citation2003; Lakoff & Johnson, Citation2008), and we would expect these sentiments to be congruent respectively with imagining upward or downward movements. Future studies can test this prediction by observing whether the design of BCI actions, such as leaving an emotional reaction on social media, would benefit from similar mappings; an experimental setup can compare the performance and satisfaction of congruent (positive emotion mapped to upward movements and vice versa) and incongruent motor imagery for leaving emotional reactions.

4.3. Modality-dependence of language

Language production is a common daily task, and various modalities such as speaking or writing often lead to different language expressions. The difference can be partly explained through the distinct communicative strategies used in oral or written contexts, but also through the particular sensory and motor demands of each modality; writing and speaking utilize different muscle sets and sensory feedback, which leads to different cognitive constraints on performance as well as different brain activity patterns (Brownsett & Wise, Citation2009). Yet the extent these sensory and motor processes are integrated with semantic and lexical processes (i.e., processes related to meaning and word selection) has been the subject of debate (for reviews, see Chatterjee, Citation2010; Meteyard et al., Citation2012). Some models of language production (e.g., Dell, Citation1990; Roelofs, Citation1992) conceptualize these processes as modality-neutral while others (e.g., Caramazza & Miozzo, Citation1997) propose modality-specific models. The extent of integration, and thus embodiment, is relevant as modality-specific conceptions of language lead to different evaluation goals than do studies that assume the independence of the semantic and lexical stages from motor articulation.

4.3.1. Research direction: Observing qualitative differences between BCI-based language production

HCI studies generally assume a modality-neutral model as they operationalize text entry in terms of externalizing prior thought and compare entry methods based on their effectiveness; subjects are tasked to transcribe a memorized fixed phrase with a given method as quickly and as error-free as possible. If different methods for language production are modality-specific, however, evaluation should observe distinct linguistic outcomes that result from different interfaces. Consider different types of language production that can be facilitated through BCIs:

  • A BCI that reproduces writing experience by relying on the motor imagery of handwriting movements, as in (Willett et al., Citation2021),

  • A BCI that reproduces speaking experience by relying on the motor imagery of speech, as in (Makin et al., Citation2020).

An integrated model of language production would regard the two BCIs as facilitators of distinct lexical selection processes in the brain rather than different outlets for the same lexical content, as the earlier semantic and lexical stages are not assumed to be independent of motor articulation. We would then expect their evaluation to observe not only text entry speed but also qualitative differences in lexical selection and language production. While current BCI prototypes reproduce existing methods of language production (e.g., speech, handwriting), we might witness the emergence of novel BCI-specific forms of text entry that bypass lexical selection altogether, for instance, by generating text from user reactions to verbal or visual stimuli. The variance in language output becomes even more relevant for such interfaces, and the ability to provide appropriate text recommendations becomes a more applicable evaluation criterion.
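One simple way to begin quantifying such qualitative differences is a lexical diversity measure such as the type-token ratio; the sketch below applies it to two hypothetical transcripts (the texts and interface labels are purely illustrative):

```python
def type_token_ratio(text):
    """Lexical diversity: distinct word forms divided by total words."""
    tokens = text.lower().split()
    return len(set(tokens)) / len(tokens) if tokens else 0.0

# Hypothetical outputs from two BCI text-entry methods (illustrative only)
handwriting_bci = "meet me at the station at noon"
speech_bci = "let us meet up at the train station around noon okay"

for label, text in [("handwriting", handwriting_bci), ("speech", speech_bci)]:
    print(label, round(type_token_ratio(text), 2))
```

Richer analyses would consider word frequency, syntactic complexity, or semantic content, but even a minimal measure like this can reveal systematic differences between entry methods.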

4.4. Affect and emotion in the body and the brain

Emotion research has received much attention in BCI and physiological computing (Beale & Peter, Citation2008; Fairclough, Citation2009). Going beyond mere detection of emotions, affective computing as an area has addressed how interacting with computers can influence emotional states and how emotional expression can be synthesized in virtual agents (Tao & Tan, Citation2005; Torres et al., Citation2020). Despite such advances, researchers have not reached a consensus on a unified theory of how the behavioral correlates and physiological expressions defining emotions are related to psychological feelings and subjective experience, or affect. Historically, embodied theories of emotion tend toward the so-called James-Lange theory (Prinz, Citation2008), which hypothesizes that affect is caused by physiological states and behavioral changes, as opposed to cognitivist theories of emotion, which give cognition and affect a primary role in causing behavior (Oatley & Johnson Laird, Citation2014).

Thus, more modern advocates of the theory suggest that perceiving and thinking about emotion involve “perceptual, somatovisceral, and motoric reexperiencing (collectively referred to as ‘embodiment’) of the relevant emotion in one’s self” (Niedenthal, Citation2007). Despite the lack of a unified theory, however, recent reviews demonstrate that emotion detection is most effective when using the widest variety of physiological and behavioral signals available, for example from video, audio, other content, central or peripheral physiology, gaze, and kinesthetic expression (Ahmed et al., Citation2020). Thus, computing techniques and applications in this area are useful as practical demonstrations of how information about emotion can be extracted from brain signals, other physiology, or embodied behavior, and may thereby provide data toward theory unification.

4.4.1. Research direction: Further develop multimodal approaches to emotion combining embodied aspects with brain signals

BCI research has long concentrated on brain signals, for obvious reasons, but embodied frameworks suggest a stronger role for peripheral and behavioral signals in determining affective states. Thus, multimodal signals, such as electrocardiography (Ferdinando et al., Citation2016), facial electromyography (Fridlund & Cacioppo, Citation1986), electrodermal activity (Feng et al., Citation2018), and other more peripheral psychophysiological signals may provide complementary information to BCIs, particularly with regard to detecting affective states. Furthermore, in parallel to what is considered for other BCI tasks, research can move toward more naturalistic settings in which body movements are not constrained but are instead effectively detected, given that they might play a role in perceiving or thinking about emotions. Given advances in the availability of remote sensing, future research must establish how effective a multimodal, embodied approach relying on rich data combining situational, behavioral, physiological, and neural data can be, not least to mitigate near-future ethical risks (Steinert & Friedrich, Citation2020).
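As a minimal sketch of such multimodal fusion, the code below classifies a fused feature vector with a nearest-centroid rule; the feature set, values, and labels are hypothetical, and a real pipeline would rely on validated features and stronger classifiers:

```python
import math

def nearest_centroid_predict(train, labels, sample):
    """Classify a multimodal feature vector by its nearest class centroid."""
    # Group training vectors by label and average them into centroids
    grouped = {}
    for vec, lab in zip(train, labels):
        grouped.setdefault(lab, []).append(vec)
    centroids = {lab: [sum(col) / len(vecs) for col in zip(*vecs)]
                 for lab, vecs in grouped.items()}
    # Return the label whose centroid is closest in Euclidean distance
    return min(centroids, key=lambda lab: math.dist(sample, centroids[lab]))

# Hypothetical fused features per trial: [EEG band power, heart rate change,
# skin conductance, facial EMG activity] -- values are illustrative only
trials = [
    [0.8, 0.6, 0.7, 0.9],  # high arousal
    [0.7, 0.7, 0.8, 0.8],  # high arousal
    [0.2, 0.1, 0.2, 0.1],  # low arousal
    [0.3, 0.2, 0.1, 0.2],  # low arousal
]
labels = ["high", "high", "low", "low"]

print(nearest_centroid_predict(trials, labels, [0.75, 0.6, 0.7, 0.85]))
```

The design point is simply that the classifier sees concatenated central and peripheral features, so each modality can contribute complementary evidence about the affective state.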

4.4.2. Research direction: Investigate communicating affect and emotion using BCI

Emotions play an important role in interpersonal communication and will become more important to HCI in general, for example with regard to mediated and human–AI interaction. Thus, BCIs may provide emotionally augmented communication, supplying complementary information to current interfaces. First, passive BCIs detecting emotions can provide an affective subtext to inform interactants of the intended meaning behind, or affective consequence of, communicative acts (Harjunen et al., Citation2018; Kuber & Wright, Citation2013; Spapé et al., Citation2019). Both active and passive BCIs may furthermore enable emotions themselves to become a language of interaction. For example, users can learn to control their disposition and arousal toward avatars by using neurofeedback (Cavazza et al., Citation2014). Similarly, implicit detection of negative affect may be used to adapt a user interface’s complexity and improve engagement (Aricò et al., Citation2017; Lawrence et al., Citation1995).

4.5. Summary and methodological implications

We described various ways embodiment and an integrated understanding of cognition can guide research. These include providing body-grounded explanations for BCI performance, designing BCIs in a way that is compatible with existing body mappings, and proposing evaluation considerations that are neglected in modular views of cognition. Some research directions build on evidence from embodied cognition while others extend embodied interaction design heuristics into the BCI domain. Below, we describe the more general methodological implications of the above-described research directions.

4.5.1. A clearer rationale for BCI applications

The many ways in which cognition in the brain is dependent on and mirrored across the body should motivate researchers to clearly state their rationale for BCI applications targeted toward able-bodied users. In particular, we propose an analytic distinction between using BCIs as 1) a source of neurological data and 2) an enabler of interaction without motor execution.

Firstly, observations related to offloading cognitive activity onto the wider body (as in gesturing during speech) and improved motor imagery in the presence of compatible body postures (e.g., Bisio et al., Citation2014; Mizuguchi et al., Citation2015) suggest that BCI usage for the general population will coexist with and benefit from concurrent motor execution. As such, researchers should be ready to demonstrate the rationale of BCIs in the presence of motor execution, for instance, by observing whether brain signals provide useful information in addition to what can be sensed from body movements.

Conversely, eliminating motor execution should not automatically justify BCI usage, as brain activity can be mirrored across the body even in the absence of body movements (Guillot et al., Citation2012) and can closely reflect the environmental stimuli. As BCI research partly moves from lab-conducted studies and disabled users to rich everyday settings and the general population, useful information related to neurological states can partly be inferred through sensing the environment, eye movements, or peripheral nerve activity that occurs without motor execution (e.g., using electromyography). It is then necessary to establish the rationale of using BCIs even when motor execution is purposefully eliminated. For example, researchers can record brain signals, peripheral nerve activity, and environmental stimuli in synchrony, and compare their success in predicting what users consider appropriate system behavior.

4.5.2. Caution when restricting body movements

The analytical distinction made above also calls for exercising caution when restricting body movements in order to reduce “data artifacts” (i.e., undesirable signals that interfere with the target neurological phenomena). The cognition-facilitating role of body movements suggests that such restrictions might interfere with the brain’s usual routines for offloading cognitive effort, with potential effects on cognitive performance and ecological validity.Footnote4 Thus, unless eliminating motor execution is explicitly specified as a design rationale, researchers can aim to identify data artifacts during data processing instead of imposing strict restrictions during observation (Gramann et al., Citation2014; Makeig et al., Citation2009), and even be cautious with prior designations of what counts as a data artifact (i.e., extrinsic to task-relevant cognition). More generally, researchers should distinguish mental performance (e.g., motor imagery performance) from BCI control performance, as the latter depends on experiment-specific signal acquisition and processing methods. A potential pitfall of restricting body movements is improving BCI control performance at the expense of mental performance.
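To make the alternative concrete, artifact handling can move from the recording protocol into the analysis pipeline. The sketch below separates a blink-like artifact from a synthetic multichannel recording with ICA and removes the flagged component. The signal model, channel count, and flagging rule are illustrative assumptions, not a prescribed pipeline; dedicated EEG toolboxes implement more careful versions of this step.

```python
# Hedged sketch (synthetic signals): identify an artifact during processing
# rather than restricting movement at recording time. A blink-like artifact
# is mixed into the channels, separated with ICA, flagged by correlation
# with a reference signal, and removed.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 2000)
neural = np.sin(2 * np.pi * 10 * t)                      # 10 Hz "neural" rhythm
blink = (rng.random(t.size) < 0.005).astype(float)       # sparse blink events
blink = np.convolve(blink, np.hanning(50), mode="same")  # smooth the spikes

mixing = rng.normal(size=(4, 2))                         # 4 channels, 2 sources
channels = np.column_stack([neural, blink]) @ mixing.T

ica = FastICA(n_components=2, random_state=0)
components = ica.fit_transform(channels)                 # (samples, components)

# Flag the component most correlated with the artifact reference (standing in
# for, e.g., an EOG channel), zero it, and reconstruct the cleaned channels.
corr = [abs(np.corrcoef(components[:, k], blink)[0, 1]) for k in range(2)]
bad = int(np.argmax(corr))
components[:, bad] = 0.0
cleaned = ica.inverse_transform(components)
```

The design choice this illustrates is that the artifact is characterized and removed post hoc, so participants need not suppress the blinks (or other movements) in the first place.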

4.5.3. Emphasis on joint performance over information transmission

The integration focus also makes the joint performance of the brain-plus-system a more broadly applicable evaluation criterion than the accurate transmission of prior user intentions. The latter has traditionally been prevalent in BCI research as a success metric (Wolpaw et al., Citation2000) but comes with conceptual and practical challenges. On a conceptual level, the concurrent and interdependent nature of cognitive processes makes it difficult to sharply delineate between entities such as an information source and a channel. On a practical level, the assumption of user intentions that exist independent of and prior to externalization becomes less productive due to 1) the challenge of defining them as ground truths in uncontrolled, everyday settings and 2) the potential move toward more complex tasks where user intentions can be inherently vague (such as when a BCI is used to compose a message at once instead of dictating it word by word).

Focusing on joint performance avoids these problematic aspects related to information transmission criteria. Consider the following implications. For evaluation, researchers can formulate their evaluation metrics in objective, third-person terms, without alluding to prior intentions. Subjective, first-person constructs, if employed, can be defined in terms of user acceptance. Here, BCI research can take a page out of the methodology of recommender systems and apply metrics that stand for post hoc user acceptance such as “precision” in place of classification accuracy. For design, researchers can target increasing performance even when it comes at the expense of the information transmission rate. Providing task-specific constraints is one such design strategy; it limits the range of actions available to users (and thus decreases the information value of each user input) but can increase the overall performance by embedding part of the task-related information into the interface.Footnote5 Another example is modeling mental states based on other information sources (e.g., the environment or data from other users) instead of relying exclusively on real-time brain signals.
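As a toy illustration of the two evaluation stances, consider the contrast on a small interaction log. The `Selection` record and the example commands are hypothetical, invented only to show how the metrics can diverge.

```python
# Hedged sketch: score a BCI by post hoc user acceptance ("precision", as in
# recommender systems) rather than by agreement with a presumed prior intention.
from dataclasses import dataclass

@dataclass
class Selection:
    predicted: str   # item the BCI selected
    intended: str    # "ground-truth" prior intention (often unavailable)
    accepted: bool   # did the user keep the result after seeing it?

log = [
    Selection("lamp on", "lamp on", True),
    Selection("music on", "radio on", True),      # missed intent, yet accepted
    Selection("door open", "door open", False),   # matched intent, yet rejected
    Selection("blinds up", "curtains up", True),  # missed intent, yet accepted
]

# Intention-transmission view: fraction of selections matching prior intent.
accuracy = sum(s.predicted == s.intended for s in log) / len(log)

# Joint-performance view: fraction of selections the user accepted post hoc.
precision = sum(s.accepted for s in log) / len(log)

print(f"intention accuracy: {accuracy:.2f}, acceptance precision: {precision:.2f}")
```

On this log the two metrics diverge: acceptance-based precision credits selections the user kept even when they did not match the presumed prior intention, and it requires no ground-truth intentions at all, which is what makes it applicable in uncontrolled settings.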

4.5.4. Expansion of the BCI research scope

Finally, the integration perspective calls for studying BCIs in terms of their role in configuring brain activity, which implies an expansion of research scope to longer-term adaptations related to BCI use. This is particularly applicable to research that targets able-bodied users, for whom relying on already existing motor skills might not lead to radically different functionality and experience from those provided by current input devices. For example, many recent BCIs (e.g., Blankertz et al., Citation2007; Makin et al., Citation2020; Willett et al., Citation2021) exploit existing motor imagery patterns in the brain to reduce training time and improve performance. Yet alternative interfaces such as an EEG-controlled robotic third arm (Penaloza & Nishio, Citation2018) can extend body schema and enable new action opportunities. This counterintuitively suggests that BCIs can be valuable even if they lag in traditional performance criteria, provided that they lead to qualitatively different interaction outcomes. To observe these qualitative differences, researchers can opt for open-ended tasks in user studies (e.g., by instructing text composition in place of text transcription).

The aspects of adaptation and skill-building, in turn, emphasize integration qualities that are not obvious when BCIs are seen as neutral ways of accessing prior mental content. For example, a level of persistence in BCI behavior is likely required for new skills and activity patterns to develop. Consider how BCIs show different levels of persistence. Intracortical BCIs (e.g., Hochberg et al., Citation2012) are permanently implanted, while scalp-based BCIs are worn temporarily. Various BCIs show different levels of recording and calibration stability (Degenhart et al., Citation2020). Persistence in brain–interface connections has long been a focus in clinical studies (e.g., Downey et al., Citation2018; Perge et al., Citation2013), but should also be relevant for its role in supporting new forms of expression.

5. Reflecting on embodiment in HCI

Having identified implications for research, we find it timely to reflect on HCI’s understanding of embodiment and derive implications for embodiment research at large. Below, we introduce some of the tensions within the embodied interaction literature that come to the fore with the emergence of BCIs and point to parallel efforts in cognitive science to tackle them. Of particular relevance are the necessity of body movements for cognition and attitudes toward mental phenomena.

5.1. Embodiment without body movements

Embodied approaches to cognition commonly state that cognition is dependent on the body. Yet various claims within embodiment emphasize different aspects, ranging from the body’s role as a constraint on cognition to a mediator between the brain and the environment (R. A. Wilson & Foglia, Citation2017). Thus, a standing challenge for embodied cognition has been to determine the nature of this dependence. Patients with Locked-in Syndrome (LIS), who have severely restricted motor capabilities but are often cognitively intact,Footnote6 provide a good opportunity for confronting the question (Kyselo & DiPaolo, Citation2015). The case of LIS patients presents a challenge for accounts of embodiment that present body movements as a prerequisite for cognition. It can, however, be accommodated by other accounts that expand the scope of embodiment to encompass simulations of motor action in the brain as well as non-motor interaction with the environment.Footnote7 Body movements—when available—play a role in cognition, but are not a prerequisite for it (Mahon & Caramazza, Citation2008).

We observe that HCI understandings of embodiment contain similar tensions regarding the significance of body movements but with differences owing to HCI’s constructive orientation. Some HCI interpretations of embodiment put significant emphasis on body movements (Klemmer et al., Citation2006), which in turn motivates utilizing a rich repertoire of physical forms and body gestures when designing interfaces. Others understand embodiment as an approach that applies to any interaction regardless of the input method (Dourish, Citation2004, Citation2013; Svanæs, Citation2013). The latter’s emphasis is on the manipulation of the environment rather than body movements per se. That said, the brain has traditionally been reliant on the body for manipulation. As put by Dourish: “a disembodied brain could not experience the world in the same ways we do, because our experience of the world is intimately tied to the ways in which we act in it.” (Citation2004, p. 18).

The emergence of BCIs raises the question of what it means for a brain to be embodied. In this regard, BCIs present conceptual challenges to embodied interaction in the same way LIS patients pose challenges to embodied cognition, namely identifying the relevance of embodiment beyond overt body movements. Paralleling the discussions in embodied cognition, we argue for expanding the scope of embodiment to encompass internal simulations and non-motor interactions. First, HCI research can investigate how internal representations that are based on interface use are employed even when there is no real-time interaction, in the same way off-line cognition can involve sensorimotor simulations in the brain (M. Wilson, Citation2002). Second, as an alternative to a strict body–interface distinction, future research can study the functional parity of various corporeal and artificial structures that come into play during cognition (Clark & Chalmers, Citation1998). The extent to which a BCI becomes body-like depends on its ability to provide comparable highly-coupled, persistent, and individualized connections, and is thus a matter of integration.

5.2. Different dimensions of embodiment and mental phenomena

We noted early on in the paper that embodiment in cognitive science and other disciplines can better be described as a collection of different theoretical and methodological commitments that individual studies only selectively commit to. For example, neurally grounded, connectionist models of cognition explain human behavior by observing neural structures (e.g., Smolensky, Citation1988), but can keep environmental interactions out of their scope. In contrast, researchers working in the tradition of distributed cognition study the interdependence between mental processing and interactions with the environment (e.g., Kirsh & Maglio, Citation1994). One way to make sense of these diverse approaches is to view them within a continuum of different “dimensions of embodiment” that vary in physical and time scale (Rohrer, Citation2007). Common among different dimensions is a commitment to empirical grounding, but individual studies’ units of observation can range from neural structures in the brain to large organizational systems with multiple individuals. Determining the dimension of embodiment and the unit of observation is a practical question that depends on one’s research purpose.

We can now reflect on how different lines of research related to embodied interaction position themselves along this scale. Looking back at some influential work that shaped embodied interaction discourse in HCI (Marshall & Hornecker, Citation2013), we observe that the neural level of embodiment has attracted relatively little attention. For example, phenomenology-informed work studies phenomena such as transparency during tool use. Yet this is done mainly from the first-person viewpoint instead of grounding the experience at the neural level. Some other frameworks deliberately avoid mental phenomena by focusing on what is externally observable. The situated action approach builds its analysis on the sequential relationships between agents’ observable actions and avoids making inferences about mental states (Suchman, Citation1993). Its research methods of choice, conversation and interaction analysis, observe the minute details of speech, gaze, and other actions that come into play during interaction, but the neural basis of social intelligibility (investigated by e.g., Gallese & Lakoff, Citation2005; Niedenthal, Citation2007; Tomasello et al., Citation2005) is excluded from the scope of observation. Distributed cognition leads to a similar outcome by expanding cognition’s unit of analysis to the environment. In Hutchins’ words: “with the new unit of analysis, many of the representations can be observed directly, so in some respects, this may be a much easier task than trying to determine the processes internal to the individual that account for the individual’s behavior.” (Citation1995, p. 266). The decision to avoid mental phenomena can be justified through explanatory parsimony; brain activity is often inaccessible to researchers and model-heavy approaches run the risk of making spurious assumptions. As sensing technologies make brain activity more accessible, however, we expect the mental models in HCI to gain further empirical grounding.
Thus, the current line of HCI research that uses brain-sensing for non-interactive, research purposes (for a review see Putze et al., Citation2022) can be informed by insights and methods from embodied cognition to explain phenomena such as tool extension (Bassolino et al., Citation2010; Bergström et al., Citation2019) or virtual presence in neural terms.

6. Conclusion

Early in the paper, we noted that it is possible to frame the promise of a technology in multiple ways depending on the particulars of a research vision. We then pointed to the limitations of framing the potential utility of BCIs in terms of bypassing the body and directly accessing mental states. The many interdependencies between the wider body and cognition in the brain instead demonstrate a level of integration that rebuts the conception of the body as a mere communication channel for prior mental content.

Acknowledging this integration has multiple implications for BCI research. Most obviously, future research should utilize the structure of the wider body and motor skills as a resource for guiding the evaluation and design of BCIs. Some research directions we proposed are providing body-grounded explanations for BCI performance, designing BCIs in ways that preserve body mappings, and even incorporating various sensorimotor variables and tangibles into BCI use. These research directions gain particular prominence as BCI applications are increasingly being designed for able-bodied users, a shift that also requires researchers to more clearly justify their rationale for deploying BCIs. A more abstract implication for research is the application of systemic thinking that defines the sensorimotor integration within the body to the brain–computer interaction itself. This calls for studying the role of interfaces in structuring the cognitive processes within the brain. Instead of treating various BCIs as different outlets for the same mental content, we would expect them to facilitate distinct capabilities and outcomes. An alternative, embodiment-compatible BCI research vision can thus be defined as constructing new cognitive integrations, which emphasizes future BCIs’ role in shaping brain activity as opposed to being a neutral means for accessing mental content.

In parallel, the understanding of embodiment within HCI needs to evolve in order to be relevant for BCI research. Following previous work in cognitive science, we argued for going beyond embodied interaction’s current “externalist” focus and encompassing internal simulations as well as BCI-mediated interactions. As such, embodiment is better viewed as a set of commitments that apply to the study of any input modality rather than directly making a case for using body movements over BCIs. The expanded focus also opens new research directions for the broader embodiment research within HCI. The scope of user studies can expand beyond real-time interaction to examine how past interactive experiences shape off-line cognition in the brain. Rather than taking prior body–interface divisions as a starting point, researchers can study the functional parity of different entities at the neural level. Observations related to embodiment can be explained in more detail by identifying their neural correlates.

The particular research directions we proposed are by no means exhaustive or final. Future HCI research will likely be informed by ongoing developments in cognitive and neurosciences while at the same time testing the validity of knowledge generated in these fields for interactive applications. HCI can uniquely contribute to an embodied focus in BCI through its constructive skills, broad methodological toolbox, and ability to situate BCIs among a wider array of input devices and interaction styles. Looking forward, we consider the emergence of BCIs a grand practical and intellectual challenge for HCI and hope that our paper facilitates further reflection in the field.

Acknowledgments

This work was supported by the European Commission under the grants 826266 (CO-ADAPT) and 824128 (VIRTUALTIMES). We would like to thank Stephen Fairclough, Tapio Takala, Antti Salovaara, and the anonymous reviewers for their feedback on the draft versions of this paper.

Disclosure statement

No potential conflict of interest was reported by the authors.

Additional information

Funding

The work was supported by the Horizon 2020 Framework Programme [824128,826266].

Notes on contributors

Barış Serim

Barış Serim received his PhD from Aalto University in 2020; his research interests are multimodal interaction, adaptive interfaces and models of interaction in HCI.

Michiel Spapé

Michiel Spapé received his PhD in Psychology from Leiden University in 2009 and is now docent in cognitive neuroscience at Helsinki University, focusing on emotion, perception/action, and EEG.

Giulio Jacucci

Giulio Jacucci is a professor in the Department of Computer Science at the University of Helsinki; his research interests include multimodal interaction for search and information discovery, physiological computing, and ubiquitous computing for behavioral change.

Notes

1 In fact, HCI lacks a uniform mapping of information-theoretic constructs. BCI research often treats the interface as the information channel and the background brain activity as the noise (Kronegg et al., Citation2005). Research on muscle-operated input devices, in contrast, generally treat the motor system as the channel.

2 Both embodiment and cognitivism are better described as umbrella terms. Cognitivism generally modeled human cognition in terms of mental information processing, but what constitutes its core has been the subject of debate, with researchers alternatively emphasizing the assumption of abstract symbolic processing (Clark & Chalmers, Citation1998; Vera & Simon, Citation1993) or a strict internal–external distinction (Agre, Citation1993). Embodied perspectives commonly object to the commitments of cognitivism, but include a variety of different approaches that are grounded in phenomenology and revised views in cognitive science (Marshall & Hornecker, Citation2013). Work outside HCI cataloged different meanings the term embodiment can stand for (Anderson, Citation2003; Rohrer, Citation2007) and there have been calls to treat embodied cognition as individual claims (M. Wilson, Citation2002).

3 Note that, this is not the case when the technical shortcomings of BCIs indirectly facilitate similar visibility by requiring users to perform body movements (O’Hara et al., Citation2011).

4 The methodological issue we raise is analogous to the distributed cognition framework’s criticism against the exclusion of external cognitive aids in study designs (Hutchins, Citation1995).

5 This example illustrates another point: Integration properties should not be interpreted as normative evaluation criteria. More bandwidth and integration within a system is not necessarily better, a point made early in systems thinking (Bunge, Citation1979).

6 Although there is evidence that motor imagination, which is frequently used in BCIs, is impaired in LIS patients (Conson et al., Citation2008).

7 Prior work (Kyselo & DiPaolo, Citation2015) in particular underlined the ability of the enactivist tradition (Varela et al., Citation2017) in defining embodiment without delimiting its scope to the standard definitions of the body.

References

  • Abowd, G. D., & Mynatt, E. D. (2000). Charting past, present, and future research in ubiquitous computing. ACM Transactions on Computer-Human Interaction, 7(1), 29–58. https://doi.org/10.1145/344949.344988
  • Afergan, D., Shibata, T., Hincks, S. W., Peck, E. M., Yuksel, B. F., Chang, R., & Jacob, R. J. K. (2014). Brain-based target expansion. Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology, 583–593. New York, NY, USA: ACM. https://doi.org/10.1145/2642918.2647414
  • Agashe, H. A., Paek, A. Y., Zhang, Y., & Contreras-Vidal, J. L. (2015). Global cortical activity predicts shape of hand during grasping. Frontiers in neuroscience, 9, 9. https://doi.org/10.3389/fnins.2015.00121
  • Agre, P. E. (1993). The symbolic worldview: Reply to Vera and Simon. Cognitive science, 17(1), 61–69. https://doi.org/10.1207/s15516709cog1701_4
  • Ahmed, I., Harjunen, V. J., Jacucci, G., Ravaja, N., Ruotsalo, T., & Spape, M. (2020). Touching virtual humans: Haptic responses reveal the emotional impact of affective agents. IEEE Transactions on Affective Computing. https://doi.org/10.1109/TAFFC.2020.3038137
  • Allen, J. F. (1999). Mixed-initiative interaction. IEEE intelligent systems, 14(5), 14–23. https://doi.org/10.1109/5254.796083
  • Allison, B., Graimann, B., & Gräser, A. (2007). Why use a BCI if you are healthy. ACE Workshop-Brain-Computer Interfaces and Games, 7–11.
  • Anderson, M. L. (2003). Embodied cognition: A field guide. Artificial intelligence, 149(1), 91–130. https://doi.org/10.1016/S0004-3702(03)00054-7
  • Antle, A. N., Corness, G., & Droumeva, M. (2009). What the body knows: Exploring the benefits of embodied metaphors in hybrid physical digital environments. Interacting with Computers, 21(1–2), 66–75. https://doi.org/10.1016/j.intcom.2008.10.005
  • Aricò, P., Borghini, G., DiFlumeri, G., Sciaraffa, N., Colosimo, A., & Babiloni, F. (2017). Passive BCI in operational environments: Insights, recent advances, and future trends. IEEE Transactions on Biomedical Engineering, 64(7), 1431–1436. https://doi.org/10.1109/TBME.2017.2694856
  • Atkinson, R. C., & Shiffrin, R. M. (1968). Human memory: A proposed system and its control processes. In K. W. Spence & J. T. Spence (Eds.), Psychology of learning and motivation (Vol. 2, pp. 89–195). Elsevier. https://doi.org/10.1016/S0079-7421(08)60422-3
  • Barsalou, L. W., Simmons, W. K., Barbey, A. K., & Wilson, C. D. (2003). Grounding conceptual knowledge in modality-specific systems. Trends in cognitive sciences, 7(2), 84–91. https://doi.org/10.1016/S1364-6613(02)00029-3
  • Bassolino, M., Serino, A., Ubaldi, S., & Ladavas, E. (2010). Everyday use of the computer mouse extends peripersonal space representation. Neuropsychologia, 48(3), 803–811. https://doi.org/10.1016/j.neuropsychologia.2009.11.009
  • Beale, R., & Peter, C. (2008). The role of affect and emotion in HCI. In C. Peter & R. Beale (Eds.), Affect and emotion in human-computer interaction: From theory to applications (pp. 1–11). Springer. https://doi.org/10.1007/978-3-540-85099-1_1
  • Beilock, S. L., Lyons, I. M., Mattarella-Micke, A., Nusbaum, H. C., & Small, S. L. (2008). Sports experience changes the neural processing of action language. Proceedings of the National Academy of Sciences, 105(36), 13269–13273. https://doi.org/10.1073/pnas.0803424105
  • Bergström, J., Mottelson, A., Muresan, A., & Hornbæk, K. (2019). Tool extension in human-computer interaction. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 568:1–568:11. New York, NY, USA: ACM. https://doi.org/10.1145/3290605.3300798
  • Berlucchi, G., & Aglioti, S. (1997). The body in the brain: Neural bases of corporeal awareness. Trends in neurosciences, 20(12), 560–564. https://doi.org/10.1016/S0166-2236(97)01136-3
  • Berti, A., & Frassinetti, F. (2000). When far becomes near: Remapping of space by tool use. Journal of Cognitive Neuroscience, 12(3), 415–420. https://doi.org/10.1162/089892900562237
  • Bianchi, L., Quitadamo, L. R., Garreffa, G., Cardarilli, G. C., & Marciani, M. G. (2007). Performances evaluation and optimization of brain computer interface systems in a copy spelling task. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 15(2), 207–216. https://doi.org/10.1109/TNSRE.2007.897024
  • Bisio, A., Avanzino, L., Ruggeri, P., & Bove, M. (2014). The tool as the last piece of the athlete’s gesture imagery puzzle. Neuroscience, 265, 196–203. https://doi.org/10.1016/j.neuroscience.2014.01.050
  • Blankertz, B., Acqualagna, L., Dähne, S., Haufe, S., Schultze-Kraft, M., Ušcumlic, M., Müller, K. -R., Wenzel, M. A., Curio, G., Sturm, I. (2016). The Berlin brain-computer interface: Progress beyond communication and control. Frontiers in neuroscience, 10, 10. https://doi.org/10.3389/fnins.2016.00530
  • Blankertz, B., Dornhege, G., Krauledat, M., Müller, K. -R., & Curio, G. (2007). The non-invasive Berlin brain–computer interface: Fast acquisition of effective performance in untrained subjects. NeuroImage, 37(2), 539–550. https://doi.org/10.1016/j.neuroimage.2007.01.051
  • Blankertz, B., Tangermann, M., Vidaurre, C., Fazli, S., Sannelli, C., Haufe, S., Maeder, C., Ramsey, L., Sturm, I., Curio, G., Mueller, K. (2010). The Berlin brain-computer interface: Non-medical uses of BCI technology. Frontiers in neuroscience, 4, 198. https://doi.org/10.3389/fnins.2010.00198
  • Boehner, K., DePaula, R., Dourish, P., & Sengers, P. (2007). How emotion is made and measured. International Journal of Human-Computer Study, 65(4), 275–291. https://doi.org/10.1016/j.ijhcs.2006.11.016
  • Brandt, S. A., & Stark, L. W. (1997). Spontaneous eye movements during visual imagery reflect the content of the visual scene. Journal of cognitive neuroscience, 9(1), 27–38. https://doi.org/10.1162/jocn.1997.9.1.27
  • Brownsett, S. L. E., & Wise, R. J. S. (2009). The contribution of the Parietal Lobes to speaking and writing. Cerebral Cortex, 20(3), 517–523. https://doi.org/10.1093/cercor/bhp120
  • Bunge, M. (1979). Treatise on basic philosophy: Ontology II: A world of systems (Vol. 4). Reidel Publishing Company.
  • Bush, V. (1945). As we may think. The Atlantic Monthly, 176(1), 101–108.
  • Caramazza, A., & Miozzo, M. (1997). The relation between syntactic and phonological knowledge in lexical access: Evidence from the “tip-of-the-tongue” phenomenon. Cognition, 64(3), 309–343. https://doi.org/10.1016/S0010-0277(97)00031-0
  • Card, S. K., Mackinlay, J. D., & Robertson, G. G. (1991). A morphological analysis of the design space of input devices. ACM Transactions on Information Systems, 9(2), 99–122. https://doi.org/10.1145/123078.128726
  • Card, S. K., Newell, A., & Moran, T. P. (1983). The psychology of human-computer interaction. L. Erlbaum Associates Inc.
  • Cavazza, M., Charles, F., Aranyi, G., Porteous, J., Gilroy, S. W., Raz, G., Keynan, N. J., Cohen, A., Jackont, G., Jacob, Y., Soreq, E., Klovatch, I., Hendler, T. (2014). Towards emotional regulation through neurofeedback. AH’14: Proceedings of the 5th Augmented Human International Conference. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/2582051.2582093
  • Chatterjee, A. (2010). Disembodying cognition. Language and Cognition, 2(1), 79–116. https://doi.org/10.1515/LANGCOG.2010.004
  • Clark, A. (2007). Re-inventing ourselves: The plasticity of embodiment, sensing, and mind. The Journal of Medicine and Philosophy: A Forum for Bioethics and Philosophy of Medicine, 32(3), 263–282. https://doi.org/10.1080/03605310701397024
  • Clark, A., & Chalmers, D. (1998). The extended mind. Analysis, 58(1), 7–19. https://doi.org/10.1093/analys/58.1.7
  • Conson, M., Sacco, S., Sarà, M., Pistoia, F., Grossi, D., & Trojano, L. (2008). Selective motor imagery defect in patients with locked-in syndrome. Neuropsychologia, 46(11), 2622–2628. https://doi.org/10.1016/j.neuropsychologia.2008.04.015
  • Dal Seno, B., Matteucci, M., & Mainardi, L. T. (2010). The utility metric: A novel method to assess the overall performance of discrete brain–computer interfaces. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 18(1), 20–28. https://doi.org/10.1109/TNSRE.2009.2032642
  • DARPA Biological Technologies Office. (2018). HR001118S0029, next-generation non-surgical neurotechnology. Retrieved from https://www.grants.gov/web/grants/view-opportunity.html?oppId=302121
  • Decety, J. (1996). Do imagined and executed actions share the same neural substrate? Cognitive Brain Research, 3(2), 87–93. https://doi.org/10.1016/0926-6410(95)00033-X
  • Degenhart, A. D., Bishop, W. E., Oby, E. R., Tyler-Kabara, E. C., Chase, S. M., Batista, A. P., & Yu, B. M. (2020). Stabilization of a brain–computer interface via the alignment of low-dimensional spaces of neural activity. Nature Biomedical Engineering, 4(7), 672–685. https://doi.org/10.1038/s41551-020-0542-9
  • Dell, G. S. (1990). Effects of frequency and vocabulary type on phonological speech errors. Language and cognitive processes, 5(4), 313–349. https://doi.org/10.1080/01690969008407066
  • Dijk, J. V. (2009). Cognition is not what it used to be: Reconsidering usability from an embodied embedded cognition perspective. Human Technology: An Interdisciplinary Journal on Humans in ICT Environments, 5(1), 29–46. https://doi.org/10.17011/ht/urn.20094141409
  • Dijkstra, K., Kaschak, M. P., & Zwaan, R. A. (2007). Body posture facilitates retrieval of autobiographical memories. Cognition, 102(1), 139–149. https://doi.org/10.1016/j.cognition.2005.12.009
  • Dourish, P. (2004). Where the action is: The foundations of embodied interaction. MIT press.
  • Dourish, P. (2013). Epilogue: Where the action was, wasn’t, should have been, and might yet be. ACM Transactions on Computer-Human Interaction, 20(1), 4. https://doi.org/10.1145/2442106.2442108
  • Downey, J. E., Schwed, N., Chase, S. M., Schwartz, A. B., & Collinger, J. L. (2018). Intracortical recording stability in human brain-computer interface users. Journal of Neural Engineering, 15(4), 046016. https://doi.org/10.1088/1741-2552/aab7a0
  • Dreyfus, H. L. (2002). Intelligence without representation – Merleau-Ponty’s critique of mental representation the relevance of phenomenology to scientific explanation. Phenomenology and the Cognitive Sciences, 1(4), 367–383. https://doi.org/10.1023/A:1021351606209
  • Ebrahimi, T., Vesin, J., & Garcia, G. (2003). Brain-computer interface in multimedia communication. IEEE signal processing magazine, 20(1), 14–24. https://doi.org/10.1109/MSP.2003.1166626
  • van Erp, J., Lotte, F., & Tangermann, M. (2012). Brain-computer interfaces: Beyond medical applications. Computer, 45(4), 26–34. https://doi.org/10.1109/MC.2012.107
  • Eugster, M. J. A., Ruotsalo, T., Spapé, M. M., Barral, O., Ravaja, N., Jacucci, G., & Kaski, S. (2016). Natural brain-information interfaces: Recommending information by relevance inferred from human brain signals. Scientific reports, 6(1), 38580. https://doi.org/10.1038/srep38580
  • Fairclough, S. H. (2009). Fundamentals of physiological computing. Interacting with Computers, 21(1–2), 133–145. https://doi.org/10.1016/j.intcom.2008.10.011
  • Fairclough, S. H. (2011). Physiological computing: Interfacing with the human nervous system. In J. Westerink, M. Krans, & M. Ouwerkerk (Eds.), Sensing emotions: The impact of context on experience measurements (pp. 1–20). Springer Netherlands. https://doi.org/10.1007/978-90-481-3258-4_1
  • Fairclough, S. H., & Gilleade, K. (2014). Meaningful interaction with physiological computing. In S. H. Fairclough & K. Gilleade (Eds.), Advances in physiological computing (pp. 1–16). Springer London. https://doi.org/10.1007/978-1-4471-6392-3_1
  • Farwell, L. A., & Donchin, E. (1988). Talking off the top of your head: Toward a mental prosthesis utilizing event-related brain potentials. Electroencephalography and Clinical Neurophysiology, 70(6), 510–523. https://doi.org/10.1016/0013-4694(88)90149-6
  • Fatourechi, M., Bashashati, A., Ward, R. K., & Birch, G. E. (2007). EMG and EOG artifacts in brain computer interface systems: A survey. Clinical Neurophysiology, 118(3), 480–494. https://doi.org/10.1016/j.clinph.2006.10.019
  • Feit, A. M., Weir, D., & Oulasvirta, A. (2016). How we type: Movement strategies and performance in everyday typing. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, 4262–4273. New York, NY, USA: ACM. https://doi.org/10.1145/2858036.2858233
  • Feng, H., Golshan, H. M., & Mahoor, M. H. (2018). A wavelet-based approach to emotion classification using EDA signals. Expert Systems with Applications, 112, 77–86. https://doi.org/10.1016/j.eswa.2018.06.014
  • Ferdinando, H., Seppänen, T., & Alasaarela, E. (2016). Comparing features from ECG pattern and HRV analysis for emotion recognition system. 2016 IEEE Conference on Computational Intelligence in Bioinformatics and Computational Biology (CIBCB), 1–6. https://doi.org/10.1109/CIBCB.2016.7758108
  • Ferreira, F., Apel, J., & Henderson, J. M. (2008). Taking a new look at looking at nothing. Trends in Cognitive Sciences, 12(11), 405–410. https://doi.org/10.1016/j.tics.2008.07.007
  • Fitts, P. M., & Seeger, C. M. (1953). S-r compatibility: Spatial characteristics of stimulus and response codes. Journal of Experimental Psychology, 46(3), 199–210. https://doi.org/10.1037/h0062827
  • Fodor, J. A. (1975). The language of thought (Vol. 5). Harvard University Press.
  • Fodor, J. A. (1983). The modularity of mind. MIT Press.
  • Fridlund, A. J., & Cacioppo, J. T. (1986). Guidelines for human electromyographic research. Psychophysiology, 23(5), 567–589. https://doi.org/10.1111/j.1469-8986.1986.tb00676.x
  • Gallese, V., & Lakoff, G. (2005). The brain’s concepts: The role of the sensory-motor system in conceptual knowledge. Cognitive Neuropsychology, 22(3–4), 455–479. https://doi.org/10.1080/02643290442000310
  • Gibson, J. J. (1977). The theory of affordances. In R. Shaw & J. Bransford (Eds.), Perceiving, acting, and knowing: Toward an ecological psychology (pp. 67–82). Hillsdale, NJ: Erlbaum.
  • Glenberg, A. M., & Kaschak, M. P. (2002). Grounding language in action. Psychonomic Bulletin & Review, 9(3), 558–565. https://doi.org/10.3758/BF03196313
  • Gori, J., Rioul, O., & Guiard, Y. (2018). Speed-accuracy tradeoff: A formal information-theoretic transmission scheme (FITTS). ACM Transactions on Computer-Human Interaction, 25(5), 33. https://doi.org/10.1145/3231595
  • Gramann, K., Ferris, D. P., Gwin, J., & Makeig, S. (2014). Imaging natural cognition in action. International Journal of Psychophysiology, 91(1), 22–29. https://doi.org/10.1016/j.ijpsycho.2013.09.003
  • Grau, C., Ginhoux R., Riera A., Nguyen, T. L., Chauvat, H., Berg, M., Amengual, J. L., Pascual-Leone, A., & Ruffini, G. (2014). Conscious brain-to-brain communication in humans using non-invasive technologies. PLoS One, 9(8), 1–6. https://doi.org/10.1371/journal.pone.0105225
  • Grèzes, J., & Decety, J. (2001). Functional anatomy of execution, mental simulation, observation, and verb generation of actions: A meta-analysis. Human Brain Mapping, 12(1), 1–19.
  • Guillot, A., DiRienzo, F., MacIntyre, T., Moran, A., & Collet, C. (2012). Imagining is not doing but involves specific motor commands: A review of experimental data related to motor inhibition. Frontiers in Human Neuroscience, 6. https://doi.org/10.3389/fnhum.2012.00247
  • Guillot, A., Lebon, F., Rouffet, D., Champely, S., Doyon, J., & Collet, C. (2007). Muscular responses during motor imagery as a function of muscle contraction types. International Journal of Psychophysiology, 66(1), 18–27. https://doi.org/10.1016/j.ijpsycho.2007.05.009
  • Gwin, J. T., Gramann, K., Makeig, S., & Ferris, D. P. (2010). Removal of movement artifact from high-density EEG recorded during walking and running. Journal of Neurophysiology, 103(6), 3526–3534. https://doi.org/10.1152/jn.00105.2010
  • Hammer, E. M., Halder, S., Blankertz, B., Sannelli, C., Dickhaus, T., Kleih, S., Müller, K. R., Kübler, A. (2012). Psychological predictors of SMR-BCI performance. Biological Psychology, 89(1), 80–86. https://doi.org/10.1016/j.biopsycho.2011.09.006
  • Harjunen, V. J., Spapé, M., Ahmed, I., Jacucci, G., & Ravaja, N. (2018). Persuaded by the machine: The effect of virtual nonverbal cues and individual differences on compliance in economic bargaining. Computers in Human Behavior, 87, 384–394. https://doi.org/10.1016/j.chb.2018.06.012
  • Harrison, S., Tatar, D., & Sengers, P. (2007). The three paradigms of HCI. Alt.chi session at the SIGCHI Conference on Human Factors in Computing Systems, San Jose, California, USA, 1–18.
  • Haselager, P., Groot, A. D., & Rappard, H. V. (2003). Representationalism vs. anti-representationalism: A debate for the sake of appearance. Philosophical Psychology, 16(1), 5–24. https://doi.org/10.1080/0951508032000067761
  • Hassib, M., Schneegass, S., Eiglsperger, P., Henze, N., Schmidt, A., & Alt, F. (2017). EngageMeter: A system for implicit audience engagement sensing using electroencephalography. Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, 5114–5119. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/3025453.3025669
  • Haufe, S., Treder, M. S., Gugler, M. F., Sagebaum, M., Curio, G., & Blankertz, B. (2011). EEG potentials predict upcoming emergency brakings during simulated driving. Journal of Neural Engineering, 8(5), 056001. https://doi.org/10.1088/1741-2560/8/5/056001
  • Heersmink, R. (2015). Dimensions of integration in embedded and extended cognitive systems. Phenomenology and the Cognitive Sciences, 14(3), 577–598. https://doi.org/10.1007/s11097-014-9355-1
  • He, B., Gao, S., Yuan, H., & Wolpaw, J. R. (2013). Brain–computer interfaces. In Neural engineering (pp. 87–151). Springer. https://doi.org/10.1007/978-3-030-43395-6_4
  • Hochberg, L. R., Bacher, D., Jarosiewicz, B., Masse, N. Y., Simeral, J. D., Vogel, J., Haddadin, S., Liu, J., Cash, S. S., van der Smagt, P., & Donoghue, J. P. (2012). Reach and grasp by people with tetraplegia using a neurally controlled robotic arm. Nature, 485(7398), 372. https://doi.org/10.1038/nature11076
  • Hollan, J., Hutchins, E., & Kirsh, D. (2000). Distributed cognition: Toward a new foundation for human-computer interaction research. ACM Transactions on Computer-Human Interaction, 7(2), 174–196. https://doi.org/10.1145/353485.353487
  • Hornecker, E., & Buur, J. (2006). Getting a grip on tangible interaction: A framework on physical space and social interaction. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 437–446. New York, NY, USA: ACM. https://doi.org/10.1145/1124772.1124838
  • Horvitz, E., Breese, J., Heckerman, D., Hovel, D., & Rommelse, K. (1998). The Lumière project: Bayesian user modeling for inferring the goals and needs of software users. Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence, 256–265. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc. Retrieved from http://dl.acm.org/citation.cfm?id=2074094.2074124
  • Hostetter, A. B., & Alibali, M. W. (2008). Visible embodiment: Gestures as simulated action. Psychonomic Bulletin & Review, 15(3), 495–514. https://doi.org/10.3758/PBR.15.3.495
  • Hurtienne, J., & Israel, J. H. (2007). Image schemas and their metaphorical extensions: Intuitive patterns for tangible interaction. Proceedings of the 1st International Conference on Tangible and Embedded Interaction, 127–134. New York, NY, USA: ACM. https://doi.org/10.1145/1226969.1226996
  • Hutchins, E. (1995). Cognition in the wild. MIT Press.
  • Iriki, A., Tanaka, M., & Iwamura, Y. (1996). Coding of modified body schema during tool use by macaque postcentral neurones. Neuroreport, 7(14), 2325–2330. Retrieved from http://europepmc.org/abstract/MED/8951846
  • Ishii, H., Lakatos, D., Bonanni, L., & Labrune, J. -B. (2012). Radical atoms: Beyond tangible bits, toward transformable materials. Interactions, 19(1), 38–51. https://doi.org/10.1145/2065327.2065337
  • Ishii, H., & Ullmer, B. (1997). Tangible bits: Towards seamless interfaces between people, bits and atoms. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, 234–241. New York, NY, USA: ACM. https://doi.org/10.1145/258549.258715
  • Jacob, R. J. K. (1990). What you look at is what you get: Eye movement-based interaction techniques. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 11–18. New York, NY, USA: ACM. https://doi.org/10.1145/97243.97246
  • Jacob, R. J. K., Girouard, A., Hirshfield, L. M., Horn, M. S., Shaer, O., Solovey, E. T., & Zigelbaum, J. (2008). Reality-based interaction: A framework for post-WIMP interfaces. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 201–210. New York, NY, USA: ACM. https://doi.org/10.1145/1357054.1357089
  • Jeannerod, M. (1994). The representing brain: Neural correlates of motor intention and imagery. The Behavioral and Brain Sciences, 17(2), 187–202. https://doi.org/10.1017/S0140525X00034026
  • Kangassalo, L., Spapé, M., & Ruotsalo, T. (2020). Neuroadaptive modelling for generating images matching perceptual categories. Scientific Reports, 10(1), 14719. https://doi.org/10.1038/s41598-020-71287-1
  • Kapur, A., Kapur, S., & Maes, P. (2018). AlterEgo: A personalized wearable silent speech interface. 23rd International Conference on Intelligent User Interfaces, 43–53. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/3172944.3172977
  • Kirsh, D. (1995). The intelligent use of space. Artificial Intelligence, 73(1–2), 31–68. https://doi.org/10.1016/0004-3702(94)00017-U
  • Kirsh, D. (2019). When is a mind extended? In M. Colombo, E. Irvine, & M. Stapleton (Eds.), Andy Clark and his critics (pp. 128–142). Oxford University Press. https://doi.org/10.1093/oso/9780190662813.003.0011
  • Kirsh, D., & Maglio, P. (1994). On distinguishing epistemic from pragmatic action. Cognitive Science, 18(4), 513–549. https://doi.org/10.1207/s15516709cog1804_1
  • Klemmer, S. R., Hartmann, B., & Takayama, L. (2006). How bodies matter: Five themes for interaction design. Proceedings of the 6th Conference on Designing Interactive Systems, 140–149. New York, NY, USA: ACM. https://doi.org/10.1145/1142405.1142429
  • Kline, J. E., Huang, H. J., Snyder, K. L., & Ferris, D. P. (2015). Isolating gait-related movement artifacts in electroencephalography during human walking. Journal of Neural Engineering, 12(4), 046022. https://doi.org/10.1088/1741-2560/12/4/046022
  • Kosslyn, S. M. (1994). Image and brain: The resolution of the imagery debate. MIT Press.
  • Krepki, R., Blankertz, B., Curio, G., & Müller, K. -R. (2007). The Berlin brain-computer interface (BBCI) – towards a new communication channel for online control in gaming applications. Multimedia Tools and Applications, 33(1), 73–90. https://doi.org/10.1007/s11042-006-0094-3
  • Kronegg, J., Voloshynovskyy, S., & Pun, T. (2005). Analysis of bit-rate definitions for brain-computer interfaces. Proceedings of the 2005 International Conference on Human-Computer Interaction (HCI’05).
  • Kuber, R., & Wright, F. P. (2013). Augmenting the instant messaging experience through the use of brain–computer interface and gestural technologies. International Journal of Human-Computer Interaction, 29(3), 178–191. https://doi.org/10.1080/10447318.2012.702635
  • Kübler, A., & Birbaumer, N. (2008). Brain–computer interfaces and communication in paralysis: Extinction of goal directed thinking in completely paralysed patients? Clinical Neurophysiology, 119(11), 2658–2666. https://doi.org/10.1016/j.clinph.2008.06.019
  • Kyselo, M., & Di Paolo, E. (2015). Locked-in syndrome: A challenge for embodied cognitive science. Phenomenology and the Cognitive Sciences, 14(3), 517–542. https://doi.org/10.1007/s11097-013-9344-9
  • Lakoff, G., & Johnson, M. (2008). Metaphors we live by. University of Chicago Press.
  • Prinzel, L. J., Scerbo, M. W., Freeman, F. G., & Mikulka, P. J. (1995). A bio-cybernetic system for adaptive automation. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 39(21), 1365–1369. https://doi.org/10.1177/154193129503902102
  • Levy, S. (2017). Brain-machine interface isn’t sci-fi anymore. Retrieved from https://www.wired.com/story/brain-machine-interface-isnt-sci-fi-anymore
  • Lopes, P., Jonell, P., & Baudisch, P. (2015). Affordance++: Allowing objects to communicate dynamic use. Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, 2515–2524. New York, NY, USA: ACM. https://doi.org/10.1145/2702123.2702128
  • Luciw, M. D., Jarocka, E., & Edin, B. B. (2014). Multi-channel EEG recordings during 3,936 grasp and lift trials with varying weight and friction. Scientific Data, 1(1), 140047. https://doi.org/10.1038/sdata.2014.47
  • Maes, P., Shneiderman, B., & Miller, J. (1997). Intelligent software agents vs. user-controlled direct manipulation: A debate. In CHI ’97 extended abstracts on human factors in computing systems (pp. 105–106). ACM. https://doi.org/10.1145/1120212.1120281
  • Mahon, B. Z., & Caramazza, A. (2008). A critical look at the embodied cognition hypothesis and a new proposal for grounding conceptual content. Journal of Physiology-Paris, 102(1–3), 59–70. https://doi.org/10.1016/j.jphysparis.2008.03.004
  • Makeig, S., Gramann, K., Jung, T. -P., Sejnowski, T. J., & Poizner, H. (2009). Linking brain, mind and behavior. International Journal of Psychophysiology, 73(2), 95–100. https://doi.org/10.1016/j.ijpsycho.2008.11.008
  • Makin, J. G., Moses, D. A., & Chang, E. F. (2020). Machine translation of cortical activity to text with an encoder–decoder framework. Nature Neuroscience, 23(4), 575–582. https://doi.org/10.1038/s41593-020-0608-8
  • Maravita, A., & Iriki, A. (2004). Tools for the body (schema). Trends in Cognitive Sciences, 8(2), 79–86. https://doi.org/10.1016/j.tics.2003.12.008
  • Marshall, P., & Hornecker, E. (2013). Theories of embodiment in HCI. In The SAGE handbook of digital technology research (pp. 144–158). Sage. https://doi.org/10.4135/9781446282229.n11
  • Meteyard, L., Cuadrado, S. R., Bahrami, B., & Vigliocco, G. (2012). Coming of age: A review of embodiment and the neuroscience of semantics. Cortex, 48(7), 788–804. https://doi.org/10.1016/j.cortex.2010.11.002
  • Miller, G. A., Galanter, E., & Pribram, K. H. (1960). Plans and the structure of behavior. Henry Holt and Co.
  • Mizuguchi, N., Yamagishi, T., Nakata, H., & Kanosue, K. (2015). The effect of somatosensory input on motor imagery depends upon motor imagery capability. Frontiers in Psychology, 6, 104. https://doi.org/10.3389/fpsyg.2015.00104
  • Nalborczyk, L., Grandchamp, R., Koster, E. H. W., Perrone-Bertolotti, M., & Lœvenbruck, H. (2020). Can we decode phonetic features in inner speech using surface electromyography? PLoS One, 15(5), 1–27. https://doi.org/10.1371/journal.pone.0233282
  • Newell, A., & Card, S. K. (1985). The prospects for psychological science in human-computer interaction. Human-Computer Interaction, 1(3), 209–242. https://doi.org/10.1207/s15327051hci0103_1
  • Niedenthal, P. M. (2007). Embodying emotion. Science, 316(5827), 1002–1005. https://doi.org/10.1126/science.1136930
  • Oatley, K., & Johnson-Laird, P. N. (2014). Cognitive approaches to emotions. Trends in Cognitive Sciences, 18(3), 134–140. https://doi.org/10.1016/j.tics.2013.12.004
  • O’Hara, K., Sellen, A., & Harper, R. (2011). Embodiment in brain-computer interaction. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 353–362. New York, NY, USA: ACM. https://doi.org/10.1145/1978942.1978994
  • Pandarinath, C., Nuyujukian, P., Blabe, C. H., Sorice, B. L., Saab, J., Willett, F. R., Hochberg, L. R., Shenoy, K. V., Henderson, J. M. (2017). High performance communication by people with paralysis using an intracortical brain-computer interface. eLife, 6, e18554. https://doi.org/10.7554/eLife.18554
  • Penaloza, C. I., & Nishio, S. (2018). BMI control of a third arm for multitasking. Science Robotics, 3(20), eaat1228. https://doi.org/10.1126/scirobotics.aat1228
  • Perge, J. A., Homer, M. L., Malik, W. Q., Cash, S., Eskandar, E., Friehs, G., Donoghue, J. P., Hochberg, L. R. (2013). Intra-day signal instabilities affect decoding performance in an intracortical neural interface system. Journal of Neural Engineering, 10(3), 036004. https://doi.org/10.1088/1741-2560/10/3/036004
  • Pouw, W. T. J. L., Nooijer, J. A. D., Gog, T. V., Zwaan, R. A., & Paas, F. (2014). Toward a more embedded/extended perspective on the cognitive function of gestures. Frontiers in Psychology, 5, 359. https://doi.org/10.3389/fpsyg.2014.00359
  • Prinz, J. (2008). Embodied emotions. In W. G. Lycan & J. Prinz (Eds.), Mind and cognition: An anthology (pp. 839–849). Blackwell Publishing.
  • Putze, F., Putze, S., Sagehorn, M., Micek, C., & Solovey, E. T. (2022). Understanding HCI practices and challenges of experiment reporting with brain signals: Towards reproducibility and reuse. ACM Transactions on Computer-Human Interaction, 29(4), 1–43. https://doi.org/10.1145/3490554
  • Randolph, A. B., Jackson, M. M., & Karmakar, S. (2010). Individual characteristics and their effect on predicting mu rhythm modulation. International Journal of Human–Computer Interaction, 27(1), 24–37. https://doi.org/10.1080/10447318.2011.535750
  • Rao, R., Stocco, A., Bryan M., Sarma, D., Youngquist, T. M., Wu, J., & Prat, C. S. (2014). A direct brain-to-brain interface in humans. PLoS One, 9(11), 1–12. https://doi.org/10.1371/journal.pone.0111332
  • Rizzolatti, G., Fadiga, L., Gallese, V., & Fogassi, L. (1996). Premotor cortex and the recognition of motor actions. Cognitive Brain Research, 3(2), 131–141. https://doi.org/10.1016/0926-6410(95)00038-0
  • Robertson, T. (2002). The public availability of actions and artefacts. Computer Supported Cooperative Work (CSCW), 11(3), 299–316. https://doi.org/10.1023/A:1021214827446
  • Roelofs, A. (1992). A spreading-activation theory of lemma retrieval in speaking. Cognition, 42(1–3), 107–142. https://doi.org/10.1016/0010-0277(92)90041-F
  • Rohrer, T. (2007). The body in space: Dimensions of embodiment. In T. Ziemke, J. Zlatev, & R. M. Frank (Eds.), Body, language and mind (Vol. 1, pp. 339–378). Mouton de Gruyter.
  • Rothwell, J., Thompson, P., Day, B., Boyd, S., & Marsden, C. (1991). Stimulation of the human motor cortex through the scalp. Experimental Physiology, 76(2), 159–200. https://doi.org/10.1113/expphysiol.1991.sp003485
  • Rumelhart, D. E., McClelland, J. L., & the PDP Research Group (Eds.). (1986). Parallel distributed processing: Explorations in the microstructure of cognition (Vol. 1: Foundations). MIT Press.
  • Schalk, G. (2008). Brain–computer symbiosis. Journal of Neural Engineering, 5(1), 1. https://doi.org/10.1088/1741-2560/5/1/p01
  • Seow, S. C. (2005). Information theoretic models of HCI: A comparison of the Hick-Hyman law and Fitts’ law. Human–Computer Interaction, 20(3), 315–352. https://doi.org/10.1207/s15327051hci2003_3
  • Shu, X., Yao, L., Sheng, X., Zhang, D., & Zhu, X. (2017). Enhanced motor imagery-based BCI performance via tactile stimulation on unilateral hand. Frontiers in Human Neuroscience, 11, 585. https://doi.org/10.3389/fnhum.2017.00585
  • Simon, H. A., & Newell, A. (1971). Human problem solving: The state of the theory in 1970. The American Psychologist, 26(2), 145–159. https://doi.org/10.1037/h0030806
  • Smolensky, P. (1988). On the proper treatment of connectionism. The Behavioral and Brain Sciences, 11(1), 1–23. https://doi.org/10.1017/S0140525X00052432
  • Spapé, M., Filetti, M., Eugster, M. J. A., Jacucci, G., & Ravaja, N. (2015). Human computer interaction meets psychophysiology: A critical perspective. In B. Blankertz, G. Jacucci, L. Gamberini, A. Spagnolli, & J. Freeman (Eds.), Symbiotic interaction: 4th international workshop (pp. 145–158). Springer International Publishing. https://doi.org/10.1007/978-3-319-24917-9_16
  • Spapé, M., Harjunen, V., Ahmed, I., Jacucci, G., & Ravaja, N. (2019). The semiotics of the message and the messenger: How nonverbal communication affects fairness perception. Cognitive, Affective, & Behavioral Neuroscience, 19(5), 1259–1272. https://doi.org/10.3758/s13415-019-00738-8
  • Steinert, S., & Friedrich, O. (2020). Wired emotions: Ethical issues of affective brain–computer interfaces. Science and Engineering Ethics, 26(1), 351–367. https://doi.org/10.1007/s11948-019-00087-2
  • Sternberg, S. (1969). Memory-scanning: Mental processes revealed by reaction-time experiments. American Scientist, 57(4), 421–457. Retrieved from http://www.jstor.org/stable/27828738
  • Stetson, D. S., Albers, J. W., Silverstein, B. A., & Wolfe, R. A. (1992). Effects of age, sex, and anthropometric factors on nerve conduction measures. Muscle & Nerve, 15(10), 1095–1104. https://doi.org/10.1002/mus.880151007
  • Suchman, L. (1987). Plans and situated actions: The problem of human-machine communication. Cambridge University Press.
  • Suchman, L. (1993). Response to Vera and Simon’s situated action: A symbolic interpretation. Cognitive Science, 17(1), 71–75. https://doi.org/10.1207/s15516709cog1701_5
  • Svanæs, D. (2013). Interaction design for and with the lived body: Some implications of merleau-ponty’s phenomenology. ACM Transactions on Computer-Human Interaction, 20(1), 1–30. https://doi.org/10.1145/2442106.2442114
  • Tan, D., & Nijholt, A. (2010). Brain-computer interfaces and human-computer interaction. In D. S. Tan & A. Nijholt (Eds.), Brain-computer interfaces: Applying our minds to human-computer interaction (pp. 3–19). Springer London. https://doi.org/10.1007/978-1-84996-272-8_1
  • Tao, J., & Tan, T. (2005). Affective computing: A review. In J. Tao, T. Tan, & R. W. Picard (Eds.), Affective computing and intelligent interaction (pp. 981–995). Springer. https://doi.org/10.1007/11573548_125
  • Thomas, L. E., & Lleras, A. (2009). Swinging into thought: Directed movement guides insight in problem solving. Psychonomic Bulletin & Review, 16(4), 719–723. https://doi.org/10.3758/PBR.16.4.719
  • Thurlings, M. E., van Erp, J. B., Brouwer, A. -M., Blankertz, B., & Werkhoven, P. (2012). Control-display mapping in brain–computer interfaces. Ergonomics, 55(5), 564–580. https://doi.org/10.1080/00140139.2012.661085
  • Tomasello, M., Carpenter, M., Call, J., Behne, T., & Moll, H. (2005). Understanding and sharing intentions: The origins of cultural cognition. The Behavioral and Brain Sciences, 28(5), 675–691. https://doi.org/10.1017/S0140525X05000129
  • Torres, E. P., Torres, E. A., Hernández-Álvarez, M., & Yoo, S. G. (2020). EEG-based BCI emotion recognition: A survey. Sensors, 20(18), 5083. https://doi.org/10.3390/s20185083
  • Townsend, G., LaPallo, B. K., Boulay, C. B., Krusienski, D. J., Frye, G. E., Hauser, C. K., Schwartz, N. E., Vaughan, T. M., Wolpaw, J. R., Sellers, E. W. (2010). A novel P300-based brain–computer interface stimulus presentation paradigm: Moving beyond rows and columns. Clinical Neurophysiology, 121(7), 1109–1120. https://doi.org/10.1016/j.clinph.2010.01.030
  • Underkoffler, J., & Ishii, H. (1999). Urp: A luminous-tangible workbench for urban planning and design. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 386–393. New York, NY, USA: ACM. https://doi.org/10.1145/302979.303114
  • Varela, F. J., Thompson, E., & Rosch, E. (2017). The embodied mind (revised ed.). MIT Press.
  • Vera, A. H., & Simon, H. A. (1993). Situated action: A symbolic interpretation. Cognitive Science, 17(1), 7–48. https://doi.org/10.1207/s15516709cog1701_2
  • Vi, C. T., Jamil, I., Coyle, D., & Subramanian, S. (2014). Error related negativity in observing interactive tasks. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 3787–3796. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/2556288.2557015
  • Wang, X., Ley, A., Koch, S., Lindlbauer, D., Hays, J., Holmqvist, K., & Alexa, M. (2019). The mental image revealed by gaze tracking. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. New York, NY, USA: ACM. https://doi.org/10.1145/3290605.3300839
  • Wang, Z., Wang, S., Shi, F. -Y., Guan, Y., Wu, Y., Zhang, L. -L., Shen, C., Zeng, Y.-W., Wang, D.-H., Zhang, J. (2014). The effect of motor imagery with specific implement in expert badminton player. Neuroscience, 275, 102–112. https://doi.org/10.1016/j.neuroscience.2014.06.004
  • Weiser, M. (1991). The computer for the 21st century. Scientific American, 265(3), 94–104. https://doi.org/10.1038/scientificamerican0991-94
  • Willett, F. R., Avansino, D. T., Hochberg, L. R., Henderson, J. M., & Shenoy, K. V. (2021). High-performance brain-to-text communication via handwriting. Nature, 593(7858), 249–254. https://doi.org/10.1038/s41586-021-03506-2
  • Williamson, J., Murray-Smith, R., Blankertz, B., Krauledat, M., & Müller, K. -R. (2009). Designing for uncertain, asymmetric control: Interaction design for brain–computer interfaces. International Journal of Human-Computer Studies, 67(10), 827–841. https://doi.org/10.1016/j.ijhcs.2009.05.009
  • Wilson, M. (2002). Six views of embodied cognition. Psychonomic Bulletin & Review, 9(4), 625–636. https://doi.org/10.3758/BF03196322
  • Wilson, R. A., & Foglia, L. (2017). Embodied cognition. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy. Metaphysics Research Lab, Stanford University. (Spring 2017) https://plato.stanford.edu/archives/spr2017/entries/embodied-cognition/
  • Wilson, N. L., & Gibbs, R. W., Jr. (2007). Real and imagined body movement primes metaphor comprehension. Cognitive Science, 31(4), 721–731. https://doi.org/10.1080/15326900701399962
  • Wolpaw, J. R., Birbaumer, N., Heetderks, W. J., McFarland, D. J., Peckham, P. H., Schalk, G., Donchin, E., Quatrano, L. A., Robinson, C. J., & Vaughan, T. M. (2000). Brain-computer interface technology: A review of the first international meeting. IEEE Transactions on Rehabilitation Engineering, 8(2), 164–173. https://doi.org/10.1109/TRE.2000.847807
  • Wolpaw, J. R., Birbaumer, N., McFarland, D. J., Pfurtscheller, G., & Vaughan, T. M. (2002). Brain–computer interfaces for communication and control. Clinical Neurophysiology, 113(6), 767–791. https://doi.org/10.1016/S1388-2457(02)00057-3
  • Yuksel, B. F., Oleson, K. B., Harrison, L., Peck, E. M., Afergan, D., Chang, R., & Jacob, R. J. (2016). Learn piano with BACh: An adaptive learning interface that adjusts task difficulty based on brain state. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, 5372–5384. New York, NY, USA: ACM. https://doi.org/10.1145/2858036.2858388
  • Zapała, D., Zabielska-Mendyk, E., Augustynowicz, P., Cudo, A., Jaśkiewicz, M., Szewczyk, M., Kopiś, N., Francuz, P. (2020). The effects of handedness on sensorimotor rhythm desynchronization and motor-imagery BCI control. Scientific Reports, 10(1), 2087. https://doi.org/10.1038/s41598-020-59222-w
  • Zhai, S., Kristensson, P. O., Appert, C., Anderson, T. H., & Cao, X. (2012). Foundational issues in touch-surface stroke gesture design — an integrative review. Foundations and Trends in Human–Computer Interaction, 5(2), 97–205. https://doi.org/10.1561/1100000012