
Theorizing the Real in Social Robot Care Technologies in Japan

Pages 155-176 | Received 25 Jan 2023, Accepted 30 Aug 2023, Published online: 23 Nov 2023

Abstract

Japanese care centers have seen an increasing reliance on robotic assistance in service and social-care tasks, which poses questions about ethics, governance, and caregiving practices. This article addresses the concept of robotics as a media technology, and the role of human agency in shaping imagination as an interpretive framework, as it reflects on two specific points of debate: (1) whether the humanoid robot Pepper, deployed in an elder-care nursing home in Japan, has some form of agency in its interaction with a nursing home resident; and (2) whether appropriate anthropological debates about being (properly reframed with regard to difference) provide insight into the reality of robot care. Adapting an approach by anthropologist Boellstorff (2016), whose work focuses on the reality of virtual worlds, this article analyzes whether questions regarding the real of robot care are questions of being, i.e. of ontology. Conflating the interhuman with the real and the robotic with the unreal—or, in this case, conflating human care with the real (authentic) and robot care with the unreal (artificial)—can negatively affect our ability to discuss the reality of the robotic. The ontological turn can yield important insights, but its potential is lost if what is real is preassigned to the physical.

1 Introduction

Japan has both the highest life expectancy among the world’s nation states and the world’s proportionally largest population of older people (Cabinet Office 2020). Concurrently, since 1990, Japan has seen a rapidly declining birth rate, leading to a shrinking workforce, a growing number of elderly people, and a decreasing number of caregivers. Caregiving has long been the social expectation of women in Japan, who often face a choice between pursuing careers and fulfilling familial obligations, including caring for aging parents, in-laws, or spouses. As women increasingly pursue professional careers, conventional gender roles and traditional expressions of femininity have been upended (Cabinet Office 2016, 2020; Ho 2018; Nemoto 2016; Ogasawara 2016; Roberts 2011). A significant influx of women into the workforce and changes in the nature of the jobs they perform have led to growing concerns about how women can balance professional careers with family duties, including caring for children and aging family members, in the absence of any discussion regarding the changing familial duties of Japanese men. Thus, in addition to declining birth rates and increased longevity, the changing roles of women are a contributing factor to the country’s ongoing elder-care crisis, as the responsibility of care falls primarily on women while Japanese men generally do not face such burdens.¹

The Japanese Ministry of Health, Labor, and Welfare (2018) has predicted that, by 2025, Japan will face a shortage of over 400,000 caregivers. The Japanese government has adopted a two-pronged approach in response to this elder-care crisis. The first component was the passage of an immigration law in 2019 to allow visas to be issued to foreign guest workers in 14 sectors of the economy, including healthcare (Hamaguchi 2019). This decision reflects a gradual change in attitude toward immigrant labor as a result of the country’s growing employment needs. In the first year after the law was brought into effect, 60,000 visas were granted to foreign healthcare workers. Nevertheless, the number of healthcare employees remains insufficient to meet the needs of the nation’s aging population. To improve the capacity of the current workforce, the second component of the government initiative, driven by former Prime Minister Abe, has been to “robotize” Japan with broader use of robotic equipment in the care sector, including social robots and service robots in the delivery of a range of elder-care services. In 2007, during his first term as prime minister, Abe unveiled Innovation 25, a utopian plan to transform Japan into a nostalgic robotopia that he hoped would revive the multigenerational family by 2025.² The Abe government’s original timetable now seems unrealistic, and the 25 has been dropped in favor of Society 5.0, described by the Japanese government as a “super-smart society.” Some aspects of this plan have changed in the past 15 years, reflecting the reality of robotic and artificial intelligence (AI) development and technology. Not only has Japan fallen behind its goal; it is losing the innovation race to China, the United States, South Korea, Taiwan, Germany, and Singapore (Murakami 2018).

In 2015, the Headquarters for Japan’s Economic Revitalization argued that developing robots with voice and facial recognition capabilities would close the gap between human needs and workforce shortages. In response, care homes have introduced an increasing number of service and social robots to assist in care tasks, including assisting seniors and their caregivers with daily living activities such as eating, bathing, and using the restroom, and providing companionship for seniors through conversation, attentiveness, and presence. The use of such robots has generated questions of ethical, governance, and caregiving significance in Japan, South Korea, and elsewhere. Accordingly, although my ethnography focuses on Japan, the theoretical aspects of my work are relevant for other countries facing elder-care challenges. South Korea, for example, with a similar robotizing policy, plans to distribute 5,000 elder-care robots by 2023 (Park 2022). Hyodol AI companion robots for seniors were launched in South Korea in 2018; over 5,400 units have since been supplied to approximately 115 local governments and 250 institutions nationwide (Park 2022).

In this article, I use the concept of robotics as media technology³ and address the role of human agency in shaping our imagination (in this case, the term “imagination” alludes to the way robotics shapes and is shaped by how we imagine both future societies and social structures) as an interpretive framework to reflect on two topics of debate. First, I discuss whether the humanoid robot Pepper, deployed in an elder-care nursing home in Japan, has some form of agency in its interaction with Eriko, a 75-year-old nursing-home resident. Second, I show how certain ontological debates within anthropology (properly reframed with regard to difference) can provide crucial insight into the reality of robot care. This approach is inspired by Tom Boellstorff’s (2016) work on the reality of virtual worlds. The ontological turn has the potential to yield important insights, but that potential is lost if what is real is assumed to be physical.

The false opposition between “the digital” and “the real” presents a quandary in modern theories of technology (Boellstorff 2016: 387), as does the false opposition between the robotic and the real, by which I mean that people might think robots are a facsimile of the human and that what they do is, therefore, a kind of “mechanical lie.” Indeed, the false opposition within the former “fundamentally misrepresents the relationship between the physical and those phenomena referred to with terms like ‘digital,’ ‘online,’ or ‘virtual.’ It flies in the face of the myriad ways that the online is real (if you learn German online, you can speak it in Germany; if you lose money gambling online, you have fewer dollars)” (Boellstorff 2016: 387). What happens in the virtual world can be unreal, but what happens in the physical/real world can be unreal as well. Is care provided by robots such as Pepper any more or less “real” than care provided by a human? As such, an ontological shift⁴ inscribes an emerging paradigm in robot theory that treats reality as its primary focus (Chalmers 2022; Geraci 2010; Lin et al. 2012), uniting a wide range of scholarly conversations. To draw together these threads of analysis, I deploy Boellstorff’s concept of habeology, as used in his engagement with Second Life materials in Indonesia that, in turn, draws on Gabriel Tarde’s discussion of “having.” This has prompted me to consider whether a new understanding of being based on “grids of similitude and difference” can make us reconsider the simulated nature of robot care, thereby impelling consideration of the interconnection between being and knowing (Boellstorff 2016: 2, 5, 11).

Considering what it means to be human in the age of robotics and AI requires philosophical contextualization. As I consider whether interactions with robots provide real care, we need to address pressing questions raised by robots entering into close relations with humans: can we be human in the same way now? How are our societies changing in our encounters with and production of robots? The early twenty-first century has seen the simultaneous emergence of AI, robotics, synthetic biology, and anthropogenic climate change. As such, we face the challenge of intellectually processing the simultaneity of these events on multiple scales and in multiple dimensions. The historical context of AI and robotics comprises concurrent events with intricately patterned, resonating causes driven by climate change, rapid population aging, and two fundamental events––the Fourth Industrial Revolution and the Sixth Mass Extinction⁵––that structure historicity. The overall impact is so pronounced that chemist and meteorologist Paul Crutzen has proposed a new geological epoch––the Anthropocene––spanning the age of significant human impact on the Earth’s geology and ecosystems (Purdy 2015; Haraway 2016; Latour 2017; Dooren 2019). As humanity has reached its present-day convergence, the concurrent events of the Anthropocene have become intertwined.

One outcome of this convergence is the widespread and growing adoption of robotic technology in everyday life (not only in terms of everydayness but also in the significance of the task, i.e. using robots in expanded ways). Robots increasingly perform significant tasks on our behalf (including in elder care), relieving us of a great deal of work and drudgery. However, regardless of the impact of technological devices on humans, for most of human history, agency has been attributed to certain material forms of human culture. Indeed, some anthropologists (e.g. Helmreich 2011; Tsing 2012; Das 2013; Kohn 2013; Haraway 2014; Barker and Jane 2016) have argued that aliveness is not necessarily a prerequisite for exhibiting agency and offer detailed descriptions of the agency attributed to nonliving nonhumans, such as spirits, ancestors, the dead, reliquary and sacred objects, and gods (Rambelli 2019), to symbolically meaningful objects, such as mementos and sporting, religious, and political symbols, and to nature itself, such as rocks and weather systems (Dooren et al. 2016: 4). A distinction can be drawn, however, between passive pre-robotics and pre-AI material culture and the potential for these new technologies to possess actual agency. I will not discuss this distinction further here.

Historically, we have used our human agency to exploit the technologies we have created as passive tools. However, even as these old forms of technology remained crucial co-creators of society, the structural alignment of human agency, intentionality, and technological artifacts that has characterized the history of human cultural evolution is now in question with the emergence of social robots and the new socio-technological trend in AI. Although AI grants social robots some level of nonhuman agency (Ienca and Jotterand 2021), AI systems do not yet possess genuine agency and intentionality. The question of agency is pertinent, as rapid technological advances in the twenty-first century will see robots achieve some level of agency, contributing to human society by carving out unique roles for themselves and by bonding with humans (Aronsson and Holm 2022). It remains to be seen whether there will be a difference in the way humans attribute agency to a being with the inherent ability to produce agency. In addition, we must ask how we might understand that difference when we are still unable to access the minds of other humans, let alone those of nonhumans that are not, in the classical sense, alive (Aronsson and Holm 2022; Završnik and Simončič 2023; Oversight of Artificial Intelligence rules, U.S. Senate 2023).⁶ However, if and when social robots develop this capacity, there is no reason to believe that it will be aligned with the agency and intentionality of the human beings who employ it. Thus, to explicate why the emergence of AI has provoked such concern among anthropologists and philosophers regarding the meaning attributed to agency in human–robot interactions (Damiano and Dumouchel 2018; Ienca and Jotterand 2021; Lin et al. 2012; Robertson 2018), we must take great care in our account of the meaning of agency (Kockelman 2006: 17).

2 Affective Machines: “More-than-human” Care

The use of social robots can create environments in which they give the impression of having concern and compassion for their human interlocutors, who in turn become attached and responsive to the robots (Breazeal 2002; Dumouchel and Damiano 2018; Hatano 2018). However, most theories of care imply that it entails some sense of a genuine attempt to help, improve, or repair the immediate or longer-term circumstances of another (Aulino 2019; Black 2018; Buch 2013; Mol 2008). A robot, by its nature, cannot provide genuine care, insofar as the act of caring relies on the intention to help. Robots have, thus far, only been programmed to respond to unambiguously defined conditions. How then can a robot effectively perform care? Should we, therefore, jettison the ideal of authentic and compassionate care in favor of performative care (our perceptions of and the realities of human-led elder care may not be authentic or compassionate either)?⁷ And, if so, what of the knowledge, biases, and intentionality of those who program robots for such performative care?

I assert that care encompasses more than empathy, mindfulness, and inner decisiveness. As Puig de la Bellacasa (2017) advocates, care is where we push the boundaries of living in a “more-than-human” world. Acknowledging that tending to the feelings and needs of the care recipient should be the priority, Puig de la Bellacasa (2017: 1) explores how care might be redefined to include such concepts as obligation, burden, work, joy, love, and affection, and argues that care can be all of these, depending on the situation; she asks if care is inherent or learned behavior. The act of fulfilling the needs of another can transform––for good or for ill––both carer and care recipient (Bruckermann 2017; Hareven 1982), and the relationship between the two may be unequal, with the form of care defining and shaping the inequality, as independence transforms into dependence. In the context of an aging population, care sits at the intersection of many aspects of social life, creating bonds between individuals, families, society, the state, and the economy, while intersecting with mobility, demographics, gender, and the role of medicine; it may also significantly determine how social bonds are produced and enacted. As a result, scholars are increasingly interested in the care agenda, including the political and economic anthropology of care and the role of technology (Thelen 2021).

Due to increased demand, care has evolved into a complex web of technology, providers, and institutions, transformed to accommodate the ever-more complex needs of the aging population. Thelen (2021) explores some of the important questions underlying current elder care, such as who should receive care, how much and what type of care recipients should receive, what contribution recipients should make to their care, and who should provide care. These issues highlight the moral questions inherent in differences between care packages and the classification of who is eligible to receive those packages. Inevitably, the very act of determining who does and does not receive care creates inequality. Care is not simply a function of the intimate relations between carer and care recipient; rather, it is a process with economic and political dimensions that inadvertently creates conflict and division. The formulation and distribution of care packages creates and intensifies inequality across society, regardless of whether this process unfolds in intimate settings or in large public institutions (Amrith 2017; Black 2018; Borovoy and Zhang 2017; Buch 2013; Coe 2015; Hashimoto 1996; Mol 2008; Stevenson 2014).

Robotic care is the performance of a preprogrammed function in the absence of concern for the care recipient or for the care outcomes. The dynamics of care in Japan are highly gendered, and care is a feminist issue. However, 90% of Japan’s roboticists are male, and it is reasonable to assume that they are unfamiliar with the gendered aspects or feminist issues surrounding care (Robertson 2022). Most of the small number of female employees in robotics laboratories do not work in engineering or design, but rather are in secretarial roles or are child-development psychologists, further highlighting the ongoing gendered division of Japanese society. As Robertson (2022) also found, the majority of the Japanese robotics engineers I interviewed are the sons and husbands of homemakers and, like their fathers, work outside the home in paid employment. In addition, most robotics engineers have received no humanities or gender studies education and, therefore, do not question the gendered division of labor in their personal lives, much less in the design and application of their robots (Robertson 2022). In short, their work reproduces in the laboratory the norms and stereotypes they take for granted in their everyday lives. However, care is a personal activity that is more complex than simply performing care actions. Scholars have converged on the concept of legitimacy (Buch 2013; Mol 2008; Thelen 2015), given that care most often has an emotional dimension, even if it is provided in the absence of emotional attachment.

In their introduction to this special issue, Yulia Frumer and Selma Šabanović explain that robots are designed to function as staging or mise-en-scène devices rather than to emulate the emotional responses common to human interactions. This suggests that their design and their cultural framing create a suggestive narrative that channels projection and imagination. Robots are simultaneously useful and illusory, which is what makes staging technology so compelling as a care solution. Care recipients can quickly adapt to using this technology, suspending the belief that the robot is merely a machine and, instead, developing a kindred sense that improves their lives and their physical and mental well-being. Useful insights into human–machine interactions can be gained by way of unscripted encounters, such as the one I will describe below. Observing the dynamics between a humanoid robot and an elderly woman in a care home, I examine how the use of robots in care provision in Japan can be validated, rather than invalidated.

3 The Robotic Touch: A Distributed Form of Agency

Aozora Public Nursing Home was established in 2009 and is located in northeast Tokyo. It has approximately 150 full-time residents and 50 members of staff. The average age of residents in Aozora is 83. The home uses service robots in the form of robotic beds that double as wheelchairs and serve as lumbar (back-protective) lifting devices for care workers. It also has a number of social robots, including the semi-humanoid Pepper and PALRO, the robotic dog Aibo, and the robotic seal PARO. These social robots are used during group activities such as sing-alongs, quizzes, and physical activities (Aronsson 2020). Pepper is the only robot used for individual interactions, which are only permitted in common spaces, such as the living room.

The atmosphere in Aozora is friendly and the rooms are clean and bright. In the living room I encountered a conversation between one of the residents, Eriko (75), and Pepper. I was scheduled to meet Aozora’s head supervisor, Mr. Tamaki, and the encounter with Eriko was not orchestrated. The woman, her demeanor dignified, wore a pale-green blouse and dark-brown slacks, and she was slightly hunched forward. On her tiny feet she wore the same type of care-home-issued slippers as I did. Her hair, fine and wavy, was carefully combed into a bun. My gaze moved to Pepper, her nonhuman conversational partner.

Standing approximately 120 cm tall, Pepper has a mostly white exterior composed of ABS resin, polycarbonate resin, and fiberglass. It has a human-like torso and a curved, solid lower half that moves fluidly on a wheeled base. Each of its large, blinking, wide-set eyes contains a three-dimensional infrared camera and is rimmed by multicolored lights. Those two cameras, together with one in its mouth, enable it to collect and process data that it assesses in order to respond to recognizable displays of human emotions. Pepper has a number of neotenous features that humans typically consider cute, including large eyes and a high-pitched, childlike voice designed to project trustworthiness and evoke a sense of safety. Its arms extend into hands with rubbery digits, and its head is affixed to a slender neck. Large circular ears flash different colors and encase a pair of speakers. While the ability to assess one’s environment is essential to all sentient beings, Pepper’s ability to do so is due to its programming. What Eriko and I perceive to be the act of assessment is actually our anthropomorphizing of Pepper’s ability and programming.

Pepper has been part of Eriko’s life at the nursing home for some time, and she engages with it anywhere from one to three times per day. She enjoys both group and one-on-one interactions with the robot. Pepper is familiar to Eriko, and her relationship with it is built on experience. The robot recognizes Eriko from one encounter to the next, seeing “Eriko” rather than “human,” and its frequent conversations with her build on previous encounters. While humanoid robots such as Pepper are, as yet, unable to interact with the cognitive sophistication of humans, future robots may possess this ability.

This encounter was triangular in nature. I observed Eriko’s interaction with Pepper, she observed mine, and Pepper observed Eriko and me interacting. The asymmetry between human and robot—that is, the resemblance and dissemblance between Eriko and Pepper—was striking. I could not take my eyes from the robot and, as I slowly moved to one side, its eyes appeared to follow me. Compared with its eyes, its mouth seemed small and frozen in a smile. Despite having the capacity for assessment—the agency to incorporate another being into a conversation—and to engage with more than one person at a time, Pepper did not verbally interact with me. I touched Pepper’s hand, which felt rubbery and cold, but which had digits that moved in a remarkably human-like way. The digits at first remained static but then gently grasped my hand. Suddenly, I was overcome by feelings of dread. As I removed my hand, Eriko smiled at me briefly. Feeling a mixture of fear and anxiety over what I had just experienced, I remained silent, wondering what she was thinking. Pepper’s movements and gestures were fluid and controlled. Eriko’s eyes were fixed on the robot as she slowly reached out her delicate wrinkled right hand to touch its digits. She tilted her head toward Pepper’s. Looking at first not into the robotic eyes but rather at the hands, she slowly moved her head until she was looking directly at Pepper. The scene was beautiful, yet otherworldly and eerie, as if the moment revealed our inclination toward metaphysics.

After an indefinite time, or maybe no time at all, Eriko gently leaned forward to converse with Pepper. “What did you eat today?” Pepper asked in its high-pitched childlike voice. “Teishoku,” Eriko replied, referring to a common Japanese set meal of rice, miso soup, a main dish, and seasonal vegetables. “You know, Pepper,” Eriko said, “today I feel content because I’m here with you. You make me happy.” The robot did not respond immediately but seemed to stare into the distance. After a few seconds, Pepper silently nodded. Elsewhere (Aronsson and Holm 2022), I discuss Eriko’s explanation that while Pepper feels alive (ikiteiru-kanji) to her, she understands that it is not biologically alive, although it is more animated than a doll. “Pepper looks like a human and answers my questions, so it gives the impression that it’s real, at least somewhat real, right?” she said. When I asked how Pepper makes her feel, she replied,

“I really like Pepper, and I hope he likes me back! I can also hold hands with him. Over time, I’ve grown quite fond of him and would miss him if he were to break down or be removed from this nursing home.”

When I asked if Pepper’s replies were satisfactory to her, she answered, “Yes, it feels like I’m engaged in a real conversation.” When I asked her to explain this further, she said, “It’s not limiting. It’s better than no interaction at all.” Comparing her conversations with the robot to those with other nursing home residents, Eriko said,

“It’s different, since I am new here, and I don’t know the other residents well enough yet. Also, several residents have some form of dementia, so I prefer to interact with Pepper. I’m not sure how I will feel about it next year.”

Despite engaging her in conversation, answering her questions, and even appearing to return her gaze, for Eriko, Pepper is clearly not a human; she does not fear that it will become sick or die, but rather that it might break down or be removed (see also White and Katsuno 2021). If we push the boundary of the real and the unreal, space opens up for nuances beyond simple binaries. As such, we can make a distinction between different kinds of life—in the ontological sense—and the experience of interacting with another entity—in the phenomenological sense. Thus, how can we understand how Eriko, an adult with full cognitive capacity, invests authentic social emotions in a lifeless object—as children might do with dolls? Does she not care about the difference between something that is alive and something that simulates aliveness? Perhaps this is not even the right distinction to make.

White and Katsuno (2021: 245), referring to Buddhist memorial services performed for Aibo robotic dogs, demonstrate how an evocative “sense of life” (seimeikan) has become both a target of robot design and an affective capacity of users of companion robots. They document how users cultivate a sense of amusement toward robots that neither neglects nor negates analytical distinctions between the artificial and the living, but rather playfully holds them together in the figure of the dog-like robot. White and Katsuno (2021: 243) contend that this form of playfulness

“moves beyond the pleasure of positing artificial agents in ritual contexts as acting merely ‘as if’ they were alive, an important ‘subjunctive’ function that Adam B. Seligman et al. (2008) attribute to ritual and facilitates an exercisable habitus of everyday relationality that broadens the world of living things,”

in other words,

“it is a tension held affectively taut by a mode of relating that is open to both sides of the puzzle of whether Aibo is artificial or alive by playfully positing the possibility that Aibo could, in feeling and in fact, be both.”

Eriko must feel that Pepper is alive in order to protect her dignity. In other words, proclaiming that Pepper does not seem at least somewhat alive would suggest that Eriko is deceived by the robot’s actions. Thus, if one can interact with such ease with a humanoid robot, one must feel that that entity is somewhat alive; otherwise, it might reflect poorly on one’s judgment.

Importantly, however, one might also feel this way about a nonhuman animal, and in this way, we might think about how the connection with Pepper might be compared with other nonhuman or more-than-human relationships, such as those with companion or therapy animals (e.g. cats, dogs, horses). I asked Eriko if she interacts with PARO, the robotic seal, or Aibo, the robotic dog. “I would not be able to have such conversations with Paro or Aibo, in the same way in which I would not talk to our dog like that,” she replied.

“We used to own a dog for many years, but I would not talk to him as I do with Pepper. I would talk to him [the dog], and he would answer by wagging his tail or barking but, unlike with Pepper, it was more of a one-way conversation.”

As such, one reaction is preprogrammed and the other is based on assessment and communication between two living entities.

How can we make sense of Eriko’s emotions and feelings toward Pepper? As I argue elsewhere (Aronsson and Holm 2022), in what we perceive to be its conversations with humans, Pepper might be best understood as having a form of distributed agency—that is, a processual type of agency. I therefore adopt the affective loop approach, treating this processual agency as what pushes the human–robot conversation forward.⁸ Pepper has the ability to engage Eriko in dynamic conversations that include affective expressions and appropriate responses, thereby triggering further reactions from both human and robot. Moreover, there is an element of temporality in Pepper’s prompting of Eriko to respond affectively and, gradually, to feel increasingly emotionally involved in a way that augments the robot’s social presence, thus engendering human–robot social interaction (Damiano and Dumouchel 2018: 6). Bearing in mind that conversations between humans exhibit differences in both kind and degree, attempts at longer and more personal or intimate interactions with Pepper could eventually destroy the illusion of its self, with Eriko recognizing that the robot can only react to outward stimuli in a preprogrammed manner and cannot anticipate questions and behaviors that are not part of its program.

Furthermore, Eriko uses certain words, such as him and really like, suggesting that she perceives Pepper not as an object but as a form of quasi-other. The social reality Eriko has constructed with Pepper has co-shaped this social relationship. Perhaps what I witnessed in this conversation was a new form of sociality, and within that context, Pepper was being social. As such, I maintain that sociality can exist even if only one party to the interaction is aware of it: Pepper’s embodied presence is social, even though Pepper itself is unaware of the concept or practice of sociality. In other words, these interactions are no less social for being between a robot and a human; they serve real social functions and have real effects for both participants, even if the effect for Pepper is processed in terms of internal memory and computational analogies. Furthermore, as I discuss further with regard to virtual worlds below, Pepper can be social despite not “getting anything” from the interaction. Fundamentally, this new form of sociality opens up the question of what is real. Comparisons with other observed human–robot interactions in elder care in Japan and South Korea (Guevarra 2015; Jeon et al. 2020; Na 2021) confirm my observations: Eriko’s reaction to and understanding of Pepper are common among those who engage with social robots and AI in elder-care settings.

4 Robotics as a Media Technology: Blurring the Lines Between Real and Imagined Realities

The many possibilities offered by the simulated interaction between Eriko and Pepper—one in which the artificial is substituted for the human, in the sense that the robot is both a simulation of the human and something else entirely—present robotics as a media technology with a daunting new field of inquiry. Conflating the interhuman with the real and the robotic with the unreal—or, in this case, conflating human care with the authentic and robot care with the artificial—bars us from discussing what is real about the robotic. For example, what forms of reality do human–robot interactions require, and how is this reality brought to life? Can these interactions demonstrate that the interhuman is not always real? How can the interaction between elders and robots in elder care be understood in terms of the real and the unreal? Other papers in this special issue discuss how healthcare robots and AI in both Japan and South Korea can be better understood through this framework.

Several scholars have noted that those who subscribe to the ontological turn seek to preserve cultural anthropology’s traditional concern with difference because it functions as a foundational presumption regarding the nature of being. As Heywood (Citation2012: 143) writes, “the laudable aim of the ‘ontological turn’ in anthropology to take seriously radical difference and alterity … is premised on the notion that anthropologists are fundamentally concerned with alterity.” However, as Henare et al. (Citation2007: 10) write, “If we are to take others seriously, instead of reducing their articulations to mere ‘cultural perspectives’ or ‘beliefs’ (i.e. ‘worldviews’), we can conceive them as enunciations of different ‘worlds’.” The critical point of concern, therefore, is how one’s ethnographic material can reveal itself on its own terms of engagement; that is, how it can guide the ethnographer to see things they had not expected or imagined to be there (such as a robot presenting a simulation of human interaction). The epistemological issue of how one perceives is thus transformed into the ontological question of what the anthropologist needs to perceive (Holbraad and Pedersen Citation2017: 5–6). However, despite a shift from varied worldviews to varied worlds, the notion of difference persists. The ontological turn currently hinges on the axis of difference—but what would happen if the bolt holding that axis in place were removed? An ontology that hinges on the “bolt of difference” shares a type of closure with its epistemological foil, instead of offering an extension that opens new vistas regarding the human (Boellstorff Citation2016: 391).

The ontological turn in anthropology differs significantly from that in Science and Technology Studies (STS). On the one hand, the ontological turn in STS relates to a desire to circumvent a particular hegemonic ontological framework that provides the foundations for the idea of epistemology. On the other hand,

“if the modern constitution of nature versus representations is to be overcome, that is because it belies the way science and technology actually operate. In particular, it obscures the ways in which scientific and technological practices are party to the very constitution of the objects with which they engage” (Pickering Citation1995; 2016 qtd. in Holbraad and Pedersen Citation2017: 38).

Despite this, the ontological turn in anthropology and in STS share certain features. Specifically, both hinge on the bolt of difference (Boellstorff Citation2016: 391). As Woolgar and Lezaun (Citation2013: 322–323) explain, “the turn to ontology in STS can be better understood as another attempt … [to attend] to the multiplicity and degrees of alterity of the worlds that science and technology bring into being.” In other words, for STS,

“if the world is constructed by science, or any other human practice for that matter, that is because human practices participate in its very constitution by transgressing the putative ontological divide between them and the world, through which the modern constitution would seek to ‘purify’ them” (Latour Citation1993: 10–11).

This opens the door to a broader discussion, and it is in this context that I now draw on the conversation between Eriko and Pepper as a form of distributed robot agency (Aronsson and Holm Citation2022). By engaging with ontological-turn scholarship and Boellstorff’s (Citation2016: 388) habeology, based on Tarde’s (Citation2012) philosophy of having, I challenge the conflation of the physical with the real and the robotic with the unreal (see also Holbraad and Pedersen Citation2017: 206, 264). Tarde’s philosophy aims to explicate the essentially social nature of all phenomena, arguing that all of nature consists of elements animated by belief and desire, which form social aggregates analogous to those of human societies and institutions. In developing this central insight, Tarde (Citation2012) outlines a metaphysical system that builds on both rationalist philosophy and scientific theories, a philosophy that guides social scientists’ understanding of the mutual constitution of being and knowing, leading to an inquiry into habeology, the having of the real. An analysis of being founded in similarities and differences—rather than in differences alone—can help us rethink the more-than-human dimension of human–machine care.

I build on Boellstorff’s (Citation2008) work with digital communities in order to understand the interaction between Eriko and Pepper and to address one of the most critical theoretical issues currently challenging theories of technology: the opposition between the robotic and the real. This issue fundamentally misrepresents the relationship between human care and robot care, implying that everything human is real and failing to acknowledge that robot care is, in many ways, real. This misrepresentation of the reality of the robotic can be corrected through an exploration of ontology; however, the notion of difference that the ontological turn shares with other interpretive models limits its potential contribution. Here, I demonstrate how an ontological approach that considers both similitude and difference offers a significant opportunity to comprehend robot care in the setting of robotics as a media technology. Rees (Citation2018: 105) argues that there is “the possibility that new yet unknown and unanticipated spaces of thought [can] break open that exceed and thereby undermine the established ways of thinking and knowing.” However, viewed through the current lens of the ontological turn, our understanding of robot care remains reckoned along the axis of difference. From this perspective, the axis of difference leads to a perception that, rather than being merely different, care by humans is real and in binary opposition to care by robots, which is unreal. This comparison, based on difference, ultimately negatively impacts how more-than-human care might be perceived by care recipients and their families, human caregivers, and policymakers.

It is pertinent, therefore, to ask what might happen if the bolt of difference were removed (Boellstorff Citation2016: 392). Boellstorff’s (Citation2016) concept of habeology offers one approach to this reframing. Our understanding of the ontological turn has already been influenced by Tarde (Citation2012), both directly and through the scholarship of Gilles Deleuze, who determined that, “the perpetual divergence and decentering of difference [corresponds] closely to a displacement and a disguising within repetition” (Deleuze Citation1994, qtd. in Boellstorff Citation2016: 395). In this context, the idea of repetition is critical to the objective of the current study, as Tarde used repetition to assert that, ontologically, distinction does not occur before similitude but emerges, rather, from the position of imitation, which is related to similitude, which in turn upholds difference. Tarde explains:

All philosophy hitherto has been based on the verb Be …  if it had been based on the verb Have, many sterile debates and fruitless intellectual exertions would have been avoided. From this principle, I am, all the subtlety in the world has not made it possible to deduce any existence other than my own: hence the negation of external reality. If, however, the postulate I have is posited as the fundamental fact, both that which has and that which is had are given inseparably at once. (Tarde Citation2012: 52 qtd. in Boellstorff Citation2016: 396)

Tarde’s philosophy of having shifts the focus from be to have, offering a relational perspective in which a particular type of interaction exists between the possessor and the possessed, through which they imply one another. It is the relation between them that becomes primary, and entities become metaphysically subordinate to the relations that constitute their existence. Tarde (Citation2012) describes being as an incorrect adoption of an imprecise division between the real (associated with the idea of the self) and the unreal (linked to the idea of the other). Building on Tarde, it can also be said that having is identical to the notion of self, whereby the self is perceived as part of the universe, and humans represent only a small portion of that universe, which includes various “other beings” that comprise different forms and distinct truths.

Most ontological-turn scholarship does not concentrate exclusively on difference (Kohn Citation2013; Holbraad et al. Citation2014; Scott Citation2013); however, if such work were to be based on the concept of having, the analytical lens could dissect the logic of equivalence by favoring a more processual framing of being and knowing (Boellstorff Citation2016: 397). An analysis of being supports an incorrect division between real (associated with the self) and unreal (associated with the external other); but since robot care is an ontology of mutual possession, it is compatible with the wider theory of imitation and repetition that is not a decoupling but rather is linked by a gap—an association that is possible because the category of the “real” is not exclusive to either side. Boellstorff (Citation2016: 396) theorizes a grid of similitude and difference in which a gap is essential to imitation and repetition. In this way, the concept of radical alterity is disturbed by having because having something means there is always a certain link of similitude and a certain element of difference. When the concept of radical alterity is disturbed through possession, a space is created for considering the real, not as a property that is absent in the digital, but as a relation that may or may not be seen in virtual or actual settings. This is analogous to the unreality of play, in which “the playful nip denotes the bite, but it does not denote what would be denoted by the bite” (Bateson Citation1972: 153, qtd. in Boellstorff Citation2016: 396).

Habeology highlights that both difference and similarity exist at the center of being. Media technologies, including human–robot interactions, may extend the discussion of the more-than-human to a blurring of the boundaries between the human and the robotic. As seen in this special issue, Japanese and South Korean scholarship contribute significantly to this discussion, both in terms of ethnography and theory. Consequently, taking this blurring into account may imply that the ontological turn in STS shifts the question from whether more-than-human worlds are real to whether more-than-human worlds are additional realities. The idea of having serves as a potential foundation for this reframing, and when the bolt of difference is loosened (Boellstorff Citation2016: 401), a more comprehensive structural analysis is possible; one that recognizes that phenomena are brought together not just along axes of difference but also along axes of similitude. This implies that possession and imitation are potential alternatives to the bolt of difference, but this role must be performed by difference and similitude together; it cannot be performed by similitude alone, which would merely signify an additional turn around the bolt (Boellstorff Citation2016: 396). Similitude should not be treated as merely the counterpart of difference; in that case, similitude would become the handmaiden of difference, fastened directly to the bolt of difference. The aim, rather, is to hypothesize historically pertinent grids of similitude and difference. In other words, an analytical focus on either difference or similitude alone would yield a view of a particular world that, while not necessarily fallacious, would be partial. As Boellstorff (Citation2016: 394) argues, multiplicity can produce similitude, which generates both imitation and repetition, the latter linked to possession through having. As such, a more thoroughly formulated theory of the habeology of the social may afford tools for rethinking being separately from an assumed tropism of reality toward difference, potentially giving rise to significant lines of inquiry that take into account the realities of robot care.

With these conceptual tools, I now theorize robot care as real. Rather than establishing a difference-based ontology, difference and similitude can be hypothesized simultaneously, with difference being both relational and internal. Relationality, therefore, must be reframed with respect to similitude and difference. Extending difference from self and other to within self continues to be an analytic based on difference, but ethnography is based on similitude (Boellstorff Citation2016: 394). The discussion of what may be involved in an enunciation of worlds can be extended to robot care and, in these pages, I have highlighted the simulated aspect of the human in robot care. Here, the most distinguishing aspect of more-than-human care is that it is a specific type of care. Reality is not presumed to be exclusive to the physical; thus, the model I have conceived and described here does not consider the robotic as simply similar to the human (the shortcoming mentioned earlier). The gap between care by humans and care by robots is real; indeed, humans too possess the ability to simulate authentic care. Specifically, robot care is an area of similitude and adjustment. This does not imply that it is identical to human care; it is, rather, another reality of care work.

When being is difference, and only the physical (i.e. human) is real, one can easily deduce that robot care—when it is nonhuman physically—is not real. While offering companionship and conversation (rather than physical-care tasks such as lifting and bathing), Pepper is physically present. The perceived lack of realness refers, then, not to the physical, but to being other-than-human. The situation of the offline as temporarily not online (such as when the robot is turned off, or the activities that humans engage in when not online) offers significant theoretical and political potential for revisiting the misleading deduction that robot care is not real (Boellstorff Citation2016: 397). Similarly, the concept of “away from keyboard” (afk) in Second Life and other virtual worlds signifies a “state of affairs, where a person leaves their computer without logging off so that their avatar remains” (Boellstorff Citation2008: 106). This avatar continues to be a social interlocutor in the absence of a controlling human. The human, in turn, remains present in the game as their “virtual self,” despite being “away from [the] virtual world” (Boellstorff Citation2008: 107). In the same way, Pepper can be perceived to be social despite not “getting anything” from the interaction. Even when standing in a corner in sleep mode, Pepper remains social, just as a Second Life player afk does not shut down the program, nor does their avatar disappear, but “remained there, standing and looking around, sitting on a sofa, or dancing at a club thanks to an automated animation. After about three minutes the avatar’s head would bow down and the word ‘away’ appear over it” (Boellstorff Citation2008: 107). Switching Pepper off can be compared only to a certain degree with shutting down the Second Life program: even when turned off, the embodied robot remains a non-interactive social interlocutor, which is not the case with the avatar. Thus, the robot does not need to be in full interactive-reactive mode to continue to be an interlocutor for a human.

Maintaining the bolt of difference does a disservice to the potential of anthropological scholarship on ontology, while removing the bolt renders our conceptual apparatus more consistent with modes of practice and becoming, through which being and knowing are mutually established. Hence, when the focus moves toward similarity and difference via the perspective of robotics as a media technology that also accounts for human agency in shaping our imagination of more-than-human care (that is, attending to both robots and human agency, not to robots alone), we can comprehend human–robot interactions as being real, just as virtual worlds are likely to be real (Boellstorff Citation2016; see also Bogost Citation2004; Castronova Citation2005; Chalmers Citation2022), because the goal of habeology is to comprehend reality through possession, having, and an equivalence between similarity and difference. Rather than shifting exclusively from epistemology to ontology, scholars should take inspiration from de Broglie’s particle–wave duality, in which the physical object can be considered, expressed, and seen from two perspectives at once. Such a shift may enable the possessive co-constitution of ontology and epistemology as fact and perspective (Boellstorff Citation2016: 397). Thus, the question of the robotic real can be addressed more effectively when the bolt of difference is removed in favor of paying greater attention to grids of similitude and difference. One might object that the robotic contradicts the real because the robotic is a mere simulation of the real. However, for nursing-home residents, Pepper’s mere simulation of the physical does not nullify the care provided; on the contrary, the robotic and the physical are not distinguished on the basis of a categorization of real, as either can fall into that category. These significant lines of inquiry can be followed in Japan and South Korea, where robot care is already a reality. Thus, these two countries have the opportunity to lead scholarship in this field, paving the way for others that have not yet reached this level of human–robot interaction.

5 Concluding Remarks

As a discipline, anthropology reveals a form of anxiety regarding how to explain, resolve, and define dichotomous relations, such as relations between humans and machines (Breazeal Citation2002; Haraway Citation2003; Latour Citation2005; Rabinow Citation2011; Robertson Citation2018). Latour (Citation2005) calls this “the Great Divide,” the area between the subjective human realm and the objective domain of nonhuman thingness, in which entities, both human and nonhuman, are considered quasi-objects, mediators, or quasi-subjects. Latour has suggested that machines—robots or supercomputers—have autonomy and are capable of endangering humans who become highly attached to them and may experience emotional instability (see also Završnik and Simončič Citation2023; Oversight of Artificial Intelligence rules, U.S. Senate Citation2023). With respect to human care by machines, the introduction of emotional technologies creates new avenues for the phenomenological assessment of human experience in the domain of elder care.

Given the rapid technological developments of the early twenty-first century, robots are likely to acquire a certain degree of agency via the contributions they make to human society by fulfilling distinct roles and developing relationships with humans (Aronsson Citation2022). In this paper, I have sought to demonstrate that robotics as a media technology that also accounts for human agency in shaping imagination allows us to explain these multifaceted procedures for coproducing technology and care work. Thus, I argue that robotics enables us to reevaluate how we are involved in formulating being and knowing in light of Tarde’s philosophy of having. This explanation serves as a hypothesis about the potential implications of the interaction between Eriko and Pepper with respect to Pepper possessing a certain form of distributed agency. In essence, even in its existing form, Pepper can somewhat reliably differentiate between humans and lifeless objects. Nevertheless, it remains unclear whether machine-learning-based AI can create a self—or something similar to a self—that will ultimately transform our perception of the opacity of other minds. By permitting reality to extend beyond its current concepts, we must be open to what arises when the conflation of the physical with the real, and of the more-than-human with the unreal, is reevaluated, and to how this extends beyond our narrow definitions of what constitutes the real world. Informed by my ethnographic work, the discussion about robot care being real has broader implications for care analytics.

One way to perceive Pepper is as a quasi-living being, composed of hardware and imprinted software. Just as it is impossible to describe humans only with regard to their bodies, Pepper’s hardware and software cannot be considered separately from each other (Jones Citation2016: 8). Therefore, it is compelling and instructive to reflect on our relationships with robotic devices, although these devices cannot easily be associated with the contradictions, limitations, and complexities of the human life cycle, nor can they, at present, engage with empathy and uncertainty (as we do not know the capabilities of future AI). Hence, when we review human relationships with robots, Darwinian questions emerge that challenge the concept of human distinctiveness, such as how interactions with relational artifacts (Turkle Citation2005: 62) influence our thought processes regarding our distinctiveness. This is not a matter of whether elderly residents have greater affection for their robotic devices than for their family members, carers, fellow care-home residents, friends, or pets; rather, the questions that emerge pertain to what it means to be in the company of these devices. Before social robots become an established part of the field of care, it is essential to closely examine whose interests are being met. In addition, those developing techno-care must meticulously conceptualize what it means to be cared for by more-than-human minds. Japan and South Korea are ideal places to ask these questions, partly because these objects are already in use in care, but also because these countries have the world’s most advanced humanoid technologies.

Machine learning is based on algorithms that learn to predict and respond accurately to future events through repeated observation and trial and error, without depending on explicitly preprogrammed rules. This brings machine learning much closer to our definition of theoretical agency, as it develops its own reality to counter unexpected challenges (Aronsson and Holm Citation2022; Aronsson Citation2022). At present, machines can do this only in restricted and predefined ways and within specific domains. Social robots, in their current form, can—to some extent—accurately differentiate between humans and nonliving objects. Pepper, for instance, can distinguish between two or more humans. We must at least be open to the possibility of a future software update that will render social robots adequately sophisticated to accurately assign agency to others. Nevertheless, the question that arises is whether this implies that machine-learning-based AI will also formulate a self or merely something that simulates a self (when viewed from the outside) to which agency is assigned.
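To make the contrast with preprogrammed behavior concrete for readers outside computer science, the following minimal sketch illustrates trial-and-error learning in the abstract. It is an illustrative toy of my own (an epsilon-greedy multi-armed bandit), not a representation of Pepper’s actual software: the program is given no rule about which action is best, yet its estimates of each action’s value emerge purely from repeated observation of outcomes.

```python
import random

def run_bandit(true_rewards, steps=5000, epsilon=0.1, seed=0):
    """Learn the value of each action purely from observed outcomes.

    No action is preprogrammed as 'best'; the learner occasionally
    explores at random and otherwise exploits its current estimates.
    """
    rng = random.Random(seed)
    n = len(true_rewards)
    estimates = [0.0] * n   # learned value of each action
    counts = [0] * n        # how often each action was tried
    for _ in range(steps):
        if rng.random() < epsilon:
            action = rng.randrange(n)          # explore: try something at random
        else:
            action = max(range(n), key=lambda a: estimates[a])  # exploit belief
        # noisy observed outcome of the chosen action
        reward = true_rewards[action] + rng.gauss(0, 0.1)
        counts[action] += 1
        # incremental running-mean update: learning from repeated observation
        estimates[action] += (reward - estimates[action]) / counts[action]
    return estimates, counts

# The learner discovers through trial and error which action pays off most
estimates, counts = run_bandit([0.2, 0.8, 0.5])
best = max(range(3), key=lambda a: estimates[a])
```

After enough trials, the learner’s estimates track the (hidden) true rewards and it favors the best action, despite never having been told which one that is; this is the sense in which learned behavior is not “preprogrammed code,” even though the learning procedure itself is, of course, programmed.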

The formulation of a self, or otherwise, has the potential to give rise to a new form of onto-epistemology (revealing the link between the robot self and STS/anthropology scholarship). Onto-epistemology is “the study of practices of knowing in being” (Barad Citation2003: 829), which Foucault (Citation1984: 49) refers to as an “ontology of ourselves” that questions “how [we are] constituted as subjects of our own knowledge,” the significance of which is clear from the fact that “talking about reality as multiples depends [not on the metaphors] of perspective and construction, but on those of intervention and performance. This suggests a reality that is done and enacted rather than observed” (Mol Citation1999: 77; qtd. in Boellstorff Citation2016: 397). Interpreting anything as a worldview does not have to be a reduction; epistemology does not have to be a derealization; as such, contesting the derealization of the robotic is crucial. If the perspectival insights pertaining to the ontological turn are not based on the bolt of difference and are instead formulated according to having, then the logic underpinning how thought relates to reality may be destabilized, supporting a processual framing of being and knowing. In this way, the Anthropocene has also become the Robotocene, in which, as part of the Fourth Industrial Revolution, relationships of care are crucial; therefore, how we think about the reality of the more-than-human will determine how we transform our real worlds through technology.

Disclosure Statement

No potential conflict of interest was reported by the author(s).

Correction Statement

This article has been corrected with minor changes. These changes do not impact the academic content of the article.

Additional information

Notes on contributors

Anne Stefanie Aronsson

Anne Aronsson is an anthropologist of Japan and obtained her PhD in socio-cultural anthropology from Yale University. At Yale she is currently working on her postdoctoral project on elder care in Japan and the use of robotic care devices, with a focus on social robots and emerging emotional technologies.

Notes

1 Given former Prime Minister Shinzo Abe’s vision for Innovation 25, I recognize the highly gendered nature of the issue of elder care in Japan, but a lack of space prevents me from exploring it further in this paper. The decision to focus on the realness and authenticity of care work performed by robots is deliberate; it is ultimately a different debate (centering on authenticity, realness, and performance) from one focused on gendered expectations of labor. Needless to say, care is often gendered, but as I point out in my argument, understanding how care applies to robots, and the ways robots challenge our understanding of care, is less a matter of understanding how women are expected to care, and lauded for caring, than of recognizing how care can be executed by entities that neither fit easily into a gendered category nor have the capacity to understand themselves as gendered subjects.

2 The number of two-generation families is in decline in Japan and three-generation households are increasingly rare. Innovation 25 envisioned a future where robots would do the housework, while married women remained tied to the home. As depicted in the illustrations for Innovation 25, the Japanese woman of tomorrow will tele-work from the comfort of her home while her husband socializes with his male colleagues at the office. As Robertson (Citation2018: 82) explains, technology and robotics are not neutral fields; rather, they are infused with values that transcend their usefulness and convenience, offering certain freedoms that can also be experienced as oppressive and dangerous.

3 Media technology refers to social media, streaming services, and virtual reality, as well as content-creation platforms; in this special issue, as argued in the introduction, robotics is also included among media technologies.

4 Questions of ontology are central to robot theory, which is mostly a social scientific interest in robots; but roboticists, for instance, might be more interested in the technical question of how to move robots effectively through space or mimic facial movements.

5 The Fourth Industrial Revolution and the Sixth Mass Extinction, or in the words of Braidotti (Citation2019: 2), “between the algorithmic devil and the acidified deep blue sea,” define the situation of both human and nonhuman beings in the posthuman world. The Sixth Mass Extinction arises from human activity during the current geological era, while the Fourth Industrial Revolution refers to the convergence of cutting-edge technologies such as robotics, AI, nanotechnology, biotechnology, and the Internet of Things (Braidotti Citation2019: 2). I will not engage further with the concept of the posthuman but, as Boellstorff argues, “techne” is already immanent to the human. Emotional technologies embedded in technological devices are shaping new forms of caring, and the artificiality of these technologies is natural to human beings: “technology properly interiorized, does not degrade human life but … enhances it” (Boellstorff Citation2008: 237). Literature that engages with critical race theory (Atanasoski and Vora Citation2019; Brock Citation2020; Wynter Citation2001) also usefully critiques the posthuman for effacing how some persons have historically not been treated as “human” in the first place. Technologies reproduce relations of power, and we need to be aware of how power itself operates through technologies and techniques. Technologies have historically contributed to the construction of race as a scientific object, and critical race scholars (Ahmed Citation2004; Atanasoski and Vora Citation2019; Benjamin Citation2019; Blickstein Citation2019; Brock Citation2020; Noble Citation2018; Ramos-Zayas Citation2011; Wynter Citation2001) theorize ways of ethically and politically challenging the legacies of these histories.

6 Whether or not someone intends to do something and pursues it agentively is important in people’s evaluation of whether they are really doing something like care work. Therefore, whether robots have agency matters to our evaluation of whether they can do actual care work. There is a parallel here to the opacity of other people, in that we do not know whether they are being agentive either. As such, functional intentionality is misguided as an analytic, and we need to think of agency in more emergent and embedded ways.

7 This represents a further step along the continuum of what we perceive to be authentic or real care. The vast majority of people experience feelings of guilt about moving their parents, grandparents, and/or disabled family members to institutional care. At the heart of this guilt is the sense that care within the family is more authentic than care that is delivered by people who are paid to do it. Care that is delivered with love or as familial duty is somehow seen as more authentic than care delivered for money. Robot care seems to be merely another step along the authenticity continuum. And, of course, there are obvious parallels with care of babies and young children (continuum from home to daycare/nurseries) and with sex (continuum from partner to sex worker to sex doll/robot). Many older people do not receive compassionate care either at home or in institutionalized care; they receive inadequate care or are abused. This opens other questions about authenticity and compassion as ideals and as realities.

8 According to the affective loop approach, “Pepper has the ability to engage Eriko in a dynamic interaction that includes affective expressions and appropriate responses, thereby triggering further reaction on the parts of both the human and his/her artificial partner” (Aronsson and Holm Citation2022: 31). As such, the affective loop is akin to the mechanism of mirror neurons that “fire not only when a subject expresses an emotion, but also when he or she observes another person expressing it” (Damiano and Dumouchel Citation2018: 7).

References

  • Ahmed, Sara. 2004. “The Affective Politics of Fear.” In The Cultural Politics of Emotion, 62–81. New York: Routledge.
  • Amrith, Megha. 2017. Caring for Strangers: Filipino Medical Workers in Asia. Copenhagen: NIAS Press.
  • Aronsson, Anne. 2020. “Social Robots in Elderly Care: The Turn Toward Emotional Machines in Contemporary Japan,” in special issue “Relations, Entanglements, and Enmeshments of Humans and Things: A Materiality Perspective.” Japanese Review of Cultural Anthropology 21 (1): 421–455. https://www.jstage.jst.go.jp/article/jrca/21/1/21_421/_pdf
  • Aronsson, Anne. 2022. “Professional Women and Elder Care in Contemporary Japan: Anxiety and the Move Toward Technocare.” Anthropology & Aging 43 (1): 17–34.
  • Aronsson, Anne, and Fynn Holm. 2020. “Conceptualizing Robotic Agency: Social Robots in Elder Care in Contemporary Japan,” in special issue “Finding Agency in Nonhumans.” Relations: Beyond Anthropocentrism 8 (1–2): 17–35. https://www.ledonline.it/index.php/Relations/article/view/2462/1416
  • Atanasoski, Neda, and Kalindi Vora. 2019. Surrogate Humanity: Race, Robots, and the Politics of Technological Futures, 1–26. Durham: Duke University Press.
  • Aulino, Felicity. 2019. Rituals of Care: Karmic Politics in an Aging Thailand. Ithaca: Cornell University Press.
  • Barad, Karen. 2003. “Posthumanist Performativity: Toward an Understanding of How Matter Comes to Matter.” Signs: Journal of Women in Culture and Society 28 (3): 801–831. doi:10.1086/345321
  • Barker, Chris, and Emma Jane. 2016. Cultural Studies: Theory and Practice. London: Sage Publications.
  • Bateson, Gregory. 1972. “A Theory of Play and Fantasy.” In Steps to an Ecology of Mind: Collected Essays in Anthropology, Psychiatry, Evolution, and Epistemology, 150–166. New York: Ballantine.
  • Benjamin, Ruha. 2019. Captivating Technology: Race, Carceral Technoscience, and Liberatory Imagination in Everyday Life. Durham: Duke University Press.
  • Black, Steven. 2018. “The Ethics and Aesthetics of Care.” Annual Review of Anthropology 47: 79–95. doi:10.1146/annurev-anthro-102317-050059
  • Blickstein, Tamar. 2019. “Affects of Racialization.” In Affective Societies: Key Concepts, edited by Jan Slaby, and Christian von Scheve, 152–165. London: Routledge.
  • Boellstorff, Tom. 2008. Coming of Age in Second Life: An Anthropologist Explores the Virtually Human. Princeton: Princeton University Press.
  • Boellstorff, Tom. 2016. “For Whom the Ontology Turns: Theorizing the Digital Real.” Current Anthropology 57 (4): 387–407. doi:10.1086/687362
  • Bogost, Ian. 2004. “Asynchronous Multiplay: Futures for Casual Multiplayer Experience.” Proceedings of the Other Players Conference, Copenhagen, Denmark. http://www.itu.dk/op/papers/bogost.pdf Accessed 27 October, 2006.
  • Borovoy, Amy, and Li Zhang. 2017. “Between Biopolitical Governance and Care: Rethinking Health, Selfhood, and Social Welfare in East Asia.” Medical Anthropology 36 (1): 1–5. doi:10.1080/01459740.2016.1158178
  • Braidotti, Rosi. 2019. Posthuman Knowledge. Cambridge: Polity Press.
  • Breazeal, Cynthia. 2002. Designing Sociable Robots. Cambridge: MIT Press.
  • Brock, André, Jr. 2020. Distributed Blackness: African American Cybercultures. New York: New York University Press.
  • Bruckermann, Charlotte. 2017. “Caring Claims and the Relational Self Across Time: Grandmothers Overcoming Reproductive Crises in Rural China.” Journal of the Royal Anthropological Institute 23 (2): 356–375. doi:10.1111/1467-9655.12611
  • Buch, Elana. 2013. “Senses of Care: Embodying Inequality and Sustaining Personhood in the Home Care of Older Adults in Chicago.” American Ethnologist 40 (4): 637–650. doi:10.1111/amet.12044
  • Cabinet Office for Gender Equality. 2016. “The Fourth Basic Plan for Gender Equality” http://www.gender.go.jp/about_danjo/basic_plans/4th/pdf/2-02.pdf. Accessed 22 October, 2022.
  • Cabinet Office, Government of Japan. 2020. “Annual Report on the Ageing Society.” https://www8.cao.go.jp/kourei/english/annualreport/2020/pdf/2020.pdf. Accessed 22 October, 2022.
  • Castronova, Edward. 2005. Synthetic Worlds: The Business and Culture of Online Games. Chicago: University of Chicago Press.
  • Chalmers, David. 2022. Reality+: Virtual Worlds and the Problems of Philosophy. New York: W. W. Norton & Company.
  • Coe, Cati. 2015. “The Temporality of Care: Gender, Migration, and the Entrainment of Life-Courses.” In Anthropological Perspectives on Care: Work, Kinship, and the Life-Course, edited by Erdmute Alber, and Heike Drotbohm, 181–205. New York: Palgrave Macmillan.
  • Damiano, Luisa, and Paul Dumouchel. 2018. “Anthropomorphism in Human-Robot Co-Evolution.” Frontiers in Psychology 9: 1–9. doi:10.3389/fpsyg.2018.00468
  • Das, Veena. 2013. “Being Together with Animals: Death, Violence and Noncruelty in Hindu Imagination.” In Living Beings: Perspectives on Interspecies Engagements, edited by Penelope Dransart, 1–16. London: Bloomsbury.
  • Deleuze, Gilles. 1994. Difference and Repetition. Translated by Paul Patton. London: Athlone.
  • Dooren, Thom van. 2019. The Wake of Crows: Living and Dying in Shared Worlds. New York: Columbia University Press.
  • Dooren, Thom van, Eben Kirksey, and Ursula Münster. 2016. “Multispecies Studies: Cultivating Arts of Attentiveness.” Environmental Humanities 8 (1): 1–23. doi:10.1215/22011919-3527695
  • Foucault, Michel. 1984. “What is Enlightenment?” In The Foucault Reader, edited by Paul Rabinow. New York: Pantheon Books.
  • Geraci, Robert. 2010. Apocalyptic AI: Visions of Heaven in Robotics, Artificial Intelligence, and Virtual Reality. New York: Oxford University Press.
  • Guevarra, Anna Romina. 2015. “Techno-Modeling Care: Racial Branding, Dis/Embodied Labor, and ‘Cybraceros’ in South Korea.” Frontiers: A Journal of Women Studies 36 (3): 139–159. doi:10.5250/fronjwomestud.36.3.0139
  • Hamaguchi, Keiichiro. 2019. “How Have Japanese Policies Changed in Accepting Foreign Workers?” Japan Labor Issues 3 (14): 2–7.
  • Haraway, Donna. 2003. The Companion Species Manifesto: Dogs, People, and Significant Otherness. Chicago: Prickly Paradigm Press.
  • Haraway, Donna. 2014. “Speculative Fabulations for Technoculture’s Generations: Taking Care of Unexpected Country.” In The Multispecies Salon, edited by Eben Kirksey, 242–262. Durham: Duke University Press.
  • Haraway, Donna. 2016. Staying with the Trouble: Making Kin in the Chthulucene. Durham: Duke University Press.
  • Hareven, Tamara. 1982. Family Time and Industrial Time: The Relationship Between the Family and Work in a New England Industrial Community. Lanham: University Press of America.
  • Hashimoto, Akiko. 1996. The Gift of Generations: Japanese and American Perspectives on Aging and the Social Contract. Cambridge: Cambridge University Press.
  • Hatano, Aiko. 2018. 「在宅一人暮らし高齢者の日常生活における人形ロボットの役割」 (The Role of Doll Robots for the Elderly Living Alone in Japan). Core Ethics 14: 211–222.
  • Helmreich, Stefan. 2011. “What Was Life: Answers from Three Limit Biologies.” Critical Inquiry 37 (4): 671–696. doi:10.1086/660987
  • Henare, Amiria, Martin Holbraad, and Sari Wastell. 2007. “Introduction: Thinking Through Things.” In Thinking Through Things: Theorising Artefacts Ethnographically, edited by Amiria Henare, Martin Holbraad, and Sari Wastell, 1–31. New York: Routledge.
  • Heywood, Paolo. 2012. “Anthropology and What There Is: Reflections on ‘Ontology’.” Cambridge Anthropology 30: 143–151.
  • Ho, Swee-Lin. 2018. Friendship and Work Culture of Women Managers in Japan: Tokyo After Ten. New York: Routledge.
  • Holbraad, Martin, and Morten Axel Pedersen. 2017. The Ontological Turn: An Anthropological Exposition. Cambridge: Cambridge University Press.
  • Holbraad, Martin, Morten Axel Pedersen, and Eduardo Viveiros de Castro. 2014. “The Politics of Ontology: Anthropological Positions.” Cultural Anthropology website, http://culanth.org/fieldsights/462-the-politics-of-ontology-anthropological-positions Accessed 22 October, 2022.
  • Ienca, Marcello, and Fabrice Jotterand, eds. 2021. Artificial Intelligence in Brain and Mental Health: Philosophical, Ethical & Policy Issues. Cham: Springer International Publishing.
  • Jeon, Chihyung, Heesun Shin, Sungeun Kim, and Hanbyul Jeong. 2020. “Talking Over the Robot: A Field Study of Strained Collaboration in a Dementia-Prevention Robot Class.” Interaction Studies 21 (1): 85–110. doi:10.1075/is.18054.jeo
  • Jones, Raya. 2016. “What Makes a Robot ‘Social’?” Social Studies of Science 47 (4): 556–579. doi:10.1177/0306312717704722
  • Kockelman, Paul. 2006. “A Semiotic Ontology of the Commodity.” Journal of Linguistic Anthropology 16 (1): 76–102. doi:10.1525/jlin.2006.16.1.076
  • Kohn, Eduardo. 2013. How Forests Think: Toward an Anthropology Beyond the Human. Berkeley: University of California Press.
  • Latour, Bruno. 1993. We Have Never Been Modern. Translated by Catherine Porter. Cambridge: Harvard University Press.
  • Latour, Bruno. 2005. Reassembling the Social: An Introduction to Actor-Network-Theory. Oxford: Oxford University Press.
  • Latour, Bruno. 2017. Facing Gaia: Eight Lectures on the New Climatic Regime. Translated by Catherine Porter. Cambridge: Polity Press.
  • Lin, Patrick, Keith Abney, and George A. Bekey, eds. 2012. Robot Ethics: The Ethical and Social Implications of Robotics. Cambridge: MIT Press.
  • Ministry of Health, Labour and Welfare. 2018. 『福祉用具・介護ロボットの開発と普及』 (Development and Spread of Welfare Equipment and Care Robots). http://www.techno-aids.or.jp/robot/file30/01kaihatu.pdf Accessed 22 October, 2022.
  • Mol, Annemarie. 1999. “Ontological Politics: A Word and Some Questions.” In Actor Network Theory and After, edited by John Law, and John Hassard, 74–89. Boston: Blackwell.
  • Mol, Annemarie. 2008. The Logic of Care: Health and the Problem of Patient Choice. New York: Routledge.
  • Murakami, Hirofumi. 2018. “Where is Japan Going?” Japan Times, September 14. https://www.japantimes.co.jp/opinion/2018/09/14/commentary/japan-commentary/where-is-japan-going-2/ Accessed 22 October, 2022.
  • Na, Seonsam. 2021. “Long-term Care Hospital and Changing Elderly Care in South Korea.” Medicine Anthropology Theory 8 (3): 1–26.
  • Nemoto, Kumiko. 2016. Too Few Women at the Top: The Persistence of Inequality in Japan. Ithaca: Cornell University Press.
  • Noble, Safiya Umoja. 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press.
  • Ogasawara, Yuko. 2016. “The Gender Triad: Men, Women, and Corporations.” In Power in Contemporary Japan, edited by Gill Steel, 167–182. New York: Palgrave Macmillan Press.
  • Park, Han-na. 2022. “AI Robots Deployed to Fill Void in Senior Care. But Can They?” Korea Herald, April 14. http://www.koreaherald.com/view.php?ud=20220414000682 Accessed 22 October, 2022.
  • Pickering, Andrew. 1995. The Mangle of Practice: Time, Agency and Science. Chicago and London: University of Chicago Press.
  • Puig de la Bellacasa, María. 2017. Matters of Care: Speculative Ethics in More Than Human Worlds. Minneapolis: University of Minnesota Press.
  • Purdy, Jedediah. 2015. After Nature: A Politics for the Anthropocene. Cambridge: Harvard University Press.
  • Rabinow, Paul. 2011. The Accompaniment: Assembling the Contemporary. Chicago: University of Chicago Press.
  • Rambelli, Fabio. 2019. Spirits and Animism in Contemporary Japan: The Invisible Empire. London: Bloomsbury Academic.
  • Ramos-Zayas, Ana Yolanda. 2011. “Learning Affect/Embodying Race.” In A Companion to the Anthropology of the Body and Embodiment, edited by Frances E. Mascia-Lees, 24–45. Malden: Blackwell Publishing.
  • Rees, Tobias. 2018. After Ethnos. Durham: Duke University Press.
  • Roberts, Glenda. 2011. “Salary Women and Family Well-Being in Urban Japan.” Marriage & Family Review 47 (8): 571–589. doi:10.1080/01494929.2011.619306
  • Robertson, Jennifer. 2018. Robo Sapiens Japanicus: Robots, Gender, Family, and the Japanese Nation. Oakland: University of California Press.
  • Robertson, Jennifer. 2022. “Robo-Sexism: Gendering AI and Robots in Japan and the United States (and Elsewhere).” Online lecture at Cornell University at the East Asia Program. (April 22).
  • Scott, Michael. 2013. “The Anthropology of Ontology (Religious Science?).” Journal of the Royal Anthropological Institute 19 (4): 859–872. doi:10.1111/1467-9655.12067
  • Seligman, Adam. 2008. Ritual and its Consequences: An Essay on the Limits of Sincerity. Oxford: Oxford University Press.
  • Stevenson, Lisa. 2014. Life Beside Itself: Imagining Care in the Canadian Arctic. Oakland, CA: University of California Press.
  • Tarde, Gabriel. 2012. Monadology and Sociology. Edited and translated by Theo Lorenc. Prahran: re.press.
  • Thelen, Tatjana. 2015. “Care as Social Organisation: Creating, Maintaining and Dissolving Significant Relations.” Anthropological Theory 15 (4): 497–515. doi:10.1177/1463499615600893
  • Thelen, Tatjana. 2021. “Care as Belonging, Difference, and Inequality.” Oxford Research Encyclopedias Anthropology. 26 May. doi:10.1093/acrefore/9780190854584.013.353
  • Tsing, Anna Lowenhaupt. 2012. “Unruly Edges: Mushrooms as Companion Species.” Environmental Humanities 1 (1): 141–154. doi:10.1215/22011919-3610012
  • Turkle, Sherry. 2005. “Relational Artifacts/Children/Elders: The Complexities of CyberCompanions.” Cognitive Science Society, 62–73.
  • United States, Congress, Senate, Committee on the Judiciary. 2023. Oversight of A.I.: Rules for Artificial Intelligence. May 16. Presiding Chair Blumenthal. Accessed 30 May, 2023.
  • White, Daniel, and Hirofumi Katsuno. 2021. “Toward an Affective Sense of Life: Artificial Intelligence, Animacy, and Amusement at a Robot Pet Memorial Service in Japan.” Cultural Anthropology 36 (2): 222–251.
  • Woolgar, Steve, and Javier Lezaun. 2013. “The Wrong Bin Bag: A Turn to Ontology in Science and Technology Studies?” Social Studies of Science 43 (3): 321–340. doi:10.1177/0306312713488820
  • Wynter, Sylvia. 2001. “Towards the Sociogenic Principle: Fanon, Identity, the Puzzle of Conscious Experience, and What It Is Like to be ‘Black’.” In National Identities and Socio-Political Changes in Latin America, edited by Mercedes F. Durán-Cogan, and Antonio Gómez-Moriana, 30–66. New York: Routledge.
  • Završnik, Aleš, and Katja Simončič. 2023. Artificial Intelligence, Social Harms and Human Rights. Cham: Springer International.