Original Research Article

Exploring governance tensions of disruptive technologies: the case of care robots in Australia and New Zealand

Helen Dickinson, Catherine Smith, Nicole Carey & Gemma Carey

ABSTRACT

Robots are increasingly appearing as a potential answer to the ‘care crisis’ facing a number of countries. Although it is anticipated that many positives will flow from the application of these technologies, they are also likely to generate unexpected consequences and risks. This paper explores the use of robots within disability and aged care settings in the Australian and New Zealand contexts. Informed by thirty-five semi-structured interviews with a range of stakeholders, the paper explores why this area is so difficult to govern, examining areas identified as generating tensions around the use of robots in care settings. In each of these areas, some respondents saw the introduction of robots as a relatively straightforward application that does not require extensive structures of governance. Others, however, viewed these applications as having potentially greater implications and as requiring governance over the longer term. The three areas of tension that we explore in this paper relate to independence and surveillance, the re-shaping of human interaction, and who can care. These tensions illustrate some of the problems involved in governing robots in a care service context and some of the potentially difficult issues that governments will need to resolve if these technologies are to be effective. We conclude the paper by arguing that a responsive regulation approach is needed to help resolve some of the complexities and tensions in overseeing these technologies.

Introduction

Many countries are experiencing significant changes in relation to the delivery of care services (Carey, Dickinson, Malbon, & Reeders, 2018; Tan & Taeihagh, 2020). Groups in receipt of care services are increasing in number and becoming older, with greater levels of disability and chronic illness and higher expectations about the quality of services they should receive (Manchester, 2021). In the face of increased demand, many countries are struggling to recruit appropriate care workforces (Hodgkin, Warburton, Savy, & Moore, 2017; Kamal et al., 2017). Advances in technology, particularly robotics innovation, have been offered as a potential solution to these twin demand- and supply-side pressures. Against this background, robots are becoming an increasing feature of our care services and of discussions about what the future care workforce should look like, capable of fulfilling a number of roles from manual tasks through to social interaction. In some areas these technologies have the potential for positive impacts, creating efficiencies and enhancing effectiveness, quality and safety (Australian Centre for Robotic Vision, 2018). Yet a series of concerns have also been expressed that these technologies may have negative impacts, and that these might not be felt equally across the population (Sharkey & Sharkey, 2012). Robotics is therefore double-edged: offering significant advantages, but with potential consequences or misuse that must be anticipated to avoid negatively impacting particular groups.

Despite posing new challenges for governance, robotic technologies are underexplored within the public administration and public policy literatures. This paper aims to start to fill this gap by exploring some of the ways these technologies are challenging from a governance perspective. It reports data derived from a broader study exploring the implementation of robots in care services in Australia and New Zealand (Dickinson, Smith, Carey, & Carey, 2018). This study sought to explore the roles that robots should or should not play in care delivery, and the associated governance role(s) for government. We identify a number of governance tensions that illustrate the challenges surrounding the use of robots in a care service context, and some of the potentially difficult issues that governments need to resolve if these technologies are to be effective. Here, we focus on three in particular: independence and surveillance, the re-shaping of human interaction, and who can care.

The next section sets out the background to the paper, providing an overview of the ‘care crisis’ that a number of countries currently face and the ways that robotic technologies have been presented as the answer to it. We then provide an overview of the methodology adopted in this research, before setting out our findings, namely the three different governance tensions identified through the research. In the discussion we argue that most of these tensions lack clear resolutions, and that many resolutions will be contingent on different individuals and their preferences. Yet this does not mean that we should leave their resolution to individuals, individual providers or – in the absence of governance – the market. Decisions over these tensions should be made actively, with an understanding of the strengths and limitations they bring with them. This requires complex and adaptive responses from governments, such as those associated with responsive regulation approaches.

Robots and the care crisis

In numerous care systems around the world it has become almost commonplace to describe the sector as in ‘crisis’ (e.g. Himmelstein, Woolhandler, Almberg, & Fauke, 2017; Hodgkin et al., 2017; Ward, Ray, & Tanner, 2020). We argue there are at least three drivers of this perception of crisis, and these have made the calls for the development and acceptance of new technologies in this space more pervasive.

The case for a demographic ‘crisis’ has been extensively made. It is well established that many countries face a shift in patterns of ageing. It is not simply the sheer number of people requiring care that poses a challenge for governments. Increased numbers of older people within a population, alongside improved life expectancies for people with disability (Guzman-Castillo et al., 2017), mean we are seeing greater demand for care services within a population that is older, sicker and has higher numbers of chronic and complex conditions. This demand crisis compounds a difficult situation in many countries, where care services are already struggling within a fiscally constrained environment (Pearson & Ridley, 2017).

The sector is also described as experiencing a workforce crisis, manifest both in the number of individuals available within a particular area and in the presence of appropriately skilled and disposed individuals (Hodgkin et al., 2017; Kamal et al., 2017). This crisis has become all the more apparent during the COVID-19 pandemic. As carers have become sick or have been unable to work with individuals for fear of infection, the call for greater use of robotic technologies has been clearly heard (Dickinson & Smith, forthcoming). Unlike humans, robots cannot get sick or pass on infection in the same way. In addition to concerns about appropriate levels of care staff being available, there have also been concerns with respect to the conduct of staff. A number of countries have experienced scandals of abuse in residential aged care and disability services in recent years. Australia, for example, has recently undergone two Royal Commissions (formal public inquiries): the first into aged care quality and safety, following widespread reports concerning the abuse of older people in residential care, and a second, similar investigation into the neglect and abuse of people with disability. Sadly, these abuses are not restricted to these care settings or to Australia (e.g. Byrne, 2018), and while abuse has long existed in care relationships, there is some suggestion these issues have become more frequent in a context of privatisation as providers give priority to commercial agendas (Greener, 2015).

Finally, the argument has been made that care services are also suffering from a value crisis. Care, in both paid and unpaid forms, is central to a working economy and can be understood as a core social infrastructure (Glinsner, Sauer, Gaitsch, Otto, & Hofbauer, 2018; Rayner & Espinoza, 2016; Tronto, 2015). Yet over recent decades care has moved increasingly from a private to a public space (Tronto, 2013) as it has become commodified through neo-liberal market rhetoric and activity (Fraser, 2014). This situation is problematic because when care is seen only as an allocation of resources, there is no measure of whether care has been successful in meeting the needs of the cared-for. If the measure of success is merely that a service has been provided, the voice of the cared-for becomes irrelevant (Tronto, 2013). As care and care workers have been devalued, care work has become lowly regarded and low paid, often taken up by those already living at the margins of society (Robinson, 2011).

Robots might plausibly play a role in responding to some of these crises of care, although they may generate tensions of their own. As the Australian Human Rights Commission (2018) notes, ‘like any tool, technology can be used for good or ill … modern technology carries unprecedented potential on an individual and global scale. New technologies are already radically disrupting our social, governmental and economic systems’ (p. 7). Such technologies are disruptive to the extent that they significantly alter a number of the dimensions along which existing systems and processes operate (Schuelke-Leech, 2018). For example, the use of robotics has significant implications for the ways that we conceive of ethical frameworks in relation to care services (Smith, Dickinson, Carey, & Carey, 2021). History shows that disruptive technologies require careful policy, legal and administrative scrutiny during their implementation (Busuioc, 2020), yet robotics rarely features in the public administration or public policy literatures (for exceptions see Jeffares, 2021; Whitford, Yates, Burchfield, Anastasopoulos, & Anderson, 2020). Robotics research has traditionally been funded by defence organisations (Sparrow & Sparrow, 2006), with only comparatively recent forays into the health and care fields. While there are many papers dealing with robotics in manufacturing and engineering (e.g. Michalos et al., 2015; Polygerinos et al., 2017), law (e.g. Calo, 2015; Richards & Smart, 2016), ethics (e.g. Coeckelbergh & Stahl, 2016; Sparrow & Sparrow, 2006) and medicine (e.g. Broadbent et al., 2016; Moyle et al., 2017), the fields of public administration and public policy lag in this regard. This paper responds to this gap by identifying the complexities involved in the use of these technologies and exploring some of the tensions that might be felt in governing them.

Methodology

The data presented in this paper are derived from a broader study exploring the implementation of robots in care services in Australia and New Zealand (Dickinson, Smith, Carey, & Carey, 2018). In this work we sought to explore the roles that robots should and, even more critically, should not play in care delivery, and the role that government has as a steward in shaping these roles. The project was afforded ethical approval by the University of New South Wales Human Research Ethics Committee (HC171025). A qualitative approach was adopted, making use of semi-structured interviews (Low, 2013). Semi-structured interviews are typically used to gain a detailed picture of a respondent’s beliefs or perceptions of a particular topic area (Smith, 1995). A purposive approach to sampling was adopted to identify interviewees (Palinkas et al., 2015). We sought to engage a range of experts across Australia and New Zealand, drawn from policy roles (at different levels and across different care provision areas), the provision of care services, academia and other expert commentary on the topic area, and technology supply.

In total, 35 interviews were conducted with a range of different stakeholders (see Table 1), although due to the scope of the research this did not extend to the users of these technologies. The research was funded by the Australia and New Zealand School of Government, which was predominantly interested in the perspectives of public servants and experts in this first stage of the work. Interviews lasted between 30 and 90 minutes and were recorded and transcribed verbatim. After Kallio, Pietilä, Johnson, and Docent (2016), we developed an interview schedule that covered issues such as: where robots are currently being used; advantages and disadvantages of robots; considerations when introducing robots; future roles for robots; and the role of government in overseeing robots. Although respondents were not asked about governance tensions explicitly, we identified these through our analysis of the data. In our coding process we sought to identify differences between respondents in terms of the challenges and advantages of using these technologies and the associated implications for their governance. Data were analysed using a thematic approach (Blaikie, 2010) in NVivo, where like data were grouped together to form categories and subcategories (see Dickinson, Smith, Carey, & Carey, 2018 for more detail on the coding framework). These categories were developed into more substantive themes by linking and drawing connections between initial categories and hypothesising about consequences and likely explanations for the appearance of certain phenomena (Strauss, 1987). All authors were involved in developing initial themes, with coding done by two authors independently to verify that the same transcripts were coded in the same way, and a third author consulted in the event of disagreement.

Table 1. Interviewees by background

In the findings that follow, quotes from interviews are reported to illustrate the points being made. In accordance with our ethical approval, quotes are not ascribed to individuals but are labelled according to the country in which the individual primarily works, their background group and a number identifying that individual (e.g. AUST01, NZPD03).
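As a methodological aside: the coding check described above relies on two authors independently coding the same transcripts, with a third author reconciling disagreements. Where readers wish to quantify that kind of cross-coder consistency, an agreement statistic such as Cohen’s kappa is commonly used. The sketch below is a minimal illustration of that idea only; the function, theme labels and toy data are hypothetical and do not reproduce the authors’ actual procedure.

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa: agreement between two coders, corrected for chance."""
    assert len(codes_a) == len(codes_b) and codes_a
    n = len(codes_a)
    # Proportion of excerpts where both coders assigned the same theme.
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Agreement expected by chance, from each coder's marginal theme frequencies.
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical themes assigned by two coders to the same six excerpts.
coder_1 = ["surveillance", "interaction", "who_cares",
           "surveillance", "interaction", "who_cares"]
coder_2 = ["surveillance", "interaction", "surveillance",
           "surveillance", "interaction", "who_cares"]
print(f"kappa = {cohens_kappa(coder_1, coder_2):.2f}")  # prints kappa = 0.75
```

A kappa close to 1 indicates near-perfect agreement beyond chance; values well below that would prompt the kind of third-coder adjudication the authors describe.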

Tensions in governing robotic technologies

Although we uncovered a range of robots in use in care services, the field is relatively nascent at present, and no strategic approach has yet been taken to the roll-out of robots in care services in either Australia or New Zealand. Providers have therefore typically decided to adopt robotic technologies at a local level. In describing these adoption decisions, a number of individuals expressed concerns, or held conflicting beliefs, around particular topic areas that we explore further below. In many of these cases the concern is that decisions to adopt a technology might appear straightforward, but potentially carry a range of unforeseen and significant implications for the future. Some interviewees explicitly referred to potential tensions:

‘There’s a lot of secondary and tertiary effects associated with this stuff. A lot of it is things that we don’t foresee. It’s like invasive species. You can think you’re doing a good job when you’re adding them in but you don’t really understand. That’s why we need more people thinking about what the secondary and tertiary consequences are’ (AUEC08).

This paper aims to do just that, focusing on three particular tensions: independence and surveillance; the re-shaping of human interaction; and who can care.

Independence and surveillance

One of the most commonly cited benefits driving these kinds of technological innovations is the promotion of independence. For example, in aged care services, these technologies promise to enable older people to remain in their homes for longer. Robots have the potential to undertake a number of manual tasks, provide social interaction, and manage risk (for example, should a person fall). Robots can also monitor individuals and their environment so that potential hazards can be identified and avoided. For example, robots have been used in conjunction with other devices to monitor pulse and oxygen readings. As one individual who ran such a scheme explained: ‘It is all focused around security of their person and their wellbeing, and it’s giving an added layer of comfort to that person and the family … we did quite a big pilot here of telemedicine … that showed we had a 60% reduction [in] hospitalisation as a result … It was huge, and so that was monitoring and early detection’ (NZPD02). In this case, a combination of devices was used in older people’s homes to prevent emergency hospitalisation. But the gains of independence and autonomy come with associated surveillance. As one interviewee explained: ‘most of these objects are not one way objects. They are also collecting data’ (AUEC04).
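To make the monitoring side of this trade-off concrete, the sketch below shows the kind of threshold-based alert rule a vital-signs monitoring scheme might apply to pulse and oxygen readings. It is a hypothetical illustration under assumed thresholds, not the logic of the pilot described above; real deployments would use clinically validated ranges and escalation paths.

```python
# Illustrative thresholds only; real systems use clinically validated ranges.
PULSE_RANGE_BPM = (50, 110)   # assumed acceptable resting pulse, beats per minute
SPO2_MIN_PERCENT = 92         # assumed minimum acceptable blood oxygen saturation

def check_vitals(pulse_bpm, spo2_percent):
    """Return a list of alert messages for readings outside configured ranges."""
    alerts = []
    if not PULSE_RANGE_BPM[0] <= pulse_bpm <= PULSE_RANGE_BPM[1]:
        alerts.append(f"pulse {pulse_bpm} bpm outside {PULSE_RANGE_BPM}")
    if spo2_percent < SPO2_MIN_PERCENT:
        alerts.append(f"SpO2 {spo2_percent}% below {SPO2_MIN_PERCENT}%")
    return alerts

# A reading that would trigger early-detection escalation to carers or family.
print(check_vitals(pulse_bpm=45, spo2_percent=90))
```

The same readings that enable early detection are, of course, data collected about the person, which is precisely the surveillance tension respondents describe below.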

In analysing the data we identified two major tensions generated by the surveillance and data gathering that the use of robots entails. The first relates to how individuals might be controlled through the use of this data, and the second to the security and aggregation of collected data.

At the individual level, some respondents believed there is a simple trade-off in the use of robots, whereby frail individuals remain cared for at home in return for giving up some element of privacy. One respondent who had conducted research with older people remarked:

‘They’re happy to trade some elements of privacy for autonomy. You know, I’m happy for you guys to have this information if that means you’ll leave me alone to be me. That’s, I think, one of those things where it crosses that line. It’s okay for the daughter to know that it’s two degrees [temperature] … below what it normally is. But she doesn’t then have the right to act on it. She can have that knowledge. If that knowledge gets to a point where it is dangerous for mum, like it’s two degrees below zero in the lounge room, then she can act on it. But up until then, she can’t. I think that that’s that trade-off between privacy and autonomy that they are happy to have. Most of them just wanted to be left alone.’ (AUGD09)

In this way, individuals may allow some data monitoring to demonstrate they are getting along well and can be left alone, free of further interference. However, other respondents did not feel quite as comfortable with this kind of trade-off, arguing it is rarely as simple or straightforward as it might appear and therefore requires greater governance consideration. In the following quote the respondent puts themselves in the position of the older person, explaining:

’The fact that you’ve fallen over and not hurt yourself is not necessarily something you want your kids to know … there’s something about the connection between the ubiquitous surveillance - the big data - that institutionalizes people through the surveillance. So even in your home you’re just a data point’ (AUST11).

For this respondent, data gathering represents more than just the loss of a little autonomy; it signifies the surrender of personal judgment to the standards and expectations of an outside agent. Such forces could be perceived as ‘dehumanising’ (AUST11), with individuals losing control of their decision-making processes, a key concern in various other applications of artificial intelligence (AI), such as workplace robots that monitor employees’ conduct and governments accessing autonomous vehicle data to track passengers’ whereabouts (Taeihagh, 2020). One respondent spoke of an experiment where data were collected through a number of different technologies (including, but not limited to, robots) in an aged care facility:

Interviewee: What was fascinating was once that stuff got instrumented, a couple of things became really clear. People were having sex in nursing homes that their kids didn’t know about.

Interviewer: [Laughs] Good.

Interviewee: Well except for the fact that their kids were then being told and they tried to stop it. Well because here’s the problem, right? Like that data is really revelatory, right? So if the bed sensors suggest the weight doubles about twice a week or twice a day in one instance I can remember, kids became really concerned about the sensors, called up and said the sensor isn’t working. The nursing staff were in this position of saying no, the sensor is working just fine. Well then why is the weight of my mother’s bed doubling? It became this - there’s a problem where what people aren’t good at thinking about is what does that data reveal and who is it being told to and how does it then get made sense of. There are some really interesting questions there about what are you opting into or out of and who has access to that data and under what circumstances, right? Yes, there are notions that some of that stuff will be easily available but you know. That’s not always - that’s a more complicated thing than it appears (AUEC04).

Post-collection access and long-term ownership of data were crucial concerns for some respondents, and an issue they believed needs particular governance structures. A number of those we spoke to felt insufficient thought has been given to this issue to date. Many individuals do not know that data is being collected, who owns it, how it might be shared, where it is stored (particularly if it moves across international boundaries) and how secure it is – for example, whether it could be hacked in some way. As one interviewee describes:

‘these things can follow you around 24/7 and store your data and report back and people can hack into them conceivably, as well. Yeah, that’s a chronic problem and who owns the data is even deeper, to that as well. As we’re seeing, the wrestling with Facebook and Google as the like as well, in terms of what’s being done with your data and is it appropriate, did you agree to it?’ (AUEC08).

This tension is compounded by the pace at which these issues are emerging, which has outstripped governmental activity in this space:

‘it’s beyond the conversation phase now, because the technology is already outpacing our ability to regulate and legislate for it, so we’re way behind. The real question is, what are we going to allow? Are we just going to be a big experiment, where all the stuff is thrown upon us and we see what happens? Then just say, oops, sorry if that was the wrong answer. Or are we going to then end up overreacting and throw the baby out with the bath water and there was good there but now it’s- we can’t use that because all of it’s dangerous … So, the conversation is absolutely essential, there’s no doubt about that. We need to even move beyond the conversation now and start talking regulation and find frameworks through which that can be done’. (AUEC08)

We see, then, a complex series of governance tensions relating to the collection and use of data gathered by robots in care settings, and concern that governments have taken little action to date to effectively oversee these issues.

The re-shaping of human interactions

While the extant literature typically focuses on human-robot interactions, some of our respondents raised concerns about the potential for robots to re-shape human-to-human interactions. For some, this re-shaping is inherently negative; for others, it is an active aim. Researchers may seek to improve human interactions through the use of robotics, but others saw this as problematic, potentially impacting negatively on the importance or quality of human-human interactions and worthy of more attention from government.

Those actively seeking to re-shape human interactions believe robots can improve relationships by allowing humans to focus on particular activities. In the context of caring for young people, for example, one respondent told us:

‘So, if the robot is an impartial … thing in the classroom … then it’s not the teacher struggling with the child; saying, no, you need to put that away now, we’re doing this. So, that means that the interactions between the child and the teacher can simply be around the learning and not around the logistics around the learning; give those over to the robot and then that could potentially improve the relationship between the autistic child and the teacher, which is only going to make learning easier and all the better anyway’ (AUGD09).

Similarly, a number of respondents spoke about robots doing things that humans do not necessarily want to do. For example, robots are able to undertake repetitive activities that humans may find less enjoyable. As one interviewee put it, ‘many of the things that robots do for us are jobs that people shouldn’t necessarily do. They are not fulfilling things’ (AUPD18). One example of a caring task that is not necessarily difficult, but can be challenging in the context of a busy working day, is repeating things multiple times. This is a common situation in aged care, particularly where individuals with dementia are concerned: ‘For people with dementia, the robot could remind someone a thousand times in a day and the robot wouldn’t care’ (AUEC01).

Similar issues can be faced in the context of disability services, for example in the care of children with autism:

‘they [robots] are non-judgmental, they are patient and they can repeat everything without getting upset so they don’t get angry, they don’t transmit their own emotion, they don’t come with baggage that changes the interaction … all of that means that … children can feel more calm, feel better to interact with the robots than with a human’ (AUGD13).

For a number of respondents this was seen to be a positive for both staff and those they are caring for: ‘the benefit is always for both sides’ (AUST03).

The ways that humans interact with robots may also have implications for how individuals behave with other humans. If negative or aggressive behaviours towards robots go unchecked, this could encourage the same behaviour towards humans. In the robotics space this type of ‘virtue ethics’ argument is probably most actively made by the Campaign Against Sex Robots, spearheaded by Kathleen Richardson (2015). In the minds of Richardson and colleagues, if we allow individuals to behave badly towards female sex robots, this will encourage negative behaviour towards women more broadly. Similarly, verbal or physical aggression towards robots could be transferred to human carers. It should be noted that this type of virtue ethics argument is significantly contested within the literature, and the empirical data remain mixed (Danaher, 2017).

A further concern is that those being cared for might become overly reliant on robots, to the exclusion of humans in care processes:

‘the main issue I would think in the long term is people relying on robots too much. I think if we look at robots being there to help, and not people in general, but let’s say in a clinical sense to help therapists, to help educators, to help clinicians do their work better but not replace them in anyway. If we aim to start replacing people for machines that would be my main worry because we still need that human touch’ (AUGD13).

Respondents who raised this issue did not consider robots to be a bad thing per se, but worried that technology might incidentally exclude human interaction. Given the evidence connecting social interaction with health outcomes (Marmot, 2010), this has significant implications. Some of those we spoke to expressed concern that individuals might be left with robots as their only form of social interaction and that staff would not try to engage individuals through other activities: ‘Are we still going to be as fastidious in doing all that physical stuff or is this toy going to be a bit of a panacea to free up staff to do things that they need to do … is it a give and take between more time to do admin as opposed to looking after clients’ (AUST11). Technology obsession is another potential danger: ‘one challenge that we encounter always with the use of technologies … is the possibility of the kids getting obsessed with the technology and just wanting to interact with the technology and not with humans … that would be one of the risks in the long term if it’s not well managed’ (AUGD13). The concern here is that people may become so attached to their robotic device that they exclude engagement with humans.

These findings demonstrate that interaction with robots presents a range of potential challenges and possible alterations to human-human interaction, with implications for human relationships more broadly that require governance structures to oversee them.

Who can care?

One of the drivers of robotic technologies in the caring sector is the lack of an appropriate workforce in the face of an ageing population in increasing need of care services. Robots can potentially bridge this gap, although this proposition is not without contest. Some interviewees were strongly opposed to the replacement of humans in care because they view care as intrinsically human, and argued that this requires oversight by governments, particularly in publicly funded services.

By and large, those we spoke to imagined the use of robotics in care services would expand further in the future. Applications include areas such as practical assistance and automation in hospitals and aged care settings, autonomous vehicles, and intimate robotics (not necessarily sex robots, but robots to which individuals develop an emotional attachment). However, others were keen to note that many of the robots we see in film or read about in books are not within reach in the near future. There is still significant technological advancement to be made before the reality of domestic robotic agents fulfilling a range of different roles is realised. A number of other interviewees suggested that no matter how advanced robots might become, they should never replace humans. This argument was presented in a number of different ways. The first is that robots will never perform to the same standard as humans. Much care work is difficult to automate: ‘a lot of the frontline level work isn’t routine and it’s not repetitive so you’d find it difficult, for example, I mean you might have AI agents that could inform on medical decision making, let’s not go as far as say, replace doctors, but the role of nurses, gardeners, home maintenance, staff, cooks, chefs. That’s not going to be replaced any time soon’ (AUEC01).

A second argument against the replacement of humans in care relationships centres on the idea that human contact is intrinsic to care: ‘there’s nothing that can replace completely human touch I would think and not only in a physical sense but the human contact’ (AUGD13). Some respondents delineated clear boundaries between what robots should and should not do, with a number suggesting they would be uncomfortable with robots touching people. As one interviewee explained: ‘We’ve avoided touching people with robots … I think you have to be pretty careful’ (NZEC05). This was a perspective shared by another interviewee, one which overlaps with the previously examined concern over lack of engagement with humans:

‘The touching, it’s probably going a bit far the touching, but putting the flesh of other humans I think people- I wouldn’t want people to get more isolated with the security of a robot companion that they choose not to engage with other human beings probably, because that human touch is so important for our wellbeing as well’ (NZST02).

Many respondents also felt robots should not appear too human. There was a consistent view that robots should not in any way attempt to replace or mimic humans too closely, as this kind of mimicry could be confusing or upsetting. In robotics, this effect is known as the ‘uncanny valley’, whereby humans are intrinsically uncomfortable around robots that too closely simulate and resemble humans (MacDorman & Ishiguro, 2006).

Instead of replacing humans in the future care delivery system, many shared the perspective that robots will instead be tools to augment human skill:

‘So we’re not necessarily talking about using a robot to deliver the intervention without the human getting involved, but we talk about using a robot to support the intervention. To practise, for example. So, if it’s doing therapy there is still a therapist but we can use a robot to practise some of the skills instead of having to practise at school with another child for example’ (AUGD13).

An important implication that follows is that these technologies must be developed in partnership with professionals across sectors: ‘it’s got to be a partnership with people … Because it also has to be integrated with what everyone else is doing, both the IT systems and the other people who are involved’ (NZ05). However, not all agreed this ideal is achievable. The reality is that much of the care sector operates on relatively slim financial margins, and savings made through the use of robotics will not necessarily translate into quality enhancements. As one interviewee explains,

‘We’ve got to do some social engineering. We’ve got to make it so that people aren’t just sort of warehoused in facilities. I think - I mean that’s a kind of nice rhetorical - it puts the other people in the, we’re cold heart technologists replacing your care. They have this bullshit line about how they’re going to reinvest in social services. But that’s just not true [with] the sector. Okay you take out the cleaners. You don’t spend the savings on bringing in occupational therapists. You just take that’ (AUST11).

Ultimately the crux of this governance tension rests on whether robots are actually capable of caring for individuals, or whether they can only undertake a series of activities that relate to and support care performed by humans. The position adopted on this comes down to how care is defined, with a number of expert commentators arguing that robots are unable to care because they do not possess emotions, which is interpreted as a core component of these processes (e.g. Turkle, 2017). If we introduce robots into care settings, are they actually undertaking care, or are they simply tools of care? If we replace carers with robots, does that mean that we no longer care for individuals?

Discussion

As established, robotics and similar technologies are still at a relatively early stage of implementation in care services. Although there have been some early forays into this space, we are likely still some way from robust robotic technologies significantly impacting the daily lives of those accessing care services. Yet people who are starting to use and to think about these technologies have raised a series of tensions relating to their governance and regulation that need to be addressed. The tensions articulated within this paper are not the only ones; however, they do illustrate the types of complex issues produced through the application of technologies in care spaces. This suggests it is the right time to have conversations about these issues.

Most of these tensions do not have clear resolutions, and many resolutions will be contingent on different individuals and their preferences. Despite this, the answers are not individual ones. They require complex and adaptive responses from governments. In this paper we have illustrated how individuals and groups grapple with the complexities of these issues in the absence of formal governance, and we have sought to demonstrate the ways a range of stakeholders make sense of these issues as they engage with these technologies. The types and range of tensions discussed tell us not only that this is important for governance, but also offer insights into the forms of governance and regulation that may be applicable to these emerging areas.

In our research we found many were concerned about the lack of systematic thinking concerning robots in care services within different levels of government. This was seen as problematic because many decisions are currently being made about these technologies without thinking through the longer-term consequences. Returning to the metaphor of invasive species raised by one of our participants, robots are being introduced into complex care ecologies that encompass individual preferences, needs, existing supports and services, and social norms and values regarding care. Technologies developed in isolation cannot simply be adopted by an uninformed market and successfully tackle complex interconnected problems; they require careful consideration in terms of how they integrate with care delivery systems and how they impact on professionals and users of care services. Indeed, this has been one of the mistakes made in a number of attempts to introduce large-scale IT projects (Flyvbjerg, Bruzelius, & Rothengatter, 2003). Flyvbjerg et al. (2003) found that risks, burdens and benefits tend to be distributed unequally. Moreover, a lack of clear decision-making, governance and regulation responsibilities can lead to a ‘democratic deficit’: that is, a lack of transparency and involvement of civil society in the adoption and implementation of, in this case, disruptive technologies. Flyvbjerg et al.’s (2003) work offers an important warning for the field of robotics and care, pointing to the need for considered and on-going government involvement to ensure transparency in decision making and in the distribution of benefit and risk, which has recently been acknowledged by governments as crucial for AI governance if the technology is to be inclusive and representative of diverse groups in society (Radu, 2020; Ulnicane, Knight, Leach, Carsten Stahl, & Wanjiku, 2020).

There is a range of potential roles that governments might play in providing technological oversight in the care sector. There has long been scepticism from some quarters about governmental involvement, with suggestions that it impedes the development and application of these technologies (Dickinson, 2018) and that, as such, the role of government should be restricted to funding and light-touch regulation. Yet, as the tensions set out in this paper illustrate, the issues relating to the application of these technologies in a care context are far more complex than an unfettered free-market solution would allow for. Robots, in combination with other advanced technologies, have the capacity to fundamentally alter power relations and the ways that individuals and groups interact with one another.

Consistent with this, our research overwhelmingly found respondents envisaging a strong role for government in technology stewardship. This may be a product of the interview sample pool; however, even providers of services and technology suppliers viewed the involvement of different levels of government as essential to the success of robots in care services. Certainly the view of most we interviewed is that the role of government goes beyond simply providing money to pump-prime research and generate growth in research and development, or setting basic standards around technological capabilities or risk management. In particular, a strong stewarding role was seen as important, especially because interviewees were cognisant of the potential for unintended or unanticipated consequences. Hence, when using these technologies, careful consideration needs to be given not just to the planning of technologies and their application but to the on-going processes of care delivery – both to what risks are acceptable (to individuals, the workforce or society) and to shaping how that risk is distributed. Despite this, most current care contexts are supplier-driven, and the role of government is seriously underdeveloped. In part, this is due to a gap in government capability regarding technological regulation. Disruptive technologies move at a rapid pace, which can outstrip the capacity of governments to (a) keep abreast of them and (b) determine appropriate responses, given the difficulties of understanding technical complexity (see also Ulnicane et al., 2020). As a result, any governance or regulatory solution will need to be adaptive and responsive. While difficult, precedent does exist in the field of responsive regulation. This approach utilises the structure and nature of governance networks to encourage diverse actors (providers, technology developers and so forth) to become peer and self-regulators.

Indeed, when describing stewardship and governance approaches, interviewees’ responses were consistent with a responsive regulation approach. This approach relies on actors to self- and peer-regulate and to escalate issues as they arise, upon which governments can implement regulatory efforts of different strengths. Importantly, responsive regulation approaches emphasise a ‘light touch’. In the case of disruptive technologies, this is important to ensure the environment remains conducive to technological innovation. Responsive regulation is about ‘tripartism’ in regulation (Braithwaite, 2008). The approach emphasises the limits of regulation as a transaction between the state and business: unless a third party (or a network) is engaged in regulation, regulation will be captured and corrupted by money power, or will not be fit for purpose. Importantly, responsive regulation involves listening to multiple stakeholders and making deliberative and flexible choices (Braithwaite, 2008). The increasing relevance of non-state actors in governing technology is central to other emerging governance concepts, such as the hybrid governance discussed by Radu (2020), which focuses on the blurring boundaries between the roles played by the public and private sectors.

In devising a responsive regulatory system, regulators need to attend to all members of the regulated community: in this case, individuals who may utilise or be affected by robotic technologies, service providers and workers, and technology developers. All of these actors can hinder or help the effectiveness of the regulatory system, and all can be affected by any legislative decisions that might be made. Realising the vision for how robotic technologies will be utilised, while protecting against harms, requires continuing cooperation amongst all parties. Without this, we cannot hope to understand either the potential of robotic technologies or the impacts they might have on different groups. Hence, a responsive regulatory approach requires continuous engagement with these different groups. Understanding the needs of these groups is crucial if policy workers and regulators are to achieve legitimacy. As Braithwaite (2008, p. 11) points out, ‘history is replete with examples of laws and rules failing to connect with people’s lives … Governments may act with good intentions to benefit the public, but people may lack the capacity or time or resources to cooperate and reap these benefits’.

In developing a responsive regulatory approach, Braithwaite suggests that a regulatory framework be developed. This framework should identify the elements needed for an effective and enabling regulatory environment. Braithwaite (2013) argues for two sets of elements: those that assist regulators with the task of regulating effectively and fairly, and those that assist the regulated to engage meaningfully and constructively with regulatory bodies. The former is a set of sanctions (e.g. financial penalties), and the latter a set of supports (e.g. information, financial incentives). Through the use of supports, the goal is to encourage a set of values – or a vision – for how the system will function that is constructive rather than punitive. Braithwaite argues that different actors then become the ‘minders’ of this vision and cooperate to achieve it. Behind this is the goal of creating a system in which informal interventions address problems as they arise, while positive feedback is also given on achievements and strengths.

The first step for policymakers is therefore to engage across the diverse actors who are developing, or impacted by, emergent technologies to find consensus around the values with which these systems should be designed (see also Taeihagh, 2020). Based on this continuous engagement, the next step is to co-create a set of supports and sanctions. Given the rapidly changing nature of disruptive technologies, this engagement will need to be on-going, with subsequent fine-tuning of supports and sanctions. Any frameworks for regulation or legislation must allow for constant revision in the face of changing technological capability, which is a key element of adaptive governance (Taeihagh, 2020). Through this engagement, policy workers can help to share information and better connect companies developing new technologies with service providers and users – thereby encouraging consideration of unintended consequences, needs and so forth. Through these activities, a shared vision can be crafted, which forms the basis for self- and peer-regulation. In future research we plan to empirically explore what such a regulatory response might comprise in relation to the types of tensions and complexities identified in this paper.

Ultimately we argue that the existing literature regarding the adoption and diffusion of disruptive technologies and the role of government is incomplete. Existing studies typically focus on the choice architecture governments face in selecting among different governance strategies in response to these technologies, and on why actors may or may not adopt them. We go beyond this in this paper, asking how, when adopting technologies, we might ensure their effectiveness. Once technologies are adopted, we need to understand how we can better structure the relationships between government, service providers, technology providers, professionals and those interacting with these technologies. Like any research, this study is not without limitations. The sample, drawn from two countries, is limited in size, and securing interviewees from other levels of government and care agencies, as well as those in receipt of services, would strengthen the research. We cannot claim any degree of broad generalisability in terms of those we spoke to. However, the intention of this work was to be exploratory, mapping out the major features of the terrain within a public policy and public management context against a background that is largely unpopulated by empirical data at present. To this extent, this research should be seen as the start of a conversation rather than an end point, asking more questions than it answers.

Conclusion

This project sought to explore the use of robotics in care services. Through interviews with a range of stakeholders we identified a range of challenges and complexities relating to the governance of these disruptive technologies. Each of these centres on the nature and boundaries of interaction between humans and robots within care settings. As is the nature of governance tensions, there are no clear answers – all responses carry benefits, risks and unintended consequences. In situations such as these, it is clear that governments need to play a strong governance and regulatory role, but less clear what exactly that role should be. This complexity is further compounded by the rapidly changing nature of technological development, which is currently intersecting with pressures within the care sector. This sets up a strong possibility that implementation in practice will far outpace knowledge and capacity to respond at different levels of government. As a result, we have argued that a responsive approach to governance and regulation is needed – one where government takes a ‘steering’ or ‘stewarding’ role, but utilises the power of networks to protect against harm.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was supported by the Australia and New Zealand School of Government.

Notes on contributors

Helen Dickinson

Professor Helen Dickinson is Director of the Public Service Research Group and Deputy Head of School (Research), School of Business, University of New South Wales, Canberra. Her research focuses on the implementation of policy, with particular interests in disability policy, primary health care and 4th industrial revolution technologies.

Catherine Smith

Catherine Smith is a lecturer at the Melbourne Graduate School of Education at the University of Melbourne, on the lands of the Wurundjeri peoples. Her research focuses on care ethics and social justice at the nexus of evidence-informed policy and practice, concentrating particularly on equity and wellbeing in the use of digital and physical technologies. In her 30 years as an educator, Catherine has developed and delivered school and tertiary level courses, as well as executive and professional education in policy reform, in Canada, Guinea-Bissau, the UK, Australia and Indonesia. She is committed to universal design, choice and equitable access in education in physical and digital spaces.

Nicole Carey

Nic Carey is a Principal Research Scientist at the Autodesk Robotics Laboratory. She obtained a BSc in Mathematical and Computer Sciences and a BEng in Mechatronic Engineering from the University of Adelaide, followed by a PhD in Biorobotics from the Australian National University. Her work focuses on biomimetics, adaptive and morphological control, and human-robot interaction and collaboration. Previous research appointments include the Centre of Excellence in Cognitive Interaction Technology (Bielefeld University) and the Designing Emergence Laboratory (Harvard University).

Gemma Carey

Professor Gemma Carey is the Director at the Centre for Social Impact, University of New South Wales. She works with governments and non-government organisations to identify and change structures and processes that impact inequality. Presently, her research is primarily focused on personalisation schemes and inequalities.

References

  • Australian Centre for Robotic Vision. (2018). A robotics roadmap for Australia 2018. Brisbane: Author.
  • Australian Human Rights Commission. (2018). Human rights and technology issues paper. Sydney: Author.
  • Blaikie, N. (2010). Designing social research (2nd ed.). Malden, MA: Polity.
  • Braithwaite, J. (2008). Regulatory capitalism: How it works, ideas for making it work better. Cheltenham: Edward Elgar.
  • Braithwaite, V. (2013). A regulatory approach for the Australian Charities and Not-for-profit Commission: A discussion paper. Regulatory Institutions Network Occasional Papers Series. Canberra: Australian National University.
  • Broadbent, E., Kerse, N., Peri, K., Robinson, H., Jayawerdena, C., Kuo, T., … MacDonald, B. A. (2016). Benefits and problems of health-care robots in aged care settings: A comparison trial. Australasian Journal on Ageing, 35(1), 23–29.
  • Busuioc, M. (2020). Accountable artificial intelligence: Holding algorithms to account. Public Administration Review. doi:10.1111/puar.13293
  • Byrne, G. (2018). Prevalence and psychological sequelae of sexual abuse among individuals with an intellectual disability: A review of the recent literature. Journal of Intellectual Disabilities, 22(3).
  • Calo, R. (2015). Robotics and the lessons of cyberlaw. California Law Review, 103(3), 513–563.
  • Carey, G., Dickinson, H., Malbon, E., & Reeders, D. (2018). The vexed question of market stewardship in the public sector: Examining equity and the social contract through the Australian national disability insurance scheme. Social Policy & Administration, 51(1), 387–407.
  • Coeckelbergh, M., & Stahl, B. (2016). Ethics of healthcare robotics. Robotics and Autonomous Systems, 86, 152–161.
  • Danaher, J. (2017). Should we be thinking about robot sex? In J. Danaher & N. McArthur (Eds.), Robot sex: Social and ethical implications (pp. 3–14). Cambridge: MIT Press.
  • Dickinson, H. (2018). The next industrial revolution? The role of public administration in supporting government to oversee 3D printing technologies. Public Administration Review, 70(6), 922–925.
  • Dickinson, H., Smith, C., Carey, N., & Carey, G. (2018). Robots and the delivery of care services: What is the role for government in stewarding disruptive innovation? Melbourne: ANZSOG.
  • Dickinson, H., & Smith, C., (forthcoming) COVID-19 and the rise of the care robots. in Armenia, A, Price-Glynn, K & Duffy, M. (Eds) Confronting the Global Care Crisis during COVID-19: Past Problems, New Issues, and Pathways to Change. New Brunswick: Rutgers University Press .
  • Flyvbjerg, B., Bruzelius, N., & Rothengatter, W. (2003). Megaprojects and risk: An anatomy of ambition. Cambridge: Cambridge Univerity Press.
  • Fraser, N. (2014). Transnationalizing the public sphere. Cambridge: Polity Press. Bibliographies Non-fiction Electronic document.
  • Glinsner, B., Sauer, B., Gaitsch, M., Otto, P., & Hofbauer, J. (2018). Doing gender in public services: Affective labour of emplyment agents. Gender, Work and Organisation, 26(7), 983–999.
  • Greener, J. (2015). Embedded neglect, entrenched abuse: Market failure and mistreatment in eldery residential care. In Z. Irving, M. Fenger, & J. Hudson (Eds.), Social policy review 27: Analysis and debate in social policy, 2015. (pp. 131–150). Bristol: Policy Press.
  • Guzman-Castillo, M., Ahmadi-Abhari, S., Bandosz, P., Capewell, S., Steptoe, A., Singh-Manoux, A., … Flaherty, M. O. (2017). Forecasted trends in disability and life expectancy in England and Wales up to 2025: A modelling study. The Lancet Public Health, 2(7), e307–e313.
  • Himmelstein, D. U., Woolhandler, S., Almberg, M., & Fauke, C. (2017). The U.S. health care crisis continues: A data snapshot. International Journal of Health Services, 48(1), 28–41.
  • Hodgkin, S., Warburton, J., Savy, P., & Moore, M. (2017). Workforce crisis in residential aged care: Insights from rural, older workers. Australian Journal of Public Administration, 76(1), 93–105.
  • Jeffares, S. (2021). The virtual public servant: Artificial intelligence and frontline work. Cham: Palgrave Macmillan.
  • Kallio, H., Pietilä, A.-M., Johnson, M., & Docent, M. K. (2016). Systematic methodological review: Developing a framework for a qualitative semi-structured interview guide. Journal of Advanced Nursing, 72(12), 2954–2965.
  • Kamal, A. H., Bull, J. H., Swetz, K. M., Wolk, S. O., Shanafelt, T. D., & Myers, E. R. (2017). Future of the palliative care workforce: Preview to an impending crisis. The American Journal of Medicine, 130(2), 113–114.
  • Low, J. (2013). Unstructured and semi-structured interviews in health research. In M. Saks & J. Allsop (Eds.), Researching health: Qualitative, quantitative and mixed methods. (pp. 87–105). London: Sage.
  • MacDorman, K. F., & Ishiguro, H. (2006). The uncanny advantage of using androids in cognitive and social science research. Interaction Studies, 7(3), 297–337.
  • Manchester, H. (2021). Co-designing technologies for care: Spaces of co-habitation. In A. Peine, B. L. Marshall, W. Martin, & L. Neven (Eds.), Socio-gerontechnology: Interdisciplinary critical studies of ageing and technology (pp. 213–227). Abingdon: Routledge.
  • Marmot, M. (2010). Fair society, healthy lives: The Marmot review. Strategic review of health inequalities in England post-2010. London: Institute for Health Equity.
  • Michalos, G., Makris, S., Tsarouchi, P., Guasch, T., Kontovrakis, D., & Chryssolouris, G. (2015). Design considerations for safe human-robot collaborative workplaces. Procedia CIRP, 37, 248–253.
  • Moyle, W., Jones, C. J., Murfield, J. E., Thalib, L., Beattie, E. R. A., Shum, D. K. H., … Draper, B. M. (2017). Use of a robotic seal as a therapeutic tool to improve dementia symptoms: A cluster-randomized controlled trial. Journal of the American Medical Directors Association, 18(9), 766–773.
  • Palinkas, L. A., Horwitz, S. M., Green, C. A., Wisdom, J. P., Duan, N., & Hoagwood, K. (2015). Purposeful sampling for qualitative data collection and analysis in mixed method implementation research. Administration and Policy in Mental Health and Mental Health Services Research, 42(5), 533–544.
  • Pearson, C., & Ridley, J. (2017). Is personalization the right plan at the wrong time? Re-thinking cash-for-care in an age of austerity. Social Policy & Administration, 51(7), 1042–1059.
  • Polygerinos, P., Correll, N., Morin, S. A., Mosadegh, B., Onal, C. D., Petersen, K., … Shepherd, R. F. (2017). Soft robotics: Review of fluid-driven intrinsically soft devices; manufacturing, sensing, control, and applications in human-robot interaction. Advanced Engineering Materials, 19(12).
  • Radu, R. (2020). AI governance: National, hybrid, ambiguous. Policy and Society, this issue.
  • Rayner, J., & Espinoza, D. E. (2016). Emotional labour under public management reform: An exploratory study of school teachers in England. The International Journal of Human Resource Management, 27(19), 2254–2274.
  • Richards, N., & Smart, W. (2016). How should the law think about robots? In A. Ryan Calo, M. Froomkin, & I. Kerr (Eds.), Robot Law (pp. 3–22). London: Edward Elgar.
  • Richardson, K. (2015). An anthropology of robots and AI: Annihilation anxiety and machines. New York: Routledge.
  • Robinson, F. (2011). The ethics of care: A feminist approach to human security. Global Ethics and Politics. Philadelphia: Temple University Press.
  • Schuelke-Leech, B.-A. (2018). A model for understanding the orders of magnitude of disruptive technologies. Technological Forecasting and Social Change, 129, 261–274.
  • Sharkey, A., & Sharkey, N. (2012). Granny and the robots: Ethical issues in robot care for the eldery. Ethics and Information Technology, 14(1), 27–40.
  • Smith, C., Dickinson, H., Carey, N., & Carey, G. (2021). The challenges and benefits of stewarding disruptive technology. In H. Sullivan, H. Dickinson, & H. Henderson (Eds.), The Palgrave handbook of the public servant. (pp. 1023–1038). Basingstoke: Palgrave Macmillan.
  • Smith, J. A. (1995). Semi-structured interviewing and qualitative analysis. In J. A. Smith, R. Harré, & L. Van Langenhove (Eds.), Rethinking methods in psychology (pp. 9–26). London: Sage.
  • Sparrow, R., & Sparrow, L. (2006). In the hands of machines? The future of aged care. Minds and Machines, 16(2), 141–161.
  • Strauss, A. (1987). Qualitative analysis for social scientists. Cambridge: Cambridge University Press.
  • Taeihagh, A. (2020). The governance of artificial intelligence and robotics. Policy and Society, this issue.
  • Tan, S., & Taeihagh, A. (2020). Governing the adoption of robotics and autonomous systems in long-term care in Singapore. Policy and Society, this issue.
  • Tronto, J. (2015). Democratic caring and global care responsibilities. In T. Brannelly, L. Ward, & N. Ward (Eds.), Ethics of care: Critical advances in international perspective (pp. 21–30). Bristol: Policy Press.
  • Tronto, J. C. (2013). Caring democracy: Markets, equality and justice. New York: New York University Press.
  • Turkle, S. (2017). Alone together: Why we expect more from technology and less from each other (3rd ed.). New York: Basic Books.
  • Ulnicane, I., Knight, W., Leach, T., Carsten Stahl, B., & Wanjiku, W. G. (2020). Emerging governance for artificial intelligence: Policy frames of government, stakeholders and dialogue. Policy and Society, this issue.
  • Ward, L., Ray, M., & Tanner, D. (2020). Understanding the social care crisis in England through older people’s lived experiences. In P. Urban & L. Ward (Eds.), Care ethics, democratic citizenship and the state (pp. 219–239). Basingstoke: Palgrave Macmillan.
  • Whitford, A. B., Yates, J., Burchfield, A., Anastasopoulos, J. L., & Anderson, D. M. (2020). The adoption of robotics by government agencies: Evidence from crime labs. Public Administration Review, 80(6), 976–988.