Guest Editorial

Making Progress in the Ethics of Digital and Virtual Technologies for Mental Health

This article refers to:
Is There an App for That?: Ethical Issues in the Digital Mental Health Response to COVID-19
Is Virtually Everything Possible? The Relevance of Ethics and Human Rights for Introducing Extended Reality in Forensic Psychiatry

I have been going backwards lately. Thinking about the march of progress in digital and virtual technologies to treat mental health and brain disorders has brought me back to research and reading I started over a decade ago now—on Attention Deficit/Hyperactivity Disorder (ADHD), on authenticity, on moral self-understanding. I am finding texts outside bioethics more useful again, specifically classic writing in the philosophy and sociology of science. I had set most of these texts aside some years ago, finding them irritatingly obscure and, in that inaccessibility, also clubby and elitist. Perhaps it is a rite of passage to return to formative texts after some years of distance. And for me, several years of focus on empirical work seems to have resurrected a thirst for theoretical underpinning.

In research with young girls who were diagnosed with Attention Deficit Disorder (ADD, without the “H” for hyperactivity), I discovered a trend in the interviews toward describing an imaginary place of natural sanctuary, fun, joy and creativity. Several of the girls called this place “Lala Land”; it was where they “went” when they had gone “away” during lessons or in other activities that demanded focus on specific tasks. Most of the young girls did not enter and exit Lala Land intentionally, and that was a source of distress for them. But if they could have control over their entrances and exits (and stimulant medication did afford some girls this level of control), they would want Lala Land to be a part of their daily lives. Paradoxically, with or without medication, Lala Land felt like an empowering place. The girls rejoiced in the ability to make decisions about the architecture of the landscape and the ecosystem of humans, flora and fauna.

It struck me, watching my daughters play the game “Minecraft,” that the designers of this game had rather perfectly captured this desire to be the architect of one’s own ecosystem. Amartya Sen (2009) argues that the capacity to be an architect of a world, in this way, is meaningless without opportunity; hence his emphasis on “capabilities.” When I read Skorburg and Yam’s (2022) helpful coverage of the lack of evidence for digital mental health interventions, I wonder whether anything might change if young people themselves were allowed to design the world of the intervention and to practice the cognitive and affective skills they are thought to lack in settings that offer valued rewards and align with their preferences and priorities.

For example, much has been written about “Autcraft” (https://www.acamh.org/blog/minecraft-young-people-autism/)—the use of Minecraft among young people with autism, which is, to my understanding, created, moderated and governed by people with autism for use by community members. It also encourages peer-to-peer support and the mutual working out of problems in the game, which aligns with a preference many young people have for mental health support from peers over clinicians (Repper and Carter 2011). Of course, this approach carries considerable risks as well, having to do with the competencies and capacities of a peer to provide mental health support to a child with significant needs. But it seems to me that the digital world offers an opportunity at least to try carefully designed and properly safeguarded approaches that democratize “expertise” in mental health support (Singh et al. 2020). One example in the UK is the online message board Childline, created by the National Society for the Prevention of Cruelty to Children (https://www.nspcc.org.uk/keeping-children-safe/our-services/childline/). Childline offers anonymous young users a place to post and to support others who post, with transparent rules and light-touch moderation by adults to ensure that appropriate referral to services is made if needed. Thousands of children use the Childline message boards and find the peer support helpful both practically and in the context of their mental health.

I also admit to finding the potential of technologies such as virtual and extended reality (VR and XR) exciting, insofar as there is further opportunity to allow young people to practice moral, behavioral and cognitive capacities in purpose-built worlds that they design. We wrote about the potential for bioethics research (which we call “design bioethics”) using purpose-built engineered tools (Pavarini et al. 2021), and here there is a further potential that such tools can also be used as interventions that empower young people and promote their mental health and wellbeing. The potential risks in specific groups, such as incarcerated patients, are important to identify and consider, as Ligthart et al. (2022) note, for reasons of history, of ethics, and of the persistence of deep social inequalities. At the same time, it is important not to use the label of “vulnerability” as a way of too quickly discounting the potential for individuals to make informed and positive decisions to utilize technologies in ways that enable them to live a good life. The human rights perspective taken by Ligthart et al. is valuable because it frames vulnerability through the lens of human dignity and respect for autonomy and therefore requires careful consideration of the barriers that the label “vulnerable” can erect (Palk et al. 2020).

The issue of vulnerability in children is often raised in the context of child and adolescent mental health interventions. I wonder: does a child in Lala Land have this thing called “cognitive liberty,” insofar as it is understood to mean a mind or brain free of technological interference that would change emergent thoughts, feelings and behaviors? One answer might be “no”—not if the child’s cognitive capacities without medication mean that she is unable to exercise the sort of cognitive control that she would find valuable—such as, for example, the ability to enter and exit Lala Land intentionally. Is medication an analytic equivalent to an XR headset?

This line of thought returns me to an old chestnut: what is the difference between education, which is designed to intervene on children’s cognitive, behavioral and emotional capacities and futures, and the various technologies we use to intervene on these capacities in medicine and psychology? We used to say that those technologies that bypass the control of the human agent—such as deep brain stimulation—were the ones to worry about most. Leaving aside the question of whether such technologies are an exceptional case, I have come to think that we need a more complex theorization of the human-technology relationship on which to hang our ethics, if ethics is to stay relevant. This is where my reading in the philosophy and sociology of science returns to me.

Scholars like Donna Haraway have long argued that we should re-frame the relationship between humans and technology, as in the challenging “Cyborg Manifesto” (Haraway 1985). The sociologist Alan Prout (2004) argues that children should be seen as akin to techno-humans. From these perspectives, the important ethical analysis is not what technology is “doing to” an apparently autonomous human mind or brain. Rather, we should start with an assumption of a “hybrid of machine and organism,” and ethical analysis should focus on the conditions—individual, social and structural—that give rise to an ability to be a moral agent and to be treated as having moral worth. According to Judith Butler (2005), this ethical analysis must grapple with (what she views as) the fact that we are not free in the way that presumptions about “cognitive liberty” suggest. Butler writes:

[N]o “I” belongs to itself. From the outset, it comes into being through an address I can neither recall nor recuperate, and when I act, I act in a world whose structure is in large part not of my making—which is not to say that there is no making and no acting that is mine. There surely is. It means only that the “I,” its suffering and acting, telling and showing, take place within a crucible of social relations… [a]nd when we do act and speak, we not only disclose ourselves but act on the schemes of intelligibility that govern who will be a speaking being, subjecting them to rupture or revision, consolidating their norms, or contesting their hegemony. (Butler 2005, 132)

Perhaps this provocation, which has taken me years to begin to understand, helps to advance our field’s thinking about a neuroethics for cyborgs, which started over two decades ago (Wolpe 2004). A key challenge, it seems to me, is how to retain the important emphasis on the “integrity” of the individual while also tackling its limitations. The assumption of the brain, or indeed the mind, as a fixed and isolated entity, with a self arising as an independent emergent construct, is not well aligned with the connectedness that is inherent in “techno-human” development, or in the embodied multiplicities that interactions across virtual, digital and “real” spaces demand, or in the requirement to make transparent the epistemological and structural limitations that form our being in the world.

Additional information

Funding

The author(s) reported there is no funding associated with the work featured in this article.
