Open Peer Commentaries

Shouldn’t Our Virtual Avatars be Granted Human Rights Too?

This article refers to:
Is Virtually Everything Possible? The Relevance of Ethics and Human Rights for Introducing Extended Reality in Forensic Psychiatry

Sjors Ligthart et al.’s (Citation2022) article is timely and full of important points. To begin with, the article includes a valuable discussion on the boundaries of the relationship between authority (legal, political and medical) and the application of different degrees of coercion to individuals within a prison setting. I consider the legal excursus the authors provide on the human rights documentation behind the duty to facilitate resocialization interesting and praiseworthy (though from a different angle, I have touched upon related themes elsewhere; Garasic Citation2015, Citation2017), yet my focus here will follow a different path. Their article brings forth a number of fascinating questions revolving around the use of XR, with a particular emphasis on how such “immersive” simulations could function as tools to reintegrate forensic patients into society. In other words, Ligthart et al. try to show how a virtual world might function as the perfect environment in which to “test” new ways of helping people suffering from mental issues (or simply having been imprisoned as a result of crimes) find means of reintegration into society, hopefully contributing to improving their existential condition as well as that of society more broadly. Despite my sympathy toward this possible implementation of the technology, here I want to stress a possible “negative spin-off” of such a portrayal within a forensic setting. The more we affirm that XR represents a close representation of reality, the more we must deal with a variable that sees an additional problem, not a solution, arising from the use of XR: if someone’s avatar is molested in the XR world, does that not constitute an infringement of human rights? If so, how and why should such an infringement be conceptualized differently from an infringement occurring in the “real world”?

IS THERE ANY HARM IN THE VIRTUAL WORLD?

Let me explain my point by looking closely at the article. At the very beginning, when describing the advantages of using XR for different types of needs and situations, with special emphasis on the forensic setting, the authors write:

“the possibility of creating contexts in which people may safely learn to cope with their feelings and behavior: nobody is harmed when the patient responds, e.g. aggressively.” (Ligthart et al. Citation2022, 144)

Claiming that the virtual reenactment of certain “risky” scenarios might help us address the [mental] problem without an actual breaking of the law may be tempting at first, but the matter is not as easy to settle as one might think. When linked to virtual avatars that have some kind of connection with real people, the affirmation that nobody is harmed when a patient misbehaves in the XR world can be challenged, and I want to do so by imagining two scenarios: one in which the interaction is with avatars governed by human counterparts and one in which they are not.

VIRTUAL AVATARS CARRY THE SAME HUMAN RIGHTS AS THEIR HUMAN COUNTERPARTS

I doubt that the authors would be comfortable claiming an absence of violence in a situation where the avatar of a child (linked to a child of a similar age in the real world) were harassed by a virtual adult in a supermarket or on a bus, for example. The reason is that we would expect a mental continuity between our virtual and real selves, one that makes our digital experiences directly connected to our psychological state in a back-and-forth relationship that is ever more common in our life experience. The onlife that Luciano Floridi (Citation2015) has helped us conceptualize would thus be embodied in a continuum between the two worlds (I have discussed elsewhere the possibility of imagining different responses for different worlds; Garasic Citation2021), which should push us to be very careful in properly screening what we allow to occur in XR, as this is not a detached world in which no one gets hurt, but rather one in which we have not yet fully understood how to detect and process our traumas and damages. One of the best ways of improving on that front, of course, is to start paying attention to virtual crimes that closely resemble their parallel versions occurring in the real world. A very relevant, and disturbing, example of this is that of virtual sexual assault (VSA): a criminal trend that started a long time ago (note 1) and that, sadly, is likely to continue in the coming years.

Out of the many challenges encountered by experts on the topic, I find it of particular relevance for this investigation that some scholars have already pointed out how some jurisdictions have “misogynized” the whole sphere of VSA by defining rape with explicit reference to male genital organs. For example, building on previous accounts (Strikwerda Citation2015), John Danaher interestingly stresses some of the legal outcomes of the attempts to codify and consistently refer to rape when addressing it in the virtual world (even if perpetrated by actual persons in the real world through the use of virtual avatars). He writes:

At the same time, Strikwerda’s definition of rape does not share the anatomical obsessions of some jurisdictions and so will seem misleading to many. For example, in England and Wales, rape is explicitly defined, in Section 1 of the Sexual Offences Act 2003, as non-consensual “penetration of the vagina, anus, or mouth” by the penis. Anything that does not involve this penetrative interference is not counted as “rape.” […] What I suggest then is a broader definition of virtual sexual assault: An unwanted, forced, or nonconsensual sexually explicit behaviour performed by virtual characters, to one another, acting through representations in a virtual environment. (Danaher, 366)

Notwithstanding the importance of addressing this angle of the debate, including the possibility of quantifying, as far as possible, the variables of damage that could occur in the real world through the use of avatars online (going beyond “only” VSA to include torture, bullying, and all the violent crimes we could imagine), my interest is to stress that we should also pay attention to virtual avatars that do not refer to actual people in the real world.

VIRTUAL AVATARS REQUIRE VIRTUAL RIGHTS WHEN GOVERNED BY AI

Expanding from the first scenario, we must be ready to foresee situations of VSA in which our avatars could suffer or commit rapes that do not follow our standard patterns of conceptualizing sexual intercourse. We can imagine humanized virtual avatar birds, for example, with an entirely different sexual apparatus, and hence a possible breach of autonomy that would need to be defined differently. The absence of a penis would make penile penetration (going back to the statutory definition of rape quoted above) impossible to conceptualize in that scenario, but not the willingness to abuse someone else.

Through the very neurological screening that the authors mention, we might imagine an inmate finding himself channeling his impulsive tendencies to sexually assault another individual in new forms that he would not (for physiological reasons) have been able to perform in the real world. Even if the authors do not seem to fully share the call for the introduction of new human rights derived from the specific progress made by neuroscience (Ienca and Andorno Citation2017), my claim is that even in situations where the avatar drastically differed in its biology and was governed by AI, we should grant it some “virtual rights” in line with the framework put forward by some proponents of robot rights.

In particular, I have in mind social-relational approaches to robot rights such as those of Mark Coeckelbergh (Citation2021), for whom robots’ moral relevance is based on the value that humans give to them, and David Gunkel (Citation2018), for whom the establishment of robot rights should be based on the social response that people have to robots. The parallel would be that we should introduce some “virtual rights” to protect AI-driven avatars, because torturing, raping, killing or discriminating against them in XR could tell us something deeply negative about our “real” inclinations toward the other, be it human or otherwise.

CONCLUSION

Like any other technology, XR has the potential to greatly improve our condition, but it also carries risks that need to be addressed with caution. The digital world is no exception, and with its constant expansion into our real world, the boundaries of what we must reconceptualize as given or unquestionable are changing at an increasingly fast pace. Accordingly, we need to scrutinize closely some of our certainties, including who and what can exercise or be owed human rights, and how, even when that regards a non-human, virtual avatar. It might be necessary to start thinking about the possibility of granting AI-driven virtual avatars some “virtual rights,” using a theoretical framework similar to the one that has been used to argue in favor of robot rights.

Acknowledgment

I would like to thank Nicole Daniel for her insights and comments on the first draft of this commentary.

Additional information

Funding

Funded by Roma Tre University, Department of Educational Sciences publishing funds.

Notes

1 Probably the first record of such a crime is the one that occurred in the online game Second Life, reported by Benjamin Duranske (Citation2007).

REFERENCES