Target Article

Artificial Intelligence, Social Media and Depression. A New Concept of Health-Related Digital Autonomy

Abstract

The development of artificial intelligence (AI) in medicine raises fundamental ethical issues. As one example, AI systems in the field of mental health successfully detect signs of mental disorders, such as depression, by using data from social media. These AI depression detectors (AIDDs) identify users who are at risk of depression prior to any contact with the healthcare system. The article focuses on the ethical implications of AIDDs for affected users’ health-related autonomy. Firstly, it presents the (ethical) discussion of AI in medicine and, specifically, in mental health. Secondly, two models of AIDDs using social media data and different usage scenarios are introduced. Thirdly, the concept of patient autonomy, according to Beauchamp and Childress, is critically discussed. Since this concept does not sufficiently capture the specific challenges of the digital context in which AIDDs operate on social media, the analysis finally proposes an extended concept of health-related digital autonomy.

This article is referred to by:
Error, Reliability and Health-Related Digital Autonomy in AI Diagnoses of Social Media Analysis
Consultation with Doctor Twitter: Consent Fatigue, and the Role of Developers in Digital Medical Ethics
Is Health-Related Digital Autonomy Setting the Autonomy Bar Too High?
Health-Related Digital Autonomy. A Response to the Commentaries
The Right to Contest AI Profiling Based on Social Media Data
AIDD, Autonomy, and Military Ethics
Four Stages in Social Media Network Analysis—Building Blocks for Health-Related Digital Autonomy in Artificial Intelligence, Social Media, and Depression
Health-Related Digital Autonomy: An Important, But Unfinished Step
The Coercive Potential of Digital Mental Health
A New Type of 'Greenwashing'? Social Media Companies Predicting Depression and Other Mental Illnesses

ACKNOWLEDGMENTS

We thank the anonymous reviewers for their valuable comments and suggestions.

DISCLOSURE STATEMENT

The authors declare that they have no conflicts of interest.

Notes

1 According to Bringsjord and Govindarajulu (2020, 42), ML can be defined as follows: “Machine learning is concerned with building systems that improve their performance on a task when given examples of ideal performance on the task, or improve their performance with repeated experience on the task.” Artificial neural networks are understood in this article as a non-logicist, neuro-computational approach in ML that aims at formally representing the neural structure of biological brains to build ‘deep’ learning systems (Bringsjord and Govindarajulu 2020, 35–36).
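To make the quoted ML definition concrete, the following is a minimal, purely illustrative sketch of a supervised text classifier that “improves with examples” of labelled social media posts. The posts, labels, and model choice are invented assumptions for illustration only; they are not clinical data, and the AIDDs discussed in the article rely on far more complex systems (e.g., deep neural networks and multimodal features).

```python
# Minimal sketch: a classifier that improves as it is given labelled examples,
# which is the core of the ML definition quoted in footnote 1.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples: 1 = post flagged as depression-related, 0 = not.
posts = [
    "I can't sleep and nothing feels worth doing anymore",
    "feeling empty again, skipped work all week",
    "great hike with friends today, the weather was perfect",
    "excited about the new project starting on Monday",
]
labels = [1, 1, 0, 0]

# TF-IDF features plus logistic regression; performance on the task improves
# as more labelled examples are supplied during fitting.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# Screening an unseen post yields a risk score prior to any clinical contact,
# which is where the article locates the autonomy problem.
print(model.predict_proba(["lately everything feels pointless"])[0][1])
```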

2 Beauchamp and Childress were concerned with both patients and research subjects when developing their concept of autonomy. Since the perspective of research ethics is not the major topic of this paper, the following discussion refers to patients. For ethical issues regarding the use of social media data in the context of research, such as breaches of privacy, decision-making processes and informed consent, see especially Torous, Ungar, and Barnett (2019), Nebeker, Ellis, and Torous (2019), and Wilbanks (2018).
