Research Article

Exposed by AIs! People Personally Witness Artificial Intelligence Exposing Personal Information and Exposing People to Undesirable Content

Daniel B. Shank & Alexander Gott

Pages 1636-1645 | Published online: 25 May 2020

ABSTRACT

Do people personally witness artificial intelligence (AI) committing moral wrongs? If so, what kinds of moral wrongs, and what situations produce them? To address these questions, respondents selected one of six prompt questions, each based on a moral foundation violation, asking about a personally witnessed interaction with an AI that resulted in a moral victim (victim prompts) or in which the AI seemed to engage in immoral actions (action prompts). Respondents then answered their selected question in an open-ended response. In conjunction with the liberty/privacy and purity moral foundations, and across both victim and action prompts, respondents most frequently reported moral violations as two types of exposure by AIs: their personal information being exposed (31%) and people's exposure to undesirable content (20%). AIs expose people's personal information to their colleagues, close relations, and online due to information sharing across devices, people in the proximity of audio devices, and simple accidents. AIs expose people, often children, to undesirable content such as nudity, pornography, violence, and profanity due to their proximity to audio devices and to seemingly purposeful action. We argue that the prominence of these types of exposure in reports may be due to their frequent occurrence on personal and home devices. This suggests that research on AI ethics should focus not only on prototypically harmful moral dilemmas (e.g., an autonomous vehicle deciding whom to sacrifice) but also on everyday interactions with personal technology.

Acknowledgments

We want to thank Timothy Maninger, Patrick Gamez, Sophie Rodriguez, Christopher Graves, and Katherine Frisbee for contributions to the study design, data collection, coding, and document editing and Linda Francis for her advice on qualitative research practices. This research was supported by the Army Research Office under Grant Number W911NF-19-1-0246. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Office or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.

Disclosure of potential conflicts of interest

We have no conflicts of interest.

Additional information

Notes on contributors

Daniel B. Shank

Daniel B. Shank is an assistant professor in the Department of Psychological Science at the Missouri University of Science and Technology, specializing in social psychology and technology. He obtained a BA in Computer Science from Harding University and, from the University of Georgia, an MS in Artificial Intelligence and an MA and PhD in Sociology. Dr. Shank held postdoctoral research fellowships at the University of Alabama at Birmingham and at the University of Melbourne (Australia). His research interests include perceptions of and social interactions with nonhumans, including artificial intelligence agents and groups of people. He studies morality, mind attributions, affective impressions, emotions, social interactions, and behavioral reactions, and how these processes differ between AIs and humans and on human-AI teams. He has published over 20 articles in psychology, sociology, and behavioral science and technology journals and currently holds grants from the Army Research Office and the Leonard Wood Institute to study affective and moral perceptions of and interactions with AIs.

Alexander Gott

Alexander Gott is currently an undergraduate at the Missouri University of Science and Technology, pursuing a B.S. in Psychology with a focus on cognitive neuroscience. His current research interests include AI and its link to morality, perceptions of mind in artificial entities, and human-AI teams.

