Nodes of certainty and spaces for doubt in AI ethics for engineers

Pages 37-53 | Received 01 Jun 2021, Accepted 29 Nov 2021, Published online: 04 Jan 2022
ABSTRACT

Discussions about AI development frequently bring up the question of ethics because it is difficult to predict how technological decisions might play out once AI systems are implemented and used in the world. Engineers of AI systems are increasingly expected to go beyond the traditions of requirement specifications, taking into account broader societal contexts and their complexities. In this paper we present findings from a hackathon event conducted with working engineers, exploring the gaps between existing guidelines and recommendations for addressing ethical issues with respect to AI technologies and the realities experienced by the engineers in practice. We found that when faced with the uncertainties of how to recognize and navigate ethical issues and challenges, engineers looked to identify the responsibilities that need to be in place to sustain trust and to hold the relevant parties to account for their misdeeds. We re-envision familiar engineering practices as nodes of certainty to accommodate the needs of responsible and ethical AI. Despite the desire for mechanisms for sustaining certainty in how to build AI technology responsibly by providing frameworks for action and accountability, there remains a need to ensure just enough spaces and opportunities to cultivate reasonable doubt. Space and capacity to doubt accepted certainties, in fact, is the very process of ethics, necessary for holding to account our standards, guidelines, and checklists as technology and society co-evolve.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Notes

1 While there are many definitions of AI, here we follow the terminology put forth in the IEEE Ethically Aligned Design report as a pragmatic solution as this was the point of departure for the discussions on which this paper is based.

2 The term ‘hackathon’ here refers to the specific name of the event “AI Hackathon: Ethical Dilemmas in AI” and our efforts to “hack ethics in AI” in the design of the event.

3 Although the term nezavershennost is typically translated from Russian as unfinalizability, it can also be translated as unfinishedness, which is perhaps a better translation for our purposes, implying that the work of ethics can never be ‘finished’ as it were.

4 The hackathon goals and structure are explained in detail in the hackathon report ‘Addressing Ethical Dilemmas in AI: Listening to Engineers’ created by the authors with the participation of partner organizations and accessible at https://nordicengineers.org/2021/01/listen-to-the-engineers/

Additional information

Funding

This work was funded by the Association of Nordic Engineers with support from IEEE, and the Data-Ethics ThinkDo Tank.

Notes on contributors

Irina Shklovski

Irina Shklovski is Professor of Communication and Computing in the Department of Computer Science and the Department of Communication at the University of Copenhagen. Her main research areas include speculative AI futures, responsible and ethical technology design, online data leakage, information privacy, creepy technologies and the sense of powerlessness people experience in the face of massive personal data collection.

Carolina Némethy

Carolina Némethy is a PhD fellow in visual anthropology at the Arctic University of Norway. She studies how technologies impact human lives as well as the natural environment. Her work at the University of Copenhagen engaged in an in-depth, ethnographic investigation of AI developers' ethical dilemmas, fostering discussions through which technologists could put ethics into practice, as finalized in the report Addressing Ethical Dilemmas in AI: Listening to Engineers.
