
Governing AI through ethical standards: learning from the experiences of other private governance initiatives

Graeme Auld, Ashley Casovan, Amanda Clarke & Benjamin Faveri
Pages 1822-1844 | Published online: 22 Aug 2022

ABSTRACT

A range of private actors are positioning varied public and private policy venues as appropriate for defining standards governing the ethical implications of artificial intelligence (AI). Three ideal-type pathways – oppose and fend off; engage and push; and lead and inspire – describe distinct sets of corporate and civil society motivations and actions that lead to distinct roles for, and relations between, private actors and states in AI governance. Currently, public-private governance interactions around AI ethical standards align with an engage and push pathway, potentially benefitting certain first-mover AI standards through path-dependent processes. However, three sources of instability – shifting governance demands, focusing events, and localisation effects – are likely to drive continued proliferation of private AI governance initiatives that aim to oppose and fend off state interventions or to lead and inspire redefinitions of how AI ethics are understood. A pathways perspective uniquely uncovers these critical dynamics for the future of AI governance.

Acknowledgements

We thank Carleton University for funding Benjamin Faveri’s research assistantship for this project. Helpful comments on the paper came from Wendy Wong and Josh Gellers. We also benefited from excellent feedback and suggestions from the special issue editors and three external referees.

Disclosure statement

Ashley Casovan and Benjamin Faveri work at the Responsible AI Institute.

Notes

1 A basic definition of artificial intelligence (AI) contrasts AI with humans’ ‘natural intelligence’. In this definition, ‘intelligence’ refers to an agent’s ability to perceive its environment and rationally respond to inputs from that environment, and to then make decisions that maximize the agent’s likelihood of achieving some desired end (Russell & Norvig, 2003).

Additional information

Funding

This work was supported by Carleton University, Faculty of Public Affairs [grant number Public Affairs Research Excellence Chair].

Notes on contributors

Graeme Auld

Graeme Auld is Professor at Carleton University's School of Public Policy and Administration.

Ashley Casovan

Ashley Casovan is the Executive Director at the Responsible AI Institute.

Amanda Clarke

Amanda Clarke is Associate Professor at Carleton University's School of Public Policy and Administration.

Benjamin Faveri

Benjamin Faveri is a Research and Policy Analyst at the Responsible AI Institute.
