
Governing AI through ethical standards: learning from the experiences of other private governance initiatives

Graeme Auld, Ashley Casovan, Amanda Clarke & Benjamin Faveri

ABSTRACT

A range of private actors are positioning varied public and private policy venues as appropriate for defining standards governing the ethical implications of artificial intelligence (AI). Three ideal-type pathways – oppose and fend off; engage and push; and lead and inspire – describe distinct sets of corporate and civil society motivations and actions that lead to distinct roles for, and relations between, private actors and states in AI governance. Currently, public-private governance interactions around AI ethical standards align with an engage and push pathway, potentially benefitting certain first-mover AI standards through path-dependent processes. However, three sources of instability – shifting governance demands, focusing events, and localisation effects – are likely to drive continued proliferation of private AI governance initiatives that aim to oppose and fend off state interventions or to inspire and lead redefinitions of how AI ethics are understood. A pathways perspective uniquely uncovers these critical dynamics for the future of AI governance.

Acknowledgements

We thank Carleton University for funding Benjamin Faveri’s research assistantship for this project. Helpful comments on the paper came from Wendy Wong and Josh Gellers. We also benefited from excellent feedback and suggestions from the special issue editors and three external referees.

Disclosure statement

Ashley Casovan and Benjamin Faveri work at the Responsible AI Institute.

Notes

1 A basic definition of artificial intelligence (AI) contrasts AI with humans’ ‘natural intelligence’. In this definition, ‘intelligence’ refers to an agent’s ability to perceive its environment and rationally respond to inputs from that environment, and to then make decisions that maximize the agent’s likelihood of achieving some desired end (Russell & Norvig, 2003).

Additional information

Funding

This work was supported by Carleton University, Faculty of Public Affairs [grant number Public Affairs Research Excellence Chair].

Notes on contributors

Graeme Auld

Graeme Auld is Professor at Carleton University's School of Public Policy and Administration.

Ashley Casovan

Ashley Casovan is the Executive Director at the Responsible AI Institute.

Amanda Clarke

Amanda Clarke is Associate Professor at Carleton University's School of Public Policy and Administration.

Benjamin Faveri

Benjamin Faveri is a Research and Policy Analyst at the Responsible AI Institute.
