Public preferences for governing AI technology: Comparative evidence

 

ABSTRACT

Citizens’ attitudes concerning aspects of AI such as transparency, privacy, and discrimination have received considerable attention. However, it is an open question to what extent economic consequences affect preferences for public policies governing AI. When does the public demand imposing restrictions on – or even prohibiting – emerging AI technologies? Do average citizens’ preferences depend causally on both normative and economic concerns, or only on one of these? If both, how might economic risks and opportunities interact with assessments based on normative factors? And to what extent does the balance between the two kinds of concerns vary by context? I answer these questions using a comparative conjoint survey experiment conducted in Germany, the United Kingdom, India, Chile, and China. The data analysis suggests strong effects regarding AI systems’ economic and normative attributes. Moreover, I find considerable cross-country variation in normative preferences regarding the prohibition of AI systems vis-à-vis economic concerns.

Acknowledgements

I thank the special issue editor and the reviewers for their insightful, constructive comments and criticism. I also thank the organizers and participants of the TUM Workshop on Governance of Artificial Intelligence for their valuable feedback. This research was conducted through the Nuffield College Centre for Experimental Social Sciences (CESS). The author gratefully acknowledges the support of CESS in conducting this research.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Correction Statement

This article has been republished with minor changes. These changes do not impact the academic content of the article.

Notes

1 For a definition, I follow Nitzberg & Zysman in this special issue: ‘AI is technology that uses advanced computation to perform at human cognitive capacity in some task area’ (Nitzberg & Zysman, 2021, p. 4). This definition is closely related to Büthe et al., 2022. Specifically, in this study I also refer to ‘AI systems’, which ‘[…] carry out a wide variety of tasks with some degree of autonomy, i.e., without simultaneous, ongoing human intervention. Their capacity for learning allows AI systems to solve problems and thus support, emulate or even improve upon human decisionmaking […]’ (Büthe et al., 2022). AI systems thus possess the growing potential to perform both routine and non-routine, complex tasks (OECD, 2019, Ch. 3).

2 This type of experiment usually involves a choice between two alternatives with multiple, randomized characteristics.
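The forced-choice design described in this note can be sketched in a few lines of code. This is an illustrative sketch only: the attribute names and levels below are hypothetical stand-ins, not the study’s actual instrument. Each attribute level is drawn uniformly and independently, matching the note that no attribute-value combinations were excluded.

```python
import random

# Hypothetical attributes and levels for illustration only;
# the study's actual attributes are not reproduced here.
ATTRIBUTES = {
    "transparency": ["decisions explained", "decisions not explained"],
    "privacy": ["data shared with consent", "data shared without consent"],
    "employment effect": ["more hiring", "workers replaced"],
}

def draw_profile(rng: random.Random) -> dict:
    """Draw one AI-system profile: each attribute level is sampled
    uniformly and independently (no combinations are excluded)."""
    return {attr: rng.choice(levels) for attr, levels in ATTRIBUTES.items()}

def conjoint_task(rng: random.Random) -> tuple:
    """One task: two fully randomized profiles shown side by side;
    the respondent then chooses between them."""
    return draw_profile(rng), draw_profile(rng)

rng = random.Random(0)  # seeded for reproducibility
left, right = conjoint_task(rng)
```

In a fielded survey, many such tasks would be generated per respondent and the chosen profile recorded, allowing estimation of average marginal component effects for each attribute level.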

3 Imagine a workplace AI application. Version 1 results in productivity gains, leading to more hiring; it also reports the worker’s activity to third parties without consent. Version 2 reports the same information but with prior consent; it leads to similar productivity gains, but those gains lead the employer to replace the worker. While both scenarios concur with common findings in research on the future of work (MacCarthy, 2019; OECD, 2019, Ch. 3), the preference of average citizens is unknown.

4 Going forward, explainability will be interpreted through its result: transparency.

5 I am agnostic about the precise consequences of the application domain for prohibition preferences.

6 A strict ordering using the post-materialism index by Inglehart, based on data from 1970 to 2006, would yield the following order, ranging from higher to lower values on the index: UK, Germany, Chile, India, China.

7 For example, variation in socially consequential algorithms, such as China’s social credit system, or in general attitudes regarding government regulation of AI.

8 All attributes were uniformly distributed in this study, since the attribute value combinations are not mutually exclusive.

9 The model is estimated for the fully pooled sample to assess whether there are profile order effects (Figure 6 in the Online Appendix).

10 The finding does not rule out the possibility of differences of smaller magnitude.

Additional information

Funding

This work was supported by the Centre for Experimental Social Sciences (CESS), Nuffield College. The author acknowledges support from the Swiss National Science Foundation (grant no. 100018185417/1).

Notes on contributors

Soenke Ehret

Soenke Ehret is a Postdoc at the University of Lausanne, Faculty of Business and Economics, Quartier de Chamberonne, Internef, 1015 Lausanne.
