ABSTRACT
This paper examines ways in which the ethics of data-driven technologies might be (re)politicised, particularly where educational institutions are involved. Principles, guidelines, and frameworks for ethical ‘AI’ (artificial intelligence) have proliferated across a plethora of organisations in recent years, and seem poised to impact educational governance. The paper firstly shows that this trend aligns with a narrow form of ethics - deontology - and overlooks other potential ways ethical reasoning might contribute to thinking about ‘AI’. Secondly, it suggests that this attention to ethical principles focuses excessively on the technology itself, with the effect of masking political concerns for equity and justice. Thirdly and finally, the paper proposes a more radical form of participation in ethical decision-making that not only challenges the assumption of universal consensus, but also draws more authentically on the capacities for debate, contestation, and exchange inherent in the educational institution.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Notes
1 See: https://www.rolls-royce.com/sustainability/ethics-and-compliance/the-aletheia-framework.aspx.
2 See: http://lcfi.ac.uk/about/.
3 See: https://ainowinstitute.org/.
4 See: https://datasociety.net/.
6 See: https://www.eismd.eu/ai4people/.
9 See: https://algorithmwatch.org/en/.
14 Floridi and Cowls (2019) suggest this process had substantial impact on the principles and guidelines developed by the European Commission and OECD, through the work of the AI4People coalition.
15 See: https://www.just-ai.net/.
16 See: https://blogit.itu.dk/virteuproject/.