ABSTRACT
AI systems are harming people. Harms such as discrimination and manipulation are reported in the media, which is currently the primary source of information on AI incidents. Reporting AI near misses, and learning how serious incidents were prevented, would help avoid future incidents. The problem is that ongoing efforts to catalog AI incidents rely on media reports, and such reporting does not prevent incidents. Developers, designers, and deployers of AI systems should instead be incentivized to report and share information on near misses. Such an AI near-miss reporting system does not have to be designed from scratch: the aviation industry's voluntary, confidential, and non-punitive approach to near-miss reporting can serve as a guide. AI incidents are accumulating, and the sooner such a reporting system is established, the better.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Funding
This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.
Additional information
Notes on contributors
Kris Shrishak
Kris Shrishak is a senior fellow at the Irish Council for Civil Liberties (ICCL), where he works on technology policy with a focus on privacy and algorithmic decision making. Previously, he was a researcher at the Technical University of Darmstadt in Germany, where he worked on applied cryptography, privacy-enhancing technologies, and Internet security. Website: https://krisshrishak.de