Research Article

A participatory data-centric approach to AI Ethics by Design

Article: 2009222 | Received 03 Jun 2021, Accepted 17 Nov 2021, Published online: 08 Dec 2021

References

  • Aizenberg, E., and J. van den Hoven. 2020. Designing for human rights in AI. Big Data & Society 7 (2):2053951720949566. doi:10.1177/2053951720949566.
  • Ananny, M. 2016. Toward an ethics of algorithms: Convening, observation, probability, and timeliness. Science, Technology, & Human Values 41 (1):93–117. doi:10.1177/0162243915606523.
  • Aragon, C., C. Hutto, A. Echenique, B. Fiore-Gartland, Y. Huang, J. Kim, G. Neff, W. Xing, and J. Bayer. 2016. Developing a research agenda for human-centered data science. Proceedings of the 19th ACM Conference on Computer Supported Cooperative Work and Social Computing Companion, 529–35. doi:10.1145/2818052.2855518.
  • Bender, E. M., and B. Friedman. 2018. Data statements for natural language processing: Toward mitigating system bias and enabling better science. Transactions of the Association for Computational Linguistics 6:587–604. doi:10.1162/tacl_a_00041.
  • Bødker, S., and M. Kyng. 2018. Participatory design that matters – Facing the big issues. ACM Transactions on Computer-Human Interaction 25 (1):4:1–4:31. doi:10.1145/3152421.
  • Bozdag, E., and J. van den Hoven. 2015. Breaking the filter bubble: Democracy and design. Ethics and Information Technology 17 (4):249–65. doi:10.1007/s10676-015-9380-y.
  • Brey, P., and B. Dainow. 2021. Ethics by design and ethics of use in AI and robotics. The SIENNA project - Stakeholder-informed ethics for new technologies with high socio-economic and human rights impact. Accessed April 26, 2021. https://www.sienna-project.eu/digitalAssets/915/c_915554-l_1-k_sienna-ethics-by-design-and-ethics-of-use.pdf.
  • Doteveryone. n.d. Consequence scanning – An agile practice for responsible innovators. Accessed March 21, 2021. https://www.doteveryone.org.uk/project/consequence-scanning/.
  • Derboven, J., D. De Roeck, M. Verstraete, D. Geerts, J. Schneider-Barnes, and K. Luyten. 2010. Comparing user interaction with low and high fidelity prototypes of tabletop surfaces. Proceedings of the 6th Nordic Conference on Human-Computer Interaction: Extending Boundaries, 148–57. doi:10.1145/1868914.1868935.
  • Ehn, P., and D. Sjögren. 1991. From system description to scripts for action. In Design at work - Cooperative design of computer systems, ed. J. Greenbaum, and M. Kyng, 241–69. Hillsdale, New Jersey: Lawrence Erlbaum Associates Inc.
  • Elish, M. C., and D. Boyd. 2018. Situating methods in the magic of Big Data and AI. Communication Monographs 85 (1):57–80. doi:10.1080/03637751.2017.1375130.
  • Eubanks, V. 2018. Automating inequality: How high-tech tools profile, police, and punish the poor. New York: Picador.
  • Floridi, L., J. Cowls, M. Beltrametti, R. Chatila, P. Chazerand, V. Dignum, C. Luetge, R. Madelin, U. Pagallo, F. Rossi, et al. 2018. AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines 28 (4):689–707. doi:10.1007/s11023-018-9482-5.
  • Floridi, L., J. Cowls, T. C. King, and M. Taddeo. 2020. How to design AI for social good: Seven essential factors. Science and Engineering Ethics 26 (3):1771–96. doi:10.1007/s11948-020-00213-5.
  • Friedman, B., D. G. Hendry, and A. Borning. 2017. A survey of value sensitive design methods. Foundations and Trends in Human-Computer Interaction 11 (2):63–125. doi:10.1561/1100000015.
  • Friedman, B., and P. H. Kahn. 2003. Human values, ethics, and design. In The human-computer interaction handbook: Fundamentals, evolving technologies and emerging applications, ed. J. Jacko and A. Sears, 1177–1201. Mahwah: L. Erlbaum Associates Inc.
  • Gebru, T., J. Morgenstern, B. Vecchione, J. W. Vaughan, H. Wallach, H. Daumé III, and K. Crawford. 2020. Datasheets for datasets. ArXiv:1803.09010 [Cs]. http://arxiv.org/abs/1803.09010.
  • Gerdes, A. 2008. The clash between standardisation and engagement. Journal of Information, Communication and Ethics in Society 6 (1):46–59. doi:10.1108/14779960810866792.
  • Gerdes, A. 2018. An inclusive ethical design perspective for a flourishing future with artificial intelligent systems. European Journal of Risk Regulation 9 (4):677–89. doi:10.1017/err.2018.62.
  • Gerdes, A. 2021. Dialogical guidelines aided by knowledge acquisition: Enhancing the design of explainable interfaces and algorithmic accuracy. In Proceedings of the Future Technologies Conference (FTC) 2020, Volume 1, Virtual event, ed. K. Arai, S. Kapoor, and R. Bhatia, 243–57. Springer International Publishing.
  • Gillingham, P. 2016. Predictive risk modelling to prevent child maltreatment and other adverse outcomes for service users: Inside the ‘black box’ of machine learning. The British Journal of Social Work 46 (4):1044–58. doi:10.1093/bjsw/bcv031.
  • Greenbaum, J., and M. Kyng. 1991. Design at work: Cooperative design of computer systems. New Jersey: LEA.
  • Hayes, P., I. van de Poel, and M. Steen. 2020. Algorithms and values in justice and security. AI & SOCIETY 35 (3):533–55. doi:10.1007/s00146-019-00932-9.
  • Holstein, K., J. Wortman Vaughan, H. Daumé, M. Dudik, and H. Wallach. 2019. Improving fairness in machine learning systems: What do industry practitioners need? Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 1–16. doi:10.1145/3290605.3300830.
  • Kelleher, J. D., and B. Tierney. 2018. Data science. Cambridge, MA: The MIT Press.
  • Kim, M., T. Zimmermann, R. DeLine, and A. Begel. 2018. Data scientists in software teams: State of the art and challenges. IEEE Transactions on Software Engineering 44 (11):1024–38. doi:10.1109/TSE.2017.2754374.
  • Lazer, D., R. Kennedy, G. King, and A. Vespignani. 2014. The parable of Google Flu: Traps in big data analysis. Science 343 (6176):1203–05. doi:10.1126/science.1248506.
  • Mitchell, M., S. Wu, A. Zaldivar, P. Barnes, L. Vasserman, B. Hutchinson, E. Spitzer, I. D. Raji, and T. Gebru. 2019. Model cards for model reporting. Proceedings of the Conference on Fairness, Accountability, and Transparency, 220–29. doi:10.1145/3287560.3287596.
  • Mittelstadt, B. D., P. Allo, M. Taddeo, S. Wachter, and L. Floridi. 2016. The ethics of algorithms: Mapping the debate. Big Data & Society 3 (2):2053951716679679. doi:10.1177/2053951716679679.
  • Nisbet, R., J. F. Elder, and G. Miner. 2009. Handbook of statistical analysis and data mining applications. Amsterdam, Boston: Academic Press/Elsevier.
  • Nissenbaum, H. 2001. How computer systems embody values. Computer 34 (3):118–20. doi:10.1109/2.910905.
  • O’Neil, C. 2016. Weapons of math destruction: How big data increases inequality and threatens democracy. New York: Crown.
  • Paleyes, A., R.-G. Urma, and N. D. Lawrence. 2021. Challenges in deploying machine learning: A survey of case studies. ArXiv:2011.09926 [Cs]. http://arxiv.org/abs/2011.09926.
  • Petersen, A. C. M., L. R. Christensen, and T. T. Hildebrandt. 2020. The role of discretion in the age of automation. Computer Supported Cooperative Work (CSCW) 29 (3):303–33. doi:10.1007/s10606-020-09371-3.
  • Russell, S., D. Dewey, and M. Tegmark. 2015. Research priorities for robust and beneficial artificial intelligence. AI Magazine 36 (4):105–14. doi:10.1609/aimag.v36i4.2577.
  • Sagar, R. 2021. Big data to good data: Andrew Ng urges ML community to be more data-centric and less model-centric. Analytics India Magazine. Accessed May 17, 2021. https://analyticsindiamag.com/big-data-to-good-data-andrew-ng-urges-ml-community-to-be-more-data-centric-and-less-model-centric/.
  • Sculley, D., G. Holt, D. Golovin, E. Davydov, T. Phillips, D. Ebner, V. Chaudhary, M. Young, J.-F. Crespo, and D. Dennison. 2015. Hidden technical debt in machine learning systems. In Advances in neural information processing systems, ed. C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, vol. 28, 2503–11. Curran Associates, Inc. http://papers.nips.cc/paper/5656-hidden-technical-debt-in-machine-learning-systems.pdf.
  • Seidelin, C., Y. Dittrich, and E. Grönvall. 2020. Foregrounding data in co-design–An exploration of how data may become an object of design. International Journal of Human-Computer Studies 143:102505. doi:10.1016/j.ijhcs.2020.102505.
  • Smith, G. 2018. The AI delusion. Oxford: Oxford University Press.
  • Aler Tubella, A., A. Theodorou, V. Dignum, and F. Dignum. 2019. Governance by glass-box: Implementing transparent moral bounds for AI behaviour. arXiv preprint arXiv:1905.04994.
  • Aler Tubella, A., and V. Dignum. 2019. The glass box approach: Verifying contextual adherence to values. AISafety 2019, Macao, China, August 11–12. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-160949.
  • Umbrello, S., and I. van de Poel. 2021. Mapping value sensitive design onto AI for social good principles. AI and Ethics 1:283–96. doi:10.1007/s43681-021-00038-3.
  • van den Hoven, J. 2007. ICT and value sensitive design. IFIP International Federation for Information Processing 233:67–72.
  • Veale, M., and R. Binns. 2017. Fairer machine learning in the real world: Mitigating discrimination without collecting sensitive data. Big Data & Society 4 (2):2053951717743530. doi:10.1177/2053951717743530.
  • Zhu, H., B. Yu, A. Halfaker, and L. Terveen. 2018. Value-sensitive algorithm design: Method, case study, and lessons. Proceedings of the ACM on Human-Computer Interaction 2 (CSCW):1–23. doi:10.1145/3274463.