33,018
Views
10
CrossRef citations to date
0
Altmetric
Research Articles

Six Human-Centered Artificial Intelligence Grand Challenges

ORCID Icon, ORCID Icon, , , , ORCID Icon, ORCID Icon, ORCID Icon, ORCID Icon, , , ORCID Icon, , , ORCID Icon, ORCID Icon, ORCID Icon, , , , , ORCID Icon, ORCID Icon, , & show all
Pages 391-437 | Received 25 Apr 2022, Accepted 27 Nov 2022, Published online: 02 Jan 2023

References

  • Abbass, H. A. (2019). Social integration of artificial intelligence: Functions, automation allocation logic and human-autonomy trust. Cognitive Computation, 11(2), 159–171. https://doi.org/10.1007/s12559-018-9619-0
  • Abou-Zahra, S., Brewer, J., & Cooper, M. (2018). Artificial intelligence (AI) for web accessibility: Is conformance evaluation a way forward? [Paper presentation]. Proceedings of the 15th International Web for All Conference, 1–4, Lyon, France. https://doi.org/10.1145/3192714.3192834
  • AI for Good. (2021). https://aiforgood.itu.int/about/
  • Alami, H., Rivard, L., Lehoux, P., Hoffman, S. J., Cadeddu, S. B. M., Savoldelli, M., Samri, M. A., Ag Ahmed, M. A., Fleet, R., & Fortin, J. P. (2020). Artificial intelligence in health care: Laying the foundation for responsible, sustainable, and inclusive innovation in low- and middle-income countries. Globalization and Health, 16(1), 52. https://doi.org/10.1186/s12992-020-00584-1
  • Allen, C., Smit, I., & Wallach, W. (2005). Artificial morality: Top-down, bottom-up, and hybrid approaches. Ethics and Information Technology, 7(3), 149–155. https://doi.org/10.1007/s10676-006-0004-4
  • Allison, G. (2021, Dec 7). The great rivalry: China vs. the U.S. in the 21st century https://www.belfercenter.org/publication/great-rivalry-china-vs-us-21st-century.
  • Amershi, S., Weld, D., Vorvoreanu, M., Fourney, A., Nushi, B., Collisson, P., Suh, J., Iqbal, S., Bennett, P. N., Inkpen, K., Teevan, J., Kikin-Gil, R., & Horvitz, E. (2019). Guidelines for human-AI interaction [Paper presentation]. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 1–13, Glasgow Scotland UK. https://doi.org/10.1145/3290605.3300233
  • Amershi, S., Cakmak, M., Knox, W. B., & Kulesza, T. (2014). Power to the people: The role of humans in interactive machine learning. AI Magazine, 35(4), 105–120. https://doi.org/10.1609/aimag.v35i4.2513
  • Anderson, J., Rainie, L., & Luchsinger, A. (2018). Artificial intelligence and the future of humans (pp. 10). Pew Research Center.
  • Araujo, T., Helberger, N., Kruikemeier, S., & De Vreese, C. H. (2020). In AI we trust? Perceptions about automated decision-making by artificial intelligence. AI & Society, 35(3), 611–623. https://doi.org/10.1007/s00146-019-00931-w
  • Arya, V., Bellamy, R. K., Chen, P.-Y., Dhurandhar, A., Hind, M., Hoffman, S. C., Houde, S., Liao, Q. V., Luss, R., & Mojsilovic, A. (2020). AI explainability 360: An extensible toolkit for understanding data and machine learning models. Journal of Machine Learning Research, 21(130), 1–6. https://doi.org/10.1145/3430984.3430987
  • Auernhammer, J. (2020, 11–14 August). Human-centered AI: The role of human-centered design research in the development of AI. In S. Boess, M. Cheung, and R. Cain (Eds.), Synergy – DRS International Conference 2020. https://doi.org/10.21606/drs.2020.282
  • Awad, E., Anderson, M., Anderson, S. L., & Liao, B. (2020). An approach for combining ethical principles with public opinion to guide public policy. Artificial Intelligence, 287, 103349. https://doi.org/10.1016/j.artint.2020.103349
  • Aynsley, C. (2020). The online harms bills cannot wait. Infosecurity Magazine. https://www.infosecurity-magazine.com/opinions/online-harms-bill-wait/
  • Balaji, T., Annavarapu, C. S. R., & Bablani, A. (2021). Machine learning algorithms for social media analysis: A survey. Computer Science Review, 40, 100395. https://doi.org/10.1016/j.cosrev.2021.100395
  • Barredo Arrieta, A., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., Garcia, S., Gil-Lopez, S., Molina, D., Benjamins, R., Chatila, R., & Herrera, F. (2020). Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82–115. https://doi.org/10.48550/arXiv.1910.10045
  • Bayern, S. (2016). The implications of modern business–entity law for the regulation of autonomous systems. European Journal of Risk Regulation, 7(2), 297–309. https://doi.org/10.1017/S1867299X00005729
  • Becerra-Perez, M. M., Menear, M., Turcotte, S., Labrecque, M., & Legare, F. (2016). More primary care patients regret health decisions if they experienced decisional conflict in the consultation: A secondary analysis of a multicenter descriptive study. BMC Family Practice, 17(1), 156. https://doi.org/10.1186/s12875-016-0558-0
  • Bellamy, R. K., Dey, K., Hind, M., Hoffman, S. C., Houde, S., Kannan, K., Lohia, P., Martino, J., Mehta, S., & Mojsilovic, A. (2018). AI Fairness 360: An extensible toolkit for detecting, understanding, and mitigating unwanted algorithmic bias. arXiv preprint arXiv:1810.01943. https://doi.org/10.48550/arXiv.1810.01943
  • Berghel, H. (2017). Net neutrality reloaded. Computer Magazine. 50(10), 68–72. https://doi.org/10.1109/mc.2017.3641632
  • Berndt, J. O., Rodermund, S. C., Lorig, F., & Timm, I. J. (2017). Modeling user behavior in social media with complex agents [Paper presentation]. Third International Conference on Human and Social Analytics (HUSO 2017), Nice, France.
  • Bertolini, A. (2013). Robots as products: The case for a realistic analysis of robotic applications and liability rules. Law, Innovation and Technology, 5(2), 214–247. https://doi.org/10.5235/17579961.5.2.214
  • Bessi, A., & Ferrara, E. (2016). Social bots distort the 2016 US Presidential election online discussion. First Monday, 21, 11–17. https://doi.org/10.5210/fm.v21i11.7090
  • Bhutta, N., Hizmo, A., & Ringo, D. (2021). How much does racial bias affect mortgage lending? Evidence from human and algorithmic credit decisions. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3887663
  • Błachnio, A., Przepiorka, A., & Pantic, I. (2016). Association between Facebook addiction, self-esteem and life satisfaction: A cross-sectional study. Computers in Human Behavior, 55, 701–705. https://doi.org/10.1016/j.chb.2015.10.026
  • Bodenschatz, A., Uhl, M., & Walkowitz, G. (2021). Autonomous systems in ethical dilemmas: Attitudes toward randomization. Computers in Human Behavior Reports, 4, 100145. https://doi.org/10.1016/j.chbr.2021.100145
  • Bond, R. R., Mulvenna, M. D., Wan, H., Finlay, D. D., Wong, A., Koene, A., Brisk, R., Boger, J., & Adel, T. (2019). Human centered artificial intelligence: Weaving UX into algorithmic decision making. RoCHI.
  • Braun, M., Hummel, P., Beck, S., & Dabrock, P. (2021). Primer on an ethics of AI-based decision support systems in the clinic. Journal of Medical Ethics, 47(12), e3–e3. https://doi.org/10.1136/medethics-2019-105860
  • Brennan, P. F. (2021, October 8). Bridge to artificial intelligence (Bridge2AI). https://commonfund.nih.gov/bridge2ai
  • Brown, B., Bødker, S., & Höök, K. (2017). Does HCI scale? Scale hacking and the relevance of HCI. Interactions, 24(5), 28–33. https://doi.org/10.1145/3125387
  • Brynjolfsson, E., & McAfee, A. (2014). The second machine age: Work, progress, and prosperity in a time of brilliant technologies. WW Norton & Company.
  • Brynjolfsson, E., Mitchell, T., & Rock, D. (2018). What can machines learn, and what does it mean for occupations and the economy? AEA Papers and Proceedings, 108, 43–47. https://doi.org/10.1257/pandp.20181019
  • Burzagli, L., Emiliani, P. L., Antona, M., & Stephanidis, C. (2022). Intelligent environments for all: A path towards technology-enhanced human well-being. Universal Access in the Information Society, 21(2), 437–456. https://doi.org/10.1007/s10209-021-00797-0
  • Bush, V. (2020). Science, the Endless Frontier. Princeton University Press.
  • Business Roundtable (2019, August 19). Statement on the purpose of a corporation. https://s3.amazonaws.com/brt.org/BRT-StatementonthePurposeofaCorporationJuly2021.pdf.
  • Business Roundtable (2022, January 26). Roadmap for responsible artificial intelligence (AI) https://www.businessroundtable.org/policy-perspectives/technology/ai.
  • Cadbury, A. (2000). Foreword (corporate governance: A framework for implementation. The World Bank Group.
  • Cai, C. J., Reif, E., Hegde, N., Hipp, J., Kim, B., Smilkov, D., Wattenberg, M., Viegas, F., Corrado, G. S., Stumpe, M. C., & Terry, M. (2019). Human-centered tools for coping with imperfect algorithms during medical decision-making [Paper presentation]. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 1–14, Glasgow, Scotland, UK. https://doi.org/10.1145/3290605.3300234
  • Cai, H., Gan, C., Wang, T., Zhang, Z., & Han, S. (2020). Once for all: Train one network and specialize it for efficient deployment [Paper presentation]. International Conference on Learning Representations. https://arxiv.org/pdf/1908.09791.pdf
  • Calders, T., & Verwer, S. (2010). Three naive Bayes approaches for discrimination-free classification. Data Mining and Knowledge Discovery, 21(2), 277–292. https://doi.org/10.1007/s10618-010-0190-x
  • Calmon, F., Wei, D., Vinzamuri, B., Natesan Ramamurthy, K., & Varshney, K. R. (2017). Optimized pre-processing for discrimination prevention. Advances in Neural Information Processing Systems, 30. https://doi.org/10.48550/arXiv.1704.03354
  • Calvo, R. A., & Peters, D. (2014). Positive computing. MIT Press.
  • Calvo, R. A., Peters, D., Vold, K., & Ryan, R. M. (2020). Supporting human autonomy in AI systems: A framework for ethical enquiry. In Ethics of digital well-being (pp. 31–54). Springer.
  • Cambridge University Press, Ed. (2022). Cambridge dictionary.
  • Carlsmith, K. M., & Darley, J. M. (2008). Psychological aspects of retributive justice. Advances in Experimental Social Psychology, 40, 193–236. https://doi.org/10.1016/S0065-2601(07)00004-4
  • Carr, S. (2020). ‘AI gone mental’: Engagement and ethics in data-driven technology for mental health (Vol. 29, pp.125–130). Taylor & Francis.
  • Cavoukian, A. (2009). Privacy by design: The 7 foundational principles. Information and Privacy Commissioner of Ontario, Canada, (5), 12.
  • Chander, A., & Krishnamurthy, V. (2018). The myth of platform neutrality. Georgetown Law Technology Review, 2, 400.
  • Cheatham, B., Javanmardian, K., & Samandari, H. (2019). Confronting the risks of artificial intelligence. McKinsey Quarterly, 2, 38.
  • Chen, Y., Mark, G., Ali, S., & Ma, X. (2017). Unpacking happiness: Lessons from smartphone photography among college students. 429–438. https://doi.org/10.1109/ICHI.2017.25
  • Chinese State Council. (2017). Notice of the state council issuing the new generation of artificial intelligence development plan. https://flia.org/wp-content/uploads/2017/07/A-New-Generation-of-Artificial-Intelligence-Development-Plan-1.pdf.
  • Cho, J., Hwang, G., & Suh, C. (2020). A fair classifier using kernel density estimation. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, & H. Lin (Eds.), Advances in neural information processing systems (Vol. 33, pp. 15088–15099). Curran Associates, Inc. https://proceedings.neurips.cc/paper/2020/file/ac3870fcad1cfc367825cda0101eee62-Paper.pdf
  • Chouldechova, A. (2017). Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big Data, 5(2), 153–163. https://doi.org/10.1089/big.2016.0047
  • CIFAR. (2022). Pan-Canadian AI strategy. https://cifar.ca/ai/
  • Corbett-Davies, S., Pierson, E., Feller, A., Goel, S., & Huq, A. (2017). Algorithmic decision making and the cost of fairness [Paper presentation]. Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 797–806, Halifax NS Canada. https://doi.org/10.1145/3097983.3098095
  • Creemers, R. (2018). China’s social credit system: An evolving practice of control. SSRN Electronic Journal, https://doi.org/10.2139/ssrn.3175792
  • Croskerry, P., Singhal, G., & Mamede, S. (2013). Cognitive debiasing 1: Origins of bias and theory of debiasing. BMJ Qual Saf, 22 (Suppl 2), ii58–ii64. https://doi.org/10.1136/bmjqs-2012-001712
  • Danaher, J. (2016). Robots, law and the retribution gap. Ethics and Information Technology, 18(4), 299–309. https://doi.org/10.1007/s10676-016-9403-3
  • Danks, D., & London, A. J. (2017). Algorithmic bias in autonomous systems. IJCAI.
  • De Cremer, D., & Kasparov, G. (2021). AI should augment human intelligence, not replace it. Harvard Business Review
  • De Freitas, J., & Cikara, M. (2021). Deliberately prejudiced self-driving vehicles elicit the most outrage. Cognition, 208, 104555. https://doi.org/10.1016/j.cognition.2020.104555
  • De Freitas, J., Anthony, S. E., Censi, A., & Alvarez, G. A. (2020). Doubting driverless dilemmas. Perspectives on Psychological Science, 15(5), 1284–1288. https://doi.org/10.1177/1745691620922201
  • Desouza, K. C., Dawson, G. S., & Chenok, D. (2020). Designing, developing, and deploying artificial intelligence systems: Lessons from and for the public sector. Business Horizons, 63(2), 205–213. https://doi.org/10.1016/j.bushor.2019.11.004
  • Diener, E., Scollon, C. N., & Lucas, R. E. (2009). The evolving concept of subjective well-being: The multifaceted nature of happiness. https://doi.org/10.1007/978-90-481-2354-4_4
  • Dietrich, E., & Fields, C. (1989). Experimental and theoretical artificial intelligence. Journal of Experimental & Theoretical Artificial Intelligence, 1(1), 1–4. https://doi.org/10.1080/09528138908953690
  • Dubé, L., Du, P., McRae, C., Sharma, N., Jayaraman, S., & Nie, J.-Y. (2018). Convergent innovation in food through big data and artificial intelligence for societal-scale inclusive growth. Technology Innovation Management Review, 8(2), 49–65. https://doi.org/10.22215/timreview/1139
  • Dumon, J., Goodman, B., Kirechu, P., Smith, C., Van Deusen, A. (2021). Responsible AI guidelines in practice. Lessons learned from the DIU portfolio. Defense Innovation Unit. https://www.diu.mil/responsible-ai-guidelines.
  • Dwivedi, Y. K., Hughes, L., Ismagilova, E., Aarts, G., Coombs, C., Crick, T., Duan, Y., Dwivedi, R., Edwards, J., Eirug, A., Galanos, V., Ilavarasan, P. V., Janssen, M., Jones, P., Kar, A. K., Kizgin, H., Kronemann, B., Lal, B., Lucini, B., … Williams, M. D. (2021). Artificial Intelligence (AI): Multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy. International Journal of Information Management, 57, 101994. https://doi.org/10.1016/j.ijinfomgt.2019.08.002
  • Ehsan, U., & Riedl, M. O. (2020, 19–24 July). Human-centered explainable AI: Towards a reflective sociotechnical approach [Paper presentation]. HCI International 2020 – Late Breaking Papers: Multimodality and Intelligence: 22nd HCI International Conference, HCII 2020, Copenhagen, Denmark, 449–466. https://doi.org/10.1007/978-3-030-60117-1_33
  • Emami-Naeini, P., Agarwal, Y., Faith Cranor, L., & Hibshi, H. (2020). Ask the experts: What should be on an IoT privacy and security label? [Paper presentation]. 2020 IEEE Symposium on Security and Privacy (SP), 447–464, San Francisco, California. https://doi.org/10.1109/SP40000.2020.00043
  • Engelmann, S., Chen, M., Fischer, F., Kao, C., & Grossklags, J. (2019). Clear sanctions, vague rewards: How China’s social credit system currently defines “good” and “bad” behavior [Paper presentation]. Proceedings of the Conference on Fairness, Accountability, and Transparency, 69–78. https://doi.org/10.1145/3287560.3287585
  • Ernst, E., Merola, R., & Samaan, D. (2019). Economics of artificial intelligence: Implications for the future of work. IZA Journal of Labor Policy, 9(1). https://doi.org/10.2478/izajolp-2019-0004
  • European Commission (2019, April 8). Ethics guidelines for trustworthy artificial intelligence. FUTURIUM – European Commission. https://ec.europa.eu/futurium/en/ai-alliance-consultation
  • European Commission (2021, December 15). EU industrial R&D investment scoreboard 2020. https://iri.jrc.ec.europa.eu/scoreboard/2020-eu-industrial-rd-investment-scoreboard
  • European Commission. (2021). Proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts. The Artificial Intelligence Act. https://artificialintelligenceact.eu/
  • European Commission and High Representative of the Union for Foreign Affairs and Security Policy. (2020, December 2). A new EU-US agenda for global change. https://ec.europa.eu/info/sites/info/files/joint-communication-eu-us-agenda_en.pdf.
  • European Digital SME Alliance. (2021). Feedback from: European Digital SME (small and medium enterprises) alliance. https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/12527-Artificial-intelligence-ethical-and-legal-requirements/F2665574_en.
  • European Parliament. (2022, October 2). Special committee on artificial intelligence in a digital age. https://multimedia.europarl.europa.eu/en/webstreaming/aida-committee-meeting_20220210-0900-COMMITTEE-AIDA
  • Falco, G., Shneiderman, B., Badger, J., Carrier, R., Dahbura, A., Danks, D., Eling, M., Goodloe, A., Gupta, J., Hart, C., Jirotka, M., Johnson, H., LaPointe, C., Llorens, A. J., Mackworth, A. K., Maple, C., Pálsson, S. E., Pasquale, F., Winfield, A., & Yeong, Z. K. (2021). Governing AI safety through independent audits. Nature Machine Intelligence, 3(7), 566–571. https://doi.org/10.1038/s42256-021-00370-7
  • Falk-Krzesinski, H. J., Contractor, N., Fiore, S. M., Hall, K. L., Kane, C., Keyton, J., Klein, J. T., Spring, B., Stokols, D., & Trochim, W. (2011). Mapping a research agenda for the science of team science. Research Evaluation, 20(2), 145–158. https://doi.org/10.3152/095820211X12941371876580
  • Fazelpour, S., & Danks, D. (2021). Algorithmic bias: Senses, sources, solutions. Philosophy Compass, 16(8), e12760. https://doi.org/10.1111/phc3.12760
  • Feldman, M., Friedler, S. A., Moeller, J., Scheidegger, C., & Venkatasubramanian, S. (2015). Certifying and removing disparate impact [Paper presentation]. Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. Sydney NSW Australia.
  • Fernandez-Caballero, A., Martinez-Rodrigo, A., Pastor, J. M., Castillo, J. C., Lozano-Monasor, E., Lopez, M. T., Zangroniz, R., Latorre, J. M., & Fernandez-Sotos, A. (2016). Smart environment architecture for emotion detection and regulation. Journal of Biomedical Informatics, 64, 55–73. https://doi.org/10.1016/j.jbi.2016.09.015
  • Ferrer, X., Nuenen, T. v., Such, J. M., Cote, M., & Criado, N. (2021). Bias and discrimination in AI: A cross-disciplinary perspective. IEEE Technology and Society Magazine, 40(2), 72–80. https://doi.org/10.1109/mts.2021.3056293
  • Fleissner, C. (2018). Inclusive capitalism based on binary economics and positive international human rights in the age of artificial intelligence. Washington University Global Studies Law Review, 17(1), 201.
  • Fleming, S. (2021, March 15). What is digital sovereignty and why is Europe so interested in it? https://www.weforum.org/agenda/2021/03/europe-digital-sovereignty/.
  • Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C., Madelin, R., Pagallo, U., Rossi, F., Schafer, B., Valcke, P., & Vayena, E. (2018). AI4People-an ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689–707. https://doi.org/10.1007/s11023-018-9482-5
  • Floridi, L., Holweg, M., Taddeo, M., Amaya Silva, J., Mökander, J., & Wen, Y. (2022). capAI – a procedure for conducting conformity assessment of AI systems in line with the EU Artificial Intelligence Act. SSRN Electronic Journal, https://doi.org/10.2139/ssrn.4064091
  • ForHumanity. (2022). Independent Audit of AI Systems (IAAIS) https://forhumanity.center/blog/auditing-ai-and-autonomous-systems-building-an-infrastructureoftrust/.
  • Foundation for Responsible Robotics. (2022). FRR quality mark for (AI based) robotics. https://responsiblerobotics.org/quality-mark/.
  • Freitag, C., Berners-Lee, M., Widdicks, K., Knowles, B., Blair, G. S., & Friday, A. (2021). The real climate and transformative impact of ICT: A critique of estimates, trends, and regulations. Patterns, 2(9), 100340. https://doi.org/10.1016/j.patter.2021.100340
  • Frischmann, B., & Selinger, E. (2018). Re-engineering humanity. Cambridge University Press.
  • Garibay, O., & Winslow, B. (2021, 26–27 July). HCII2021 special thematic sessions on “human-centered AI”. HCI International 2021, Washington DC, USA. https://2021.hci.international/Human-Centered_AI_Thematic_Sessions.html
  • General Data Protection Regulation 679 CFR 2016. (2016). Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (Text with EEA relevance). Official Journal, L 119, 1–88. ELI: http://data.europa.eu/eli/reg/2016/679/oj[legislation
  • Ghassemi, M., Oakden-Rayner, L., & Beam, A. L. (2021). The false hope of current approaches to explainable artificial intelligence in health care. The Lancet Digital Health, 3(11), e745–e750. https://doi.org/10.1016/S2589-7500(21)00208-9
  • Ghosh, I. (2015). Social wealth economic indicators for a caring economy. Interdisciplinary Journal of Partnership Studies, 1(1). https://doi.org/10.24926/ijps.v1i1.90
  • Gibbons, R. (1998). Incentives in organizations. Journal of Economic Perspectives, 12(4), 115–132. https://doi.org/10.1257/jep.12.4.115
  • Gless, S., Silverman, E., & Weigend, T. (2016). If robots cause harm, who is to blame? Self-driving cars and criminal liability. New Criminal Law Review, 19(3), 412–436. https://doi.org/10.1525/nclr.2016.19.3.412
  • Gong, J., Currano, R., Sirkin, D., Yeung, S., & Holsinger, F. C. (2021, 08–09 May). NICE: Four human-centered AI principles for bridging the AI-to-clinic translational gap [Paper presentation]. Virtual ’21: ACM CHI Workshop on Realizing AI in Healthcare: Challenges Appearing in the Wild, 202, Virtual. ACM.
  • Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., & Bengio, Y. (2020). Generative adversarial networks. Communications of the ACM, 63(11), 139–144. https://doi.org/10.1145/3422622
  • Google. (2022). People + AI guidebook. https://pair.withgoogle.com/guidebook/.
  • Google AI. (2022). Responsible AI practices. https://ai.google/responsibilities/responsible-ai-practices.
  • Grace, K., Salvatier, J., Dafoe, A., Zhang, B., & Evans, O. (2018). Viewpoint: When will AI exceed human performance? Evidence from AI experts. Journal of Artificial Intelligence Research, 62, 729–754. https://doi.org/10.1613/jair.1.11222
  • Grayson, M., Thieme, A., Marques, R., Massiceti, D., Cutrell, E., & Morrison, C. (2020). A dynamic AI system for extending the capabilities of blind people [Paper presentation]. Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems, 1–4, Honolulu, Hawaiʻi, USA. https://doi.org/10.1145/3334480.3383142
  • Groce, A., Kulesza, T., Zhang, C., Shamasunder, S., Burnett, M., Wong, W.-K., Stumpf, S., Das, S., Shinsel, A., Bice, F., & McIntosh, K. (2014). You are the only possible oracle: Effective test selection for end users of interactive machine learning systems. IEEE Transactions on Software Engineering, 40(3), 307–323. https://doi.org/10.1109/TSE.2013.59
  • Grossman, Z., & van der Weele, J. J. (2017). Self-image and willful ignorance in social decisions. Journal of the European Economic Association, 15(1), 173–217. https://doi.org/10.1093/jeea/jvw001
  • Gunning, D., & Aha, D. (2019). DARPA’s explainable artificial intelligence (XAI) program. AI Magazine, 40(2), 44–58. https://doi.org/10.1145/3301275.3308446
  • Gupta, A. (2021, September 18). The imperative for sustainable AI systems. The Gradient. https://thegradient.pub/sustainable-ai/.
  • Gupta, A., Lanteigne, C., & Kingsley, S. (2020). SECure: A social and environmental certificate for AI systems. https://doi.org/10.48550/arXiv.2006.06217
  • Guy, I. (2015). Social recommender systems (recommender systems handbook) (pp. 511–543). Springer.
  • Hallevy, G. (2010). I, Robot-I, criminal: When science fiction becomes reality: Legal liability of AI robots committing criminal offenses. Syracuse Journal of Science & Technology Law Reporter, 1
  • Han, S., Kelly, E., Nikou, S., & Svee, E.-O. (2022). Aligning artificial intelligence with human values: Reflections from a phenomenological perspective. AI & Society, 37(4), 1383–1395. https://doi.org/10.1007/s00146-021-01247-4
  • Hao, K. (2020). Doctors are using AI to triage covid-19 patients. The tools may be here to stay. MIT Technology Review, 27, 1–12.
  • Hardt, M., Price, E., & Srebro, N. (2016). Equality of opportunity in supervised learning. Advances in Neural Information Processing Systems, 29. https://doi.org/10.48550/arXiv.1610.02413
  • Harper, F. M., & Konstan, J. A. (2016). The MovieLens datasets. ACM Transactions on Interactive Intelligent Systems, 5(4), 1–19. https://doi.org/10.1145/2827872
  • He, X., Hong, Y., Zheng, X., & Zhang, Y. (2022). What are the users’ needs? Design of a user-centered explainable artificial intelligence diagnostic system. International Journal of Human–Computer Interaction, 1–24. https://doi.org/10.1080/10447318.2022.2095093
  • Heikkilä, M. (2021, March 29). NATO wants to set AI standards. If only its members agreed on the basics. Politico. https://www.politico.eu/article/nato-ai-artificial-intelligence-standards-priorities.
  • Henrich, J., Heine, S. J., & Norenzayan, A. (2010). The weirdest people in the world? The Behavioral and Brain Sciences, 33(2–3), 61–83. https://doi.org/10.1017/S0140525X0999152X
  • Himmelreich, J. (2018). Never mind the trolley: The ethics of autonomous vehicles in mundane situations. Ethical Theory and Moral Practice, 21(3), 669–684. https://doi.org/10.1007/s10677-018-9896-4
  • Ho, D.-A., & Beyan, O. (2020). Biases in data science lifecycle. arXiv preprint arXiv:2009.09795. https://doi.org/10.48550/arXiv.2009.09795
  • Hodge, R., Rotner, J., Baron, I., Kotras, D., & Worley, D. (2020). Designing a new narrative to build an AI-ready workforce. MITRE Center for Technology and National Security. https://www.mitre.org/sites/default/files/2021-11/prs-20-0975-designing-a-new-narrative-to-build-an-AI-ready-workforce.pdf
  • Hoffman, R. R., Mueller, S. T., Klein, G., & Litman, J. (2018). Metrics for explainable AI: Challenges and prospects. arXiv preprint arXiv:1812.04608. http://arxiv.org/abs/1812.04608
  • Hoffmann, A. L. (2021). Terms of inclusion: Data, discourse, violence. New Media & Society, 23(12), 3539–3556. https://doi.org/10.1177/1461444820958725
  • Holmquist, L. E. (2017). Intelligence on tap: Artificial intelligence as a new design material. Interactions, 24(4), 28–33. https://doi.org/10.1145/3085571
  • Huang, L., Jiang, S., & Vishnoi, N. (2019). Coresets for clustering with fairness constraints. Advances in Neural Information Processing Systems, 32. https://doi.org/10.48550/arXiv.1906.08484
  • Hubbard, F. P. (2014). Sophisticated robots: Balancing liability, regulation, and innovation. Florida Law Review, 66(5), 1803.
  • Humble, N., & Mozelius, P. (2019, 31 October–1 November 2019). Artificial intelligence in education – A promise, a threat or a hype? [Paper presentation]. Proceedings of the European Conference on the Impact of Artificial Intelligence and Robotics, EM-Normandie Business School Oxford, UK, 149–156. https://doi.org/10.34190/ECIAIR.19.005
  • IEEE. (2019a). Classical ethics in A/IS. In M. I. Aldinhas Ferreira, J. Silva Sequeira, G. Singh Virk, M. O. Tokhi, & E. E. Kadar (Eds.), The IEEE global initiative on ethics of autonomous and intelligent systems (Vol. 95, pp. 11–16). Springer International Publishing. https://doi.org/10.1007/978-3-030-12524-0_2
  • IEEE. (2019b). Ethics in action in autonomous and intelligent systems|IEEE SA. [Paper presentation]. Ethics in Action | Ethically Aligned Design. https://ethicsinaction.ieee.org/
  • Ikwuegbu, I. (2021). AI, automation and the future of work (a research essay).
  • ISO 9241-11. (2018). Ergonomics of human-system interaction—part 11: Usability: Definitions and concepts. International Organization for Standardization Geneva.
  • ISO 9241-210. (2019). Ergonomics of human-system interaction: Part 210: Human-centred design for interactive systems. ISO.
  • Iyengar, S., & Massey, D. S. (2019). Scientific communication in a post-truth society. Proceedings of the National Academy of Sciences of the United States of America, 116(16), 7656–7661. https://doi.org/10.1073/pnas.1805868115
  • Jacko, J. A. (2012). Human computer interaction handbook: Fundamentals, evolving technologies, and emerging applications. CRC press.
  • Jillson, E. (2021). Aiming for truth, fairness, and equity in your company’s use of AI. FTC Bureau of Consumer Protection https://www.ftc.gov/news-events/blogs/business-blog/2021/04/aiming-truth-fairness-equity-your-companys-use-ai.
  • Johnson, M., & Vera, A. (2019). No AI is an island: The case for teaming intelligence. AI Magazine, 40(1), 16–28. https://doi.org/10.1609/aimag.v40i1.2842
  • Joseph-Williams, N., Edwards, A., & Elwyn, G. (2011). The importance and complexity of regret in the measurement of 'good’ decisions: A systematic review and a content analysis of existing assessment instruments. Health Expectations, 14(1), 59–83. https://doi.org/10.1111/j.1369-7625.2010.00621.x
  • Jun, S., Yuming, W., & Cui, H. (2021). An integrated analysis framework of artificial intelligence social impact based on application scenarios. Science of Science and Management of Science & Technology, 42(05), 3.
  • Kallioras, N. A., & Lagaros, N. D. (2020). DzAIℕ: Deep learning based generative design. Procedia Manufacturing, 44, 591–598. https://doi.org/10.1016/j.promfg.2020.02.251
  • Kamiran, F., & Calders, T. (2012). Data preprocessing techniques for classification without discrimination. Knowledge and Information Systems, 33(1), 1–33. https://doi.org/10.1007/s10115-011-0463-8
  • Kamishima, T., Akaho, S., Asoh, H., & Sakuma, J. (2012). Fairness-aware classifier with prejudice remover regularizer. In Flach, P.A., De Bie, T., Cristianini, N. (Eds.), Machine learning and knowledge discovery in databases. ECML PKDD 2012. Lecture Notes in Computer Science (Vol 7524). Springer. https://doi.org/10.1007/978-3-642-33486-3_3
  • Kane, S. K., Guo, A., & Morris, M. R. (2020). Sense and accessibility: Understanding people with physical disabilities’ experiences with sensing systems [Paper presentation]. Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility, 1–14, Virtual Event Greece. https://doi.org/10.1145/3373625.3416990
  • Karwowski, W. (2018, August 26). The human use of artificial intelligence. Keynote address. 20th Congress of the International Ergonomics Association. Florence, Italy.
  • Karwowski, W., & Zhang, W. (2021). Human factors and ergonomics (handbook of human factors and ergonomics). Wiley.
  • Kelley, P. G., Bresee, J., Cranor, L. F., & Reeder, R. W. (2009). A “nutrition label” for privacy [Paper presentation]. Proceedings of the 5th Symposium on Usable Privacy and Security. Mountain View, CA.
  • Kelley, P. G., Yang, Y., Heldreth, C., Moessner, C., Sedley, A., Kramm, A., Newman, D. T., & Woodruff, A. (2021). Exciting, useful, worrying, futuristic: Public perception of artificial intelligence in 8 countries [Paper presentation]. Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society. Virtual Event, USA.
  • Keyes, C. L. (2002). The mental health continuum: From languishing to flourishing in life. Journal of Health and Social Behavior, 43(2), 207–222. https://doi.org/10.2307/3090197
  • Kirchkamp, O., & Strobel, C. (2019). Sharing responsibility with a machine. Journal of Behavioral and Experimental Economics, 80, 25–33. https://doi.org/10.1016/j.socec.2019.02.010
  • Kleppe, M., & Otte, M. (2017). Analysing and understanding news consumption patterns by tracking online user behaviour with a multimodal research design. Digital Scholarship in the Humanities, 32(suppl_2), ii158–ii170. https://doi.org/10.1093/llc/fqx030
  • Klinova, K., & Korinek, A. (2021). AI and shared prosperity [Paper presentation]. Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society. Virtual Event, USA.
  • Kokolakis, S. (2017). Privacy attitudes and privacy behaviour: A review of current research on the privacy paradox phenomenon. Computers & Security, 64, 122–134. https://doi.org/10.1016/j.cose.2015.07.002
  • Komal, S. (2014). Comparative assessment of human intelligence and artificial intelligence. International Journal of Computer Science and Mobile Computing, 3, 1–5.
  • Kramer, A. D., Guillory, J. E., & Hancock, J. T. (2014). Experimental evidence of massive-scale emotional contagion through social networks. Proceedings of the National Academy of Sciences of the United States of America, 111(24), 8788–8790. https://doi.org/10.1073/pnas.1320040111
  • Krügel, S., Ostermaier, A., & Uhl, M. (2022). Zombies in the loop? Humans trust untrustworthy AI-advisors for ethical decisions. Philosophy & Technology, 35(1), 1–37. https://doi.org/10.1007/s13347-022-00511-9
  • Kuhlmann, S., Stegmaier, P., & Konrad, K. (2019). The tentative governance of emerging science and technology—a conceptual introduction. Research Policy, 48(5), 1091–1097. https://doi.org/10.1016/j.respol.2019.01.006
  • Kuzma, J. (2022). Implementing responsible research and innovation: A case study of US biotechnology oversight. Global Public Policy and Governance, 2(3), 306–325. https://doi.org/10.1007/s43508-022-00046-x
  • Lambrecht, A., & Tucker, C. (2019). Algorithmic bias? An empirical study of apparent gender-based discrimination in the display of STEM career ads. Management Science, 65(7), 2966–2981. https://doi.org/10.1287/mnsc.2018.3093
  • Langley, P. (2019). An integrative framework for artificial intelligence education [Paper presentation]. Proceedings of the AAAI Conference on Artificial Intelligence. Honolulu, HI, USA.
  • Lannelongue, L., Grealey, J., Bateman, A., & Inouye, M. (2021). Ten simple rules to make your computing more environmentally sustainable (vol. 17, p. e1009324). Public Library of Science San Francisco.
  • Latonero, M. (2011). Human trafficking online: The role of social networking sites and online classifieds. SSRN Electronic Journal, https://doi.org/10.2139/ssrn.2045851
  • Lazar, J. (2007). Universal usability: Designing computer interfaces for diverse user populations. John Wiley & Sons.
  • Lazer, D. M. J., Baum, M. A., Benkler, Y., Berinsky, A. J., Greenhill, K. M., Menczer, F., Metzger, M. J., Nyhan, B., Pennycook, G., Rothschild, D., Schudson, M., Sloman, S. A., Sunstein, C. R., Thorson, E. A., Watts, D. J., & Zittrain, J. L. (2018). The science of fake news. Science, 359(6380), 1094–1096. https://doi.org/10.1126/science.aao2998
  • Le Métayer, D. (2013). Privacy by design: A formal framework for the analysis of architectural choices [Paper presentation]. Proceedings of the Third ACM Conference on Data and Application Security and Privacy. San Antonio, TX, USA.
  • Le, Q., Miralles-Pechuán, L., Kulkarni, S., Su, J., & Boydell, O. (2020). An overview of deep learning in industry. Data Analytics and AI, 65–98. https://doi.org/10.1201/9781003019855-5
  • Lee, I., & Shin, Y. J. (2020). Machine learning for enterprises: Applications, algorithm selection, and challenges. Business Horizons, 63(2), 157–170. https://doi.org/10.1016/j.bushor.2019.10.005
  • Leonidis, A., Korozi, M., Sykianaki, E., Tsolakou, E., Kouroumalis, V., Ioannidi, D., Stavridakis, A., Antona, M., & Stephanidis, C. (2021). Improving stress management and sleep hygiene in intelligent homes. Sensors, 21(7), 2398. https://doi.org/10.3390/s21072398
  • Leslie, D., Burr, C., Aitken, M., Cowls, J., Katell, M., & Briggs, M. (2021). Artificial intelligence, human rights, democracy, and the rule of law: A primer. The Council of Europe.
  • Liang, F., Das, V., Kostyuk, N., & Hussain, M. M. (2018). Constructing a data-driven society: China’s social credit system as a state surveillance infrastructure. Policy & Internet, 10(4), 415–453. https://doi.org/10.1002/poi3.183
  • Lieberman, H. (2009). User interface goals, AI opportunities. AI Magazine, 30(4), 16–16. https://doi.org/10.1609/aimag.v30i4.2266
  • Lindebaum, D., Vesa, M., & den Hond, F. (2020). Insights from “the machine stops” to better understand rational assumptions in algorithmic decision making and its implications for organizations. Academy of Management Review, 45(1), 247–263. https://doi.org/10.5465/amr.2018.0181
  • Lipson, H., & Pollack, J. B. (2000). Automatic design and manufacture of robotic lifeforms. Nature, 406(6799), 974–978. https://doi.org/10.1038/35023115
  • Liu, X., & Pan, H. (2022). The path of film and television animation creation using virtual reality technology under the artificial intelligence. Scientific Programming, 2022, 1–8. https://doi.org/10.1155/2022/1712929
  • Logg, J. M., Minson, J. A., & Moore, D. A. (2019). Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes, 151, 90–103. https://doi.org/10.1016/j.obhdp.2018.12.005
  • López-González, M. (2021). Applying human cognition to assured autonomy. International Conference on Human-Computer Interaction. Virtual Event, USA.
  • Louizos, C., Swersky, K., Li, Y., Welling, M., & Zemel, R. (2015). The variational fair autoencoder. arXiv preprint arXiv:1511.00830.
  • Luan, H., Geczy, P., Lai, H., Gobert, J., Yang, S. J. H., Ogata, H., Baltes, J., Guerra, R., Li, P., & Tsai, C. C. (2020). Challenges and future directions of big data and artificial intelligence in education. Frontiers in Psychology, 11, 580820. https://doi.org/10.3389/fpsyg.2020.580820
  • Luong, B. T., Ruggieri, S., & Turini, F. (2011). k-NN as an implementation of situation testing for discrimination discovery and prevention [Paper presentation]. Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. San Diego, CA, USA.
  • Madaio, M. A., Stark, L., Wortman Vaughan, J., & Wallach, H. (2020). Co-designing checklists to understand organizational challenges and opportunities around fairness in AI [Paper presentation]. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. Honolulu, HI, USA.
  • Makav, B., & Kılıç, V. (2019). Smartphone-based image captioning for visually and hearing impaired [Paper presentation]. 2019 11th International Conference on Electrical and Electronics Engineering (ELECO). Bursa, Turkey.
  • Manyika, J., & Sneader, K. (2018). AI, automation, and the future of work: Ten things to solve for. McKinsey & Company. https://www.mckinsey.com/featured-insights/future-of-work/ai-automation-and-the-future-of-work-ten-things-to-solve-for
  • Margetis, G., Antona, M., Ntoa, S., & Stephanidis, C. (2012). Towards accessibility in ambient intelligence environments. International Joint Conference on Ambient Intelligence. Pisa, Italy.
  • Margetis, G., Ntoa, S., Antona, M., & Stephanidis, C. (2021). Human‐centered design of artificial intelligence. Handbook of Human Factors and Ergonomics, 1085–1106. https://doi.org/10.1002/9781119636113.ch42
  • Martelaro, N., & Ju, W. (2017). WoZ Way: Enabling real-time remote interaction prototyping & observation in on-road vehicles [Paper presentation]. Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing. Portland, OR, USA.
  • Martin, B. (2021). CLAIRE – Confederation of Laboratories for Artificial Intelligence. Research in Europe. https://claire-ai.org/
  • Martinuzzi, A., Blok, V., Brem, A., Stahl, B., & Schönherr, N. (2018). Responsible research and innovation in industry—challenges, insights and perspectives (VOl. 10, pp.702). MDPI.
  • Mayer, A.-S., Strich, F., & Fiedler, M, University of Passau (Germany). (2020). Unintended consequences of introducing AI systems for decision making. MIS Quarterly Executive, 19(4), 239–257. https://doi.org/10.17705/2msqe.00036
  • McGregor, S. (2022). AI incident database. https://incidentdatabase.ai.
  • Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. (2021). A survey on bias and fairness in machine learning. ACM Computing Surveys, 54(6), 1–35. https://doi.org/10.1145/3457607
  • Mhlambi, S. (2020). From rationality to relationality: Ubuntu as an ethical and human rights framework for artificial intelligence governance [Paper presentation]. Carr Center for Human Rights Policy Discussion Paper Series, vol. 9. Cambridge, MA, USA.
  • Miailhe, N. (2018). The geopolitics of artificial intelligence: The return of empires? Politique Étrangère, 83(3), 105–117.
  • Microsoft. (2022). Responsible AI. https://www.microsoft.com/en-us/ai/responsible-ai.
  • Microsoft News Center. (2022, January 13). Leaders across healthcare, academia and technology form new coalition to transform healthcare journey through responsible AI adoption https://news.microsoft.com/2022/01/13/leaders-across-healthcare-academia-and-technology-form-new-coalition-to-transform-healthcare-journey-through-responsible-ai-adoption/.
  • Milano, S., Taddeo, M., & Floridi, L. (2020). Recommender systems and their ethical challenges. AI & Society, 35(4), 957–967. https://doi.org/10.1007/s00146-020-00950-y
  • Miller, F. A., Katz, J. H., & Gans, R. (2018). The OD imperative to add inclusion to the algorithms of artificial intelligence. OD Practitioner, 50(1), 8.
  • Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1–38. https://doi.org/10.1016/j.artint.2018.07.007
  • Misselhorn, C. (2018). Artificial morality. concepts, issues and challenges. Society, 55(2), 161–169. https://doi.org/10.1007/s12115-018-0229-y
  • Mitchell, M., Wu, S., Zaldivar, A., Barnes, P., Vasserman, L., Hutchinson, B., Spitzer, E., Raji, I. D., & Gebru, T. (2019). Model cards for model reporting [Paper presentation]. Proceedings of the Conference on Fairness, Accountability, and Transparency. Atlanta, GA, USA.
  • Mittelstadt, B. (2019). AI ethics – too principled to fail? SSRN Electronic Journal, 1, 501–507. https://doi.org/10.2139/ssrn.3391293
  • Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2), 205395171667967. https://doi.org/10.1177/2053951716679679
  • Mohammed, P. S., & Nell’ Watson, E. (2019). Towards inclusive education in the age of artificial intelligence: Perspectives, challenges, and opportunities (artificial intelligence and inclusive education) (pp. 17–37). Springer.
  • Mozilla. (2022). Open source audit tooling (OAT) project. https://foundation.mozilla.org/en/what-we-fund/fellowships/oat/.
  • Murad, C., & Munteanu, C. (2019). I don’t know what you’re talking about, HALexa” the case for voice user interface guidelines [Paper presentation]. Proceedings of the 1st International Conference on Conversational User Interfaces. Dublin, Ireland.
  • Muro, M., Whiton, J., & Maxim, R. (2019). What jobs are affected by AI? Better-paid, better-educated workers face the most exposure. Metropolitan Policy Program Report.
  • Murphy, E., Walsh, P. P., & Banerjee, A. (2021). Framework for achieving the environmental sustainable development goals. Environmental Protection Agency.
  • Mutlu, E., & Garibay, O. O. (2021). A quantum leap for fairness: Quantum Bayesian approach for fair decision making [Paper presentation]. International Conference on Human-Computer Interaction. Virtual Event, USA.
  • Neary, M., & Schueller, S. M. (2018). State of the field of mental health apps. Cognitive and Behavioral Practice, 25(4), 531–537. https://doi.org/10.1016/j.cbpra.2018.01.002
  • Nielsen, J. (1994). Usability engineering. Morgan Kaufmann.
  • Nissenbaum, H. (1996). Accountability in a computerized society. Science and Engineering Ethics, 2(1), 25–42. https://doi.org/10.1007/bf02639315
  • NIST. (2021). AI risk management framework concept paper. https://www.nist.gov/itl/ai-risk-management-framework.
  • Ntoa, S., Margetis, G., Antona, M., & Stephanidis, C. (2021a). Digital accessibility in intelligent environments (human-automation interaction: mobile computing). Springer.
  • Ntoa, S., Margetis, G., Antona, M., & Stephanidis, C. (2021b). User experience evaluation in intelligent environments: A comprehensive framework. Technologies, 9(2), 41. https://doi.org/10.3390/technologies9020041
  • Ntoutsi, E., Fafalios, P., Gadiraju, U., Iosifidis, V., Nejdl, W., Vidal, M. E., Ruggieri, S., Turini, F., Papadopoulos, S., & Krasanakis, E. (2020). Bias in data‐driven artificial intelligence systems—AN introductory survey. Wiley Interdisciplinary Reviews, 10(3), e1356. https://doi.org/10.1002/widm.1356
  • Office of the Under Secretary of Defense for Research and Engineering. (2022). USD(R&E) technology vision for an era of competition. https://www.cto.mil/wp-content/uploads/2022/02/usdre_strategic_vision_critical_tech_areas.pdf.
  • Office N. A. I. I. (2022). National artificial intelligence initiative https://www.ai.gov/.
  • Okamura, K., & Yamada, S. (2020). Adaptive trust calibration for human-AI collaboration. PLOS One, 15(2), e0229132. https://doi.org/10.1371/journal.pone.0229132
  • Oliva, E. I. P. N., Gherardi-Donato, E. C. d S., Bermúdez, J. Á., & Facundo, F. R. G. (2018). Use of Facebook, perceived stress and alcohol consumption among university students. Ciencia & Saude Coletiva, 23(11), 3675–3681. https://doi.org/10.1590/1413-812320182311.27132016
  • Oliveira, J. D., Couto, J. C., Paixão-Cortes, V. S. M., & Bordini, R. H. (2022). Improving the design of ambient intelligence systems: Guidelines based on a systematic review. International Journal of Human–Computer Interaction, 38(1), 19–27. https://doi.org/10.1080/10447318.2021.1926114
  • Olsson, T., & Väänänen, K. (2021). How does AI challenge design practice? Interactions, 28(4), 62–64. https://doi.org/10.1145/3467479
  • OpenAI. (2018, May 16). AI and compute. https://openai.com/blog/ai-and-compute/.
  • Organisation for Economic Co-operation and Development. (2021). OECD AI principles. https://oecd.ai/en/ai-principles.
  • Osmani, N. (2020). The complexity of criminal liability of AI systems. Masaryk University Journal of Law and Technology, 14(1), 53–82. https://doi.org/10.5817/mujlt2020-1-3
  • Park, S., Kim, I., Lee, S. W., Yoo, J., Jeong, B., & Cha, M. (2015). Manifestation of depression and loneliness on social networks: A case study of young adults on Facebook [Paper presentation]. Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing. Vancouver, Canada.
  • PEGA. (2020). The future of work. https://www.pega.com/future-of-work.
  • Peters, D., Calvo, R. A., & Ryan, R. M. (2018). Designing for motivation, engagement and wellbeing in digital experience. Frontiers in Psychology, 9, 797. https://doi.org/10.3389/fpsyg.2018.00797
  • Pomerantsev, P. (2019). This is not propaganda: Adventures in the war against reality. PublicAffairs.
  • Poquet, O., & Laat, M. (2021). Developing capabilities: Lifelong learning in the age of AI. British Journal of Educational Technology, 52(4), 1695–1708. https://doi.org/10.1111/bjet.13123
  • PricewaterhouseCoopers. (2017). Sizing the prize: What’s the real value of ai for your business and how can you capitalise? https://www.pwc.com/gx/en/news-room/docs/report-pwc-ai-analysis-sizing-the-prize.pdf.
  • Quy, T. L., Roy, A., Iosifidis, V., & Ntoutsi, E. (2021). A survey on datasets for fairness-aware machine learning. 12(3), e1452. https://doi.org/10.1002/widm.1452
  • Rabhi, Y., Mrabet, M., & Fnaiech, F. (2018). A facial expression controlled wheelchair for people with disabilities. Computer Methods and Programs in Biomedicine, 165, 89–105. https://doi.org/10.1016/j.cmpb.2018.08.013
  • Rajabi, A., & Garibay, O. O. (2021, July 24–29). Towards fairness in AI: Addressing bias in data using GANs [Paper presentation]. HCI International 2021 – Late Breaking Papers: Multimodality, EXtended Reality, and Artificial Intelligence: 23rd HCI International Conference, HCII 2021, Virtual Event, 509–518. https://doi.org/10.1007/978-3-030-90963-5_39
  • Ramsay, G., & Robertshaw, S. (2019). Weaponising news: RT, Sputnik and targeted disinformation. King’s College London Centre for the Study of Media, Communication & Power.
  • Raworth, K. (2017). Doughnut economics: Seven ways to think like a 21st-century economist. Chelsea Green Publishing.
  • Raworth, K. (2018). A healthy economy should be designed to thrive, not grow. https://www.ted.com/speakers/kate_raworth
  • Renz, A., & Vladova, G. (2021). Reinvigorating the discourse on human-centered artificial intelligence in educational technologies. Technology Innovation Management Review, 11(5), 5–16. https://doi.org/10.22215/timreview/1438
  • Responsible Artificial Intelligence Institute. (2022). Responsible Artificial Intelligence Institute. https://www.responsible.ai/
  • Riedl, M. O. (2019). Human‐centered artificial intelligence and machine learning. Human Behavior and Emerging Technologies, 1(1), 33–36. https://doi.org/10.1002/hbe2.117
  • Riva, G., Banos, R. M., Botella, C., Wiederhold, B. K., & Gaggioli, A. (2012). Positive technology: Using interactive technologies to promote positive functioning. Cyberpsychology, Behavior and Social Networking, 15(2), 69–77. https://doi.org/10.1089/cyber.2011.0139
  • Roberts, M. (2018). Censored. Princeton University Press.
  • Robinson, L., Cotten, S. R., Ono, H., Quan-Haase, A., Mesch, G., Chen, W., Schulz, J., Hale, T. M., & Stern, M. J. (2015). Digital inequalities and why they matter. Information, Communication & Society, 18(5), 569–582. https://doi.org/10.1080/1369118x.2015.1012532
  • Rudin, C., & Radin, J. (2019). Why are we using black box models in AI when we don’t need to? A lesson from an explainable AI competition. Harvard Data Science Review, 1(2), 1.2. https://doi.org/10.1162/99608f92.5a8a3a3d
  • Rutherglen, G. (1987). Disparate impact under title VII: An objective theory of discrimination. Virginia Law Review, 73(7), 1297. https://doi.org/10.2307/1072940
  • Ryan, M. (2022). The social and ethical impacts of artificial intelligence in agriculture: Mapping the agricultural AI literature. AI & Society, https://doi.org/10.1007/s00146-021-01377-9
  • Ryan, R. M., & Deci, E. L. (2017). Self-determination theory: Basic psychological needs in motivation, development, and wellness. Guilford Publications.
  • Ryff, C. D., & Singer, B. H. (2008). Know thyself and become what you are: A eudaimonic approach to psychological well-being. Journal of Happiness Studies, 9(1), 13–39. https://doi.org/10.1007/s10902-006-9019-0
  • Saleiro, P., Kuester, B., Hinkson, L., London, J., Stevens, A., Anisfeld, A., Rodolfa, K. T., & Ghani, R. (2018). Aequitas: A bias and fairness audit toolkit. arXiv preprint arXiv:1811.05577
  • Samuel, G., Lucivero, F., & Somavilla, L. (2022). The environmental sustainability of digital technologies: Stakeholder practices and perspectives. Sustainability, 14(7), 3791. https://doi.org/10.3390/su14073791
  • Sarakiotis, V. (2020). Human-centered AI: Challenges and opportunities. UBIACTION
  • Sawyer, B. D., Miller, D. B., Canham, M., & Karwowski, W. (2021). Human factors and ergonomics in design of A3: Automation, autonomy, and artificial intelligence. In G. Salvendy & W. Karwowski (Eds.), Handbook of human factors and ergonomics (pp. 1385–1416). Hoboken, NJ, USA: Wiley.
  • Seaver, N. (2017). Algorithms as culture: Some tactics for the ethnography of algorithmic systems. Big Data & Society, 4(2), 205395171773810. https://doi.org/10.1177/2053951717738104
  • Secretary of State for Digital, C., Media and Sport and Secretary of State for the Home Department by Command of Her Majesty. (2020, December 15). Online harms white paper: Full government response to the consultation. https://www.gov.uk/government/consultations/online-harms-white-paper/outcome/online-harms-white-paper-full-government-response.
  • Seligman, M. E. (2012). Flourish: A visionary new understanding of happiness and well-being. Simon and Schuster.
  • Shearer, E., & Mitchell, A. (2021). News use across social media platforms in 2020. Pew Research Center. https://www.pewresearch.org/journalism/2021/01/12/news-use-across-social-media-platforms-in-2020/
  • Shneiderman, B, University of Maryland, College Park. (2020b). Human-centered artificial intelligence: Three fresh ideas. AIS Transactions on Human-Computer Interaction, 12(3), 109–124. https://doi.org/10.17705/1thci.00131
  • Shneiderman, B. (2000). Universal usability. Ubiquity, 2000(August), 1–91. https://doi.org/10.1145/347634.350994
  • Shneiderman, B. (2016). Opinion: The dangers of faulty, biased, or malicious algorithms requires independent oversight. Proceedings of the National Academy of Sciences of the United States of America, 113(48), 13538–13540. https://doi.org/10.1073/pnas.1618211113
  • Shneiderman, B. (2020a). Bridging the Gap Between Ethics and Practice. ACM Transactions on Interactive Intelligent Systems, 10(4), 1–31. https://doi.org/10.1145/3419764
  • Shneiderman, B. (2022). Human-Centered AI. Oxford University Press.
  • Smith, A. & Director, F. (2020). Using artificial intelligence and algorithms. US Federal Trade Commission, FTC Business Blog, April, https://www.ftc.gov/news-events/blogs/business-blog/2020/04/using-artificial-intelligence-algorithms.
  • Solove, D. J. (2002). Conceptualizing privacy. California Law Review, 90(4), 1087. https://doi.org/10.2307/3481326
  • Sowa, K., Przegalinska, A., & Ciechanowski, L. (2021). Cobots in knowledge work: Human–AI collaboration in managerial professions. Journal of Business Research, 125, 135–142. https://doi.org/10.1016/j.jbusres.2020.11.038
  • Spielkamp, M. (2017). Inspecting algorithms for bias. Technology Review, 120(4), 96–98.
  • Stanton, B., & Jensen, T. (2021). Trust and artificial intelligence (Preprint). NIST Interagency/Internal Report (NISTIR) – 8332. https://www.nist.gov/publications/trust-and-artificial-intelligence
  • Steffen, D. (2021). Taking the next step towards convergence of design and HCI: Theories, principles, methods [Paper presentation]. In G. Salvendy & W. Karwowski (Eds.), International Conference on Human-Computer Interaction. Hoboken, NJ, USA: Wiley.
  • Stephanidis, C. (2021). Design for all in digital technologies. In G. Salvendy & W. Karwowski (Eds.), Handbook of human factors and ergonomics (pp. 1187–1215). Hoboken, NJ, USA: Wiley.
  • Stephanidis, C., Antona, M., & Ntoa, S. (2021). Human factors in ambient intelligence environments. In G. Salvendy & W. Karwowski (Eds.), Handbook of human factors and ergonomics (pp. 1058–1084). Hoboken, NJ, USA: Wiley.
  • Stephanidis, C., Salvendy, G., Antona, M., Chen, J. Y. C., Dong, J., Duffy, V. G., Fang, X., Fidopiastis, C., Fragomeni, G., Fu, L. P., Guo, Y., Harris, D., Ioannou, A., Jeong, K-a., Konomi, S. i., Krömker, H., Kurosu, M., Lewis, J. R., Marcus, A., … Zhou, J. (2019). Seven HCI grand challenges. International Journal of Human–Computer Interaction, 35(14), 1229–1269. https://doi.org/10.1080/10447318.2019.1619259
  • Strauß, S. (2021). Deep automation bias: How to tackle a wicked problem of AI? Big Data and Cognitive Computing, 5(2), 18. https://doi.org/10.3390/bdcc5020018
  • Strich, F., Mayer, A.-S., & Fiedler, M, University of Bayreuth, Germany (2021). What do I do in a world of artificial intelligence? Investigating the impact of substitutive decision-making AI systems on employees’ professional role identity. Journal of the Association for Information Systems, 22(2), 304–324. https://doi.org/10.17705/1jais.00663
  • Subramonyam, H., Im, J., Seifert, C., & Adar, E. (2022, April 29–May 5, 2022). Human-AI guidelines in practice: The power of leaky abstractions in cross-disciplinary teams [Paper presentation]. CHI Conference on Human Factors in Computing Systems (CHI ’22). New Orleans, LA, USA.
  • Sung, E. C., Bae, S., Han, D.-I D., & Kwon, O. (2021). Consumer engagement via interactive artificial intelligence and mixed reality. International Journal of Information Management, 60, 102382. https://doi.org/10.1016/j.ijinfomgt.2021.102382
  • Szalavitz, M., Rigg, K. K., & Wakeman, S. E. (2021). Drug dependence is not addiction-and it matters. Annals of Medicine, 53(1), 1989–1992. https://doi.org/10.1080/07853890.2021.1995623
  • Tahiroglu, D., & Taylor, M. (2019). Anthropomorphism, social understanding, and imaginary companions. The British Journal of Developmental Psychology, 37(2), 284–299. https://doi.org/10.1111/bjdp.12272
  • Taplin, J. (2017). Move fast and break things: How Facebook, Google, and Amazon have cornered culture and what it means for all of us. Pan Macmillan.
  • Taylor, M. (2020, July 14). German court bans Tesla 'Autopilot’ name for misleading customers. Forbes https://www.forbes.com/sites/michaeltaylor/2020/07/14/german-court-bans-tesla-autopilot-name-for-misleading-customers/.
  • The Global Partnership on Artificial Intelligence. (2022). GPAI. https://www.gpai.ai/
  • Thorson, K., Cotter, K., Medeiros, M., & Pak, C. (2021). Algorithmic inference, political interest, and exposure to news and politics on Facebook. Information, Communication & Society, 24(2), 183–200. https://doi.org/10.1080/1369118x.2019.1642934
  • Tolan, S., Pesole, A., Martínez-Plumed, F., Fernández-Macías, E., Hernández-Orallo, J., & Gómez, E. (2021). Measuring the occupational impact of AI: Tasks, cognitive abilities and AI benchmarks. Journal of Artificial Intelligence Research, 71, 191–236. https://doi.org/10.1613/jair.1.12647
  • Top500.org. (2020). November 2020 Top500. https://www.top500.org/lists/top500/2020/11/.
  • Truog, R. D., Mitchell, C., & Daley, G. Q. (2020). The toughest triage - allocating ventilators in a pandemic. The New England Journal of Medicine, 382(21), 1973–1975. https://doi.org/10.1056/NEJMp2005689
  • Uddin, G. A., Alam, K., & Gow, J. (2019). Ecological and economic growth interdependency in the Asian economies: An empirical analysis. Environmental Science and Pollution Research International, 26(13), 13159–13172. https://doi.org/10.1007/s11356-019-04791-1
  • UK Research and Innovation. (2021, October 15). Responsible innovation. https://www.ukri.org/about-us/policies-standards-and-data/good-research-resource-hub/responsible-innovation/
  • United Nations Department of Economic and Social Affairs. (2018, April 20). The 17 goals. https://sdgs.un.org/goals
  • United Nations Educational Scientific and Cultural Organization (UNESCO). (2021). Recommendation on the ethics of artificial intelligence. https://en.unesco.org/artificial-intelligence/ethics.
  • US Chamber of Commerce. (2022, January 18). U.S. chamber launches bipartisan commission on artificial intelligence to advance U.S. leadership. https://www.uschamber.com/technology/u-s-chamber-launches-bipartisan-commission-on-artificial-intelligence-to-advance-u-s-leadership.
  • van Allen, P. (2018). Prototyping ways of prototyping AI. Interactions, 25(6), 46–51. https://doi.org/10.1145/3274566
  • Van Ness, L. (2020, February 20). DNA databases are boon to police but menace to privacy, critics say. www.pewtrusts.org/en/research-and-analysis/blogs/stateline/2020/02/20/dna-databases-are-boon-to-police-but-menace-to-privacy-critics-say.
  • van Oudheusden, M. (2014). Where are the politics in responsible innovation? European governance, technology assessments, and beyond. Journal of Responsible Innovation, 1(1), 67–86. https://doi.org/10.1080/23299460.2014.882097
  • Van Raemdonck, N. (2019). The echo chamber of anti-vaccination conspiracies: Mechanisms of radicalization on Facebook and Reddit. Institute for Policy, Advocacy and Governance (IPAG) Knowledge Series, Forthcoming.
  • Vinichenko, M. V., Melnichuk, A. V., & Karácsony, P. (2020). Technologies of improving the university efficiency by using artificial intelligence: Motivational aspect. Entrepreneurship and Sustainability Issues, 7(4), 2696–2714. https://doi.org/10.9770/jesi.2020.7.4(9)
  • Vinuesa, R., Azizpour, H., Leite, I., Balaam, M., Dignum, V., Domisch, S., Fellander, A., Langhans, S. D., Tegmark, M., & Fuso Nerini, F. (2020). The role of artificial intelligence in achieving the sustainable development goals. Nature Communications, 11(1), 233. https://doi.org/10.1038/s41467-019-14108-y
  • Visvizi, A., Lytras, M. D., Damiani, E., & Mathkour, H. (2018). Policy making for smart cities: Innovation and social inclusive economic growth for sustainability. Journal of Science and Technology Policy Management, 9(2), 126–133. https://doi.org/10.1108/jstpm-07-2018-079
  • Vochozka, M., Kliestik, T., Kliestikova, J., & Sion, G. (2018). Participating in a highly automated society: How artificial intelligence disrupts the job market. Economics, Management, and Financial Markets, 13(4), 57–62. http://doi.org/10.22381/EMFM13420185
  • Wachter, S., Mittelstadt, B., & Floridi, L. (2017). Transparent, explainable, and accountable AI for robotics. Sci Robot, 2(6), eaan6080. https://doi.org/10.1126/scirobotics.aan6080
  • Wallach, D. P., Flohr, L. A., & Kaltenhauser, A. (2020). Beyond the buzzwords: on the perspective of AI in UX and vice versa [Paper presentation]. International Conference on Human-Computer Interaction. Virtual Event, Denmark.
  • Wallach, W., & Allen, C. (2008). Moral machines: Teaching robots right from wrong. Oxford University Press.
  • Wang, W., & Siau, K. (2018). Artificial intelligence: A study on governance, policies, and regulations [Paper presentation]. MWAIS 2018 Proceedings, vol. 40. St. Louis, MO, USA.
  • WattTime. (2022). WattTime. https://www.watttime.org/
  • WEAll. (2021). Wellbeing economy alliance. https://weall.org/
  • Wellbeing AI Research Institute. (2022). Wellbeing AI Research Institute. https://wellbeingairesearchinstitute.com/.
  • Wellner, P. A. (2005). Effective compliance programs and corporate criminal prosecutions. Cardozo Law Review, 27, 497.
  • West, D. M. (2018). The future of work: Robots, AI, and automation. Brookings Institution Press.
  • West, D. M., & Allen, J. R. (2020, July 28). Turning point. Policymaking in the era of artificial intelligence. https://www.brookings.edu/book/turning-point/.
  • Wieringa, M. (2020). What to account for when accounting for algorithms: A systematic literature review on algorithmic accountability [Paper presentation]. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. Barcelona, Spain.
  • Wigger, D. (2020). Automatisiertes Fahren und strafrechtliche Verantwortlichkeit wegen Fahrlässigkeit (vol. 2). Nomos Verlag.
  • Williams, B. A., Brooks, C. F., & Shmargad, Y. (2018). How algorithms discriminate based on data they lack: Challenges, solutions, and policy implications. Journal of Information Policy, 8(1), 78–115. https://doi.org/10.5325/jinfopoli.8.1.0078
  • Winfield, A. F., Michael, K., Pitt, J., & Evers, V. (2019). Machine ethics: The design and governance of ethical AI and autonomous systems. Proceedings of the IEEE, 107(3), 509–517. https://doi.org/10.1109/JPROC.2019.2900622
  • Wing, J. M. (2021). Trustworthy AI. Communications of the ACM, 64(10), 64–71. https://doi.org/10.1145/3448248
  • Winograd, T. (2006). Shifting viewpoints: Artificial intelligence and human–computer interaction. Artificial Intelligence, 170(18), 1256–1258. https://doi.org/10.1016/j.artint.2006.10.011
  • Winslow, B., Chadderdon, G. L., Dechmerowski, S. J., Jones, D. L., Kalkstein, S., Greene, J. L., & Gehrman, P. (2016). Development and clinical evaluation of an mHealth application for stress management. Frontiers in Psychiatry, 7, 130. https://doi.org/10.3389/fpsyt.2016.00130
  • Winslow, B., Kwasinski, R., Hullfish, J., Ruble, M., Lynch, A., Rogers, T., Nofziger, D., Brim, W., & Woodworth, C. (2022). Automated stress detection using mobile application and wearable sensors improves symptoms of mental health disorders in military personnel. Frontiers in Digital Health, 4. https://doi.org/10.3389/fdgth.2022.919626
  • Wired Magazine. (2018). AI and the future of work. https://www.wired.com/wiredinsider/2018/04/ai-future-work/
  • Wirth, R., & Hipp, J. (2000). CRISP-DM: Towards a standard process model for data mining [Paper presentation]. Proceedings of the 4th International Conference on the Practical Applications of Knowledge Discovery and Data Mining. Manchester, UK.
  • Wu, W., Huang, T., & Gong, K. (2020). Ethical principles and governance technology development of AI in China. Engineering, 6(3), 302–309. https://doi.org/10.1016/j.eng.2019.12.015
  • Xu, W. (2019). Toward human-centered AI. Interactions, 26(4), 42–46. https://doi.org/10.1145/3328485
  • Xu, W., Dainoff, M. J., Ge, L., & Gao, Z. (2022). Transitioning to human interaction with AI systems: New challenges and opportunities for HCI professionals to enable human-centered AI. International Journal of Human–Computer Interaction, 1–25. https://doi.org/10.1080/10447318.2022.2041900
  • Yang, Q., Scuito, A., Zimmerman, J., Forlizzi, J., & Steinfeld, A. (2018). Investigating how experienced UX designers effectively work with machine learning [Paper presentation]. Proceedings of the 2018 Designing Interactive Systems Conference. Hong Kong, China.
  • Yang, Q., Steinfeld, A., Rosé, C., & Zimmerman, J. (2020). Re-examining whether, why, and how human-AI interaction is uniquely difficult to design [Paper presentation]. Proceedings of the 2020 Chi Conference on Human Factors in Computing Systems. Virtual Event, USA.
  • Yanisky-Ravid, S., & Hallisey, S. (2018). ‘Equality and privacy by design’: Ensuring artificial intelligence (AI) is properly trained & Fed: A new model of AI data transparency & certification as safe harbor procedures. SSRN Electronic Journal, https://doi.org/10.2139/ssrn.3278490
  • Yao, M. Z., Rice, R. E., & Wallis, K. (2007). Predicting user concerns about online privacy. Journal of the American Society for Information Science and Technology, 58(5), 710–722. https://doi.org/10.1002/asi.20530
  • Završnik, A. (2017). Big data, crime and social control. Routledge.
  • Zemel, R., Wu, Y., Swersky, K., Pitassi, T., & Dwork, C. (2013). Learning fair representations [Paper presentation]. International conference on machine learning. Atlanta, GA, USA.
  • Zimmer, M. J. (1995). Emerging uniform structure of disparate treatment discrimination litigation. Georgia Law Review, 30, 563.
  • Zouhaier, L., Hlaoui, Y. B. D., & Ayed, L. B. (2021). A reinforcement learning based approach of context-driven adaptive user interfaces [Paper presentation]. 2021 IEEE 45th Annual Computers, Software, and Applications Conference (COMPSAC). Madrid, Spain.