Editorial

Introduction to the Special Issue on AI, Decision-Making, and the Impact on Humans

Salvatore Andolina & Joseph A. Konstan
Pages 1367-1370 | Received 22 Jan 2023, Accepted 25 Jan 2023, Published online: 13 Feb 2023

Artificial intelligence (AI) algorithms are increasingly making decisions and supporting human decisions. Both autonomous systems and algorithm-in-the-loop decision-support systems use AI algorithms and data-driven models to provide or deny access to credit, healthcare, and other essential resources while steering life-changing decisions related to criminal justice, education, and transportation, among others. These systems are often built without taking into account the human factors associated with their use. The models are often opaque, the recommendations too difficult to interpret, and the systems unaware of the values and consequences of their calculations.

This special issue emerged from a worldwide effort to establish priorities for human-centered artificial intelligence (HCAI) (Ozmen Garibay et al., Citation2023). It brings forward AI research that places humans first in the design of decision-making and decision-support systems. We are pleased to present a selection of papers that study human factors in decision-making systems, AI techniques to support human-centered decision-making, and case studies in incorporating HCAI principles into algorithmic decision-making and decision-support systems.

1. The selection process

After the call for papers was published online in July 2021, we began receiving abstract proposals from potential authors. The submission period for abstract proposals was open until November of the same year, and by the end of this period, we had received a total of 24 proposals. The guest editors carefully reviewed all the abstract proposals and determined that 11 of them were relevant and suitable to be considered for the special issue. The corresponding authors of these 11 proposals were encouraged to submit full papers for further review.

In November 2021, 29 full manuscripts were submitted to the special issue, and the guest editors screened them for relevance to the topics of AI, Decision-Making, and the Impact on Humans. Eight papers were rejected at this stage, mainly due to limited relevance. The remaining 21 papers underwent a first round of review by subject experts and one of the guest editors. This round was completed in March 2022 and, based on its results, 10 submissions were retained and the remainder rejected. Two papers were accepted after minor revisions, while the other eight underwent major revisions. The second round of reviews was completed in July 2022 with the acceptance of the eight revised papers. All ten papers that passed this rigorous review process and were selected for publication conform to the high standards of papers published in the IJHCI journal and are of crucial significance to the topic of AI, Decision-Making, and the Impact on Humans.

2. Accepted papers

This section provides a summary of the ten manuscripts included in the special issue. The papers are ordered so that those primarily focusing on research methods appear first, while those showcasing entirely new systems appear last. The remaining papers fall between these two extremes and present studies that are either not system-specific or involve simulated or existing systems.

In What If Artificial Intelligence Become Completely Ambient in Our Daily Lives? Exploring Future Human-AI Interaction through High Fidelity Illustrations, Sunok Lee, Minha Lee, and Sangsu Lee (Citation2022) make a compelling point about the importance of envisioning user-centered future design directions for human-AI interactions. To this end, the authors propose an interesting methodology: they conducted a collaborative workshop with HCI designers and illustrators to create high-fidelity illustrations of futures in which people’s daily lives coexist with AI. The illustrations were then shown in an online exhibition, allowing potential users to share their perceptions, expectations, and concerns about the future of AI. The authors found that users considered three features to be important in their interactions with AI: a tailored exterior connected to a personally owned AI, fluid multimodal interactions that allow for natural interactions anywhere, and the ability for AI to ubiquitously support their daily routines.

In Explainable Artificial Intelligence: Evaluating the Objective and Subjective Impacts of xAI on Human-Agent Interaction, Andrew Silva et al. (Citation2022) focus on explainable artificial intelligence (XAI) from a human-agent interaction perspective. They developed a novel XAI survey to measure the degree of explainability of a method and conducted a large-scale user study in which a virtual agent provides advice to a human on a decision-making task. The authors recruited 286 participants to compare the degree of explainability of a range of XAI methods, including feature importance, probability scores, decision trees, counterfactual reasoning, natural language explanations, and case-based reasoning. The results inform future research in this area by highlighting the benefits of counterfactual explanations and the shortcomings of confidence scores for explainability.

In Improving Trustworthiness of AI Solutions: A Qualitative Approach to Support Ethically-Grounded AI Design, Andrea Vianello et al. (Citation2022) present a method for evaluating the trustworthiness of AI systems from the perspective of normative ethics. The method incorporates three branches of ethical thinking, namely utilitarian, deontological, and virtue ethics, to offer a comprehensive understanding of how people perceive AI decisions within their socio-technical context. A key contribution of the paper is the development of a set of interview questions based on the System Causability Scale and three additional sets of questions. These open-ended questions examine whether users understand the reasons behind an AI result, whether the AI result is informative enough for decision-making, and how users consider the utilitarian, deontological, and virtue ethical aspects of the AI solution within their socio-technical context. The method is illustrated through a case study of an AI recommendation system used in a business setting. The results suggest that the approach can help identify a wide range of practical issues related to AI systems. The design opportunities identified for the system used in the case study were used to uncover generalized design principles that can be applied to other AI solutions to improve their trustworthiness and align them with ethical considerations.

In Insight into User Acceptance and Adoption of Autonomous Systems in Mission Critical Environments, Kristin Weger et al. (Citation2022) explore the factors influencing the acceptance and adoption of autonomous systems by users in the context of mission-critical environments. Semi-structured interviews were conducted with 47 experts in the field to gather data on participants’ subjective experiences, beliefs, and preferences regarding their use of partial, multi-level, or complete autonomous or decision-making systems. The study found that no single factor is considered essential for the acceptance and adoption of autonomous systems; rather, a combination of factors such as ease of use, reliability, usefulness, transparency, and security/safety is important regardless of the level of autonomy. The study highlights the importance of human-centered approaches in the design of autonomous systems for mission-critical environments to improve their effectiveness and user acceptance.

In Tactical-Level Explanation Is Not Enough: Effect of Explaining AV’s Lane-Changing Decisions on Drivers’ Decision-Making, Trust, and Emotional Experience, Yiwen Zhang et al. (Citation2022) examine the effect of providing explanations and a confirmation option for lane-changing decisions made by autonomous vehicles (AVs) on human drivers’ decision-making, trust, and emotional experience. The study was carried out using a driving simulator with 30 participants divided into three groups, and the results were collected through questionnaires and interviews. The results show that providing explanations alone had little effect on trust and experience but resulted in worse decision-making performance. Providing a confirmation option after an explanation improved trust and the feeling of control but had mixed effects on decision-making performance. The study also found that trust and decision-making performance varied significantly depending on the lane-changing scenario, suggesting that future research should focus on complex and conflicting lane-changing scenarios to understand how to provide good explanations for better decision-making performance and calibrated trust in these crucial situations.

In Enhancing Fairness Perception – Towards Human-Centred AI and Personalized Explanations, Avital Shulner-Tal et al. (Citation2022) explore the factors that affect laypeople’s perceptions of fairness and understanding of algorithmic decision-making systems (ADMSs) using a simulated AI-based recruitment decision-support system. Their large-scale online between-subject study focuses on three aspects: system characteristics, personality characteristics, and demographic characteristics. The results suggest that the value of the input features, the output, the input-output relation, whether the system provides an explanation for the output, and the style of the explanation all play a significant role in shaping laypeople’s perceptions of fairness. Additionally, demographic characteristics such as age, residence, education level, and income level, and personality characteristics such as openness, agreeableness, and emotional stability also have an impact on fairness perceptions. The study also found that providing explanations for the outcome of the system increases laypeople’s sense of understanding, which ultimately increases the level of perceived fairness. Based on the findings, the authors derive a framework that can be used to provide users with personalized explanations based on their personality and demographic characteristics, as well as the relevant characteristics of the system.

In Humans and Algorithms Detecting Fake News: Effects of Individual and Contextual Confidence on Trust in Algorithmic Advice, Chris Snijders, Rianne Conijn, Evie de Fouw, and Kilian van Berlo (Citation2022) study the effect that self-confidence has on following algorithmic advice. The authors noted that although previous research has indicated that humans are reluctant to follow algorithmic advice if they do not believe that the algorithm can outperform them, it is unclear whether this is due to individual or contextual factors. They addressed this research gap by conducting a study in the context of fake news detection. Data from 110 participants and 1,610 news stories, of which almost half were fake, suggest that participants’ willingness to accept advice decreases with their self-confidence. However, this effect is contextual rather than individual, meaning that even individuals who are generally confident may still be hesitant to accept algorithmic advice when they feel uncertain about a particular situation. The authors discuss implications for the design of experimental tests of algorithmic advice and for human-AI interaction in general.

In Promoting Music Exploration Through Personalized Nudging in a Genre Exploration Recommender, Yu Liang and Martijn C. Willemsen (Citation2022) explore an interesting case regarding exploration-oriented music recommender systems. They argue that despite the increasing popularity of such systems, it is not yet clear to what extent users actually explore away from their current preferences during explorations, and how musical expertise level plays a role in this behavior. The authors analyze the consistency between short-term, medium-term, and long-term preferences in data from previous studies and find that users with higher musical expertise have more consistent preferences for their top-listened artists and genres. They also conduct a user study to investigate the effects of nudging on genre exploration and the perceived helpfulness of recommendations. The study finds that nudging can increase the likelihood of users exploring distant genres, but that users with high musical expertise are less likely to do so. The results also show that a balanced trade-off between exploration and personalization is needed to improve the perceived helpfulness of recommendations.

In What Are the Users’ Needs? Design of a User-Centered Explainable Artificial Intelligence Diagnostic System, Xin He, Yeyi Hong, Xi Zheng, and Yong Zhang (Citation2022) study explanations in medical domains from the consumer’s perspective. The investigation begins with a systematic review of the literature to develop a user-centered XAI explanation Needs Library for the medical domain. Using this library, the authors then design and evaluate a consumer-centric XAI electrocardiogram diagnostic system prototype. The results of the study provide suggestions for future XAI system design for consumer users in the medical domain, including avoiding overly detailed explanations, considering multiple stakeholders, and providing explanations that are more relevant to the consumer’s personal circumstances.

In The Use of Responsible Artificial Intelligence Techniques in the Context of Loan Approval Processes, Erasmo Purificato et al. (Citation2022) aim to overcome the concerns surrounding the use of AI in decision-making processes in contexts where human knowledge and expertise are considered essential, such as loan approvals. The authors focus on two of the fundamental ethical principles identified by the High-Level Expert Group on AI, namely explainability and fairness, to build a system aiming to increase trust in and reliance on AI systems among domain experts. The system includes a standardized explainability tool that provides methods to obtain explanations for each prediction and a fairness tool that allows users to detect and mitigate biases in the model’s behavior. A novel Trust and Reliance Scale is proposed to evaluate the system’s explainability, while an A/B test is carried out to assess the fairness feature. Overall, the paper provides an interesting technical contribution together with a comprehensive evaluation framework that can be used as a benchmark for future research in this area.

Acknowledgements

For this special issue, we were fortunate to have the support of an outstanding group of anonymous reviewers who have given their time and expertise to rigorously review the 21 submitted papers that were admitted to the review phase. We are deeply grateful for their contribution and support, without which this special issue would not have been possible. We would also like to extend our sincere thanks to the editors of IJHCI for their guidance and support throughout the entire process.

Additional information

Notes on contributors

Salvatore Andolina

Salvatore Andolina is Senior Assistant Professor at the Politecnico di Milano, Italy. His research focuses on human-computer interaction, information retrieval, and creativity. His current research interests include the design of human-centered AI systems for empowering humans in a variety of ubiquitous, social, and collaborative settings.

Joseph A. Konstan

Joseph A. Konstan is Distinguished McKnight Professor of Computer Science and Engineering at the University of Minnesota where he has also served as the College of Science and Engineering’s Associate Dean for Research since 2019.

References

  • He, X., Hong, Y., Zheng, X., & Zhang, Y. (2022). What are the users’ needs? Design of a user-centered explainable artificial intelligence diagnostic system. International Journal of Human–Computer Interaction, 39(7), 1–24. https://doi.org/10.1080/10447318.2022.2095093
  • Lee, S., Lee, M., & Lee, S. (2022). What if artificial intelligence become completely ambient in our daily lives? Exploring future human-AI interaction through high fidelity illustrations. International Journal of Human–Computer Interaction, 39(7), 1–19. https://doi.org/10.1080/10447318.2022.2080155
  • Liang, Y., & Willemsen, M. C. (2022). Promoting music exploration through personalized nudging in a genre exploration recommender. International Journal of Human–Computer Interaction, 39(7). https://doi.org/10.1080/10447318.2022.2108060
  • Ozmen Garibay, O., Winslow, B., Andolina, S., Antona, M., Bodenschatz, A., Coursaris, C., Falco, G., Fiore, S. M., Garibay, I., Grieman, K., Havens, J. C., Jirotka, M., Kacorri, H., Karwowski, W., Kider, J., Konstan, J., Koon, S., Lopez-Gonzalez, M., Maifeld-Carucci, I., … Xu, W. (2023). Six human-centered artificial intelligence grand challenges. International Journal of Human–Computer Interaction, 39(3), 391–437. https://doi.org/10.1080/10447318.2022.2153320
  • Purificato, E., Lorenzo, F., Fallucchi, F., & De Luca, E. W. (2022). The use of responsible artificial intelligence techniques in the context of loan approval processes. International Journal of Human–Computer Interaction, 39(7), 1–20. https://doi.org/10.1080/10447318.2022.2081284
  • Shulner-Tal, A., Kuflik, T., & Kliger, D. (2022). Enhancing fairness perception – Towards human-centred AI and personalized explanations understanding the factors influencing laypeople’s fairness perceptions of algorithmic decisions. International Journal of Human–Computer Interaction, 39(7), 1–28. https://doi.org/10.1080/10447318.2022.2095705
  • Silva, A., Schrum, M., Hedlund-Botti, E., Gopalan, N., & Gombolay, M. (2022). Explainable artificial intelligence: Evaluating the objective and subjective impacts of xAI on human-agent interaction. International Journal of Human–Computer Interaction, 39(7), 1–15. https://doi.org/10.1080/10447318.2022.2101698
  • Snijders, C., Conijn, R., de Fouw, E., & van Berlo, K. (2022). Humans and algorithms detecting fake news: Effects of individual and contextual confidence on trust in algorithmic advice. International Journal of Human–Computer Interaction, 39(7), 1–12. https://doi.org/10.1080/10447318.2022.2097601
  • Vianello, A., Laine, S., & Tuomi, E. (2022). Improving trustworthiness of AI solutions: A qualitative approach to support ethically-grounded AI design. International Journal of Human–Computer Interaction, 39(7), 1–18. https://doi.org/10.1080/10447318.2022.2095478
  • Weger, K., Matsuyama, L., Zimmermann, R., Mesmer, B., Van Bossuyt, D., Semmens, R., & Eaton, C. (2022). Insight into user acceptance and adoption of autonomous systems in mission critical environments. International Journal of Human–Computer Interaction, 39(7). https://doi.org/10.1080/10447318.2022.2086033
  • Zhang, Y., Wang, W., Zhou, X., Wang, Q., & Sun, X. (2022). Tactical-level explanation is not enough: Effect of explaining AV’s lane-changing decisions on drivers’ decision-making, trust, and emotional experience. International Journal of Human–Computer Interaction, 39(7), 1–17. https://doi.org/10.1080/10447318.2022.2098965
