Editorial

Special Issue on AI in HCI

Margherita Antona, George Margetis, Stavroula Ntoa & Helmut Degen

The discussion around the rapid evolution and omnipresence of Artificial Intelligence (AI) is already rich, and AI is expected to become integrated into most aspects of everyday life. At the same time, Human-Computer Interaction (HCI), built on the strong foundations of ergonomics, cognitive science, and psychology, is a field critical to shaping the development of technology that is usable and useful to all users. In view of the growing spread of AI in computing systems of everyday use, it is becoming apparent that AI and HCI can mutually benefit from each other in fruitful cooperation.

AI technologies such as natural language processing, computer vision, and machine learning have already found their way into innovative applications, empowering the development of useful and intuitive interfaces, offering personalization and context awareness, and enhancing both the utility of technological systems and the user experience they provide. At the same time, AI-empowered systems can offer insights into user behavior and preferences, and assist in designing and evaluating interactive systems. On the other hand, HCI is critical to developing ethical, trustworthy, and reliable AI technologies. Focusing on humans and their needs, HCI contributes design principles, methods, and tools for the design and development of AI systems, fosters meaningful AI explainability, and ensures a seamless and beneficial collaboration between humans and AI.

In this context, the goal of this Special Issue is to bring together research findings and best practices from academia and industry, highlighting the results of joining AI and HCI forces, and demonstrating progress on related topics, open challenges, and novel technological solutions that leverage AI capabilities to address both new and well-known problems.

The 16 papers accepted for publication in this Special Issue address important topics such as trust, transparency, explainability, responsible AI, human-AI interaction and teaming, chatbots, health and well-being applications, and evaluation, and introduce a wide variety of perspectives, approaches, methods, and applications, reflecting the rapid growth and evolution of this research area.

The issue of trust is addressed in two papers. The first, by Choung et al. (2022), analyzes the role of trust in the intention to use AI technologies. Two studies are reported. The first examines the role of trust in the use of AI voice assistants, confirming that trust has a significant effect, which operates through perceived usefulness and participants’ attitude toward voice assistants. The second study distinguishes two dimensions of trust, namely human-like trust and functionality trust, and confirms the indirect effect of trust and the effects of perceived usefulness, ease of use, and attitude on the intention to use. Both dimensions of trust share a similar pattern of effects within the model, with functionality-related trust exhibiting a greater total impact than human-like trust. Overall, the paper offers a multidimensional measure of trust that can be utilized in future studies on trustworthy AI.

The second paper, by Herse et al. (2023), investigates trust in the context of human interaction with intelligent agents. An online experiment is presented that examines whether stimulus difficulty and the implementation of agent features, such as user interface and interaction style, influence user perception, trust, and decision-making. The results demonstrate that decision changes occur more often for hard stimuli, with participants choosing, across all features, to change their initial decision to follow the agent’s recommendation. Furthermore, agent features can be utilized to mediate user decision-making and trust during the task, though the direction and extent of this influence depend on the implemented feature and the difficulty of the task. The results emphasize the complexity of user trust in human-agent collaboration.

The paper by Nakao et al. (2022) addresses the issue of responsible AI by investigating the fairness of AI models. A design space exploration is proposed that supports not only data scientists but also domain experts in investigating AI fairness. Using loan applications as an example, a series of workshops was held with loan officers and data scientists to elicit their requirements. These requirements were then instantiated into a user interface to support human-in-the-loop fairness investigations, which was evaluated through a think-aloud user study. This work contributes better designs for investigating an AI model’s fairness, moving closer towards responsible AI.

In their paper, Jiang et al. (2022) introduce Situation Awareness as a conceptual framework for considering human-AI interaction, and argue that it is an appropriate and valuable theoretical lens through which to decompose and view the interaction as hierarchical layers that allow for closer inquiry and discovery. Furthermore, they illustrate why the Situation Awareness perspective is particularly relevant to the current need to understand Human-Centered AI by identifying tensions inherent in AI design and explaining how Situation Awareness may help alleviate them. Users’ enactment of Situation Awareness has the potential to mitigate some negative impacts of AI systems on user experience, improve human agency during AI system use, and promote more efficient and effective decision-making.

The paper by Ebermann et al. (2022) investigates how contradictions between an AI system’s decisions and related explanations and the user’s own judgment affect user acceptance. A research model is derived and validated through an experiment with 78 participants. The findings suggest that in decision situations with cognitive misfit, users experience negative moods significantly more often and evaluate the AI system’s support negatively. Overall, the article provides guidance for dealing with human-AI interaction during the decision-making process and sheds light on how explainable AI can increase users’ acceptance of such systems.

Le Guillou et al. (2022) present a review of research on human interaction with artificial agents, spanning explainable Artificial Intelligence and Human-Robot Interaction (HRI)/HCI. They find that, although the vocabulary and approaches differ, the concepts converge on the necessity for artificial agents to provide humans with an accurate mental model of their behavior. This has different implications depending on whether a tool/user interaction or a cooperative interaction is considered. In this context, the article adopts a cognitive science perspective on joint action and discusses joint-action mechanisms as cognitive requirements for future artificial agents.

The paper by Demir et al. (2022) focuses on human-machine teaming, and in particular on how interpersonal coordination dynamics between team members are associated with team performance and shared situation awareness in a simulated urban search and rescue task. The study investigates (1) how communication recurrence relates to coordination dynamics between a robot and a human operator when they used different communication strategies, and (2) how dynamic characteristics of human-robot interpersonal coordination are associated with team performance and shared situation awareness. The results indicate that (1) teams demonstrating more flexibility in their coordination dynamics are more adaptive to changes in the task environment, and (2) while robot explanations help improve shared situation awareness, revisiting the same communication pattern (i.e., routine coordination) is associated with better team performance but does not improve shared situation awareness.

Four papers concern interaction with conversational agents and chatbots. Wahde & Virgolin (2022) present a dialogue manager called DAISY, serving as the core part of a conversational agent, which implements five core principles for transparent and accountable conversational AI, namely interpretability, inherent capability to explain, independent data, interactive learning, and inquisitiveness. DAISY-based agents are trained through human-machine interaction and are capable of providing a concise and clear explanation of the actions required to reach a conclusion. The paper compares DAISY-based agents with two methods using deep neural networks (DNNs), providing quantitative results related to entity retrieval and qualitative results in terms of types of potential errors. The results show that DAISY-based agents achieve superior precision but lower recall, which may be preferable in task-oriented settings. In view of their high degree of interpretability, DAISY-based agents constitute a fundamentally different alternative to the currently popular DNN-based methods.

The second paper in this group, by Jin & Youn (2022), examines the factors that drive consumers to use AI-powered chatbots, and in particular the relations among AI-powered chatbots’ anthropomorphism (human-likeness, animacy, and intelligence), social presence, imagery processing, psychological ownership, and continuance intention in the context of human-AI interaction. The results show that consumers’ perception of the human-likeness of AI-powered chatbots is a positive predictor of social presence and imagery processing. Imagery processing is a positive predictor of psychological ownership of the products (fashion industry) and services (tourism industry) promoted by chatbots. Most importantly, social presence and imagery processing are positive predictors of AI-chatbot continuance intention. These empirical findings entail practical implications for AI-powered chatbot developers and managerial implications for commercial brands.

Zhang et al. (2022) analyze the use of AI-based chatbots by tourism service providers. Based on the Unified Theory of Acceptance and Use of Technology 2 (UTAUT2), the Theory of Perceived Risk, anthropomorphism, and personalization, they propose an integrated model to investigate the determinants behind customers’ continuance intention to use chatbots for tourism. In addition, the moderating role of gender differences in the relationships between determinants and continuance intention is tested. The analysis, based on a sample of 613 users, highlights the positive effects of performance expectancy, social influence, habit, anthropomorphism, and personalization. However, the findings show that time risk and privacy risk have negative influences. Two gender differences are identified, but many other relationships show no differences between males and females.

Finally, Batliner et al. (2022) address ethical awareness for paralinguistic applications by establishing taxonomies for data representations, for system designs and a typology of applications, and for users/test sets and subject areas. These are related to an “ethical grid” consisting of the most relevant ethical cornerstones, based on principalism. The characteristics of and the interdependencies between these taxonomies are described and exemplified, making it possible to assess more or less critical “ethical constellations.”

Another group of three papers addresses AI applications targeted at improving human everyday life and well-being. The paper by Flores-Carballo et al. (2022) aims at identifying the speaker in interactions between mothers and children with Down syndrome (DS) via audio analysis. Audio recordings were collected from a session in which children with DS solved puzzles with their mothers by their side, and a dataset was generated by manually annotating human speech activity and non-speech. The authors used machine learning to perform four experiments with individual and generalized models, achieving average F1-scores of 0.74 (five-class model) and 0.84 (two-class model) for the individual models, and 0.69 (five-class) and 0.82 (two-class) for the generalized models. The results can be helpful to behavioral researchers, therapists, and those interested in better understanding how mother-child interactions unfold in naturalistic settings.

Oyebode et al. (2022) present an extensive literature review of 87 articles published between January 2011 and April 2022, exploring current trends in ML-based adaptive systems for health and well-being. The review focuses on five key areas across the target domains: data collection strategies, the model development process, the ML techniques utilized, model evaluation techniques, and adaptive or personalization strategies for health and wellness interventions. Various technical and methodological challenges are identified, and recommendations are offered for tackling them, leveraging recent technological advances.

Finally, Kaluarachchi et al. (2023) address corneal surface reflections, that is, reflections on the surface of the eye, as a source of information for passive lifelogging applications, and develop a two-stage pipeline of deep learning models, based on synthetic data and self-supervised learning, to detect objects in these reflections. The prototype consists of a single RGB camera looking into the eye. Data were collected from different users in uncontrolled environments using the prototype, and the system was trained to detect multiple classes of objects present in a typical office environment. The model was then evaluated in partially controlled and in-the-wild scenarios. In addition, the authors discuss the strengths and weaknesses of the system and of using corneal surface reflections for passive lifelogging.

The last two papers of this Special Issue address the important topic of evaluation.

The paper by Soui & Haddad (2023) proposes an evaluation method based on the analysis of Graphical Mobile User Interfaces (MUIs) as screenshots. The proposed method combines the DenseNet201 architecture and a K-Nearest Neighbours (KNN) classifier to assess MUIs: DenseNet201 is used to automatically extract features from the MUI screenshots, and the KNN classifier is applied to classify the MUIs as good or bad. The proposed approach is evaluated on publicly available large-scale datasets. The obtained results are very promising and show the efficiency of the proposed model, with an average accuracy of 93%.

Finally, the paper by So (2022) was inadvertently published in IJHCI Vol. 39, No. 4, 755–775 (https://doi.org/10.1080/10447318.2022.2049081), but constitutes an integral part of this Special Issue. The paper compares the two-alternative forced choice (2AFC) task to rating scales for measuring the aesthetic perception of neural style transfer-generated images, and investigates whether and to what extent the 2AFC task extracts clearer and more differentiated patterns of aesthetic preferences. To this end, 8250 pairwise comparisons of 75 neural style transfer-generated images, varied in five parameter configurations, are measured with the 2AFC task and compared with rating scales. Statistical and qualitative results demonstrate the higher precision of the 2AFC task over rating scales in detecting three different aesthetic preference patterns, namely convergence (number of iterations), inverted U-shape (learning rate), and double peak (content-style ratio). Importantly for practitioners, finding such aesthetically optimal parameter configurations with the 2AFC task enables the reproducibility of aesthetic outcomes by the neural style transfer algorithm, which saves time and computational cost and yields new insights about parameter-dependent aesthetic preferences.

We would like to thank all the authors and reviewers of the Special Issue papers for their contributions. We would also like to thank the Editors of the IJHCI Journal, Prof. Gavriel Salvendy and Prof. Constantine Stephanidis, for their support and guidance during the preparation of this Special Issue.

Additional information

Notes on contributors

Margherita Antona

Margherita Antona is a Principal Researcher at the HCI Laboratory of ICS-FORTH. Her research interests include design for all, adaptive and intelligent interfaces, Ambient Intelligence, and Human-Robot Interaction. She is Co-Chair of the UAHCI Conference, co-editor of the UAIS Journal, and a member of the Editorial Advisory Board of the T&F IJHCI Journal.

George Margetis

George Margetis is a Postdoctoral Researcher at the HCI Laboratory of ICS-FORTH. His research interests include Human-Centered AI, X-Reality, natural interaction, intelligent user interfaces, and digital accessibility. He is the scientific and technical manager of numerous European, national, and industry-funded R&D projects and has co-authored more than 80 scientific publications.

Stavroula Ntoa

Stavroula Ntoa is a Postdoctoral Researcher at the HCI Laboratory of ICS-FORTH, leading UX research and design activities. Her research interests include UX design and evaluation, AI in HCI, and universal access. She is Co-Chair of the AI-HCI Conference and a member of the Editorial Board of the Springer UAIS Journal.

Helmut Degen

Helmut Degen is Senior Key Expert for User Experience at Siemens Corporation (Princeton, NJ, USA). His research topics are explainable AI, trustworthiness, and efficiency. Helmut received a PhD (Dr. phil.) from the Freie Universität Berlin and a master’s in Computer Science (Diplom-Informatiker) from the Karlsruhe Institute of Technology.

References

  • Batliner, A., Neumann, M., Burkhardt, F., Baird, A., Meyer, S., Vu, N. T., & Schuller, B. W. (2022). Ethical awareness in paralinguistics: A taxonomy of applications. International Journal of Human–Computer Interaction, 39(9). https://doi.org/10.1080/10447318.2022.2140385
  • Choung, H., David, P., & Ross, A. (2022). Trust in AI and its role in the acceptance of AI technologies. International Journal of Human–Computer Interaction, 39(9). https://doi.org/10.1080/10447318.2022.2050543
  • Demir, M., Cohen, M., Johnson, C. J., Chiou, E. K., & Cooke, N. J. (2022). Exploration of the impact of interpersonal communication and coordination dynamics on team effectiveness in human-machine teams. International Journal of Human–Computer Interaction, 39(9). https://doi.org/10.1080/10447318.2022.2143004
  • Ebermann, C., Selisky, M., & Weibelzahl, S. (2022). Explainable AI: The effect of contradictory decisions and explanations on users’ acceptance of AI systems. International Journal of Human–Computer Interaction, 39(9). https://doi.org/10.1080/10447318.2022.2126812
  • Flores-Carballo, C. R., Molina-Arenas, G. A., Macias, A., Caro, K., Beltran, J., & Castro, L. A. (2022). Speaker identification in interactions between mothers and children with Down syndrome via audio analysis: A case study in Mexico. International Journal of Human–Computer Interaction, 39(9). https://doi.org/10.1080/10447318.2022.2090610
  • Herse, S., Vitale, J., & Williams, M.-A. (2023). Using agent features to influence user trust, decision making and task outcome during human-agent collaboration. International Journal of Human–Computer Interaction, 39(9). https://doi.org/10.1080/10447318.2022.2150691
  • Jiang, J., Karran, A. J., Coursaris, C. K., Leger, P.-M., & Beringer, J. (2022). A situation awareness perspective on human–AI interaction: Tensions and opportunities. International Journal of Human–Computer Interaction, 39(9). https://doi.org/10.1080/10447318.2022.2093863
  • Jin, S. V., & Youn, S. (2022). Social presence and imagery processing as predictors of chatbot continuance intention in human–AI-interaction. International Journal of Human–Computer Interaction, 39(9). https://doi.org/10.1080/10447318.2022.2129277
  • Kaluarachchi, T., Siriwardhana, S., Wen, E., & Nanayakkara, S. (2023). A corneal surface reflections-based intelligent system for lifelogging applications. International Journal of Human–Computer Interaction, 39(9). https://doi.org/10.1080/10447318.2022.2163240
  • Le Guillou, M., Prévot, L., & Berberian, B. (2022). Bringing together ergonomic concepts and cognitive mechanisms for human–AI agents cooperation. International Journal of Human–Computer Interaction, 39(9). https://doi.org/10.1080/10447318.2022.2129741
  • Nakao, Y., Strappelli, L., Stumpf, S., Naseer, A., Regoli, D., & Del Gamba, G. (2022). Towards responsible AI: A design space exploration of human-centered artificial intelligence user interfaces to investigate fairness. International Journal of Human–Computer Interaction, 39(9). https://doi.org/10.1080/10447318.2022.2067936
  • Oyebode, O., Fowles, J., Steeves, D., & Orji, R. (2022). Machine learning techniques in adaptive and personalized systems for health and wellness. International Journal of Human–Computer Interaction, 39(9). https://doi.org/10.1080/10447318.2022.2089085
  • So, C. (2022). Measuring aesthetic preferences of neural style transfer: More precision with the two-alternative-forced-choice task. International Journal of Human–Computer Interaction, 39(4), 755–775. https://doi.org/10.1080/10447318.2022.2049081
  • Soui, M., & Haddad, Z. (2023). Deep learning-based model using DensNet201 for mobile user interface evaluation. International Journal of Human–Computer Interaction, 39(9). https://doi.org/10.1080/10447318.2023.2175494
  • Wahde, M., & Virgolin, M. (2022). DAISY: An implementation of five core principles for transparent and accountable conversational AI. International Journal of Human–Computer Interaction, 39(9). https://doi.org/10.1080/10447318.2022.2081762
  • Zhang, B., Zhu, Y., Deng, J., Zheng, W., Liu, Y., Wang, C., & Zenga, R. (2022). “I Am Here to Assist Your Tourism”: Predicting continuance intention to use AI-based chatbots for tourism. Does gender really matter? International Journal of Human–Computer Interaction, 39(9). https://doi.org/10.1080/10447318.2022.2124345
