Editorial

Transparent Human–Agent Communications

Pages 1737-1738 | Received 13 Jun 2022, Accepted 29 Aug 2022, Published online: 13 Sep 2022

As intelligent agents (embodied or otherwise) become more sophisticated, transparency has become a prominent topic in recent years and has been identified by various government agencies (e.g., guidelines published by the European Commission (2020)) and advisory groups (e.g., EU High-Level Expert Group on Artificial Intelligence, 2020; U.S. National Security Commission on AI, 2021) as a top requirement for high-stakes and trustworthy AI systems. Technical societies such as IEEE have also published guidelines on transparency (Winfield et al., 2021). Increasingly, efforts are focused on making AI’s output more transparent in order to maximize the joint performance of the human–machine team. A major milestone of transparency research is the 2020 IEEE Transactions on Human-Machine Systems special issue on Agent and System Transparency, which examines the transparency issue in a variety of human–machine teaming contexts (Chen et al., 2020). Since then, significant advances in AI and its subfields, such as explainable AI, have enabled machines to provide real-time explanations to their human partners (e.g., regarding their behaviors, intents, reasoning processes, and projected outcomes). However, because of the complexities of dynamic tasking environments, it remains a challenge for human-computer interaction researchers and practitioners to design effective human–machine interfaces (HMIs) to support robust and efficient human-agent communications (Chen, in press).

The aim of this special issue is to showcase state-of-the-art research on transparent human-agent communications, spanning a variety of methodologies and applications. The fourteen articles in this special issue range from articles on transparency methodologies and explainable AI, to papers on the design of transparent HMIs, to empirical studies. The application areas examined in these articles are diverse, including autonomous vehicles, military/security robots, intelligent medical decision support, assistive tools for people with disabilities, chatbots, and social media.

The first two articles present promising methodologies to support transparent human-agent interactions. Scheutz, Thielstrom, and Abrams present a natural-language-based human–robot communication architecture, Distributed Integrated Affect Reaction Cognition, that can support transparent task-based communications, particularly in situations when the robot needs to reject human commands and provide explanations and justifications. Tucker, Zhou, and Shah present an agent-training technique, Adversarially Guided Self-Play, that can align human and machine agents’ representation schemes (i.e., latent spaces) with little training data. Data from two experiments, one involving only machine agents and the other involving a human–machine partnership, show that the technique promotes alignment between partners and enhances outcomes during team tasks.

The next three articles focus specifically on explainable AI (XAI). Sanneman and Shah propose an XAI framework, the Situation Awareness Framework for Explainable AI (SAFE-AI), that can be used for both developing and evaluating XAI agents. The SAFE-AI framework addresses the information humans need in order to achieve proper situation awareness of AI agents in the tasking environment; issues related to human workload and calibrated trust are also discussed extensively. Also addressing XAI and calibrated trust, Roque and Damodaran present a generalizable human-in-the-loop XAI framework that is based on control loops and can support multiple stakeholders. To illustrate its application, the authors use robot-under-cyberattack scenarios to demonstrate how the framework serves different stakeholders interacting with a robot (e.g., a robot operator or cybersecurity personnel). Chien, Yang, and Yu propose an XAI framework, XFlag, that is based on a Long Short-Term Memory model (for target identification), the Layer-wise Relevance Propagation algorithm (for explanation generation), and the Situation awareness-based Agent Transparency framework (for transparency assurance). A user study in the context of fake news detection shows that the framework improves users’ understanding of the system’s goals, reasoning, and uncertainty without increasing their workload.
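To give a flavor of how relevance-based explanation methods such as Layer-wise Relevance Propagation attribute a model’s output back to its inputs, the following minimal Python/NumPy sketch applies the commonly described LRP epsilon rule to a single dense layer. It is a generic illustration of the technique only, not the authors’ XFlag implementation; the function name and toy values are purely hypothetical.

import numpy as np

def lrp_epsilon_dense(a, W, b, R_out, eps=1e-6):
    # Redistribute the relevance R_out assigned to a dense layer's outputs
    # back to its inputs: R_j = a_j * sum_k W_jk * R_k / (z_k + eps*sign(z_k)).
    z = a @ W + b                               # forward pre-activations
    z = z + eps * np.where(z >= 0, 1.0, -1.0)   # epsilon stabilizer
    s = R_out / z                               # relevance per unit of activation
    return a * (W @ s)                          # relevance attributed to each input

# Toy example: two input features, three output units.
rng = np.random.default_rng(0)
a = rng.normal(size=2)
W = rng.normal(size=(2, 3))
b = np.zeros(3)
R_out = np.array([0.7, 0.2, 0.1])               # relevance assigned at the layer output
print(lrp_epsilon_dense(a, W, b, R_out))        # per-input relevance scores

Applied layer by layer from a classifier’s output back to its input tokens, rules of this kind yield the per-feature relevance scores that XAI interfaces can then surface to users.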

The next three articles present design approaches to transparent HMIs. Vorm and Combs review and synthesize transparency frameworks from the perspectives of system monitoring, process visibility, surveillance, and disclosure. These characteristics, all of which are beneficial to human trust in systems, are incorporated into the widely used Technology Acceptance Model. Zhou, Li, Zhang, and Sun present a human–AI interface design tool, Blueprint, a visualization tool that captures key transparency design concepts identified through a bibliometric analysis. Stone, Jessup, Ganapathy, and Harel present an HMI design approach that is based on the Design Thinking Framework and incorporates “empathy” for both human and machine agents to support human-agent transparency. The design framework was used to develop the HMI of an AI-based decision support system in the medical domain, and detailed steps of design development and implementation are documented.

The next four articles, also on HMI designs, focus more on specific applications. Pujiarti, Lee, and Yi present two HMI design features, co-activity and conversation atmosphere visualization, that can be incorporated into a transparent human–chatbot interface for settings where user self-disclosure is important. Results of a field study show the promising effects of these design features on eliciting user self-disclosure. In a similar application area, human interaction with a conversational agent, the work of Berka, Balata, Jonker, Mikovec, van Riemsdijk, and Tielman also focuses on user model elicitation and personalization. The authors introduce a design concept, Semantic User Models, to capture user characteristics. A case study with visually impaired participants was conducted, and several design solutions were identified to mitigate potential human-agent misalignments. In the military domain, Oron-Gilad, Oppenheim, and Parmet describe the process of designing a bidirectional graphic-based communication tool for military personnel with different roles (commanders and robotics operators) in settings where they communicate with one another as well as with unmanned systems. The authors briefly summarize four studies (with military subject-matter experts as participants), whose results were incorporated into the development of the tool. In the domain of autonomous driving, Lim and Kim present HMI designs for the exterior of autonomous vehicles to convey messages to various types of road users, such as other drivers and pedestrians. Two experiments were conducted to assess user acceptance and perceived usability of the designs, and based on the results the authors offer suggestions for designing external HMIs for autonomous vehicles.

The last two articles present empirical studies that examine trust and bidirectional communications between humans and intelligent agents. Luo, Du, and Yang investigate the effects of an intelligent agent’s transparency (conveying the rationale behind its recommendations relative to other possible courses of action) on human participants’ trust in the agent. Results show that, with a more transparent agent, participants were able to calibrate their trust in the agent more effectively. Wright, Lakhmani, and Chen examine the effects of human-agent bidirectional communication patterns (either directive or non-directive) on human-agent team performance in a simulated military tasking environment. Results show that participants’ task performance was affected more by their task load than by the communication pattern they were asked to follow.

These fourteen articles cover a wide range of topics and present numerous promising techniques that can be used to promote transparent human–agent communications. The authors also identify research gaps and provide thoughtful suggestions on potential future efforts to advance the field of agent transparency. These articles should be useful to HCI researchers as well as practitioners in diverse domains. I would like to acknowledge the reviewers of these submissions and the helpful comments they provided to the authors.

Additional information

Notes on contributors

Jessie Y. C. Chen

Jessie Y. C. Chen is a Senior Research Scientist (ST) for Soldier Performance with the U.S. Army Research Laboratory. Her research interests include human-autonomy teaming and agent transparency. Dr. Chen is an Associate Editor for IEEE Transactions on Human-Machine Systems, IEEE Robotics and Automation Letters, and IJHCI.
