Research Articles

Using Agent Features to Influence User Trust, Decision Making and Task Outcome during Human-Agent Collaboration

Sarita Herse, Jonathan Vitale & Mary-Anne Williams
Pages 1740-1761 | Received 28 Jun 2021, Accepted 30 Oct 2022, Published online: 11 Jan 2023
 

Abstract

Optimal performance of collaborative tasks requires consideration of the interactions between intelligent agents and their human counterparts. The functionality and success of these agents lie in their ability to maintain user trust, with too much or too little trust leading to over-reliance and under-utilisation, respectively. This problem highlights the need for an appropriate trust calibration methodology capable of varying user trust and decision making in-task. An online experiment was run to investigate whether stimulus difficulty and the implementation of agent features by a collaborative recommender system interact to influence user perception, trust and decision making. Agent features are changes to the Human-Agent interface and interaction style, and include presentation of a disclaimer message, a request for more information from the user, and no additional feature. Signal detection theory is utilised to interpret decision making, applied both to performance on the task itself and to decisions made with the collaborative agent. The results demonstrate that decision change occurs more often for hard stimuli, with participants across all features choosing to change their initial decision to follow the agent recommendation. Furthermore, agent features can be utilised to mediate user decision making and trust in-task, though the direction and extent of this influence depend on the implemented feature and the difficulty of the task. The results emphasise the complexity of user trust in Human-Agent collaboration, highlighting the importance of considering task context in the wider perspective of trust calibration.
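The abstract states that signal detection theory is used to interpret decision making both on the task and with the agent. As a minimal illustration of how such measures are conventionally derived (the paper's exact procedure is not given in the abstract, so the function name, example counts and loglinear correction below are assumptions), sensitivity d′ and response criterion c can be computed from hit and false-alarm rates:

# Hypothetical sketch of standard signal detection theory (SDT) measures;
# variable names and the loglinear correction are illustrative assumptions,
# not the authors' documented analysis pipeline.
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Return sensitivity (d') and response criterion (c) from raw counts."""
    # Loglinear correction avoids infinite z-scores when a rate is 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)

    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)             # sensitivity
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # response bias

    return d_prime, criterion

# Example with hypothetical counts from 40 signal and 40 noise trials
print(sdt_measures(hits=32, misses=8, false_alarms=10, correct_rejections=30))

In this framing, higher d′ reflects better discrimination of the stimuli, while the criterion c captures a bias towards one response, which is one way a tendency to follow or resist the agent's recommendation could be characterised.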

Disclosure statement

No potential conflict of interest was reported by the author(s).

Notes

1 See https://github.com/saritaherse/trust-in-HAC for the study repository, including video.

Additional information

Funding

This research was supported by an Australian Government Research Training Program Scholarship.

Notes on contributors

Sarita Herse

Sarita Herse holds a PhD in Social Robotics from the University of New South Wales (UNSW) and works as a Research Associate for UNSW’s Creative Robotics Lab and the National Facility for Human-Robot Interaction Research. Her research involves investigating the intersection of humans and technology, with particular interest in Human-Agent Collaboration.

Jonathan Vitale

Jonathan Vitale holds a PhD in Information Technology from the University of Technology Sydney (UTS) and serves as a lecturer at UTS and the University of New England. His research covers computational models of human cognition applied to AI and social robotics in public environments.

Mary-Anne Williams

Mary-Anne Williams is the Michael J. Crouch Chair for Innovation at UNSW. She was previously Distinguished Research Professor at UTS and Director of the UTS Magic Lab. Her work focuses on innovation, spanning AI, robotics and law, and she holds a PhD in Computer Science and a Master of Laws.
