Research Articles

Using Agent Features to Influence User Trust, Decision Making and Task Outcome during Human-Agent Collaboration

Sarita Herse, Jonathan Vitale & Mary-Anne Williams
Pages 1740-1761 | Received 28 Jun 2021, Accepted 30 Oct 2022, Published online: 11 Jan 2023
 

Abstract

Optimal performance of collaborative tasks requires consideration of the interactions between intelligent agents and their human counterparts. The functionality and success of these agents lie in their ability to maintain user trust, with too much or too little trust leading to over-reliance and under-utilisation, respectively. This problem highlights the need for an appropriate trust calibration methodology capable of varying user trust and decision making in-task. An online experiment was run to investigate whether stimulus difficulty and the implementation of agent features by a collaborative recommender system interact to influence user perception, trust and decision making. Agent features are changes to the Human-Agent interface and interaction style; those tested include presentation of a disclaimer message, a request for more information from the user, and no additional feature. Signal detection theory is utilised to interpret decision making, applied both to performance on the task itself and to decisions made with the collaborative agent. The results demonstrate that decision change occurs more often for hard stimuli, with participants across all features choosing to change their initial decision to follow the agent recommendation. Furthermore, agent features can be utilised to mediate user decision making and trust in-task, though the direction and extent of this influence depend on the implemented feature and the difficulty of the task. The results emphasise the complexity of user trust in Human-Agent collaboration, highlighting the importance of considering task context in the wider perspective of trust calibration.
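The signal detection theory analysis referred to in the abstract conventionally rests on two measures: sensitivity (d′), capturing how well responses discriminate signal from noise, and criterion (c), capturing response bias. The sketch below shows how these are typically computed from response counts; the paper's exact analysis pipeline is not given on this page, so the function name, variable names, and the log-linear correction are assumptions, not the authors' code.

```python
# Minimal sketch of standard SDT measures (sensitivity d' and criterion c).
# This is an assumed, generic implementation, not the study's own pipeline.
from scipy.stats import norm

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Compute sensitivity (d') and response criterion (c) from raw counts."""
    # Log-linear correction keeps z-scores finite when a rate is 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)

    z_hit = norm.ppf(hit_rate)  # inverse normal CDF (z-transform)
    z_fa = norm.ppf(fa_rate)

    d_prime = z_hit - z_fa             # sensitivity: signal/noise separation
    criterion = -0.5 * (z_hit + z_fa)  # bias: positive = conservative responding
    return d_prime, criterion

# Hypothetical counts for illustration only:
# 40 hits, 10 misses, 12 false alarms, 38 correct rejections.
print(sdt_measures(40, 10, 12, 38))
```

Applied to Human-Agent collaboration, the same measures can be computed twice: once over responses to the task stimuli themselves, and once treating agreement with the agent's recommendation as the "response", which is one way to operationalise decision making with the agent.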

Disclosure statement

No potential conflict of interest was reported by the author(s).

Notes

1. See https://github.com/saritaherse/trust-in-HAC for the study repository, including the study video.

Additional information

Funding

This research was supported by an Australian Government Research Training Program Scholarship.

Notes on contributors

Sarita Herse

Sarita Herse holds a PhD in Social Robotics from the University of New South Wales (UNSW) and works as a Research Associate for UNSW’s Creative Robotics Lab and the National Facility for Human-Robot Interaction Research. Her research involves investigating the intersection of humans and technology, with particular interest in Human-Agent Collaboration.

Jonathan Vitale

Jonathan Vitale holds a PhD in Information Technology from the University of Technology Sydney (UTS) and serves as a lecturer at UTS and at the University of New England. His research covers computational models of human cognition applied to AI and social robotics in public environments.

Mary-Anne Williams

Mary-Anne Williams is the Michael J. Crouch Chair for Innovation at UNSW. She was previously Distinguished Research Professor at UTS and Director of the UTS Magic Lab. Her work focuses on innovation, spanning AI, robotics and law; she holds a PhD in Computer Science and a Master of Laws.
