Team at Your Service: Investigating Functional Specificity for Trust Calibration in Automated Driving with Conversational Agents

Pages 3254-3267 | Received 30 Dec 2022, Accepted 08 May 2023, Published online: 29 Jun 2023
Abstract

Functional specificity describes the degree to which operators can successfully calibrate their trust toward different subsystems of a machine. Only a few works have addressed this issue in the context of automated vehicles. Previous studies suggest that drivers have difficulty distinguishing between different subsystems, which leads to low functional specificity. To counter this, the article presents a prototypical design in which different in-vehicle subsystems are portrayed by independent conversational agents. The concept was evaluated in a user study where participants had to supervise a Level 2 automated vehicle while reading and communicating with the conversational agents in the car. It was hypothesized that a clear differentiation between subsystems would allow drivers to better calibrate their trust. However, our results, based on subjective trust scales, monitoring, and driving behavior, cannot confirm this assumption. Instead, functional specificity was high among participants of the study, and they based their situational and general trust ratings mainly on their perceptions of the driving automation system. Still, the experiment contributes insights on trust and monitoring and concludes with a list of relevant findings to support trust calibration in supervisory control situations.

Acknowledgments

I want to thank my former student Anton Torggler for his great support in conducting and assisting in the evaluation of this experiment, and Justin D. Edwards for his consultation regarding the conversational agent design, as well as both of them for their contributions to the original article on which this article is based.

Disclosure statement

No potential conflict of interest was reported by the author.

Additional information

Notes on contributors

Philipp Wintersberger

Philipp Wintersberger is a Professor of Interactive Systems at the University of Applied Sciences Upper Austria. His research addresses human-machine cooperation in safety-critical AI-driven systems. He currently leads a group of researchers and PhD students working on human-AI cooperation in multiple FWF- and FFG-funded projects.
