Abstract
Functional specificity describes the degree to which operators can successfully calibrate their trust toward different subsystems of a machine. Only a few works have addressed this issue in the context of automated vehicles. Previous studies suggest that drivers have difficulty distinguishing between different subsystems, which leads to low functional specificity. To address this, the article presents a prototypical design in which different in-vehicle subsystems are portrayed by independent conversational agents. The concept was evaluated in a user study where participants had to supervise a level 2 automated vehicle while reading and communicating with the conversational agents in the car. It was hypothesized that a clear differentiation between subsystems would allow drivers to better calibrate their trust. However, our results, based on subjective trust scales, monitoring, and driving behavior, cannot confirm this assumption. On the contrary, functional specificity was high among the study participants, who based their situational and general trust ratings mainly on their perceptions of the driving automation system. Still, the experiment contributes to the understanding of trust and monitoring and concludes with a list of relevant findings to support trust calibration in supervisory control situations.
Acknowledgments
I want to thank my former student Anton Torggler for his great support in conducting and assisting in the evaluation of this experiment, Justin D. Edwards for his consultation regarding the conversational agent design, and both of them for their contributions to the original article on which this article is based.
Disclosure statement
No potential conflict of interest was reported by the author.
Additional information
Notes on contributors
Philipp Wintersberger
Philipp Wintersberger is a Professor of Interactive Systems at the University of Applied Sciences Upper Austria. His research addresses human-machine cooperation in safety-critical AI-driven systems. Currently, he leads a group of researchers and PhD students working on human-AI cooperation in multiple FWF- and FFG-funded projects.