Abstract
The introduction of highly automated driving systems is expected to significantly change in-vehicle interactions, creating opportunities to design novel use cases and interactions for occupants. In this study, we sought to identify such novel use cases and derive preliminary auditory display recommendations for them. We generated use cases for level 4 automated vehicles through an expert workshop (N = 17) and online focus group interviews (N = 12). All of the generated use cases except meditation were then tested in a driving simulator study (N = 20), in which user opinions were collected. Results indicated that participants were interested in functions supporting their experience of both driving and non-driving related interactions in highly automated vehicles. Three categories of use cases for level 4 automated vehicles were developed: driving automation use cases, immersion use cases, and in-vehicle notification use cases. In the driving simulator study, we tested three display modalities for interaction with drivers: visual alert only, non-speech audio with visual, and speech with visual. In terms of situation awareness (SA), the non-speech audio with visual display yielded significantly better SA than the speech with visual display for the use case involving a planned increase in automation level. This study provides guidance on sonification design to advance user experiences in highly automated vehicles.
Cover letter
A part of this study was published as a proceedings paper at the 2021 HFES Annual Meeting:
Nadri, C., Ko, S., Diggs, C., Winters, M., Vattakkandy, S., & Jeon, M. (2021, September). Novel Auditory Displays in Highly Automated Vehicles: Sonification Improves Driver Situation Awareness, Perceived Workload, and Overall Experience. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 65, No. 1, pp. 586–590). Los Angeles, CA: SAGE Publications.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Additional information
Notes on contributors
Chihab Nadri
Chihab Nadri is a PhD candidate and lab manager of the Mind Music Machine Lab. His research interests include Automotive Display Design, Music Computing, and Human-Computer Interaction.
Sangjin Ko
Sangjin Ko is a graduate of Virginia Tech and now works as a Technical Project Manager at Bear Robotics. He was a graduate researcher at the Mind Music Machine Lab.
Colin Diggs
Colin Diggs is a graduate of Virginia Tech with a major in Industrial and Systems Engineering and is currently a data science specialist at MITRE. He was an undergraduate researcher at the Mind Music Machine Lab.
Michael Winters
Michael Winters is a graduate of Georgia Tech and now works as a Postdoctoral Researcher at Microsoft. He collaborated with the Mind Music Machine Lab as an independent researcher.
Sreehari Vattakkandy
Sreehari Vattakkandy is a graduate of Pandit Deendayal Energy University with a major in Mechanical Engineering. His research interests include Human-Computer Interaction and Sound Design. He collaborated with the Mind Music Machine Lab as an independent researcher.
Myounghoon Jeon
Myounghoon Jeon is an Associate Professor of Industrial and Systems Engineering and Computer Science at Virginia Tech and director of the Mind Music Machine Lab. His research focuses on emotions and sound in the areas of Automotive User Experiences, Assistive Robotics, and Arts in XR Environments.