Menu Navigation With In-Vehicle Technologies: Auditory Menu Cues Improve Dual Task Performance, Preference, and Workload

Myounghoon Jeon, Thomas M. Gable, Benjamin K. Davison, Michael A. Nees, Jeff Wilson, & Bruce N. Walker
Pages 1-16 | Published online: 22 Oct 2014

Abstract

Auditory display research for driving has mainly examined a limited range of tasks (e.g., collision warnings, cell phone tasks). In contrast, the goal of this project was to evaluate the effectiveness of enhanced auditory menu cues in a simulated driving context. The advanced auditory cues of “spearcons” (compressed speech cues) and “spindex” (a speech-based index cue) were predicted to improve both menu navigation and driving. Two experiments used a dual task paradigm in which users selected songs on the vehicle’s infotainment system. In Experiment 1, 24 undergraduates played a simple, perceptual-motor ball-catching game (the primary task; a surrogate for driving) and navigated through an alphabetized list of 150 song titles, rendered as an auditory menu, as a secondary task. The menu was presented either in the typical visual-only manner, enhanced with text-to-speech (TTS), or with TTS plus one of three types of additional auditory cues. In Experiment 2, 34 undergraduates performed the same secondary task while driving in a simulator. In both experiments, performance on both the primary task (success rate in the game or driving performance) and the secondary task (menu search time) was better with the auditory menus than with no sound. Perceived workload scores and user preferences also favored the enhanced auditory cue types. These results show that adding audio, and enhanced auditory cues in particular, can allow a driver to operate the menus of in-vehicle technologies more efficiently while driving more safely. Results are discussed in terms of multiple resources theory.

ACKNOWLEDGEMENTS

This article represents a compilation of results collected over several years. As is typical in such a program of research, some of the results have been discussed in preliminary form at academic conferences over the course of the project. The data from Experiment 1 were discussed in preliminary form at AutomotiveUI 2009 (Jeon, Davison, Nees, Wilson, & Walker, 2009). Experiment 2 is a follow-up to Experiment 1 and consists of entirely new data that have not previously been presented or published. The discussion of the combined Experiments 1 and 2 is therefore also new. We thank Centrafuse for lending the infotainment system and software, and Yarden Moskovitch and Ashley Henry for gathering data.

Additional information

Notes on contributors

Myounghoon Jeon

Myounghoon Jeon is an Assistant Professor in Cognitive & Learning Sciences and Computer Science at Michigan Tech. He received his Ph.D. from Georgia Tech in 2012. His Mind Music Machine Lab focuses on auditory displays, affective computing, accessible computing, aesthetic computing, and automotive user interface design research.

Thomas M. Gable

Thomas M. Gable is a Ph.D. student specializing in engineering psychology in the School of Psychology at Georgia Institute of Technology. He earned a bachelor’s degree in psychology from The College of Wooster. His research centers on attention and driving, including the use of multimodal interfaces to decrease driver distraction.

Benjamin K. Davison

Benjamin K. Davison is a researcher at Google. He received his Ph.D. in Interactive Computing in 2012 from Georgia Tech. His Ph.D. research focused on accessible graphs and auditory menus for blind and visually impaired people.

Michael A. Nees

Michael A. Nees is an Assistant Professor in the Department of Psychology at Lafayette College. He was awarded a Ph.D. from the Georgia Institute of Technology in 2009. His research examines applied and theoretical aspects of auditory perception, auditory and multimodal displays, assistive technologies, and auditory displays for in-vehicle technologies.

Jeff Wilson

Jeff Wilson is a Senior Research Scientist in the Interactive Media Technology Center at the Georgia Institute of Technology. He received his BS and MS in computer science from Georgia Tech in 1999 and 2001, respectively. His research interests include auditory interfaces and augmented and virtual reality.

Bruce N. Walker

Bruce N. Walker is an Associate Professor in Psychology and Interactive Computing at Georgia Tech. His Sonification Lab studies multimodal interfaces, including sonification and auditory displays. He teaches HCI, Sensation & Perception, and Assistive Technology, and has consulted for NASA, state and federal governments, the military, and private companies.
