ABSTRACT
This paper explores the viability of new touchscreen-based haptic/vibrotactile interactions as a primary modality for perceiving visual graphical elements in eyes-free situations. For touchscreen-based haptic information extraction to be both accurate and meaningful, onscreen graphical elements must be schematized and downsampled to: (1) maximize the perceptual specificity of touch-based sensing and (2) account for the technical characteristics of touchscreen interfaces. To this end, six human behavioral studies were conducted with 64 blind and 105 blindfolded sighted participants. Experiments 1–3 evaluated three key rendering parameters necessary for supporting touchscreen-based vibrotactile perception of graphical information, with results providing empirical guidance on the minimally detectable and functionally discriminable line widths, inter-line spacing, and angular separation that should be maintained. Experiments 4–6 evaluated perceptually motivated design guidelines governing the visual-to-vibrotactile schematization required for tasks involving information extraction, learning, and cognition of multi-line paths (e.g., transit maps and corridor intersections), with results providing clear guidance on the stimulus parameters that maximize accuracy and temporal performance. The six empirically validated guidelines presented here, based on results from 169 participants, provide designers and content providers with much-needed guidance on effectively incorporating perceptually salient touchscreen-based haptic feedback as a primary interaction style for interfaces supporting nonvisual and eyes-free information access.
Acknowledgments
We acknowledge support from NSF grants CHS-1425337, ECR/DCL Level 2 1644471, and IIS-1822800 on this project.
Additional information
Notes on contributors
Hari P. Palani
Hari P. Palani received his PhD degree in Spatial Informatics from the University of Maine. He is the Founder and Chief Executive Officer of UNAR Labs. His research interests include haptic perception, multimodal interface design, spatial cognition, and nonvisual graphic accessibility.
Paul D. S. Fink
Paul D. S. Fink is a Ph.D. student in Spatial Information Science and Engineering at the University of Maine. His research intersects user experience, technology education, and accessibility. His current work includes designing a virtual learning platform for autonomous vehicle AI research and development.
Nicholas A. Giudice
Nicholas A. Giudice is a Professor in the School of Computing and Information Science at the University of Maine. His research combines techniques from experimental psychology and human-computer interaction, with expertise in spatial learning and navigation and in the design and evaluation of multimodal information-access technologies for blind and visually impaired users.