Abstract
Objective: The ability to use visual speech cues and integrate them with auditory information is important, especially in noisy environments and for hearing-impaired (HI) listeners. Providing data on measures of integration skills that encompass both accuracy and processing speed will benefit researchers and clinicians. Design: The study consisted of two experiments: in the first, accuracy scores were obtained using City University of New York (CUNY) sentences; in the second, capacity measures assessing reaction-time distributions were obtained from a monosyllabic word recognition task. Study sample: We report data on two measures of integration obtained from a sample of 86 young and middle-aged adult listeners. Results: Capacity showed a positive correlation with accuracy measures of audiovisual benefit obtained from sentence recognition. More importantly, factor analysis indicated that a single-factor model captured audiovisual speech integration better than models containing more factors. Capacity exhibited strong loadings on this factor, whereas the accuracy-based measures from sentence recognition exhibited weaker loadings. Conclusions: The results suggest that a listener’s integration skills may be assessed optimally using a measure that incorporates both processing speed and accuracy.
Acknowledgements
The project described was supported by an internal University Research Office Grant awarded to the first author at Idaho State University, and by the INBRE Program, NIH Grant Nos. P20 RR016454 (National Center for Research Resources) and P20 GM103408 (National Institute of General Medical Sciences).
Declaration of interest
The authors declare no conflicts of interest.
Notes
1 Averaging across spectra is not a precise representation of auditory tuning curves or hearing acuity. It is a simplification based on previous work assessing auditory-visual benefit in light of low- versus high-frequency hearing loss (cf. Erber, 2003).
2 Research has shown that RT-only C(t) functions are robust even at an error rate of 30% (Townsend & Wenger, 2004).