Abstract
Perception of spoken language requires attention to acoustic as well as visible phonetic information. This article reviews the known differences in audiovisual speech perception in children with autism spectrum disorders (ASD) and specifies the need for interventions that address this construct. Elements of an audiovisual training program are described. This researcher-developed program, delivered via an iPad app, presents natural speech in the context of increasing noise, but supported by a speaking face. Children are cued to attend to visible articulatory information to assist in perception of the spoken words. Data from four children with ASD, aged 8–10, are presented showing that the children improved their performance on an untrained auditory speech-in-noise task.
Acknowledgements
The L2F app is in the development phase and is not currently available for commercial use.
Declaration of interest
The authors report no conflicts of interest. This work was supported by the U.S. Department of Health and Human Services, National Institutes of Health (P01 HD-01994, R03 DC-007339, R15 DC-013864, R21 DC-011342) and the Language Evaluation and Research Network (LEARN) Center at Haskins Laboratories.