ABSTRACT
The infant brain may be predisposed to identify perceptually salient cues that are common to both signed and spoken languages. Recent theory based on spoken languages has advanced sonority as one of these potential language acquisition cues. Using a preferential looking paradigm with an infrared eye tracker, we explored the visual attention of hearing 6- and 12-month-olds with no sign language experience as they watched fingerspelling stimuli that conformed to either high sonority (well-formed) or low sonority (ill-formed) values, which are relevant to syllabic structure in signed languages. Younger babies showed highly significant looking preferences for well-formed, high sonority fingerspelling, while older babies showed no preference for either fingerspelling variant, despite showing a strong preference in a control condition. The present findings suggest that babies possess a sensitivity to specific sonority-based contrastive cues at the core of human language structure that is subject to perceptual narrowing, irrespective of language modality (visual or auditory), shedding new light on universals of early language learning.
Acknowledgments
Data collection for the present study was conducted in the UCSD Mind, Experience, & Perception Lab (Dr. Rain Bosworth) while Stone was completing a summer lab rotation in cognitive neuroscience during his Ph.D. in Educational Neuroscience; there, Stone was also the recipient of a UCSD Elizabeth Bates Graduate Research Award. We are grateful to the Petitto BL2 student and faculty research team at Gallaudet University and the student research team at the UCSD Mind, Experience, & Perception Lab. We extend our sincerest thanks to Felicia Williams, our sign model, and to the babies and families in San Diego, California, who participated in this study.
Disclosure statement
The authors report no conflicts of interest. The authors alone are responsible for the content and writing of the article.
Notes
1 There is debate about whether sonority is a phonological or phonetic construct. Here, we adopt the same position as Gómez et al. (2014): “Our results do not speak to this debate, because we have no basis to determine whether responses of infants reflect phonetic or phonological preferences” (p. 5837). However, other studies (Berent et al., 2011, 2013) suggest sonority is at least partially phonological.
2 Some may contend that testing newborns, instead of 6-month-olds, for sensitivities to sonority constraints would offer stronger support for the Biologically-Governed Hypothesis. However, the present study with 6- and 12-month-old infants remains a strong test of either hypothesis. First, we confirmed that none of the infants had been systematically exposed to any visual signed language at any point in their lives. Second, the 6-month-old age criterion is significant for infants’ emerging perceptual capabilities. Newborns do have sufficient hearing and can be exposed to speech in utero. However, they have very poor sight at birth, seeing at best an extremely blurry image at arm’s length. By six months, their acuity and contrast sensitivity sharpen substantially (Teller, 1997). Hence, six months is an ideal age to test: it is the very point at which infants’ vision has improved enough for them to see the fine details of sign language stimuli well, and they do so for the first time in their lives during this experiment. This reasoning should not be construed to mean that deaf infants do not need exposure to a visual signed language until they are six months old, because they do have coarse vision that is rapidly improving and is sufficient to see faces and moving hands and arms at close distances.
3 Or more accurately, “lexicalized-like,” given that these fingerspelling forms were generated specifically for the present study and were not part of the ASL lexicon at that time.