REGULAR ARTICLES

Prediction in a visual language: real-time sentence processing in American Sign Language across development

Amy M. Lieberman, Arielle Borovsky & Rachel I. Mayberry
Pages 387-401 | Received 20 Jun 2017, Accepted 17 Nov 2017, Published online: 08 Dec 2017

ABSTRACT

Prediction during sign language comprehension may enable signers to integrate linguistic and non-linguistic information within the visual modality. In two eye-tracking experiments, we investigated American Sign Language (ASL) semantic prediction in deaf adults and children (aged 4–8 years). Participants viewed ASL sentences in a visual world paradigm in which the sentence-initial verb was either neutral or constrained relative to the sentence-final target noun. Adults and children made anticipatory looks to the target picture before the onset of the target noun in the constrained condition only, showing evidence for semantic prediction. Crucially, signers alternated gaze between the stimulus sign and the target picture only when the sentential object could be predicted from the verb. Signers therefore engage in prediction by optimising visual attention between divided linguistic and referential signals. These patterns suggest that prediction is a modality-independent process, and theoretical implications are discussed.

Acknowledgements

We would like to express sincere appreciation to Michael Higgins, Tory Sampson, and Valerie Sharer for help with stimulus creation, data collection, and coding. We thank Marla Hatrak for help with recruitment. We are extremely grateful to all of the families who participated in this study.

Disclosure statement

No potential conflict of interest was reported by the authors.

Notes

1. We adopt the convention of using capital letters to represent English glosses for ASL signs.

2. Out of 294 trials in the eye-tracking task, children produced the ASL sign for the target item during picture-naming on 242 trials. The remaining 52 trials contained a target that the child either did not produce or produced with an error during picture-naming. We analyzed the data both with and without these 52 trials, and the pattern of results was identical. Thus, we used the fuller dataset (294 trials) for subsequent analyses of the eye-tracking data.

3. When these two participants were removed from the analysis, the effect was in the same direction but was no longer significant.

Additional information

Funding

This work was supported by the National Institute on Deafness and Other Communication Disorders [grant numbers R01DC015272 (AL), R03DC013638 (AB), and R01DC012797 (RM)].
