REGULAR ARTICLES

Prediction in a visual language: real-time sentence processing in American Sign Language across development

Pages 387–401 | Received 20 Jun 2017, Accepted 17 Nov 2017, Published online: 08 Dec 2017
 

ABSTRACT

Prediction during sign language comprehension may enable signers to integrate linguistic and non-linguistic information within the visual modality. In two eye-tracking experiments, we investigated American Sign Language (ASL) semantic prediction in deaf adults and children (aged 4–8 years). Participants viewed ASL sentences in a visual world paradigm in which the sentence-initial verb was either neutral or semantically constrained relative to the sentence-final target noun. Adults and children made anticipatory looks to the target picture before the onset of the target noun in the constrained condition only, showing evidence for semantic prediction. Crucially, signers alternated gaze between the sign stimulus and the target picture only when the sentential object could be predicted from the verb. Signers therefore engage in prediction by optimising the allocation of visual attention between spatially separated linguistic and referential signals. These patterns suggest that prediction is a modality-independent process; theoretical implications are discussed.

Acknowledgements

We would like to express sincere appreciation to Michael Higgins, Tory Sampson, and Valerie Sharer for help with stimulus creation, data collection, and coding. We thank Marla Hatrak for help with recruitment. We are extremely grateful to all of the families who participated in this study.

Disclosure statement

No potential conflict of interest was reported by the authors.

Notes

1. We adopt the convention of using capital letters to represent English glosses for ASL signs.

2. Out of 298 trials in the eye-tracking task, children produced the ASL sign for the target item during picture-naming on 246 trials. The remaining 52 trials contained a target that the child either did not produce or produced with an error during picture-naming. We analysed the data both with and without these 52 trials, and the pattern of results was identical. Thus, we used the full dataset (298 trials) for subsequent analyses of the eye-tracking data.

3. When these two participants were removed from the analysis, the effect was in the same direction but was no longer significant.

Additional information

Funding

This work was supported by the National Institute on Deafness and Other Communication Disorders [grant numbers R01DC015272 (AL), R03DC013638 (AB), and R01DC012797 (RM)].
