Linguistically guided anticipatory eye movements in scene viewing

Pages 922-946 | Received 09 Dec 2011, Accepted 20 Jul 2012, Published online: 03 Sep 2012

Abstract

The present study replicated the well-known demonstration by Altmann and Kamide (1999) that listeners make linguistically guided anticipatory eye movements, but used photographs of scenes rather than clip-art arrays as the visual stimuli. When listeners heard a verb for which a particular object in a visual scene was the likely theme, they made earlier looks to this object (e.g., looks to a cake upon hearing The boy will eat …) than when they heard a control verb (The boy will move …). New data analyses assessed whether these anticipatory effects are due to a linguistic effect on the targeting of saccades (i.e., the where parameter of eye movement control), the duration of fixations (i.e., the when parameter), or both. Participants made fewer fixations before reaching the target object when the verb was selectionally restricting (e.g., will eat). However, verb type had no effect on the duration of individual eye fixations. These results suggest an important constraint on the linkage between spoken language processing and eye movement control: Linguistic input may influence only the decision of where to move the eyes, not the decision of when to move them.

Acknowledgments

Thanks to Victoria Neilsen and Dan Petty for assistance with data collection, and to Chuck Clifton for insightful discussion. Part of this work was presented at the 24th CUNY Conference on Human Sentence Processing, Stanford University, the University of Connecticut Psycholinguistics Colloquium, and the 52nd Annual Meeting of the Psychonomic Society, Seattle, WA; thanks to audiences at these venues for helpful comments. Part of this work was carried out in completion of the second author's undergraduate honours thesis at the University of Massachusetts Amherst.

Notes

1. This change to the content of the visual stimuli means that, unlike in Altmann and Kamide (1999), we do not include experimenter-defined “distractor” objects in each image. But as in Altmann and Kamide's experiments, the critical comparison is always between conditions (e.g., looks to the same target object in the eat and move conditions), rather than between objects.

2. We conducted a separate experiment that validated these intuitive judgements. In this experiment, 46 participants viewed each scene while hearing question variants of the sentences used in the present study, e.g., What will the batter hit? or What will the batter see? After each trial, participants were asked to choose which of two objects in the scene was the more likely answer to the question, e.g., the ball or the catcher. When the verb was restricting, participants chose the target object on 99.2% of trials. With a control verb, participants chose the target on 62.8% of trials.

3. Note that although Altmann and Kamide (1999) reported the latency until the start of the first saccade to the target, we report the latency until the start of the first fixation on the target. These values will differ only by the duration of a saccade, i.e., approximately 30–50 ms.

4. Here is a simple analogy. Assume that the probability that a runner's shoes will become untied is constant over time, i.e., shoe-untying is equally likely in each minute spent running. Some runs are very short (15 minutes) and some are very long (3 hours). An analysis of the length of the runs on which shoe-untying occurs will find that it occurs disproportionately often during long runs, and consequently, that the mean duration of the runs during which shoe-untying occurs will be much longer than the mean duration of all runs. This is simply because the long runs take up more of the total running time.
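
To make the arithmetic behind this analogy concrete, here is a minimal Python simulation of the same scenario (our own illustrative sketch; the 0.01 per-minute untying probability and the 50/50 mix of 15-minute and 180-minute runs are arbitrary assumptions, not figures from the article):

import random

# A toy simulation of the length-biased sampling argument in note 4.
# Assumptions (ours, not the article's): shoe-untying has a constant
# probability of 0.01 in each minute of running, and half of all runs
# are short (15 min) while half are long (180 min).
random.seed(0)
P_UNTIE_PER_MINUTE = 0.01
DURATIONS = [15, 180]

runs = [random.choice(DURATIONS) for _ in range(100_000)]

# Record the duration of every run on which at least one untying occurred.
untie_runs = [dur for dur in runs
              if any(random.random() < P_UNTIE_PER_MINUTE for _ in range(dur))]

print(f"mean duration, all runs:     {sum(runs) / len(runs):6.1f} min")
print(f"mean duration, untying runs: {sum(untie_runs) / len(untie_runs):6.1f} min")
# Untying runs average roughly 155 min against roughly 98 min overall:
# the event clusters in long runs only because long runs contain more minutes.

The minute-by-minute Bernoulli draw is just a discrete stand-in for a constant hazard rate; the conditional mean is pulled toward the long runs even though the per-minute probability never changes, exactly as the analogy describes.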

5. In support of this second alternative, recent unpublished work in our laboratory has found a dramatic increase in the duration of individual eye fixations on words (mean durations greater than 400 ms) when subjects are required to remember those words for a later recognition memory test.
