Eye Gaze and Head Orientation Cues in Face-to-Face Referential Communication

Pages 201-223 | Published online: 30 Oct 2019
 

ABSTRACT

During face-to-face communication, people use visual cues about what their partners are attending to as they process language. An eyetracking experiment explored how addressees use speakers’ eye gaze and head orientation while interpreting references to objects in a spatial task. Thirty-six naive director/matcher pairs seated face-to-face were separated by a low or high barrier that hid identical mirror-image arrangements of objects. We compared matchers’ ability to disambiguate referring expressions when directors’ eyes and heads were visible over the barrier, when only directors’ heads were visible (with eyes obscured by mirrored sunglasses), or when directors were completely hidden. Seeing directors’ head orientation helped matchers quickly restrict attention to the target side of the display. Seeing directors’ eye gaze helped matchers disambiguate referring expressions earlier (before the linguistic point of disambiguation) than did seeing head orientation alone. Along with benefits, however, there were some costs to monitoring eye gaze in face-to-face communication.

Acknowledgments

We thank Cora Allen-Coleman, Nathan Brewer, Cindy Getschow, Daniel Grief, Sarah Mundell, Barbara Percival, Vanessa Richards, Alix Simonson, and Anne Wildman for their assistance with data collection and/or coding. We thank the anonymous reviewers for their helpful comments and suggestions, and Kayley Porterfield (Oberlin College) and Abby Lindberg (Daemen College) for assistance with revisions. Some of the research in this article was described at the 47th Annual Meeting of the Psychonomic Society (November 2006) in Houston, TX, the Twenty-First Annual CUNY Conference on Human Sentence Processing (March 2008) in Chapel Hill, NC, and the 33rd Annual Conference of the Cognitive Science Society (July 2011) in Boston, MA.

Disclosure statement

No potential conflict of interest was reported by the authors.

Notes

1. The EPOD corresponds to the VPOD (visual point of disambiguation) in Hanna and Brennan (2007).

2. For our analyses of directors’ orientation, only the first three windows of looking, starting at the beginning of the trial, were used because we were interested in the visual cues provided before (and shortly after) the director began speaking. The color word generally began within the third window after the HPOD.

3. Four windows were used for matchers’ looks to objects (as opposed to the five windows used for the previous display-side analysis) because the color word zero point occurs on average two to three 500-ms windows later than the HPOD zero point. This means that there is some redundancy between the last windows of the display-side graphs and the first windows of the object-look graphs. However, both graphs and sets of analyses are independently informative because the streams of behavioral data are aligned at different points in time (at the HPOD vs. at the color word), with variability increasing as more time elapses beyond the alignment point.
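To make the windowing described in Notes 2 and 3 concrete, the sketch below bins timestamped gaze samples into consecutive 500-ms windows whose zero point is either the HPOD or the color-word onset. This is a minimal illustration only; the function name, data layout, and event times are assumptions for exposition, not the authors’ analysis code.

```python
# Hypothetical sketch of binning gaze samples into 500 ms analysis windows,
# realigned to a chosen zero point (HPOD or color-word onset).

def bin_looks(samples, zero_point_ms, n_windows, window_ms=500):
    """Count looks per window, with t = 0 set at `zero_point_ms`.

    `samples` is a list of (timestamp_ms, region) pairs.
    Returns one dict per window mapping region -> sample count.
    """
    windows = [dict() for _ in range(n_windows)]
    for t, region in samples:
        idx = int((t - zero_point_ms) // window_ms)
        if 0 <= idx < n_windows:
            windows[idx][region] = windows[idx].get(region, 0) + 1
    return windows

# Example trial (times are invented): the same samples can be aligned at the
# HPOD (five display-side windows) or at color-word onset (four object windows),
# which is why the last HPOD-aligned windows overlap the first color-word windows.
samples = [(1200, "target side"), (1450, "target side"), (2100, "target object")]
hpod_ms, color_word_ms = 1000, 2100
display_side_windows = bin_looks(samples, hpod_ms, n_windows=5)
object_windows = bin_looks(samples, color_word_ms, n_windows=4)
```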

Additional information

Funding

This material is based upon work supported by the National Institutes of Health under National Research Service Award 1F32MH1263901A1, and by the National Science Foundation under Grant Nos. IIS-0527585, IIS-0713287, ITR-0082602, and ITR-0325188.

