Speakers adapt gestures to addressees' knowledge: implications for models of co-speech gesture

Pages 435-451 | Received 30 May 2012, Accepted 04 Apr 2013, Published online: 29 May 2013
Abstract

Are gesturing and speaking shaped by similar communicative constraints? In an experiment, we teased apart communicative from cognitive constraints upon multiple dimensions of speech-accompanying gestures in spontaneous dialogue. Typically, speakers attenuate old, repeated, or predictable information but not new information. Our study distinguished what was new or old for speakers from what was new or old for (and shared with) addressees. In 20 groups of 3 naive participants, speakers retold the same Road Runner cartoon story twice to one addressee and once to another. We compared the distribution of gesture types, and the gestures' size and iconic precision, across retellings. Speakers gestured less frequently in stories retold to Old Addressees than in stories told to New Addressees. Moreover, the gestures they produced in stories retold to Old Addressees were smaller and less precise than those in stories told to New Addressees, although gestures were attenuated across retellings as well. Consistent with our previous findings about speaking, gesturing is guided by both speaker-based (cognitive) and addressee-based (communicative) constraints that affect both planning and motoric execution. We discuss the implications for models of co-speech gesture production.

Acknowledgements

This material is based upon work supported by NSF under Grants IIS-0527585 and ITR-0325188. We thank our colleagues from the Adaptive Spoken Dialogue Project, the Shared Gaze Project, the Dialogue Matters Network (funded by the Leverhulme Trust), and the Gesture Focus Group for many helpful discussions, especially Arthur Samuel and Anna Kuhlen. We are grateful to Randy Stein and Marwa Abdalla for their assistance with coding.

Notes

1. One triad was excluded because the speaker had a broken arm, which impeded his gestural production. A second triad was excluded because the speaker misunderstood the task and provided a nearly frame-by-frame report of the cartoon's progression instead of narrating a story; the speaker ran out of time and did not complete the task. The third triad was excluded because one of the addressees had his eyes closed throughout the speaker's narrations (presumably trying to visualise the cartoon events rather than falling asleep!). According to Bavelas et al. (1992), the rate of gesturing, especially of interactive gestures, is significantly affected by visual availability, and as such the speaker's gestural production when narrating to this listener might have been affected.

2. Two additional cartoons were also narrated, one with Tweety and Sylvester and one with Bugs Bunny and Yosemite Sam. The third cartoon was dropped from the task after the first six triads, because narrating a total of nine stories (three for each cartoon) was often too tiring for the speakers. The narrations of the second cartoon were not analysed.

3. The stroke is the expressive and dynamic part of the gesture, bearing its semantic content; it is optionally preceded by a preparation phase, during which the hands move from rest towards the space where the stroke is executed, and optionally followed by a retraction phase, during which the hands return to rest (McNeill, 1992).

4. A study that normalised the number of gestures by the number of words to assess partner-specific adaptation yielded the paradoxical result that gesture rate increased when speakers and addressees had common ground (both had watched the story told by the speaker) compared to when they did not, even though speakers produced fewer words and fewer gestures overall when telling the story to an addressee who had seen it than to one who had not (Holler & Wilkin, 2009).

5. Since coders assigned a single score to the size and iconic precision of all gestures for a given narrative element, the effects we observed could conceivably be due to speakers producing more beats and metanarrative gestures rather than attenuating the precision of their representational gestures. However, attenuation in these qualitative dimensions appears to be driven by representational gestures, since the frequency of beats and metanarrative gestures did not differ across retellings. Although the frequency of beats and of metanarrative gestures increased numerically after the first telling (see ), this increase was not reliable. In particular, the for-the-addressee effect was not significant for either metanarrative gestures (F1(1, 18) = 1.53, p = .23; F2(1, 19) = 2.23, p = .15) or beat gestures (F1(1, 18) = 0.02, ns; F2(1, 19) = 0.08, ns). Given the preponderance of representational gestures in our corpus and the lack of adaptation in beat or metanarrative gestures across retellings, the observed effects for size and iconic precision seem to be primarily due to adaptation in the motoric execution of representational gestures.
