
Suprasegmental lexical stress cues in visual speech can guide spoken-word recognition

Pages 793-808 | Received 05 May 2012, Published online: 18 Oct 2013

Abstract

Visual cues to the individual segments of speech and to sentence prosody guide speech recognition. The present study tested whether visual suprasegmental cues to the stress patterns of words can also constrain recognition. Dutch listeners use acoustic suprasegmental cues to lexical stress (changes in duration, amplitude, and pitch) in spoken-word recognition. We asked here whether they can also use visual suprasegmental cues. In two categorization experiments, Dutch participants saw a speaker say fragments of word pairs that were segmentally identical but differed in their stress realization (e.g., ˈca-vi from cavia “guinea pig” vs. ˌka-vi from kaviaar “caviar”). Participants were able to distinguish between these pairs from seeing the speaker alone. Only the presence of primary stress in the fragment, not its absence, was informative. Participants were able to visually distinguish primary from secondary stress on first syllables, but only when the fragment-bearing target word carried phrase-level emphasis. Furthermore, participants distinguished fragments with primary stress on their second syllable from those with secondary stress on their first syllable (e.g., pro-ˈjec from projector “projector” vs. ˌpro-jec from projectiel “projectile”), independently of phrase-level emphasis. Seeing a speaker thus contributes to spoken-word recognition by providing suprasegmental information about the presence of primary lexical stress.

FUNDING

This research was supported by a grant to the first author within the German Research Foundation (DFG) focus program SPP 1234 “Phonological and phonetic competence: Between grammar, signal processing, and neural activity”. We thank Bettina Braun and Lara Tagliapietra for valuable discussions, two anonymous reviewers for constructive feedback, and Anke Bergmans, Laurence Bruggeman, Lies Cuijpers, Vera Hoskam, Jessica Koppers, Marieke Pompe, Robbert van Sluijs, Jet Sueters, and Jelmer Wolterink for their help with the experiments.

Notes

1 Only good lip-readers can distinguish /s/ and /t/ visually (van Son et al., 1994). We therefore excluded this item pair, e-ˈro-(sie) versus ˌe-ro-(ˈtiek), from all analyses in both experiments; this exclusion did not change the general pattern of results.
