Original Articles

Contingent categorisation in speech perception

Pages 1070-1082 | Received 11 Mar 2013, Accepted 24 Jun 2013, Published online: 05 Aug 2013

Abstract

The speech signal is notoriously variable, with the same phoneme realised differently depending on factors like talker and phonetic context. This variance has led to a proliferation of theories of how listeners recognise speech. A promising approach, supported by computational modelling studies, is contingent categorisation, wherein incoming acoustic cues are computed relative to expectations. We tested contingent encoding empirically. Listeners were asked to categorise fricatives in CV syllables constructed by splicing the fricative from one CV syllable with the vowel from another CV syllable. The two spliced syllables always contained the same fricative, providing consistent bottom-up cues; however, on some trials the vowel and/or talker mismatched between these syllables, giving conflicting contextual information. Listeners were less accurate and slower at identifying the fricatives in mismatching splices. This suggests that listeners rely on contextual information beyond bottom-up acoustic cues during speech perception, providing support for contingent categorisation.

Acknowledgements

This research was supported by National Institutes of Health Grant DC-008089 to Bob McMurray and the Ballard and Seashore Dissertation Year Fellowship to Keith Apfelbaum. We thank Yue Wang and Dan McEchron for help collecting and analysing the acoustic data used in the simulations.

Notes

1. Here, and throughout, we use the term primary without any larger theoretical claims, simply as a way to describe the fact that a cue like VOT is one of the most important or most reliable cues to voicing.

2. For vowel trials, we used these generic labels rather than the actual vowel identities because vowel pairing was a between-participants factor. Rather than changing the labels for each participant, we displayed each participant's vowel-button pairings on the screen during each vowel identification trial.

