Original Articles

Contingent categorisation in speech perception

Pages 1070-1082 | Received 11 Mar 2013, Accepted 24 Jun 2013, Published online: 05 Aug 2013
 

Abstract

The speech signal is notoriously variable, with the same phoneme realised differently depending on factors like talker and phonetic context. Variance in the speech signal has led to a proliferation of theories of how listeners recognise speech. A promising approach, supported by computational modelling studies, is contingent categorisation, wherein incoming acoustic cues are computed relative to expectations. We tested contingent encoding empirically. Listeners were asked to categorise fricatives in CV syllables constructed by splicing the fricative from one CV syllable with the vowel from another CV syllable. The two spliced syllables always contained the same fricative, providing consistent bottom-up cues; however, on some trials the vowel and/or talker mismatched between these syllables, giving conflicting contextual information. Listeners were less accurate and slower at identifying the fricatives in mismatching splices. This suggests that listeners rely on context information beyond bottom-up acoustic cues during speech perception, providing support for contingent categorisation.
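The core idea of contingent encoding described above, namely that an acoustic cue is interpreted relative to the value expected for the current talker and vowel context rather than in raw form, can be sketched in a few lines of code. The sketch below is purely illustrative and is not the authors' model; the expectation table, cue values, and category boundary are hypothetical placeholders.

```python
# Illustrative sketch of contingent cue encoding: a raw acoustic cue
# (e.g., a fricative spectral mean, in Hz) is re-expressed relative to the
# value expected for the current context (talker and vowel) before it is
# categorised. All numbers below are hypothetical.

# Hypothetical expected spectral means (Hz) by (talker, vowel) context.
EXPECTED_CUE = {
    ("talker_A", "i"): 6200.0,
    ("talker_A", "u"): 5800.0,
    ("talker_B", "i"): 5400.0,
    ("talker_B", "u"): 5000.0,
}

def contingent_encode(raw_cue_hz, talker, vowel):
    """Express the raw cue relative to its context-specific expectation."""
    return raw_cue_hz - EXPECTED_CUE[(talker, vowel)]

def categorise(relative_cue_hz, boundary_hz=0.0):
    """Toy two-way decision: above the context-relative boundary -> /s/, else /sh/."""
    return "s" if relative_cue_hz > boundary_hz else "sh"

# The same raw cue value can be categorised differently depending on context,
# because the expectation it is compared against differs.
raw = 5600.0
print(categorise(contingent_encode(raw, "talker_A", "i")))  # relative = -600 -> "sh"
print(categorise(contingent_encode(raw, "talker_B", "u")))  # relative = +600 -> "s"
```

In this toy example the category decision for an identical raw value flips with the context it is referred to, which is the kind of context dependence the splicing manipulation probes: when the vowel or talker of the spliced context conflicts with the fricative, the expectation the cue is compared against no longer matches the signal.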

Acknowledgements

This research was supported by National Institutes of Health Grant DC-008089 to Bob McMurray and the Ballard and Seashore Dissertation Year Fellowship to Keith Apfelbaum. We thank Yue Wang and Dan McEchron for help collecting and analysing the acoustic data used in the simulations.

Notes

1. Here, and throughout, we use the term primary without any larger theoretical claims, simply as a way to describe the fact that a cue like VOT is one of the most important or most reliable cues to voicing.

2. For vowel trials, we used these generic labels rather than the actual vowel identities because vowel pairing was a between-participants factor. Rather than change the labels for every participant, we displayed each participant's vowel-button pairings on the screen during each vowel identification trial.
