ABSTRACT
We examined whether listeners use acoustic correlates of voicing to resolve lexical ambiguities created by whispered speech, in which a key feature, voicing, is missing. Three associative priming experiments were conducted. The results showed a priming effect with whispered primes that included an intervocalic voiceless consonant (/petal/ “petal”) when the visual targets (FLEUR “flower”) were presented at the offset of the primes. A priming effect emerged with whispered primes that included a voiced intervocalic consonant (/pedal/ “pedal”) when the delay between the offset of the primes and the visual targets (VELO “bike”) was increased by 50 ms. In none of the experiments did the voiced primes (/pedal/) facilitate the processing of the targets (FLEUR) associated with the voiceless primes (/petal/). Our results suggest that listeners use the acoustic correlates of voicing to recover the intended words. Nonetheless, the retrieval of the voicing feature is not immediate during whispered word recognition.
Acknowledgements
We are grateful to anonymous reviewers for their helpful comments on earlier versions of this manuscript.
Disclosure statement
No potential conflict of interest was reported by the authors.
Notes
1 In the main experiment, the number of participants was twice as high as in the control experiment because, as explained above, in the control experiment each type of prime word (voiced/voiceless) was tested only with its respective associated (congruent) target words.
2 We first conducted a mixed-effects model on all data (collapsed across voiced and voiceless word types) to check that the predictor word type interacted significantly with the predictors prime type (related, control) and semantic matching (congruent, incongruent). Note that in this model, the three-way interaction between word type, prime type, and semantic matching failed to reach significance. This is not surprising, because for methodological reasons two of our variables were between-participants factors, namely semantic matching (congruent, incongruent) and word type (voiced, voiceless), and thus we likely lacked statistical power. Note also that, as explained at the end of the introduction, we are specifically concerned with the interaction between prime type (related, control) and semantic matching (congruent, incongruent) within each word type (voiced, voiceless).
3 A possibility, however, is that the differential priming observed with the whispered voiceless (/petal/) and voiced (/pedal/) prime words in the congruent conditions merely results from a difference in the response speed of the two groups of participants. Indeed, as indicated in , in the congruent conditions, the participants who heard whispered voiceless primes /petal/ responded on average faster than the participants who heard whispered voiced primes /pedal/, and thus it could be argued that long RTs lead to no semantic priming effect. Additional analyses performed in the voiceless priming condition (/petal/-FLEUR), in which a clear priming effect was observed, did not confirm this claim. The 12 slowest participants (RTs on the control primes greater than 700 ms) showed a strong priming effect of around 75 ms (mean RTs of 786 and 711 ms for the control and related primes, respectively), whereas the 12 fastest participants (RTs on the control primes smaller than 640 ms) showed a priming effect of around 10 ms (not significant; mean RTs of 584 and 574 ms for the control and related primes, respectively).