ABSTRACT
Studies of French adults using a written lexical decision task with masked priming, in which targets were primed more strongly by consonant-related (jalu-JOLI) than by vowel-related (vobi-JOLI) primes, support the proposal that consonants carry more weight than vowels in lexical processing. This study examines the phonological and/or lexical nature of this consonant bias (C-bias) using a sandwich priming task, in which a brief presentation of the target (pre-prime) precedes the prime-target sequence, a manipulation that blocks lexical neighbourhood effects. Results from three experiments (varying pre-prime and prime durations) show consistent C-priming and no significant V-priming at both earlier and later processing stages (50 or 66 ms primes). A joint analysis, however, reveals a small V-priming effect while confirming a significant consonant advantage, demonstrating that the phonological level contributes to the C-bias. In addition, differences in performance between the classic and sandwich priming tasks establish a contribution of lexical neighbourhood inhibition effects to the C-bias.
Acknowledgements
Thanks to Marie Bertin, Laurie Costerg, Laurène Baudet and Thomas Sordoillet for running participants.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Notes
1 Following the suggestion from a reviewer, Joshua Snell, that entropy might explain the C-bias in French, we calculated a Markov entropy for each consonant and vowel used in French at the orthographic and phonological level, using the formula proposed by Siegelman et al. (2019). If consonants are more informative than vowels, they should have a lower entropy on average than vowels. Analyses showed, however, that the distribution of entropy was not significantly different for consonants and vowels, neither at the orthographic nor at the phonological level. This suggests that entropy alone cannot be the reason why consonants and vowels are processed differently, at least in French.
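The note does not reproduce the formula from Siegelman et al. (2019), but the kind of calculation described can be sketched as follows, assuming the standard interpretation of per-letter Markov (transitional) entropy: for each letter, the Shannon entropy of the distribution of letters that follow it in the corpus. The corpus, letter sets, and function name below are illustrative only, not the materials used in the study.

```python
from collections import Counter, defaultdict
import math

def markov_entropy(words):
    """Per-letter Markov (transitional) entropy, in bits.

    For each letter c, computes the entropy of the distribution of
    letters that immediately follow c across the word list. Lower
    entropy means the letter is more predictive of what comes next,
    i.e. more informative.
    """
    follow = defaultdict(Counter)
    for w in words:
        for a, b in zip(w, w[1:]):
            follow[a][b] += 1
    entropies = {}
    for c, counts in follow.items():
        total = sum(counts.values())
        entropies[c] = -sum((n / total) * math.log2(n / total)
                            for n in counts.values())
    return entropies

# Toy illustration (not the French lexicon used in the study):
words = ["joli", "jalu", "vobi", "lune", "mare"]
H = markov_entropy(words)
consonants = set("bcdfghjklmnpqrstvwxz")
cons_vals = [v for k, v in H.items() if k in consonants]
vowel_vals = [v for k, v in H.items() if k not in consonants]
cons_mean = sum(cons_vals) / max(1, len(cons_vals))
vowel_mean = sum(vowel_vals) / max(1, len(vowel_vals))
```

On a real lexicon, the consonant and vowel entropy distributions would then be compared statistically, as the note reports.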
2 This comparison was also implemented as a sliding contrast. Sliding contrasts use the grand mean as the intercept; the model output for the factors Prime Type and Target Type therefore refers to the mean of all test words.