
Predicting (in)correctly: listeners rapidly use unexpected information to revise their predictions

Pages 1149-1161 | Received 22 Jul 2019, Accepted 17 Feb 2020, Published online: 27 Feb 2020
 

ABSTRACT

Comprehenders can incorporate rich contextual information to predict upcoming input on the fly, and cues that conflict with their predictions are quickly detected. The present study examined whether and how comprehenders revise their existing predictions upon encountering a prediction-inconsistent cue. We took advantage of the rich classifier system in Mandarin Chinese and tracked participants’ eye movements as they listened to sentences in which the final noun was preceded by a classifier that was either compatible with the most expected noun, incompatible with the most expected noun but indicative of another contextually suitable noun, or uninformative. We found that, upon hearing a prediction-inconsistent classifier, listeners quickly directed their eye gaze away from the originally expected object and immediately toward the (initially) unexpected but contextually suitable object. This provides initial evidence that listeners can rapidly use prediction-mismatching cues to revise their existing predictions on the fly.

Acknowledgements

This work was supported by a British Academy/Leverhulme Trust Small Research Grant (SRG000032444). We would like to thank Suiping Wang for help with norming data collection.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Notes

1 This is true for all cases in which the pre-nominal cue depends on the semantic and/or syntactic features of the upcoming noun. However, in the case of the English indefinite articles “a/an”, which depend on the phonological form of the following word (which need not be a noun, e.g. “an orange kite”), the reliability of the effect (first reported by DeLong et al., 2005) has been contested after multiple failed replication attempts (Ito et al., 2017; Nieuwland et al., 2018). We will not expand on this further since the classifier-noun dependency in Mandarin Chinese is not phonological in nature.

2 The ERP effects reported by these studies varied in their polarity, latency, and scalp distribution (see Kochari & Flecken, 2018 for a summary). However, a discussion of such variations is beyond the scope of the present paper.

3 The model for data in a given time window was specified as: glmer(cbind(SamplesInAOI, SamplesTotal - SamplesInAOI) ~ Noun * Classifier + (1 + Noun * Classifier | Item) + (1 + Noun * Classifier | Subject), data = data, family = binomial).
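A minimal runnable sketch of this fit in R with lme4 is given below; the variable names follow the footnote, but the data frame "data" and the summary() call are illustrative rather than the authors’ actual analysis script:

library(lme4)

# Sketch of the per-time-window model from Note 3. Each row of `data` is one
# trial: SamplesInAOI counts eye-tracking samples falling on the area of
# interest, SamplesTotal counts all samples in the window, and Noun and
# Classifier are the two experimental factors.
m <- glmer(
  cbind(SamplesInAOI, SamplesTotal - SamplesInAOI) ~ Noun * Classifier +
    (1 + Noun * Classifier | Item) +
    (1 + Noun * Classifier | Subject),
  data = data,
  family = binomial
)
summary(m)  # fixed-effect estimates, including the Noun-by-Classifier interaction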

4 Shuffling involves combining trials from both conditions and randomly drawing trials from the combined data set to form two new subsets.
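As an illustration, a hypothetical R sketch of one such shuffle follows; the function and object names are ours, not from the analysis code, and it assumes each condition’s trials are rows of a data frame and that the new subsets keep the original condition sizes:

# Hypothetical sketch of one permutation step: pool the trials from both
# conditions, then randomly split the pool into two new subsets (here
# assumed to match the original condition sizes).
shuffle_conditions <- function(cond_a, cond_b) {
  pooled <- rbind(cond_a, cond_b)                  # combine trials from both conditions
  idx <- sample(nrow(pooled))                      # random reordering of trial indices
  n_a <- nrow(cond_a)
  list(
    new_a = pooled[idx[1:n_a], ],                  # first n_a shuffled trials
    new_b = pooled[idx[(n_a + 1):nrow(pooled)], ]  # remaining trials
  )
}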

5 These items were selected based on the judgements of five native speakers who did not participate in the eye-tracking study. They were given the sentence contexts leading up to the target noun phrase (e.g., “While playing in the garden, the little boy gave the little girl … ”) and asked to judge whether the sentence could be continued with either of the distractor objects. Across the 45 selected items, the average acceptance rate (i.e., the proportion of judgements that at least one of the distractor objects was compatible with the sentence context) was 75%.

Additional information

Funding

This work was supported by a British Academy/Leverhulme Trust Small Research Grant (SRG000032444).
