Aging, Neuropsychology, and Cognition
A Journal on Normal and Dysfunctional Development
Volume 30, 2023 - Issue 1
Original Article

Increased reliance on world knowledge during language comprehension in healthy aging: evidence from verb-argument prediction

Pages 1-33 | Received 27 Apr 2020, Accepted 27 Jul 2021, Published online: 06 Aug 2021

ABSTRACT

Cognitive aging negatively impacts language comprehension performance. However, there is evidence that older adults skillfully use linguistic context and their crystallized world knowledge to offset age-related changes that negatively impact comprehension. Two visual-world paradigm experiments examined how aging changes verb-argument prediction, a comprehension process that relies on world knowledge but has rarely been examined in the cognitive-aging literature. Older adults did not differ from younger adults in their activation of an upcoming likely verb argument, particularly when cued by a semantically-rich agent+verb combination (Experiment 1). However, older adults showed elevated activation of previously-mentioned agents (Experiment 1) and of unlikely but verb-congruent referents (Experiment 2). This is novel evidence that older adults exploit semantic context and world knowledge during comprehension to successfully activate upcoming referents. However, older adults also show elevated activation of irrelevant information, consistent with previous findings demonstrating that older adults may experience greater proactive interference and competition from task-irrelevant information.

Introduction

Aging negatively impacts a variety of cognitive functions, including language comprehension: older adults have been shown to have poorer comprehension performance than younger adults (Carpenter et al., 1994). For example, older adults are less accurate in making connective inferences in multi-sentence discourses (Cohen, 1979), less consistent in identifying referents for pronouns (Light & Capps, 1986), and less accurate in end-of-sentence acceptability judgments and comprehension questions compared to their younger peers (Caplan & Waters, 2005; Payne et al., 2014). Older adults’ language processing has also widely been reported to be slower than younger adults’ (Caplan & Waters, 2005; Van der Linden et al., 1999). Furthermore, age-related declines in auditory and visual acuity can negatively impact both spoken (Wingfield & Stine-Morrow, 2000) and written language comprehension (Madden, 1988).

However, there is also significant evidence that older adults may take advantage of context to offset the negative impacts of these age-related changes (Payne & Silcox, 2019; Pichora-Fuller, 2008; Stine-Morrow et al., 2006). In particular, older adults appear to take robust advantage of semantic context during language comprehension (Pichora-Fuller, 2008), often achieving comparable levels of performance to younger adults when rich semantic context is available (Lash et al., 2013). The current study uses verb-argument prediction measured via the visual world paradigm (Tanenhaus et al., 1995) as a lens for examining how older adults exploit context to guide their activation of upcoming words and concepts during comprehension. Verb-argument prediction involves activating a not-yet-mentioned referent that is a likely theme (object) of a verb – for example, activating an edible referent such as “cake” upon reading or hearing eat. This phenomenon is a particularly informative lens through which to examine how older adults make use of context during comprehension, because rapid prediction of verb arguments has been shown to depend on world knowledge cued by both visual and linguistic context (Altmann & Kamide, 1999; Milburn et al., 2016). Furthermore, the visual world paradigm enables direct examination of the rapid prediction of upcoming verb arguments, allowing these processes to be distinguished from incidental activation of verb-congruent referents and other irrelevant information (Kukona et al., 2011, 2014).

World knowledge and verb-argument prediction

Verb-argument prediction has been a central topic of investigation in the psycholinguistics literature (Altmann & Kamide, 1999; Boland, 2005; Borovsky et al., 2012; Kamide et al., 2003; Mack et al., 2013; Milburn et al., 2016), and has provided critical evidence about how individuals take advantage of different sources of knowledge during the comprehension process (e.g., Warren & Dickey, 2021). Much of the evidence regarding verb-argument prediction comes from the visual world paradigm (Tanenhaus et al., 1995). In the visual world paradigm, a participant sees an array of objects or images and hears a linguistic stimulus referring to or describing the array. The participant’s eye movements around the array are finely time-locked to their concurrent processing of this linguistic stimulus (Huettig et al., 2011), and gazes to likely upcoming referents can emerge before the referent has been named in the linguistic stimulus (Altmann & Kamide, 1999). This paradigm has been demonstrated to be effective in a variety of populations, including young adults, children, older adults, and people with aphasia (Altmann & Kamide, 1999; Baltaretu & Chambers, 2018; Borovsky et al., 2012; Hayes et al., 2016; Mack et al., 2013).

Evidence from the visual world paradigm has demonstrated that world knowledge about likely event participants (McRae & Matsuki, 2009) drives predictive looks to upcoming verb arguments. For example, Kamide et al. (2003) found that world knowledge cued by agent+verb combinations drove prediction of likely upcoming objects: upon hearing The little girl will ride the …, listeners were more likely to gaze at a picture of a carousel than a motorcycle, whereas upon hearing The man will ride the … this pattern was reversed. Both carousels and motorcycles are equally rideable objects; the prediction of one versus the other is critically dependent on world knowledge about how the agent of a riding event affects that event’s likelihood. Milburn et al. (2016) extended these results by showing that verb-argument prediction can be driven by combinations of verbs and scenes: the verb fling can take anything flingable as a direct object, but in combination with a naturalistic scene of a bridal party in a Western wedding, the object flowers becomes overwhelmingly more likely. These visual-world paradigm findings demonstrate that comprehenders actively predict upcoming verb arguments, and that they take advantage of world knowledge (activated by combinations of words, images, or both) to do so.

The visual world paradigm also provides evidence that comprehenders automatically activate referents based solely on a verb’s coarse-grained thematic constraints (e.g., that the direct object of ride must be something rideable). These activated referents can be different from the predicted verb arguments and be irrelevant to the unfolding sentence, and their activation usually occurs later than activation of the predicted verb argument. For example, as described previously, Kamide et al. (2003) found that upon hearing The little girl will ride the …, listeners were more likely to gaze at a picture of a carousel than a motorcycle, even before the offset of the verb ride. However, during a later time window, listeners initiated more gazes to the motorcycle compared to an object that could not be ridden (candy). Paralleling these results, Kukona et al. (2011) found that after hearing Toby was arrested by …, college-aged young adults showed both rapid gazes at a picture of a policeman (a likely agent, the person arresting Toby) and slightly later-emerging gazes at a picture of a crook (a person likely to be arrested). This is striking given that the theme role of arrested had already been filled by Toby. Interestingly, there is some evidence that sustained activation of such task-irrelevant referents is related to poorer language comprehension ability: individuals showing less sustained activation of these referents perform better on standardized measures of comprehension skill (Kukona et al., 2016).

Together, these findings show that comprehenders anticipate upcoming verb arguments based on world knowledge activated by rich linguistic and visual context. They also show incidental activation of task-irrelevant referents based on the verbs’ coarse-grained semantic constraints. These comprehension processes are thus driven by distinct knowledge sources, and they are indexed by separate gaze patterns in the visual world paradigm (see discussion by Kukona et al., 2014; Magnuson, 2019).

World knowledge and verb-argument prediction during aging

The important role of world knowledge and context in driving verb-argument prediction makes such predictive processing especially relevant for investigating how older adults exploit context during language comprehension. Crystallized knowledge, including world knowledge, is preserved in aging (Horn & Cattell, 1967). Many researchers have argued that older adults may flexibly adapt to rely on their strong crystallized knowledge to offset declines in other functions, such as attentional control or inhibition (Stine-Morrow et al., 2006). This would explain why older adults tend to succeed when context can be used to compensate for age-related declines in cognitive or sensory function (Pichora-Fuller, 2008). For example, although older adults commonly show poorer word-recognition performance than younger adults in weakly-constraining contexts that make it difficult to anticipate or identify a target word, particularly under challenging listening conditions (Pichora-Fuller et al., 1995; Wingfield et al., 1985), this disadvantage disappears almost completely in semantically rich contexts (Lash et al., 2013). There is also evidence that older adults may show stronger effects of context than younger adults. For example, Choi et al. (2017) reported that older adults showed larger effects of contextual predictability than younger adults did in an eye-tracking study of reading. Additionally, Rogers and colleagues (C. S. Rogers, 2017; C. S. Rogers et al., 2012) found that older adults more often recalled hearing words in noise that were not presented but were semantically consistent with context (cases of so-called “false hearing”), compared to younger adults. These findings support the hypothesis that older adults should successfully exploit world knowledge to predict verb arguments, particularly in semantically rich contexts like those used by Kamide et al. (2003) and Milburn et al. (2016).

Interestingly, this hypothesis runs counter to recent claims that there are age-related declines in language-specific predictive processing (DeLong et al., 2012; Wlotko & Federmeier, 2012; Wlotko et al., 2010). It is worth noting that evidence for reduced prediction in cognitive aging often comes from weakly-constraining contexts, such as It was time to hang the new … (Wlotko & Federmeier, 2012), where there are many possible continuations of the incomplete context. In such contexts, crystallized world knowledge is less helpful in driving prediction, and the verb often provides the only cue allowing the comprehender to predict upcoming material. Wlotko et al. (2010) summarize a variety of evidence from event-related potentials (ERPs) showing that older adults are less likely than younger adults to anticipate upcoming words based on context. For example, younger adults show smaller N400 responses to unexpected words that are semantically similar to predicted words than to unexpected words that are unrelated (for instance, following a context that makes palms highly expected, young adults show a smaller N400 response to pines compared to tulips: Federmeier & Kutas, 1999). This suggests that younger adults have predicted the word that is expected based on context (Federmeier & Kutas, 1999). Older adults often do not show such reduced N400 responses for related but unexpected words, suggesting that they have not made a prediction (DeLong et al., 2012; Federmeier & Kutas, 2005; Federmeier et al., 2010).

Older adults, like young adults, may activate contextually-irrelevant referents based on coarse-grained verb constraints in addition to making predictions about specific upcoming words. Such activation of contextually-irrelevant referents intersects with current views of domain-general age-related cognitive changes that may impact language comprehension performance. Hasher and colleagues have drawn attention to the challenges that age-related declines in attentional control and inhibitory function may pose for older adults’ language comprehension (Biss et al., 2013; Hasher et al., 1991, 2007). Evidence from comprehension-question and probe-recognition measures suggests that older adults experience greater sustained activation of task-irrelevant information than younger adults during language comprehension tasks (Christianson et al., 2006; Connelly et al., 1991; Hartman & Hasher, 1991). Activation of task-irrelevant information (including information such as verb-congruent referents) can lead to proactive interference, negatively impacting both encoding of new material and retrieval of previously-encoded information (Archambeau et al., 2020; R. D. Rogers & Monsell, 1995). There is considerable evidence that older adults are more likely to experience such proactive interference (Archambeau et al., 2020; Biss et al., 2013; Campbell et al., 2010), possibly due to the interacting effects of their greater sustained activation and their decreased inhibitory control. This leads to the operationalized prediction that older adults should show more gazes to contextually-irrelevant but verb-congruent referents than young adults in verb-argument prediction tasks.

Verb-argument prediction thus has significant value as a window onto how older adults draw on world knowledge and context during language comprehension, as well as onto age-related changes (in prediction and proactive interference) that may affect language comprehension. Despite this, the effects of cognitive aging on verb-argument prediction have received little attention in either the language-comprehension or the cognitive-aging literature. In the only study to date that has directly compared older and younger adults’ prediction of theme arguments (although see Mack et al., 2013, for a comparison of verb-argument prediction in older adults and people with aphasia), Baltaretu and Chambers (2018) found that older adults showed an advantage over younger adults in their verb-argument processing: they showed a faster accrual of gazes to the likely verb argument and reduced competition from an associatively-related referent.

Current study

We report two experiments examining the influence of healthy cognitive aging on the processes of verb-argument prediction and activation of verb-congruent referents. Both experiments directly compared gazes at highly likely (world-knowledge favored) and possible but unlikely (verb-congruent) arguments to gazes at impossible arguments. These comparisons independently test for the rapid prediction of likely verb arguments and activation of verb-congruent referents, in the same experiment and the same linguistic context. We additionally use a modified visual world paradigm (adapted from Mack et al., 2013) in which participants click on the image that best completes a sentence fragment. This design allows participants the time to make predictions without interruption from presentation of the argument. This could potentially be important given that older adults’ processing speed is often slowed (Salthouse, 1996; Wingfield et al., 1985).

Experiment 1 tested verb-argument prediction based on a semantically rich context provided by the combination of an agent and a verb (Kamide et al., 2003). Experiment 2 tested verb-argument prediction based on the verb alone (Altmann & Kamide, 1999). If older adults successfully use context to activate world knowledge and predict likely upcoming referents (McRae & Matsuki, 2009; Milburn et al., 2016; see also Pichora-Fuller, 2008), then both older and young adults should show early gazes to likely referents, particularly in the richer contexts in Experiment 1. In contrast, if older adults show reduced prediction across contexts (Federmeier & Kutas, 2005; Wlotko et al., 2010), older adults should show fewer early gazes to likely referents than young adults in both experiments. Furthermore, if older adults show greater activation of task-irrelevant information (Biss et al., 2013; Hasher et al., 1991, 2007), potentially leading to greater proactive interference (Archambeau et al., 2020), they should show more gazes at verb-congruent but unlikely referents than young adults, again in both experiments.

Experiments 1 and 2 were constructed a priori to address related but complementary research questions, as noted above. We ran both experiments on the same participants and during the same testing session, with stimulus presentation randomized such that critical stimuli for each experiment acted as filler stimuli for the other experiment (for other reports of multiple experimental item sets being run simultaneously as fillers for each other, see e.g., Gibson & Warren, 2004; Staub, 2010). This approach allowed us to maximize the amount and quality of data collected while minimizing fatigue for our participants. We report and discuss the results of each experiment separately.

Experiment 1

Participants

Young adults

Sixty undergraduate students from the University of Pittsburgh participated for course credit (gender information was not available for our young adult sample). They ranged in age from 18 to 21 years, began learning English before age five, and self-reported normal or corrected-to-normal vision and hearing.

Older adults

Thirty community-dwelling older adults (25 female) with self-reported normal or corrected-to-normal vision and without self-reported history of speech-language, hearing, or neuropsychological disorders participated. Participants additionally self-identified as native English speakers. They ranged in age from 50 to 74 years (mean: 61.1) and varied in their educational attainment, with all participants having completed high school or the equivalent and 21 having completed 2 or more years of post-secondary education. In order to exclude the presence of unreported memory or other cognitive disorders, participants were given the Mini-Mental State Exam (MMSE; Folstein et al., 1975). All participants scored 28 or better on the MMSE (mean: 29.6), above lower-quartile cutoff scores for healthy older adults (Bleecker et al., 1988). All participants passed a 40 dB pure-tone hearing screen (unaided) at 500, 1000, 2000, and 4000 Hz bilaterally.

Materials

Visual stimuli consisted of 24 arrays of four images arranged in a grid (Figure 1). A typical array in Experiment 1 might show a glass of water, a cup of coffee, a cat, and a rock.

Figure 1. Sample visual display, Experiment 1, showing the likely object (water), verb-congruent competitor (coffee), agent-related competitor (cat), and unrelated competitor (rocks). This visual display was accompanied by the auditory stimulus “The dog will drink the … ” (constrained condition) or “Someone will drink the … ” (control condition).


Each array was accompanied by one of two possible sentence fragments presented auditorily. All sentence fragments contained an agent, a future-tense transitive verb, and a determiner; no direct object was named in any of the fragments. In the semantically-rich constrained condition, the agent was a specific actor: for example, “The dog will drink the … .” In the control condition, the agent was always the semantically-empty word someone: for example, “Someone will drink the … .” Each array of images contained a likely object, favored by world knowledge given the combination of agent and verb in the constrained condition (water, given the agent dog and verb drink); a verb-congruent competitor, which met the coarse-grained semantic requirements of the verb but was a highly unlikely direct object in the constrained condition, given its agent (coffee); an agent-related competitor, which was semantically related to the agent in the constrained condition but was not a possible direct object (cat); and an unrelated competitor, which was an impossible direct object given the verb’s semantic requirements and which was unrelated to either the agent or the verb (rock). See the Appendix for a list of all linguistic stimuli.

This paradigm is adapted from that used by Mack et al. (2013; see also Hayes et al., 2016). If older adults’ predictive processing is indeed reduced (DeLong et al., 2012; Federmeier & Kutas, 2005; Wlotko et al., 2010) or their processing generally slowed (Salthouse, 1996), then using tasks in which bottom-up information quickly confirms the identity of the predicted item would likely obscure any evidence of their prediction, as noted in the Introduction above. Because of this, Mack et al. (2013) used a modified version of the visual world paradigm in their comparison of verb-argument prediction in older adults and people with aphasia. They presented participants with sentence fragments with the final word missing and asked them to indicate which of an array of images on a computer screen best completed that fragment. Critically, because the direct object was not mentioned in any of the sentences, bottom-up information did not confirm the direct object’s identity, enabling them to observe any potentially delayed or reduced predictive effects.

Participants completed 64 trials presented in a random order. In addition to the 24 Experiment 1 trials, 20 trials were stimuli for Experiment 2, and the other 20 were fillers. Filler stimuli were constructed in the same format as the experimental stimuli. Some filler stimuli contained multiple images that were appropriate objects of their verbs, and some filler stimuli contained no appropriate direct objects. This variability among the filler trials made the sentence-completion task more challenging and less predictable, thereby helping to ensure that participants remained engaged.

Audio stimuli were recorded by the second author, who self-identifies as a native speaker of English. Verbs in the control condition were slightly longer than verbs in the constrained condition (t(23) = −2.66; p < .05). Mean verb durations for each condition are shown in Table 1. Full stimuli for Experiment 1 can be viewed in the Appendix.

Table 1. Means and SDs for verb duration in Experiments 1 and 2.

We used Latent Semantic Analysis (LSA; Landauer et al., 1998) to examine whether gazes to the verb-congruent competitor may have been motivated by its semantic similarity to the likely object. LSA provides a continuous measure of semantic relatedness, with values ranging from 0 (for completely unrelated terms) to 1 (for strongly related terms). A paired-sample t-test showed higher semantic relatedness between labels for the likely objects and verb-congruent competitors (M = 0.223) than between labels for the verb-congruent competitors and unrelated items (M = 0.06; t(22) = 4.09, p < .05). This comparison shows that the verb-congruent competitor was more strongly related to the likely object than it was to the unrelated competitor. We will return to this point in the Discussion.
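LSA relatedness scores of this kind are cosine similarities between word vectors in a reduced semantic space, and the comparison above is a paired t-test over per-item similarity scores. The following Python sketch illustrates the logic; the word vectors here are randomly generated stand-ins, not values from the LSA space the analysis actually used.

```python
import numpy as np
from scipy.stats import ttest_rel

def cosine_similarity(u, v):
    """LSA-style relatedness: cosine of the angle between two word vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy 50-dimensional vectors for the 23 items (hypothetical values):
# the "verb-congruent" vectors are built to correlate with the "likely" ones.
rng = np.random.default_rng(0)
likely = rng.normal(size=(23, 50))                         # e.g., "water"
congruent = likely + rng.normal(scale=2.0, size=(23, 50))  # e.g., "coffee" (related)
unrelated = rng.normal(size=(23, 50))                      # e.g., "rocks" (unrelated)

sim_likely = [cosine_similarity(a, b) for a, b in zip(likely, congruent)]
sim_unrelated = [cosine_similarity(a, b) for a, b in zip(congruent, unrelated)]

# Paired-sample t-test over items, as in the analysis reported above.
t, p = ttest_rel(sim_likely, sim_unrelated)
print(f"t({len(sim_likely) - 1}) = {t:.2f}, p = {p:.4f}")
```

With 23 item pairs, the degrees of freedom (22) match the reported test; only the input vectors differ from the published analysis.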

Norming

Stimuli were normed to ensure that the images were identifiable and that the verbs were appropriately constraining. Twenty-six undergraduate students from the University of Pittsburgh participated for course credit. Participants saw each grid of images in a PowerPoint slideshow and completed an accompanying paper questionnaire containing the sentence fragments used in the study, with the two versions of each fragment counterbalanced across two lists. Participants ranked the images in the grid in order of how well they completed the accompanying sentence.

Participants ranked the likely object first as a sentence completion in 92% of trials in the constrained condition, reliably more often than any other object was ranked first (t1(25) = 22.37, p < .05; t2(23) = 18.35, p < .05; paired-sample t-tests). Participants also ranked the verb-congruent competitor second as a sentence completion in 70% of trials in the constrained condition, reliably more often than any other object was ranked second (t1(25) = 6.38, p < .05; t2(23) = 3.43, p < .05; paired-sample t-tests). With respect to the object names, the images were described with the likely object name, a synonym, or a member of the same semantic category (e.g., identifying cereal as oatmeal or a jacket as a blazer) on 84% of trials.

Procedure

Participants’ eyes were tracked using an Eyelink 1000 tracker (SR Research Ltd., Toronto, Ontario, Canada) with a 1000 Hz sampling rate (one sample per millisecond) and a spatial resolution of less than 30 min of arc. The experiment was programmed and presented using the Experiment Builder software (SR Research Ltd., Toronto, Ontario, Canada). Participants viewed stimuli binocularly on a monitor approximately 63 cm from their eyes. Head movements were minimized using forehead and chin rests. After explaining the format of the experiment to the participants, we calibrated the eye tracker using a 9-point fixation stimulus; this ensures that the eyes are tracked both precisely and accurately across the entire display screen. In each trial, participants first clicked on a centrally located fixation cross, thereby repositioning eye gaze and mouse location at the center of the screen. The array of images was then presented, followed 500 ms later by the audio stimulus. Participants were instructed to click on the image that best completed the sentence they heard. Table 2 shows the percentage of mouse clicks to each image in each condition. A single-point drift check was performed after every trial to check whether recalibration was necessary, and a full 9-point recalibration was performed halfway through the experiment. The position of the images on the screen was randomized, as was the order in which stimuli were presented. Audio stimuli were presented to participants via two speakers positioned at either side of the viewing monitor. The experiment lasted approximately 20 minutes.

Table 2. Percentage of mouse clicks (Mean; SD) to each image in each condition (Experiment 1).

Results

Eye movements

Gaze data were analyzed during a time window that began 500 ms before verb onset and ended 2000 ms post-verb onset. Eye gaze data were aggregated in nonoverlapping 50 ms time bins (50 samples per bin) using the littlelisteners package (Mahr, ver. 0.0.0.9000). Fixations preceding or following a blink were removed from analysis. Graphs of gazes to each object in each condition for older and young adults are shown in the figures below.
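With one sample per millisecond, aggregating into nonoverlapping 50 ms bins amounts to averaging a per-sample target flag over successive runs of 50 samples. A minimal Python sketch of that aggregation step (the function name and example data are illustrative, not taken from the littlelisteners package):

```python
import numpy as np

def bin_fixations(samples, bin_ms=50):
    """Aggregate 1000 Hz gaze samples (1 = fixating the target image, 0 = not)
    into nonoverlapping bins, returning one fixation proportion per bin."""
    samples = np.asarray(samples, dtype=float)
    n_bins = len(samples) // bin_ms        # drop any incomplete final bin
    trimmed = samples[: n_bins * bin_ms]
    return trimmed.reshape(n_bins, bin_ms).mean(axis=1)

# 2500 ms analysis window: 500 ms before verb onset through 2000 ms after.
gaze = np.zeros(2500)
gaze[700:1800] = 1.0                       # a sustained fixation on one image
props = bin_fixations(gaze)
print(len(props))                          # 50 bins of 50 ms each
```

Fixation proportions per bin, averaged over trials, are the dependent measure the growth curve models below are fit to.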

Data were analyzed using Growth Curve Analysis (GCA; Mirman, 2016) in the R statistical computing package (R Development Core Team, 2020; ver. 3.0.1). P-values were obtained using the lmerTest package (Kuznetsova et al., 2017; ver. 2.0-20). Models were fit using the fullest random effects structure that would allow convergence (Barr et al., 2013). GCA requires the researcher to choose whether to collapse over participants or over items when processing raw eye-tracking data. We chose to collapse over items in the present study to keep participant-level data intact, allowing us to compare between our two groups.

We modeled the overall time course of fixations to the images in our comparisons of interest using a third-order (cubic) orthogonal polynomial. This describes the time course of fixations in three time terms: the overall slope of the curve (linear), the rise and fall of the curve around a central inflection (quadratic), and the shape of the tails (cubic); these components have been demonstrated to be relevant to psycholinguistic research on lexical activation, constraint integration, and prediction (Mirman et al., 2008). Including the cubic term improved model fit and convergence, but this term can be difficult to map onto any particular cognitive process (see discussion in Mirman, 2016). We therefore interpret only the linear and quadratic time terms. We included fixed effects of image, condition, and population, and random slopes of participant and participant-by-image on all time terms.
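The linear, quadratic, and cubic time terms in GCA are orthogonal polynomial codes over the time bins, the analogue of R's poly(time, 3). As an illustrative sketch (not the authors' code), such terms can be constructed in Python by QR-decomposing a raw polynomial basis:

```python
import numpy as np

def orthogonal_poly(time, degree=3):
    """Return orthogonal polynomial time terms (columns: linear, quadratic,
    cubic), analogous to R's poly(time, degree): mutually orthogonal,
    unit-norm columns, each orthogonal to the intercept."""
    time = np.asarray(time, dtype=float)
    # Raw basis: columns are time**0, time**1, ..., time**degree.
    raw = np.vander(time, degree + 1, increasing=True)
    q, _ = np.linalg.qr(raw)               # Gram-Schmidt orthogonalization
    return q[:, 1:]                        # drop the intercept column

bins = np.arange(50)                       # one value per 50 ms time bin
terms = orthogonal_poly(bins)
linear, quadratic, cubic = terms.T
```

Because the columns are mutually orthogonal, the linear, quadratic, and cubic effects in the model can be estimated and interpreted independently of one another, which is what licenses reading each time term as a separate component of the fixation curve.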

We coded three image comparisons of interest: likely object vs. unrelated competitor, verb-congruent competitor vs. unrelated competitor, and agent-related competitor vs. unrelated competitor. In all comparisons, the unrelated competitor was coded as the baseline. We also compared looks in the constrained condition to looks in the control condition, using the control condition as baseline. Finally, we compared looks across populations with young adults as the baseline. All comparisons were coded using centered treatment coding. Although these comparisons were entered into a single model, we present the results for each image comparison individually for clarity. Model code is displayed below:

model <- lmer(fixation_proportion ~ (linear + quadratic + cubic) * image * population * condition + (1 + linear + quadratic + cubic | participant) + (1 + linear + quadratic + cubic | participant:image), control = lmerControl(optimizer = "bobyqa"), data = data, REML = FALSE)

A summary of the fixed and random effects, including contrasts and contrast codes, entered into the model can be seen in Table 3. Full model results can be seen in Table 4 (note that interactions with the cubic time term are presented in the Appendix for brevity).

Table 3. Model summary, Experiment 1.

Table 4. Full model results, Experiment 1.

Comparison 1: likely object (water) vs. unrelated object (rocks)

Figure 2. Likely Object (water) vs. Unrelated Competitor (rocks) looks in the two conditions for young and older adults. Black lines represent looks to the likely object and unrelated competitor in the constrained condition (“The dog will drink the … ”); gray lines represent looks to these objects in the control condition (“Someone will drink the … ”). Zero on the x-axis represents verb onset.


Participants looked more toward the likely object than the unrelated competitor in the constrained condition compared to the control condition (Likely Object vs Unrelated Competitor*Condition interaction: β̂ = .19; SE = .006; t = 32.10; p < .05). Looks to the likely object accrued faster (Linear*Likely Object vs Unrelated Competitor*Condition interaction: β̂ = .24; SE = .05; t = 5.32; p < .05), and showed a more pronounced peak (Quadratic*Likely Object vs Unrelated Competitor*Condition interaction: β̂ = −.47; SE = .05; t = −10.47; p < .05) in the constrained condition than in the control condition. These effects were qualified by a population interaction. The peak of likely object looks in the constrained condition was more pronounced for older adults than for young adults (Quadratic*Likely Object vs Unrelated Competitor *Population*Condition interaction: β̂ = −.43; SE = .09; t = −4.81; p < .05).

Comparison 2: Verb-congruent competitor (coffee) vs. unrelated object (rocks)

Figure 3. Verb-Congruent Competitor (coffee) vs. Unrelated Competitor (rocks) looks in the two conditions for young and older adults. Black lines represent looks to the verb-congruent and unrelated competitors in the constrained condition (“The dog will drink the … ”); gray lines represent looks to these objects in the control condition (“Someone will drink the … ”). Zero on the x-axis represents verb onset.


Participants looked more toward the verb-congruent competitor than the unrelated competitor in the control condition compared to the constrained condition (Verb-Congruent Competitor vs Unrelated Competitor*Condition interaction: β̂ = −.25; SE = .006; t = −41.40; p < .05). Looks to the verb-congruent competitor accrued faster in the control condition compared to the constrained condition (Linear*Verb-Congruent Competitor vs Unrelated Competitor*Condition interaction: β̂ = −.55; SE = .05; t = −11.95; p < .05), and additionally showed a more pronounced peak in the control condition (Quadratic*Verb-Congruent Competitor vs Unrelated Competitor*Condition interaction: β̂ = .21; SE = .05; t = 4.58; p < .05). These effects were qualified by population interactions. The overall advantage for the verb-congruent competitor in the control condition was larger for older adults (Verb-Congruent Competitor vs Unrelated Competitor*Population*Condition interaction: β̂ = −.04; SE = .01; t = −3.45; p < .05). Additionally, looks to the verb-congruent competitor in the control condition accrued faster for older adults (Linear*Verb-Congruent Competitor vs Unrelated Competitor*Population*Condition interaction: β̂ = −.45; SE = .09; t = −4.91; p < .05).

Comparison 3: agent-related competitor (cat) vs. unrelated competitor (rocks)

Figure 4. Agent-Related Competitor (cat) vs. Unrelated Competitor (rocks) looks in the two conditions for young and older adults. Black lines represent looks to the agent-related and unrelated competitors in the constrained condition (“The dog will drink the … ”); gray lines represent looks to these objects in the control condition (“Someone will drink the … ”). Zero on the x-axis represents verb onset.


Participants looked more toward the agent-related competitor than the unrelated competitor in the constrained condition compared to the control condition (Agent-Con. Competitor vs Unrelated Competitor*Condition interaction; β̂ = .06; SE = .006; t = 9.65; p < .05). Looks to the agent-related competitor decreased faster (Linear*Agent-Con. Competitor vs Unrelated Competitor*Condition interaction; β̂ = −.12; SE = .05; t = −2.58; p < .05), and additionally showed a more pronounced trough (Quadratic*Agent-Con. Competitor vs Unrelated Competitor*Condition interaction; β̂ = .44; SE = .05; t = 9.81; p < .05), in the constrained condition than in the control condition. These effects were qualified by significant population interactions. The overall advantage for the agent-related competitor in the constrained condition was larger for older adults (Agent-Con. Competitor vs Unrelated Competitor*Condition*Population interaction; β̂ = .04; SE = .01; t = 3.32; p < .05). Looks to the agent-related competitor in the constrained condition also decreased faster (Linear*Agent-Con. Competitor vs Unrelated Competitor*Population*Condition interaction; β̂ = .98; SE = .09; t = 10.69; p < .05) and showed a more pronounced trough (Quadratic*Agent-Con. Competitor vs Unrelated Competitor*Population*Condition interaction; β̂ = .64; SE = .09; t = 7.10; p < .05) for older adults.

Discussion

Experiment 1 tested verb-argument prediction based on semantically rich context, specifically the combination of the agent and the verb. There were two primary findings. First, both young and older adults rapidly predicted highly likely direct objects based on agent+verb constraints, indexed by proportionally more time spent fixating the likely object in the constrained condition (water, given The dog will drink the …) compared to the control condition (Someone will drink the …). Older adults thus successfully used their world knowledge regarding likely events (McRae & Matsuki, 2009), activated by the combination of agent and verb, to drive rapid prediction of upcoming objects. There was no evidence that older adults showed less of an advantage or a slower accrual of looks to the likely object in the constrained condition than did young adults. Instead, older adults showed a more pronounced peak in their likely-object looks than young adults. This latter finding suggests that older adults quickly identified the likely object based on agent+verb constraints and subsequently shifted their visual attention away, in contrast to young adults, who rapidly identified the likely object and continued fixating it for the duration of the trial. This pattern of findings is consistent with evidence that older adults successfully exploit rich semantic context (Pichora-Fuller, 2008) and their crystallized world knowledge (Stine-Morrow et al., 2006) during language comprehension, in this case to predict and activate verb arguments. It is also consistent with Baltaretu and Chambers' (2018) visual-world findings showing that older adults anticipated likely upcoming objects as quickly and robustly as young adults. However, these findings do not appear consistent with accounts positing a language-specific decline in predictive processing in healthy cognitive aging (DeLong et al., 2012; Wlotko et al., 2010).

The second primary finding in Experiment 1 was that older adults showed increased activation of competitors that were related to the sentential agent but were incompatible with the coarse-grained semantic requirements imposed by the verb (cat, given The dog will drink the …) compared to young adults. This was again indexed by their proportionally longer time spent fixating the agent-related image, as well as the more pronounced peak and following trough in agent-related gazes. This latter finding suggests that older adults more rapidly activated this agent-related competitor but then rapidly shifted their visual attention to the predicted likely theme, returning to the agent-related competitor near the end of the trial. This pattern of results is broadly consistent with evidence that older adults experience greater activation of task-irrelevant information than young adults during language comprehension tasks (Christianson et al., 2006; Connelly et al., 1991; Hartman & Hasher, 1991). This activation may lead to elevated proactive interference in older adults (Archambeau et al., 2020; R. D. Rogers & Monsell, 1995), even when their comprehension is ultimately successful, as demonstrated by the mouse-click data in both experiments.

However, we did not find strong evidence of increased activation of verb-congruent but highly unlikely direct objects (coffee, given The dog will drink the …) for either the older or the young adults. This finding is unexpected given previous visual-world results showing that comprehenders activate verb-congruent but irrelevant referents (Borovsky et al., 2012; Kamide et al., 2003; Kukona et al., 2011). One possible explanation for this unexpected finding is that looks to other referents in the visual-world display, favored by stronger constraints than the coarse-grained semantic constraints imposed by the verb, drove participants' looking behavior and obscured evidence of activation of the verb-congruent competitor. Visual inspection of the data in Figure 4 suggests that the agent-related competitor drew significant visual attention, particularly for older adults, possibly at the cost of looks to the verb-congruent competitor. Consistent with this speculation, there was a relatively strong semantic relationship between the word describing the agent (dog) and the agent-related image (kitten), as revealed by LSA estimates: the mean semantic distance between these two was 0.258, somewhat higher than the semantic distance between the words corresponding to each likely object and its associated verb-congruent competitor (M = 0.223).

Although the older adults' looks to the verb-congruent competitor in the constrained condition were less prominent than expected, their looks to this referent in the control condition revealed an interesting pattern. Older adults looked overall more at the verb-congruent object in the control condition compared to young adults, and their looks to this object likewise accrued faster. Given that the likely object and the verb-congruent object are roughly equally predictable in the control condition (as revealed both by pre-experiment norming and by mouse responses), this gaze pattern may suggest that older adults generate robust predictions of likely objects whether those predictions are based on a semantically-rich agent+verb combination (as in the constrained condition, wherein only one object is favored by agent+verb constraints) or on a verb alone (as in the control condition). This latter finding is somewhat unexpected if rich semantic context and the world knowledge that it activates play an especially critical role in older adults' language comprehension performance (Payne & Silcox, 2019; Pichora-Fuller, 2008; Stine-Morrow et al., 2006; Wingfield & Stine-Morrow, 2000).

However, the verb-congruent competitor was more strongly related to the likely target than to the unrelated competitor, as revealed by the LSA analyses reported above. This relatively strong associative relationship complicates the interpretation of looks to the verb-congruent competitor. Looks to this object could reflect prediction or activation of this referent based on coarse-grained verb constraints, but they may equally reflect activation stemming from low-level associative relationships between the strongly predicted likely object and the verb-congruent competitor. Notably, this relationship is a systematic confound in some previous visual-world findings demonstrating automatic activation of verb-congruent referents. For example, in Kukona et al. (2011), the verb-congruent competitor cop is likely as strongly associated with thief as it is with the verb arrested. In Experiment 2, the semantic relationships between the likely object and the verb-congruent and unrelated competitors were more tightly controlled.

Experiment 2

Whereas Experiment 1 looked for prediction based on semantically rich context, including detailed world knowledge activated by the agent and verb combination (i.e., dog + drink predicts water), Experiment 2 looked for evidence of prediction based on the verb alone (i.e., drink predicts any liquid patient; Altmann & Kamide, 1999). These verb-driven contexts are closer to the weakly-constraining contexts that Wlotko and Federmeier (2012) and others have shown are especially likely to reveal age-related differences in predictive behavior. Looks to the verb-congruent competitor in the control condition of Experiment 1 suggested that older adults may make robust predictions based on verbs alone, but Experiment 2 provided a cleaner test by including two verb-congruent objects in the array that varied in their plausibility as appropriate direct objects. This design allows us both to examine argument prediction based solely on verbs and to more cleanly determine how interference from possible but unlikely alternatives drives predictive looks.

Methods

Participants

Because Experiment 2 was run concurrently with Experiment 1, the participants were the same as in Experiment 1.

Materials

Visual stimuli consisted of 20 arrays of four images arranged in a grid (Figure 5). For example, one prototypical array consisted of a cake, a branch, a pail, and a minivan.

Figure 5. Sample visual display, Experiment 2 showing the likely object (cake), verb-congruent competitor (branch), and the two unrelated competitors (pail and minivan). This visual display was accompanied by the auditory stimuli “Someone will eat the … ” (constrained condition) or “Someone will move the … ” (control condition).


Each array was accompanied by one of two possible sentence fragments presented auditorily. All sentence fragments contained a semantically-empty subject (“someone”), a future-tense transitive verb, and a determiner; no direct object was named in any of the fragments (cf. Mack et al., 2013). The fragments contained either a constrained verb, which placed strong semantic constraints on its object (e.g., “Someone will eat the … ”), or a control verb, which placed few semantic constraints on potential direct objects (e.g., “Someone will move the … ”).

The objects varied in how compatible they were with the constrained verb (“eat”). Two objects in the array – the cake and the branch in Figure 5 – satisfied the coarse-grained semantic constraints imposed by the verb on potential direct objects. However, they varied in their likelihood given world knowledge. The cake was a highly likely direct object, while the branch was a verb-congruent but highly unlikely object (branches can be eaten – by giraffes, for example). The remaining two objects in the array (pail and minivan) were unrelated competitors. All objects in the array were compatible with the control verb (“move”). See the Appendix for a list of all linguistic stimuli.

Stimuli were recorded by the same native speaker of English as in Experiment 1. Verb durations did not differ between the constrained and control conditions (t(19) = −.598; p = .56). Mean verb durations for each condition, along with the full stimuli for Experiment 2, are provided in the Appendix.

Norming

The same norming procedure was used as in Experiment 1. Participants ranked the likely object first as a sentence completion on 91% of trials in the constrained condition, reliably more often than any other object (t1(25) = 35.67, p < 0.05; t2(19) = 11.97, p < 0.05; paired-sample t-tests). This indicates that comprehenders had a strong preference for the likely object as the best completion in the constrained condition. Participants also ranked the verb-congruent object second as a sentence completion on 75% of trials in the constrained condition, reliably more often than any other object was ranked second (t1(25) = 11.18, p < 0.05; t2(19) = 4.22, p < 0.05; paired-sample t-tests). This indicates that comprehenders considered the verb-congruent object a reasonable sentence completion in the constrained condition, though not as good a completion as the likely object. With respect to the object names, the images were described with the target name, a synonym, or a member of the same semantic category on 89% of trials. This indicates that participants were able to identify the images as the intended objects.
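The by-participant (t1) and by-item (t2) tests reported above can be sketched as paired-sample t-tests over per-participant and per-item ranking rates. The data below are simulated stand-ins for the actual norming responses, with sample sizes chosen to match the reported degrees of freedom:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated proportion of trials on which the likely object (vs. the best
# competitor) was ranked first, per participant; hypothetical values
likely_by_subj = np.clip(rng.normal(0.91, 0.05, 26), 0, 1)   # 26 participants
other_by_subj = np.clip(rng.normal(0.05, 0.03, 26), 0, 1)

# t1: by-participant paired-sample t-test (df = 25)
t1, p1 = stats.ttest_rel(likely_by_subj, other_by_subj)

likely_by_item = np.clip(rng.normal(0.91, 0.08, 20), 0, 1)   # 20 items
other_by_item = np.clip(rng.normal(0.05, 0.05, 20), 0, 1)

# t2: by-item paired-sample t-test (df = 19)
t2, p2 = stats.ttest_rel(likely_by_item, other_by_item)
```

Reporting both t1 and t2 guards against effects that are driven by a handful of participants or a handful of items, respectively.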

We used Latent Semantic Analysis (LSA; Landauer, Foltz, & Laham, 1998) to measure the semantic relationship between the likely object and the verb-congruent competitor. Using LSA’s pairwise comparison function, we calculated the semantic distance between each likely object word and its paired verb-congruent competitor, and between each verb-congruent competitor and its paired unrelated competitors. A paired-sample t-test found no significant difference between the mean distance separating the likely object and verb-congruent competitor (M = 0.119) and the mean distance separating the verb-congruent and unrelated competitors (M = 0.105; t(19) = 0.36, p > 0.7). This comparison shows that the verb-congruent competitor was no more strongly related to the likely object than it was to the unrelated competitors. Any differences in gaze preferences between likely, verb-congruent, and unrelated images reported below are therefore unlikely to be driven by semantic-associative relationships among these items.
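This kind of LSA pairwise comparison amounts to cosine similarity over word vectors, followed by a paired t-test over items. The sketch below uses random vectors as stand-ins for actual LSA vectors, so only the procedure (not the numbers) mirrors the analysis:

```python
import numpy as np
from scipy import stats

def cosine(u, v):
    """Cosine similarity, the measure underlying LSA pairwise comparisons."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(1)
# Random 300-dimensional stand-ins for LSA vectors of the 20 item triples
likely = rng.normal(size=(20, 300))
congruent = rng.normal(size=(20, 300))
unrelated = rng.normal(size=(20, 300))

likely_vs_congruent = [cosine(a, b) for a, b in zip(likely, congruent)]
congruent_vs_unrelated = [cosine(a, b) for a, b in zip(congruent, unrelated)]

# Paired-sample t-test over the 20 item pairs (df = 19)
t, p = stats.ttest_rel(likely_vs_congruent, congruent_vs_unrelated)
```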

Procedure

Because Experiment 2 was administered concurrently with Experiment 1, the same procedure was used. Table 5 shows the percentage of mouse clicks to each image in each condition.

Table 5. Percentage of mouse clicks (Mean; SD) to each image in each condition (Experiment 2).

Results

Gaze data were analyzed during a time window that began 500 ms before verb onset and ended 2000 ms post-verb onset. Eye gaze data were aggregated and models were constructed using the same procedure as in Experiment 1. Graphs of gazes to each object in each condition for older and young adults can be seen in Figures 6 and 7.

We manually averaged looks to the two unrelated competitors in each 50 ms bin because we did not expect differences in looks between them. We coded two image contrasts using looks to the averaged unrelated competitors as a baseline. The first compared gazes to the likely direct object to the average of gazes to the two unrelated competitors. The second compared looks to the verb-congruent direct object to the average of the two unrelated competitors. We compared looks in the constrained condition to looks in the control condition with the control condition as baseline. Finally, we compared looks across populations with young adults as the baseline. All comparisons were coded using centered treatment coding. Although these comparisons were entered into a single model, we present the results for each object comparison individually for clarity. Model code is displayed below:
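The averaging of the two unrelated competitors within each 50 ms bin can be sketched as a groupby-and-average over long-format gaze data. This is an illustrative reconstruction with hypothetical column and image labels, not the authors' preprocessing code:

```python
import pandas as pd

# Hypothetical long-format gaze data: one row per participant x bin x image
gaze = pd.DataFrame({
    "participant": [1] * 8,
    "bin": [0, 0, 0, 0, 50, 50, 50, 50],
    "image": ["likely", "congruent", "unrel_a", "unrel_b"] * 2,
    "fixation_proportion": [0.4, 0.2, 0.1, 0.3, 0.5, 0.2, 0.15, 0.15],
})

is_unrel = gaze["image"].isin(["unrel_a", "unrel_b"])

# Average the two unrelated competitors within each participant x bin cell
unrel_avg = (gaze[is_unrel]
             .groupby(["participant", "bin"], as_index=False)["fixation_proportion"]
             .mean()
             .assign(image="unrelated_avg"))

# Replace the two unrelated rows with their per-bin average
gaze = pd.concat([gaze[~is_unrel], unrel_avg], ignore_index=True)
```

Collapsing the two unrelated images into one baseline series keeps the image factor at three levels, so each contrast compares one critical image against a single unrelated baseline.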

model <- lmer(fixation_proportion ~ (linear + quadratic + cubic) * image * population * condition + (1 + linear + quadratic + cubic | participant) + (1 + linear + quadratic + cubic | participant:image), control = lmerControl(optimizer = "bobyqa"), data = data, REML = FALSE)

A summary of the fixed and random effects, including contrasts and contrast codes, entered into the model can be seen in Table 6. Full model results can be seen in Table 7 (note that interactions with the cubic time term are presented in the Appendix for brevity).

Table 6. Model summary, Experiment 2.

Table 7. Full model results, Experiment 2.

Comparison 1: likely object (cake) vs. unrelated competitor average (minivan/pail)

Figure 6. Likely Object (cake) vs. Unrelated Competitor Average (minivan/pail) looks in the two conditions for young and older adults. Black lines represent looks to the likely object and unrelated competitor in the constrained condition (“Someone will eat the … ”); gray lines represent looks to these objects in the control condition (“Someone will move the … ”). Zero on the x-axis represents verb onset.


Across the analysis window, participants looked more toward the likely object than the average of the unrelated competitors; this effect was stronger in the constrained condition than the control condition (Likely Object vs Unrelated Competitor Average*Condition interaction: β̂ = .22; SE = .005; t = 43.85; p < .05). Looks to the likely object additionally accrued faster (Linear*Likely Object vs Unrelated Competitor Average*Condition interaction: β̂ = .44; SE = .04; t = 12.01; p < .05) and showed a more pronounced peak (Quadratic*Likely Object vs Unrelated Competitor Average*Condition interaction: β̂ = −.51; SE = .04; t = −14.07; p < .05) than looks to the unrelated competitors in the constrained condition. These effects were qualified by significant population interactions: young adults looked at the likely object quickly and continued looking for the duration of the trial, resulting in a stronger slope advantage than for older adults (β̂ = −.62; SE = .07; t = −8.52; p < .05). Likewise, the looking curve for likely-object looks in young adults was a clean parabola with a single peak, in contrast to the “ups and downs” seen for older adults; this resulted in a more pronounced quadratic effect for young adults (β̂ = .49; SE = .07; t = 6.75; p < .05).

Comparison 2: verb-congruent object (branch) vs. unrelated competitor average (minivan/pail)

Figure 7. Verb-Congruent Competitor (branch) vs. Unrelated Competitor (minivan/pail) looks in the two conditions for young and older adults. Black lines represent looks to the verb-congruent and unrelated competitors in the constrained condition (“Someone will eat the … ”); gray lines represent looks to these objects in the control condition (“Someone will move the … ”). Zero on the x-axis represents verb onset.


Participants looked more toward the verb-congruent object than the average of the unrelated competitors in the constrained condition; this pattern was reversed in the control condition (Verb-Congruent Object vs Unrelated Competitor Average*Condition interaction: β̂ = −.07; SE = .005; t = −13.37; p < .05). This reversal is not surprising, given that the unrelated objects (pail and minivan) are also verb-congruent in the control condition. Importantly, this effect was qualified by significant population interactions. The slope advantage (Linear*Verb-Congruent Object vs Unrelated Competitor Average*Population*Condition interaction: β̂ = .69; SE = .07; t = 9.59; p < .05) and peak (Quadratic*Verb-Congruent Object vs Unrelated Competitor Average*Population*Condition interaction: β̂ = −.34; SE = .07; t = −4.76; p < .05) for the verb-congruent object in the constrained condition were larger for older adults than for young adults.

Discussion

Experiment 2 examined verb-argument prediction based solely on a constraining verb. There were two primary findings. First, compared to young adults, older adults showed less clear evidence of predicting highly likely direct objects based only on verb constraints. Across the full analysis window (500 ms before verb onset to 2000 ms post-verb onset), older adults exhibited a shallower slope and less-pronounced peak of looks to the likely object (cake) vs. the unrelated competitors (pail/minivan) in the constrained condition (“Someone will eat the … ”) than did young adults. This pattern suggests that older adults may show weaker prediction of likely objects when the verb alone licenses such predictions, in contrast to when predictions are based on semantically-rich verb+agent contexts as in the constrained condition in Experiment 1 (“The dog will drink the … ”). This finding may be at odds with findings in the control condition in Experiment 1 (“Someone will drink the … ”). There, older adults looked more often and more quickly at the verb-congruent object (coffee) than did young adults, suggesting that older adults show relatively robust prediction of direct objects based on verb information alone.

An explanation of this pattern may lie with the second primary finding of Experiment 2: older adults directed more visual attention to the possible but unlikely verb-congruent competitor (branch) than did young adults, which in turn reduced the proportion of time older adults spent fixating the likely object. Older adults had faster-accruing and more sharply-peaked looks to the verb-congruent competitor in the constrained condition than did young adults, primarily in the latter part of the analysis window (see Figure 7). This pattern is consistent with previous visual-world findings showing that listeners automatically activate referents compatible with a verb’s coarse-grained constraints following early-emerging looks at the predicted likely object (Kamide et al., 2003; Kukona et al., 2011). It also suggests that older adults activated the irrelevant verb-congruent competitor more strongly than did young adults. This is consistent with evidence that older adults experience greater activation of task-irrelevant information during language comprehension (Christianson et al., 2006; Connelly et al., 1991; Hartman & Hasher, 1991), which may lead to greater proactive interference (Archambeau et al., 2020).

This pattern also helps explain the older adults’ more variable evidence of likely-object predictions in the GCA analyses reported above. In both experiments, older adults identified the likely object early in the analysis window and subsequently shifted their visual attention away – to the agent-related competitor in Experiment 1 and the verb-congruent competitor in Experiment 2. In contrast, young adults rapidly identified the likely object and continued fixating it for the duration of the trial. The shallower slope and less-pronounced peak in likely-object looks seen for older adults compared to young adults likely reflect greater sustained activation of, and therefore increased visual attention to, the verb-congruent competitor (Biss et al., 2013; Connelly et al., 1991; Hartman & Hasher, 1991).

The results of Experiment 2 are again broadly consistent with findings that older adults experience sustained activation of task-irrelevant information (Biss et al., 2013; Hartman & Hasher, 1991; Hasher et al., 1991; Hasher & Zacks, 1988). Experiment 2’s findings may also provide evidence that older adults show less robust verb-argument prediction than young adults when predictions are based on the verb only, a less semantically-rich context than the agent+verb contexts in Experiment 1. However, these findings may also reflect older adults’ stronger activation of the verb-congruent competitor, which drew their visual attention from the likely object and reduced their evidence of prediction in the reported GCA findings. We will return to these findings in the General Discussion below.

Of note, the patterns of fixations in this experiment are unlikely to be driven by semantic-associative relationships between the elements in the display, unlike in Experiment 1. The LSA analyses reported above demonstrated that the likely object, the verb-congruent competitor, and the unrelated competitors did not differ in their semantic similarity to one another. This provides additional confidence that older adults’ fixations to the verb-congruent competitor in Experiment 2 reflect automatic activation of this referent based on the verb’s coarse-grained semantic constraints, as found in previous visual-world studies (Borovsky et al., 2012; Kamide et al., 2003; Kukona et al., 2011).

General discussion

This study used verb-argument prediction to examine how older adults take advantage of crystallized world knowledge and rich semantic context during language comprehension. A variant of the visual-world paradigm successfully used with older adults and people with aphasia (Hayes et al., 2016; Mack et al., 2013) was used to reveal two distinct processes associated with verb-argument prediction: rapid prediction of upcoming verb arguments such as themes (objects) based on detailed world knowledge regarding likely events (McRae & Matsuki, 2009), and activation of verb-congruent referents based on coarse-grained verb constraints (Kukona et al., 2011).

The current findings provide clear evidence that older adults are able to exploit world knowledge to activate upcoming verb arguments, particularly when that world knowledge is cued by semantically-rich agent+verb combinations like those used in other visual-world studies of verb-argument prediction (e.g., Borovsky et al., 2012; Kamide et al., 2003). In Experiment 1, older adults did not differ from young adults in their fixations to an (unmentioned) highly likely object that was strongly predictable based on the combination of an agent and verb. Older adults’ looks to this object accrued as quickly as young adults’ did and peaked more steeply, suggesting that older adults robustly predicted the upcoming object based on world knowledge activated by the agent+verb combination (Kamide et al., 2003; Milburn et al., 2016). This is similar to visual-world findings reported by Baltaretu and Chambers (2018), who also found that older adults’ predictive fixations to an upcoming theme accrued as quickly as young adults’ did. It is also consistent with evidence that older adults can strategically exploit context during language comprehension (Pichora-Fuller, 2008; Stine-Morrow et al., 2006), achieving similar levels of performance to younger adults when rich semantic context is present (Lash et al., 2013; Payne & Silcox, 2019). In Experiment 2, older adults showed weaker and more variable evidence of predicting an upcoming object based on a verb alone: their looks accrued more slowly and peaked less steeply than did young adults’. This pattern may reflect sustained activation of the verb-congruent competitor, driving older adults’ visual attention away from the likely object and depressing the slope and peak of their likely-object curves.
However, it is also possible that the differences between older and young adults’ likely-object fixation patterns in Experiment 2 indicate that older adults perform less well when prediction is based solely on a verb. This latter interpretation would be more consistent with the body of evidence suggesting that older adults critically depend on rich context for successful language comprehension (Payne & Silcox, 2019; Pichora-Fuller, 2008; Stine-Morrow et al., 2006; Wingfield & Stine-Morrow, 2000).

This relatively preserved prediction of highly likely themes among older adults (at least when based on an agent+verb combination) is consistent with evidence showing that world knowledge regarding likely events is a strong driver of verb-argument prediction (Kuperberg, Citation2013; McRae & Matsuki, Citation2009; Milburn et al., Citation2016). Such crystallized knowledge is preserved in aging (Horn & Cattell, Citation1967) and is strategically exploited by older adults during language comprehension, as noted above. It is also consistent with findings showing that prediction of highly-likely location arguments based on such world knowledge is relatively robust in older adults (Hayes et al., Citation2016). However, it is less consistent with claims that prediction is systematically reduced in older adults (DeLong et al., Citation2012; Federmeier et al., Citation2010; Wlotko & Federmeier, Citation2012; Wlotko et al., Citation2010). One possible explanation for this discrepancy is related to the age of the participants in the current older-adults sample: the mean age is 61.1, which is younger than is typical of the older-adult samples in the studies that have reported reduced prediction (DeLong et al., Citation2012; Federmeier et al., Citation2010; Wlotko & Federmeier, Citation2012; Wlotko et al., Citation2010). This is a limitation of the current study. Another potentially relevant factor is the experimental tasks used across these studies. Whereas most of the evidence suggesting that prediction is reduced in older adults comes from ERPs (DeLong et al., Citation2012; Federmeier et al., Citation2010; Wlotko & Federmeier, Citation2012; Wlotko et al., Citation2010), both the current findings and Baltaretu and Chambers’ (Citation2018) previous visual-world findings suggest that predictive processing is relatively preserved in aging.
Of note, the modified visual-world paradigm used here (also used by Hayes et al., Citation2016) may also have encouraged predictive behavior: participants were explicitly told to click on the image that best completed the sentence fragment.

Interestingly, ERP paradigm variants that explicitly ask participants to predict upcoming words also find evidence of preserved prediction in older adults. Dave et al. (Citation2018) examined predictive processing using an ERP paradigm in which older and younger adults were asked to predict the final word of a moderately- or weakly-constraining two-sentence discourse, comparing trials in which participants correctly predicted the word to trials in which they did not. They found little or no evidence that older and young adults differed in prediction accuracy, or in their ERP responses on trials where the final word was correctly predicted. Together with the current findings, these results suggest that prediction may not be dramatically reduced in aging, at least not when task demands encourage strategic use of context (Payne & Silcox, Citation2019; Pichora-Fuller, Citation2008) and when those predictions are based on knowledge that is preserved in aging (Horn & Cattell, Citation1967; Milburn et al., Citation2016).

The current findings also provide clear evidence that older adults exhibited greater sustained activation of irrelevant linguistic information during verb-argument processing than did young adults. In Experiment 1, older adults showed greater activation of the agent-related competitor than did young adults in the constrained condition, both immediately after the agent was mentioned and at the end of the trial. In Experiment 2, older adults showed greater activation of the verb-congruent competitor in constrained conditions than did young adults, again toward the end of the trial. For verb-congruent competitors in Experiment 2, this finding is consistent with previous evidence suggesting that comprehenders automatically activate verb-congruent referents following early predictive gazes to highly likely referents favored by world knowledge (Kukona et al., Citation2011). This pattern of sustained activation of these irrelevant referents is consistent with work by Hasher and colleagues, who argue that age-related declines in attentional control and inhibitory function may negatively impact older adults’ language comprehension (Biss et al., Citation2013; Hasher et al., Citation2007, Citation1991). Activation of contextually-irrelevant information – such as agent-related referents in Experiment 1 and verb-congruent referents in Experiment 2 – can lead to proactive interference, negatively impacting both encoding of new material and retrieval of previously-encoded information (Archambeau et al., Citation2020; R. D. Rogers & Monsell, Citation1995). 
There is significant evidence that older adults are more likely than younger adults to experience such proactive interference (Archambeau et al., Citation2020; Biss et al., Citation2013; Campbell et al., Citation2010). It is possible that an interaction between age-associated increases in sustained activation during language comprehension (Christianson et al., Citation2006; Connelly et al., Citation1991; Payne et al., Citation2014) and decreases in inhibition (Biss et al., Citation2013; Hasher et al., Citation2007, Citation1991) may explain this increased incidence of proactive interference. Note, however, that because we did not collect individual-difference measures of working memory or processing speed, we cannot conclude definitively that the pattern of results seen in older adults in the present study is due to inhibitory changes associated with age.

It is unclear why older adults showed greater interference from the verb-congruent competitor in Experiment 2 but not Experiment 1. One possible explanation is that the agent-related competitor in Experiment 1 drew fixations that might otherwise have gone to the verb-congruent competitor, particularly for the older adults. That is, perhaps the lack of an agent-related competitor in Experiment 2 permitted competition from the verb-congruent competitor to emerge more clearly. The relatively strong semantic-associative relationship between the agent and the agent-related competitor image is consistent with this possibility.

A final question to be answered regarding the Experiment 1 findings is why the greater competition from the agent-related competitor seen for the older adults appeared so late in the trial. The related agent occurred at the very start of the trial, prior to verb onset, so these fixations are unlikely to reflect residual activation of that lexical item. There were also almost no mouse responses in which older adults chose the agent-related competitor as the correct completion of the sentence (<1% of trials), meaning that these fixations likely do not reflect a mouse-click response being prepared. One possible explanation of this somewhat puzzling finding is that the agent-related competitor was activated based on event-related world knowledge. Although the agent-related competitors in the present study represented a wide variety of target-competitor relationships (agent-location, agent-tool, and agent-characteristic, to name a few), previous studies of event-based priming have found complex interconnected networks of activation between event participants, suggesting that event knowledge was likely recruited in the present study to drive looks to the agent-related competitor (McRae & Matsuki, Citation2009). Future research is required to test this possibility, perhaps by examining populations (such as people with aphasia) who have marked deficits in activation of such event-related knowledge (Dresang et al., Citation2019).

Conclusions

The current findings strengthen previous evidence that older adults successfully exploit context and world knowledge during language comprehension (Payne & Silcox, Citation2019; Pichora-Fuller, Citation2008; Stine-Morrow et al., Citation2006; Wingfield & Stine-Morrow, Citation2000). We also provide new evidence consistent with claims that older adults may experience greater sustained activation of information even when it is not relevant to the task at hand (Biss et al., Citation2013; Campbell et al., Citation2010; Connelly et al., Citation1991; Lustig et al., Citation2006; Weeks et al., Citation2016), which may in turn engender greater proactive interference (Archambeau et al., Citation2020). Using visual-world methods to examine a key psycholinguistic phenomenon, verb-argument prediction, has thus revealed novel evidence of older adults’ skill in using their crystallized knowledge to succeed in moment-by-moment language comprehension.

Acknowledgments

This research was supported by the National Institutes of Health through grant number R01DC011520 to the second and third authors and by grant number UL1TR000005 to the Clinical and Translational Science Institute of the University of Pittsburgh. It is the result of work supported with resources and the use of facilities at the VA Pittsburgh Healthcare System. Thanks are due to Nolan Dickey for help with stimulus creation, to Claire Kirby, Mary Mitkish, Zac Ekves, Joni Keating, Hannah Rosenberg, and Lidia Zacharczuk for help running participants, and to audiences at the 2014 Psychonomic Society meeting for comments and suggestions.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was supported by the National Institutes of Health (NIH) [R01DC011520,UL1TR000005].

Notes

1. One item in this experiment included a term that we now realize is an outdated, insensitive, and colonial label. We have kept the observations from this item in the dataset but removed it from the list of stimuli, in accordance with the Inuit Circumpolar Council Resolution 2010–01. We apologize for this grave error.

References

  • Altmann, G., & Kamide, Y. (1999). Incremental interpretation at verbs: Restricting the domain of subsequent reference. Cognition, 73(3), 247–264. https://doi.org/10.1016/S0010-0277(99)00059-1
  • Archambeau, K., Forstmann, B., Van Maanen, L., & Gevers, W. (2020). Proactive interference in aging: A model-based study. Psychonomic Bulletin & Review, 27(1), 130–138. https://doi.org/10.3758/s13423-019-01671-0
  • Baltaretu, A. A., & Chambers, C. G. (2018). When criminals blow up … balloons: Associative and combinatorial information in older and younger listeners’ generation of on-line predictions. Proceedings of the 2018 Cognitive Science Society. Madison, Wisconsin.
  • Barr, D. J., Levy, R., Scheepers, C., & Tily, H. J. (2013). Random effects structure for confirmatory hypothesis testing: Keep it maximal. Journal of Memory and Language, 68(3), 255–278.
  • Biss, R. K., Campbell, K. L., & Hasher, L. (2013). Interference from previous distraction disrupts older adults’ memory. The Journals of Gerontology: Series B, 68(4), 558–561. https://doi.org/10.1093/geronb/gbs074
  • Bleecker, M. L., Bolla-Wilson, K., Kawas, C., & Agnew, J. (1988). Age-specific norms for the Mini-Mental State Exam. Neurology, 38(10), 1565.
  • Boland, J. E. (2005). Visual arguments. Cognition, 95(3), 237–274. https://doi.org/10.1016/j.cognition.2004.01.008
  • Borovsky, A., Elman, J. L., & Fernald, A. (2012). Knowing a lot for one’s age: Vocabulary skill and not age is associated with anticipatory incremental sentence interpretation in children and adults. Journal of Experimental Child Psychology, 112(4), 417–436. https://doi.org/10.1016/j.jecp.2012.01.005
  • Campbell, K. L., Hasher, L., & Thomas, R. C. (2010). Hyper-binding: A unique age effect. Psychological Science, 21(3), 399–405. https://doi.org/10.1177/0956797609359910
  • Caplan, D., & Waters, G. (2005). The relationship between age, processing speed, working memory capacity, and language comprehension. Memory, 13(3–4), 403–413. https://doi.org/10.1080/09658210344000459
  • Carpenter, P. A., Miyake, A., & Just, M. A. (1994). Working memory constraints in comprehension: Evidence from individual differences, aphasia, and aging. Handbook of Psycholinguistics, 1075–1122.
  • Choi, W., Lowder, M. W., Ferreira, F., Swaab, T. Y., & Henderson, J. M. (2017). Effects of word predictability and preview lexicality on eye movements during reading: A comparison between young and older adults. Psychology and Aging, 32(3), 232.
  • Christianson, K., Williams, C. C., Zacks, R. T., & Ferreira, F. (2006). Younger and older adults’ “good-enough” interpretations of garden-path sentences. Discourse Processes, 42(2), 205–238. https://doi.org/10.1207/s15326950dp4202_6
  • Cohen, G. (1979). Language comprehension in old age. Cognitive Psychology, 11(4), 412–429. https://doi.org/10.1016/0010-0285(79)90019-7
  • Connelly, S. L., Hasher, L., & Zacks, R. T. (1991). Age and reading: The impact of distraction. Psychology and Aging, 6(4), 533. https://doi.org/10.1037/0882-7974.6.4.533
  • Dave, S., Brothers, T. A., Traxler, M. J., Ferreira, F., Henderson, J. M., & Swaab, T. Y. (2018). Electrophysiological evidence for preserved primacy of lexical prediction in aging. Neuropsychologia, 117, 135–147.
  • DeDe, G., Caplan, D., Kemtes, K., & Waters, G. (2004). The relationship between age, verbal working memory, and language comprehension. Psychology and Aging, 19(4), 601–616. https://doi.org/10.1037/0882-7974.19.4.601
  • DeLong, K. A., Groppe, D. M., Urbach, T. P., & Kutas, M. (2012). Thinking ahead or not? Natural aging and anticipation during reading. Brain and Language, 121(3), 226–239. https://doi.org/10.1016/j.bandl.2012.02.006
  • Dresang, H. C., Dickey, M. W., & Warren, T. C. (2019). Semantic memory for objects, actions, and events: A novel test of event-related conceptual semantic knowledge. Cognitive Neuropsychology, 36(7-8), 313–335. https://doi.org/10.1080/02643294.2019.1656604
  • Federmeier, K. D., & Kutas, M. (1999). A rose by any other name: Long-term memory structure and sentence processing. Journal of Memory and Language, 41(4), 469–495. https://doi.org/10.1006/jmla.1999.2660
  • Federmeier, K. D., & Kutas, M. (2005). Aging in context: Age-related changes in context use during language comprehension. Psychophysiology, 42(2), 133–141. https://doi.org/10.1111/j.1469-8986.2005.00274.x
  • Federmeier, K. D., Kutas, M., & Schul, R. (2010). Age-related and individual differences in the use of prediction during language comprehension. Brain and Language, 115(3), 149–161. https://doi.org/10.1016/j.bandl.2010.07.006
  • Folstein, M. F., Folstein, S. E., & McHugh, P. R. (1975). “Mini-mental state”: A practical method for grading the cognitive state of patients for the clinician. Journal of Psychiatric Research, 12(3), 189–198.
  • Gibson, E., & Warren, T. (2004). Reading-time evidence for intermediate linguistic structure in long-distance dependencies. Syntax, 7(1), 55–78. https://doi.org/10.1111/j.1368-0005.2004.00065.x
  • Hartman, M., & Hasher, L. (1991). Aging and suppression: Memory for previously relevant information. Psychology and Aging, 6(4), 587–594. https://doi.org/10.1037/0882-7974.6.4.587
  • Hasher, L., Lustig, C., & Zacks, R. (2007). Inhibitory mechanisms and the control of attention. In A. R. A. Conway, C. Jarrold, M. J. Kane, A. Miyake, & J. N. Towse (Eds.), Variation in working memory (pp. 227–249). Oxford University Press.
  • Hasher, L., Stoltzfus, E. R., Zacks, R. T., & Rypma, B. (1991). Age and inhibition. Journal of Experimental Psychology: Learning, Memory, and Cognition, 17(1), 163. https://doi.org/10.1037/0278-7393.17.1.163
  • Hasher, L., & Zacks, R. T. (1988). Working memory, comprehension, and aging: A review and a new view. In G. H. Bower (Ed.), The psychology of learning and motivation: Advances in research and theory (Vol. 22, pp. 193–225). Academic Press. https://doi.org/10.1016/S0079-7421(08)60041-9
  • Hayes, R. A., Dickey, M. W., & Warren, T. (2016). Looking for a location: Dissociated effects of event-related plausibility and verb–argument information on predictive processing in aphasia. American Journal of Speech-Language Pathology, 25(4S), S758–S775. https://doi.org/10.1044/2016_AJSLP-15-0145
  • Horn, J. L., & Cattell, R. B. (1967). Age differences in fluid and crystallized intelligence. Acta Psychologica, 26, 107–129. https://doi.org/10.1016/0001-6918(67)90011-X
  • Huettig, F., Rommers, J., & Meyer, A. S. (2011). Using the visual world paradigm to study language processing: A review and critical evaluation. Acta Psychologica, 137(2), 151–171.
  • Kamide, Y., Altmann, G., & Haywood, S. L. (2003). The time-course of prediction in incremental sentence processing: Evidence from anticipatory eye movements. Journal of Memory and Language, 49(1), 133–156. https://doi.org/10.1016/S0749-596X(03)00023-8
  • Kemtes, K. A., & Kemper, S. (1997). Younger and older adults’ on-line processing of syntactically ambiguous sentences. Psychology and Aging, 12(2), 362. https://doi.org/10.1037/0882-7974.12.2.362
  • Kukona, A., Braze, D., Johns, C. L., Mencl, W. E., Van Dyke, J. A., Magnuson, J. S., Pugh, K. R., Shankweiler, D. P., & Tabor, W. (2016). The real-time prediction and inhibition of linguistic outcomes: Effects of language and literacy skill. Acta Psychologica, 171, 72–84. https://doi.org/10.1016/j.actpsy.2016.09.009
  • Kukona, A., Cho, P. W., Magnuson, J. S., & Tabor, W. (2014). Lexical interference effects in sentence processing: Evidence from the visual world paradigm and self-organizing models. Journal of Experimental Psychology: Learning, Memory, and Cognition, 40(2), 326. https://doi.org/10.1037/a003490
  • Kukona, A., Fang, S.-Y., Aicher, K. A., Chen, H., & Magnuson, J. S. (2011). The time course of anticipatory constraint integration. Cognition, 119(1), 23–42. https://doi.org/10.1016/j.cognition.2010.12.002
  • Kuperberg, G. (2013). The proactive comprehender: What event-related potentials tell us about the dynamics of reading comprehension. Unraveling the behavioral, neurobiological, and genetic components of reading comprehension.
  • Kuznetsova, A., Brockhoff, P. B., & Christensen, R. H. (2017). lmerTest package: Tests in linear mixed effects models. Journal of Statistical Software, 82(1), 1–26.
  • Landauer, T. K., Foltz, P. W., & Laham, D. (1998). An introduction to latent semantic analysis. Discourse Processes, 25(2–3), 259–284.
  • Lash, A., Rogers, C. S., Zoller, A., & Wingfield, A. (2013). Expectation and entropy in spoken word recognition: Effects of age and hearing acuity. Experimental Aging Research, 39(3), 235–253. https://doi.org/10.1080/0361073X.2013.779175
  • Light, L. L., & Capps, J. L. (1986). Comprehension of pronouns in young and older adults. Developmental Psychology, 22(4), 580–585. https://doi.org/10.1037/0012-1649.22.4.580
  • Lustig, C., Hasher, L., & Tonev, S. T. (2006). Distraction as a determinant of processing speed. Psychonomic Bulletin & Review, 13(4), 619–625. https://doi.org/10.3758/BF03193972
  • Mack, J. E., Ji, W., & Thompson, C. K. (2013). Effects of verb meaning on lexical integration in agrammatic aphasia: Evidence from eyetracking. Journal of Neurolinguistics, 26(6), 619–636. https://doi.org/10.1016/j.jneuroling.2013.04.002
  • Madden, D. J. (1988). Adult age differences in the effects of sentence context and stimulus degradation during visual word recognition. Psychology and Aging, 3(2), 167–172. https://doi.org/10.1037/0882-7974.3.2.167
  • Magnuson, J. S. (2019). Fixations in the visual world paradigm: Where, when, why? Journal of Cultural Cognitive Science, 3(2), 113–139. https://doi.org/10.1007/s41809-019-00035-3
  • McRae, K., & Matsuki, K. (2009). People use their knowledge of common events to understand language, and do so as quickly as possible. Language and Linguistics Compass, 3(6), 1417–1429. https://doi.org/10.1111/j.1749-818X.2009.00174.x
  • Milburn, E., Warren, T., & Dickey, M. W. (2016). World knowledge affects prediction as quickly as selectional restrictions: Evidence from the visual world paradigm. Language, Cognition and Neuroscience, 31(4), 536–548. https://doi.org/10.1080/23273798.2015.1117117
  • Mirman, D. (2016). Growth curve analysis and visualization using R. CRC Press.
  • Mirman, D., Dixon, J. A., & Magnuson, J. S. (2008). Statistical and computational models of the visual world paradigm: Growth curves and individual differences. Journal of Memory and Language, 59(4), 475–494.
  • Payne, B. R., & Silcox, J. W. (2019). Chapter Seven—Aging, context processing, and comprehension. In K. D. Federmeier (Ed.), Psychology of learning and motivation (Vol. 71, pp. 215–264). Academic Press. https://doi.org/10.1016/bs.plm.2019.07.001
  • Payne, B. R., Grison, S., Gao, X., Christianson, K., Morrow, D. G., & Stine-Morrow, E. A. (2014). Aging and individual differences in binding during sentence understanding: Evidence from temporary and global syntactic attachment ambiguities. Cognition, 130(2), 157–173. https://doi.org/10.1016/j.cognition.2013.10.005
  • Pichora-Fuller, K. (2008). Use of supportive context by younger and older adult listeners: Balancing bottom-up and top-down information processing. International Journal of Audiology, 47(sup2), S72–S82. https://doi.org/10.1080/14992020802307404
  • Pichora‐Fuller, M. K., Schneider, B. A., & Daneman, M. (1995). How young and old adults listen to and remember speech in noise. The Journal of the Acoustical Society of America, 97(1), 593–608. https://doi.org/10.1121/1.412282
  • R Core Team. (2020). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria.
  • Rogers, C. S. (2017). Semantic priming, not repetition priming, is to blame for false hearing. Psychonomic Bulletin & Review, 24(4), 1194–1204. https://doi.org/10.3758/s13423-016-1185-4
  • Rogers, C. S., Jacoby, L. L., & Sommers, M. S. (2012). Frequent false hearing by older adults: The role of age differences in metacognition. Psychology and Aging, 27(1), 33–45. https://doi.org/10.1037/a0026231
  • Rogers, R. D., & Monsell, S. (1995). Costs of a predictible switch between simple cognitive tasks. Journal of Experimental Psychology: General, 124(2), 207–231. https://doi.org/10.1037/0096-3445.124.2.207
  • Salthouse, T. A. (1996). The processing-speed theory of adult age differences in cognition. Psychological Review, 103(3), 403. https://doi.org/10.1037/0033-295X.103.3.403
  • Staub, A. (2010). Eye movements and processing difficulty in object relative clauses. Cognition, 116(1), 71–86. https://doi.org/10.1016/j.cognition.2010.04.002
  • Stine-Morrow, E. A., Miller, L. M. S., & Hertzog, C. (2006). Aging and self-regulated language processing. Psychological Bulletin, 132(4), 582–606. https://doi.org/10.1037/0033-2909.132.4.582
  • Tanenhaus, M., Spivey-Knowlton, M., Eberhard, K., & Sedivy, J. (1995). Integration of visual and linguistic information in spoken language comprehension. Science, 268(5217), 1632. https://doi.org/10.1126/science.7777863
  • Van der Linden, M., Hupet, M., Feyereisen, P., Schelstraete, M.-A., Bestgen, Y., Bruyer, R., Lories, G., El Ahmadi, A., & Seron, X. (1999). Cognitive mediators of age-related differences in language comprehension and verbal memory performance. Aging, Neuropsychology, and Cognition, 6(1), 32–55. https://doi.org/10.1076/anec.6.1.32.791
  • Warren, T., & Dickey, M. W. (2021). The use of linguistic and world knowledge in language processing. Language and Linguistics Compass, 15(4), e12411. https://doi.org/10.1111/lnc3.12411
  • Weeks, J. C., Biss, R. K., Murphy, K. J., & Hasher, L. (2016). Face–name learning in older adults: A benefit of hyper-binding. Psychonomic Bulletin & Review, 23(5), 1559–1565. https://doi.org/10.3758/s13423-016-1003-z
  • Wingfield, A., Poon, L. W., Lombardi, L., & Lowe, D. (1985). Speed of processing in normal aging: Effects of speech rate, linguistic structure, and processing time. Journal of Gerontology, 40(5), 579–585. https://doi.org/10.1093/geronj/40.5.579
  • Wingfield, A., & Stine-Morrow, E. A. L. (2000). Language and speech. In The handbook of aging and cognition (2nd ed., pp. 359–416). Lawrence Erlbaum Associates Publishers.
  • Wlotko, E. W., & Federmeier, K. D. (2012). Age-related changes in the impact of contextual strength on multiple aspects of sentence comprehension: Context use in young and older adults. Psychophysiology, 49(6), 770–785. https://doi.org/10.1111/j.1469-8986.2012.01366.x
  • Wlotko, E. W., Lee, C.-L., & Federmeier, K. D. (2010). Language of the aging brain: Event-related potential studies of comprehension in older adults: Language of the aging brain. Language and Linguistics Compass, 4(8), 623–638. https://doi.org/10.1111/j.1749-818X.2010.00224.x

Appendix

Experiment 2 stimuli.