
Events shape long-term memory for story information

ABSTRACT

We segment what we read into meaningful events, each separated by a discrete boundary. How does event segmentation during encoding relate to the structure of story information in long-term memory? To address this question, participants read stories of fictional historical events and then completed a postreading verb arrangement task. In this task, participants saw verbs from each of the events placed randomly on a computer screen and arranged the verbs into groups onscreen based on their understanding of the story. Participants who successfully comprehended the story placed verbs from the same event closer to each other than verbs from different events, even after controlling for orthographic, text-based, semantic, and situational overlap between verbs. Thus, how people structure story information into separate events during online comprehension is associated with how that information is stored in memory. Specifically, story information within an event is bound together in memory more tightly than information between events.

Acknowledgments

We thank J. Mac Stewart, Madaline Merle, Tori Evans, Taylor Capko, and Jennica Rogers for their help with data collection and scoring participant responses. We also extend our gratitude to Dr. G. A. Radvansky for providing us with the stimuli and the situational coding of the stories we used. We also thank Dr. Lester Loschky and the members of the Event Cognition Reading Group at Kansas State University, as well as Dr. Jeffrey Zacks and the Dynamic Cognition Lab at Washington University in St. Louis, for engaging in thoughtful discussion of this project with us.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Supplementary material

Supplemental data for this article can be accessed online at https://doi.org/10.1080/0163853X.2023.2185408

Notes

1. Radvansky and Zacks (2014) use the term event model to capture the online representation for what is happening now. According to Radvansky and Zacks (2014), situation models are a subtype of event model that captures representations tied to discourse. We use the term situation model throughout the article but acknowledge that the claims we make can be applied to the more abstract concept of event models.

2. For instance, making isolated similarity judgments of 24 verbs would require (24 choose 2) = 276 trials.
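For reference, this count is just the number of unordered verb pairs:

\[ \binom{24}{2} = \frac{24 \times 23}{2} = 276. \]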

3. We also ran a power analysis using data from eight pilot participants; we ran this analysis only after we had collected data from 209 participants, as we were unaware of the approach prior to data collection. Data from these eight pilot participants were not included in the final data set. We ran the power analysis using the mixedpower package described in Kumle et al. (2021), "Estimating power in (generalized) linear mixed models: An open introduction and tutorial in R," Behavior Research Methods, 1–16. This approach estimates power via simulation. It begins by fitting a statistical model to the pilot data. Power is then calculated by repeating three steps: first, new values for the response variable are simulated using the pilot data; second, the model is refit to the simulated responses; third, a statistical test is applied to the simulated data with alpha set to .05. We ran 1,000 simulations and calculated power as the proportion of significant tests relative to the number of simulations (n = 1,000). We repeated this procedure for sample sizes from 20 to 200 participants, incrementing by 20 participants. We found that a total sample size of 180 would be needed to observe a difference between verbs that do and do not share an event at a power of .90. Therefore, we should be adequately powered to detect a difference between verbs that share and do not share an event.
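To make the simulate-refit-test loop concrete, below is a minimal Python sketch of a simulation-based power analysis of this general kind. It is not the mixedpower R implementation used here: it substitutes a one-sample test on per-participant difference scores for the (generalized) linear mixed model, and the pilot values (PILOT_DIFF, PILOT_SD) are hypothetical placeholders.

```python
# Minimal sketch of a simulation-based power analysis, following the three
# steps described in note 3. This is NOT the mixedpower R implementation used
# in the study: the mixed-effects model is replaced by a one-sample test on
# per-participant difference scores, and the pilot values are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

PILOT_DIFF = 0.24   # hypothetical pilot estimate of the event effect
PILOT_SD = 1.0      # hypothetical between-participant standard deviation
ALPHA = 0.05
N_SIMS = 1000

def simulate_power(n_participants: int) -> float:
    """Return the proportion of significant tests across N_SIMS simulations."""
    significant = 0
    for _ in range(N_SIMS):
        # Step 1: simulate new response values (here, each participant's mean
        # distance difference: different-event pairs minus same-event pairs).
        diffs = rng.normal(PILOT_DIFF, PILOT_SD, n_participants)
        # Steps 2-3: refit the model to the simulated responses and test the
        # event effect at alpha = .05 (here, a one-sample t test against 0).
        _, p_value = stats.ttest_1samp(diffs, 0.0)
        significant += p_value < ALPHA
    return significant / N_SIMS

# Sweep sample sizes from 20 to 200 in steps of 20, as in the note.
for n in range(20, 201, 20):
    print(f"N = {n:3d}  estimated power = {simulate_power(n):.2f}")
```

The placeholder values were chosen only so the sweep produces power estimates in a plausible range; they do not reproduce the study's pilot estimates or its mixed-model structure.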
