ABSTRACT
Can you visualize a dog? How about grief? The latter may be more difficult, as grief has no easily identifiable physical referent in the external world. Such an abstract concept is often seen as "disembodied" and represented linguistically. According to embodied views, however, abstract concepts can be grounded in perceptual, motor, and introspective experiences. In two studies, participants memorized abstract words using linguistic (sentence-making) or imagery (visualizing situations) strategies. Imagery improved recall at medium-term (30 min) and long-term (24 h) intervals, but not immediately (30 s). Manipulation checks using interference tasks confirmed that imagery relied more on sensorimotor experiences. This suggests that the memory representation of abstract concepts is deeply rooted in experience. In line with embodied accounts of conceptual processing, orienting learning towards the experiential aspects of concepts thus matters more for long-term memory than orienting it towards the associated linguistic information.
Acknowledgements
We would like to thank Dimitri Paisios for his help with R programming, and Anna Borghi and the BALLAB for their theoretical and methodological advice, which helped us construct this paper.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Data availability statement
The materials, data, and analysis code for Studies 1 and 2 are accessible on OSF via this link: https://osf.io/cvaf3/.
Notes
1 Despite the debate surrounding the format of mental imagery and its equivalence with sensorimotor simulation, a large body of evidence supports the parsimonious hypothesis that mental imagery can rely on a sensorimotor simulation mechanism: visual imagery is a "weak" version of visual perception (Dijkstra et al., 2020; Kosslyn & Thompson, 2003; Pearson, 2019; Pearson et al., 2011; Schendan et al., 2012), motor imagery draws on the same brain structures as real action (Cummings & Williams, 2012), auditory imagery recruits the same neural networks as auditory perception (Zatorre et al., 1996), etc.
2 We are aware that in such sentence processing, an embodied view would predict the activation of embodied simulations, since each word is situated and semantic processing is likely to occur under these conditions. Nevertheless, we argue that this condition is much more oriented towards the linguistic elements associated with each word. Moreover, to assess the extent to which sentence-making indirectly activates (visual) experiential content, a visual interference was displayed for half the words (cf. the end of the Procedure section). We also chose a control task with a rather deep level of processing so that depth of processing would be quasi-equivalent across the two conditions.