Research Article

Imagining abstractness: The role of embodied simulations and language in memory for abstract concepts

Received 16 Jan 2024, Accepted 26 Jun 2024, Published online: 25 Jul 2024
 

ABSTRACT

Can you visualize a dog? How about grief? The latter may be more difficult, as grief has no easily identifiable physical referent in the external world. Such abstract concepts are often seen as “disembodied” and represented linguistically. According to embodied views, however, abstract concepts can be grounded in perceptual, motor, and introspective experiences. In two studies, participants memorized abstract words using either a linguistic strategy (sentence-making) or an imagery strategy (visualizing situations). Imagery improved recall at medium (30 min) and long (24 h) retention intervals, but not immediately (30 s). Manipulation checks using interference tasks confirmed that imagery relied more on sensorimotor experiences. These results suggest that the memory representation of abstract concepts is deeply rooted in experience. In line with embodied accounts of conceptual processing, orienting learning towards the experiential aspects of concepts matters more for long-term memory than orienting it towards the associated linguistic information.

Acknowledgements

We would like to thank Dimitri Paisios for his help with R programming, and Anna Borghi and the BALLAB for their theoretical and methodological advice, which helped us construct this paper.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Data availability statement

The materials, data, and analysis code for Studies 1 and 2 are available on OSF via this link: https://osf.io/cvaf3/.

Notes

1 Despite the debate surrounding the format of mental imagery and its equivalence with sensorimotor simulation, a large body of evidence supports the parsimonious hypothesis that mental imagery can rely on a sensorimotor simulation mechanism, with visual imagery being a “weak” version of visual perception (Dijkstra et al., Citation2020; Kosslyn & Thompson, Citation2003; Pearson, Citation2019; Pearson et al., Citation2011; Schendan et al., Citation2012), motor imagery drawing on the same brain structures as real action (Cummings & Williams, Citation2012), auditory imagery recruiting the same neural networks as auditory perception (Zatorre et al., Citation1996), etc.

2 We are aware that an embodied view would predict the activation of embodied simulations even in such sentence processing, since each word is situated and semantic processing is likely to occur under these conditions. Nevertheless, we argue that this condition is much more strongly oriented towards the linguistic elements associated with each word. Moreover, to assess the extent to which sentence-making indirectly activates (visual) experiential content, a visual interference was displayed for half the words (cf. the end of the Procedure section). Finally, we chose a control task with a rather deep level of processing so that depth of processing would be quasi-equivalent across the two conditions.

Additional information

Funding

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

