ABSTRACT
Most everyday experiences are multisensory, and all senses can trigger the conscious re-experience of unique personal events embedded in their specific spatio-temporal context. Yet, little is known about how a cue’s sensory modality influences episodic memory, and which step of this process is affected. This study investigated recognition and episodic memory across the olfactory, auditory and visual sensory modalities in a laboratory-ecological task using a non-immersive virtual reality device. At encoding, participants freely and actively explored unique and rich episodes in a three-room house where boxes delivered odours, musical pieces and pictures of faces. At retrieval, participants were presented with modality-specific memory cues and were asked to (1) recognise encoded cues among distractors and (2) go to the room and select the box in which they had encountered them at encoding. Memory performance and response times revealed that music and faces outperformed odours in recognition memory, but that odours and faces outperformed music in evoking the encoding context. Interestingly, correct recognition of music and faces was accompanied by deeper inspirations than correct rejections. By directly comparing memory performance across sensory modalities, our study demonstrates that despite limited recognition, odours are powerful cues for evoking specific episodic memory retrieval.
Open Scholarship
This article has earned the Center for Open Science badge for Open Data. The data are openly accessible on OSF at https://osf.io/d436m/
Acknowledgments
We thank Prof. Christian Scheiber for agreeing to support our work as principal investigator and Eda Erdal for her help with preliminary data analyses. We are grateful to the technical platform NeuroImmersion for the development of the EpisOdor Virtual Reality software, and to the company EmoSens® for supplying some of the odourants used in this study. This work was supported by the Centre de Recherche en Neurosciences de Lyon (CRNL), the Centre National de la Recherche Scientifique (CNRS), and the LABEX Cortex (ANR-11-LABX-0042) of Université de Lyon within the programme “Investissements d’Avenir” (ANR-11-IDEX-0007) operated by the French National Research Agency (ANR). The team Auditory Cognition and Psychoacoustics is part of the LabEx CeLyA (Centre Lyonnais d'Acoustique, ANR-10-LABX-60). Dr L. R. was funded by the Roudnitska Foundation and LabEx CeLyA. A CC-BY 4.0 public copyright licence has been applied by the authors to the present document and will be applied to all subsequent versions up to the Author Accepted Manuscript arising from this submission, in accordance with the grant’s open access conditions.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Data availability statement
The data that support the findings of this study are openly available in the Open Science Framework (OSF) at https://osf.io/d436m/, doi:10.17605/OSF.IO/D436M.