
The effect of variations of emotional expressions on mnemonic discrimination and traditional recognition memory

Pages 547-557 | Received 23 Aug 2017, Accepted 15 Jun 2018, Published online: 03 Jul 2018

ABSTRACT

Face recognition occurs when a face is recognised despite changes between learning and test exposures. Yet there has been relatively little research on how variations in emotional expressions influence people’s ability to recognise such changes. We evaluated the ability to discriminate between old and similar expressions of emotion in the same face (i.e. mnemonic discrimination), as well as the ability to discriminate between old and dissimilar (new) expressions of the same face, reflecting traditional recognition. An emotional mnemonic discrimination task with morphed faces that were similar but not identical to the original face was used. Results showed greater mnemonic discrimination for learned neutral expressions that at test became slightly more fearful rather than happy. For traditional recognition, accuracy was greater for learned happy faces becoming fearful than for fearful faces becoming happy. These findings indicate that emotional expressions may have asymmetrical influences on mnemonic and traditional discrimination of the same face.

The problem of face recognition is often presented as a problem of accurately telling people apart. The question of how we distinguish among thousands of individuals is often raised in the context of within-category discrimination, a perspective that stresses sensitivity to differences between individuals. As a result, within-person variation has been relatively neglected in the face recognition literature. There are, however, exceptions: for instance, studies examining the effect of changing the angle and emotion of faces (Bruce, 1982), or within-person differences in photos of the same face (Jenkins, White, Van Montfort, & Burton, 2011; but see also McKone, Kanwisher, & Duchaine, 2007; Bukach, Gauthier, & Tarr, 2006). How within-person variations influence face recognition is important, as they strongly affect face recognition in the real world, given that no face casts the same image twice. Of particular interest to the current study is how individual variations of emotional expressions influence recognition memory.

As stated by Jenkins et al. (2011), a theory of face recognition should not only explain how we tell people apart but also how we recognise the same person across time (see also Bruce, 1994). Face recognition may thus be said to have occurred when a face is recognised despite a change in appearance. One of the constant variations in faces involves emotional expressions, which can convey both threat-related and non-threatening signals (Keltner, Ekman, Gonzaga, & Beer, 2003; see Posamentier & Abdi, 2003, for a review). This raises the question of how we discriminate between familiar faces as their emotional expressions change. The objective of the current study was to examine variations of emotional expression within the same face and their influence on two recognition skills crucial for episodic memory (Yassa & Stark, 2011): the ability to discriminate between small changes in emotional expressions (i.e. mnemonic face discrimination), and the ability to discriminate between old and large (new) changes (i.e. traditional face recognition).

Memory for faces and facial emotion

Before reviewing how emotion from facial expressions affects memory, it is useful to briefly address some constituent terms. Within the face literature, face memory is often differentiated from face perception, as they may engage partly distinguishable mechanisms (e.g. Weigelt et al., 2013). Face perception involves an individual’s understanding and interpretation of the face, and it reflects the ability to tell apart different faces with little or no memory requirement. Face recognition memory, which relies to a large extent on face perception, instead refers to the ability to retain and individuate faces in long-term memory. In addition to face perception, this requires a comparison of the currently perceived face to previously learned ones, for instance as reflected in traditional old/new recognition tasks (Weigelt et al., 2013). Facial recognition memory thus refers to the ability to know that a specific individual face has been seen before, despite changes over time in parameters such as emotional expression.

According to the emotional tagging hypothesis (Richter-Levin & Akirav, 2003), the arousal caused by an emotional experience tags a salient event and facilitates its consolidation in memory. Faces with emotional expressions can elicit an emotional experience consistent with this hypothesis. For instance, Jackson, Linden, and Raymond (2014) reported that face recognition improved if emotions were shown during the learning phase. It has also been shown that threat-relevant facial expressions (e.g. anger or fear) capture attention and enhance sensory processing and memory processes more than non-threatening facial expressions (Ceccarini & Caudek, 2013; Eastwood, Smilek, & Merikle, 2003; Jackson et al., 2014; Phelps, Ling, & Carrasco, 2006), and that angry faces are better stored in memory than happy or neutral faces (Jackson, Wu, Linden, & Raymond, 2009). Relatively few studies have used the same identities while varying the emotional expression between learning and test exposures. Some of these studies report that facial identity is better recognised from neutral expressions at test if the learned face expressed happiness rather than anger (Shimamura, Ross, & Bennett, 2006). However, seemingly inconsistent findings have also been reported. For instance, Righi, Marzi, Toscani, Baldassi, Ottonello, and Viggiano (2012) manipulated emotion at encoding (using neutral expressions at test) and reported greater recognition for fearful than for happy expressions. Inconsistencies have also been found in studies manipulating emotional (happy or angry) expressions of the same face at retrieval while using neutral faces at learning: either the influence of facial expression on memory did not differ (D’Argembeau & Van der Linden, 2011), or faces that returned as angry (rather than happy) improved recognition (Chen et al., 2015). Furthermore, in an attempt to create well-defined perceptual variations of facial expressions of the same face, some studies have used morphing techniques to create alterations (e.g. Lorenzino & Caudek, 2015). Morphing techniques have also been applied in recognition studies. For instance, Hess, Blairy, and Kleck (1997) compared the influence of varying degrees of emotional expression on recognition and reported that only happiness (compared to anger, disgust, and sadness) was recognised at close to 100% even at very low levels of intensity (see also Hoffmann, Kessler, Eppel, Rukavina, & Traue, 2010). Recognition also appears to be faster for moderately happy expressions than for more intense happy or angry ones (Kaufmann & Schweinberger, 2004). Methodological differences in how facial emotions were manipulated when examining their effect on recognition may at least in part explain the different findings in these studies. Yet how small variations in emotional expressions of the same face (e.g. created by morphed stimuli) influence the ability to discriminate between small changes in emotional expressions (i.e. emotional mnemonic discrimination) remains poorly understood.
To our knowledge, only one line of work has examined whether mnemonic discrimination is affected by emotion (Leal, Tighe, & Yassa, 2014; Leal & Yassa, 2014), but it employed emotional scenes, which may affect recognition memory differently because they provide a more complex stimulus type that often involves several people in an environmental context (Keightley, Chiew, Anderson, & Grady, 2011). Furthermore, this work did not vary the degree of emotionality within each scene between learning and test (e.g. more or less blood at the same car crash), but compared semantically similar or dissimilar scenes (e.g. different funeral scenes at study and test). Hence the authors did not test memory for within-item variations as a function of changes in emotional aspects. Employing facial expressions to study recognition memory is important for several reasons: not only are emotional expressions one of the constant variations in faces, but prototypical expressions occur relatively infrequently in real life, and emotion is often communicated by small facial changes.

The current study

We employed an emotional discrimination task in which intensity and expression (neutral, happy, fearful) varied within each identity. The choice of happy and fearful expressions was largely pragmatic, driven by the limited number of facial expression databases that provide one neutral and two emotional expressions of the same identity. We evaluated memory in two different ways.

Mnemonic discriminability in the current experiment was operationally defined as the ability to detect small changes in emotional expressions between learning and test. This included accuracy for expressions changing from neutral to low-intensity emotions, and emotional expressions changing in intensity within the same valence.

Traditional recognition here refers to the ability to detect larger changes in emotional expressions between learning and test. This included accuracy for expressions changing from neutral to high-intensity emotions, and expressions changing their valence.

With respect to mnemonic discrimination, we evaluated variations of degrees of similarity between learning and test exposure (small variations) and predicted that detection accuracy would vary significantly depending on expressional valence in the following comparisons: (a) neutral expressions transitioning to (low intensity) happy vs. fearful, (b) high intensity happy to low intensity happy vs. high intensity fearful to low intensity fearful, (c) low intensity happy to high intensity happy vs. low intensity fearful to high intensity fearful.

With respect to traditional recognition, we evaluated the ability to discriminate identical from dissimilar expressions (large variations) and predicted that detection accuracy would vary significantly depending on expressional valence in the following comparisons: (a) neutral expressions transitioning to (high intensity) happy vs. fearful, (b) high intensity happy to high intensity fearful vs. fearful to happy, and (c) low intensity happy to low intensity fearful vs. low intensity fearful to low intensity happy.

Although we expected that emotional valence at test would have an impact on both measures of memory accuracy, due to the exploratory nature of the study we refrained from predicting a specific direction for the effects. An impact on memory accuracy (irrespective of its direction) on both memory measures would be in line with studies using similar designs that have varied emotional expressions between learning and test, reporting differences in discrimination ability based on expressional valence (e.g. Chen et al., 2015; Righi et al., 2012).

Method

Participants

Thirty participants (15 females; ages 22–62 years, M = 38.46, SD = 10.38) were recruited at a commercial business. Participants were informed about the aim of the study and received a movie voucher for their participation. Two participants (one female) were later excluded from the analyses for not complying with task instructions. The first author conducted the experiment.

Material

For the emotional discrimination task we selected 272 face identities (130 female) of varying age (young, n = 174; middle-aged, n = 49; older adults, n = 49) from the database of morphed facial expressions (Stiernströmer, Wolgast, & Johansson, 2015; see Appendix 1). Each identity conveyed a neutral, happy, or fearful expression. The happy and fearful expressions included intermediate morphed versions of the original expressions, producing a high- and a low-intensity version of each emotion for each identity (Figure 1). The task included 80 neutral expressions, 96 happy expressions (48 of high and 48 of low intensity), and 96 fearful expressions (48 of high and 48 of low intensity). Arousal and valence ratings of the expressions were collected in a separate rating study (Appendix 1). Stimuli were presented electronically using the E-Prime 3.0 software (Psychology Software Tools, Pittsburgh, PA, 2016) on a desktop PC.

Figure 1. Example of a morph within a face identity. The first image (far left) presents the original face displaying a high-intensity happy expression; the second presents the low-intensity happy expression; the third (middle) presents the original neutral version; the fourth presents the low-intensity fearful expression; and the fifth (far right) presents the original high-intensity fearful expression.

Procedure

Seated in a quiet room, each participant completed two computerised tasks: an emotional mnemonic discrimination task and the Mnemonic Similarity Task (MST; Kirwan & Stark, 2007; Appendix 2). The order of the two tasks was counterbalanced across participants. For the emotional discrimination task, written instructions included sample faces to illustrate the classification requirements for the stimulus manipulations. We also ran practice trials that were not included in the actual experiment.

The task was divided into 16 blocks, each including a learning and a test phase separated by a 10 s filler task in which the participants counted backwards from a given number. As shown in Figure 2, each learning phase contained seventeen identities: six conveyed happiness (three of high and three of low intensity), six conveyed fear (three of high and three of low intensity), and five conveyed neutral expressions. During each presentation (three seconds, 0.5 s ISI) the participants judged the expression (“Does the face convey a positive, neutral, or negative emotional expression?”). This was to ascertain that participants paid attention to the stimuli (these responses were not recorded for later analysis). The stimulus order during each learning phase was pseudo-randomised, ensuring that all faces were presented in each block. The test phase included the same 17 identities presented during learning. A given identity only appeared in one block, after which it was discarded. These per-block counts scale up to the stimulus totals given in the Material section, as the sketch below illustrates.
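
As a quick consistency check (our own illustration, not part of the original analysis), the per-block composition described above reproduces the stimulus totals reported in the Material section:

```python
# Per-block learning-phase composition as described in the Procedure.
BLOCKS = 16
per_block = {"neutral": 5,
             "happy_high": 3, "happy_low": 3,
             "fearful_high": 3, "fearful_low": 3}

totals = {kind: n * BLOCKS for kind, n in per_block.items()}

assert sum(per_block.values()) == 17                         # identities per learning phase
assert totals["neutral"] == 80                               # neutral expressions in total
assert totals["happy_high"] + totals["happy_low"] == 96      # 48 high + 48 low intensity
assert totals["fearful_high"] + totals["fearful_low"] == 96  # 48 high + 48 low intensity
assert sum(totals.values()) == 272                           # face identities overall
```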

Figure 2. A flowchart of the architecture of the experimental task, illustrating the structure of block one (of 16): the learning items, test items, and corresponding correct responses.

At test, the participants classified each expression using one of three responses: “identical expression” (i.e. old), “small change in expression” (i.e. high similarity), or “large change in expression” (i.e. low similarity). The response window for each face was five seconds, with 0.5 s ISI. For a correct “identical” response, the expression at test had to be identified as an exact match to the learned expression. For a correct “small change” response, the expression at test had to be identified as changing only slightly from the learned expression (i.e. a similar item); the intensity could have shifted lower or higher depending on the expression presented at learning, or from neutral to a low-intensity emotion. For a correct “large change” response, the test expression had to be identified as involving a large change producing a new emotion (i.e. a dissimilar item); these scoring rules are restated in the sketch below. The three item types (old Targets, Similar Lures, and Dissimilar Lures) were pseudo-randomised to ensure that all item manipulations were presented in each block. Our experimental comparisons are presented in Figure 3. As per Swedish law, this study did not require independent human-subjects approval, but it was reviewed within the Department and complied with all ethical requirements. Participants provided written informed consent.
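
To make the scoring rules concrete, the sketch below maps a learned/test expression pair to the correct response; the (emotion, intensity) encoding and the function name are our own illustrative conventions, not the authors’ implementation:

```python
# Hypothetical encoding (ours): an expression is an (emotion, intensity) pair,
# with neutral as ("neutral", 0) and low/high emotional intensity as 1/2.
def correct_response(learned, test):
    """Restate the scoring rules from the Procedure for one test face."""
    if learned == test:
        return "identical expression"            # old Target
    learned_emotion, learned_level = learned
    test_emotion, test_level = test
    # Small change: intensity shifts within the same emotion, or a neutral
    # face becomes a low-intensity emotional expression.
    if learned_emotion == test_emotion and abs(learned_level - test_level) == 1:
        return "small change in expression"      # Similar Lure
    if learned_emotion == "neutral" and test_level == 1:
        return "small change in expression"      # Similar Lure
    # Large change: a new emotion appears (a valence flip, or neutral
    # becoming a high-intensity emotional expression).
    return "large change in expression"          # Dissimilar Lure

assert correct_response(("neutral", 0), ("fearful", 1)) == "small change in expression"
assert correct_response(("happy", 2), ("fearful", 2)) == "large change in expression"
```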

Figure 3. Illustration of the experimental comparisons in the emotional discrimination task.

Data analyses

A lure discrimination index (LDI; Formula 1) was used to evaluate mnemonic discrimination between similar facial expressions. This index has been used for similar purposes (Chen et al., 2015; Leal et al., 2014; Leal & Yassa, 2014; Stark, Yassa, Lacy, & Stark, 2013) and was also suitable for evaluating the ability to discriminate between faces that were identical or slightly changed (for an alternative measure see Yassa, Mattfeld, Stark, & Stark, 2011; Chang, Murray, & Yassa, 2015). Because our design used old Targets and two types of Lures, similar and dissimilar (rather than Lures and Foils), our operationalisation of the LDI departed from the traditional calculation, p(“New”|Lure) − p(“New”|Target), in the following ways. In the current design, the traditional “new” response (and Foil items) corresponded to “low similarity” (and dissimilar Lures), and the traditional “similar” response (and Lure items) corresponded to “high similarity” (and similar Lures). We also refrained from using only correct rejections as a behavioural correlate of mnemonic discrimination, as these could be contaminated by rejections that result from insufficient encoding (i.e. misses). We therefore subtracted the probability of rejecting an old Target item (which quantifies misses) from the probability of rejecting Similar Lure items to produce the LDI used here. Our LDI measure thus corrected for a general tendency to respond the same way, for instance to always respond “low similarity” (new).
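
A minimal sketch of how this LDI might be computed per participant, under our reading of the operationalisation (“rejecting” a face meaning judging it as changed relative to learning); the function and the counts are illustrative, not the authors’ code:

```python
def lure_discrimination_index(reject_similar, n_similar, reject_target, n_target):
    """p(reject | Similar Lure) - p(reject | Target). The subtrahend
    quantifies misses, so a blanket tendency to give one and the same
    response yields an LDI near zero."""
    return reject_similar / n_similar - reject_target / n_target

# Hypothetical counts for one participant:
print(lure_discrimination_index(20, 48, 8, 80))  # 0.417 - 0.100 = ~0.32
```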

For our second measure we used Snodgrass and Corwin’s (1988) target recognition index, p(Old|Target) − p(Old|New). This measure combines the hit rate (correctly responding “old” to old target items) and the false-alarm rate (incorrectly responding “old” to new items). It offers an estimate of accuracy corrected for response bias (higher values imply more accurate recognition) and is an established measure of participants’ ability to identify old, identical items. Our version of this measure (Formula 2) reflects the ability to accurately discriminate between old identical and dissimilar expressions, where dissimilar expressions corresponded to traditionally new items.

Repeated-measures analyses of variance (ANOVAs) were conducted for analyses of main effects, with eta squared as the effect-size measure. Main effects were followed up with pairwise comparisons using Bonferroni corrections. Paired-samples t-tests were used to analyse normally distributed data (after testing for normality with the Shapiro–Wilk test), with Cohen’s d (1988) used to estimate effect size. Non-normal data were analysed with non-parametric Wilcoxon signed-rank tests, with Rosenthal’s r (1994) used to estimate effect size. A p value ≤ .05, two-tailed, was considered significant. All tests were performed with SPSS, version 21.
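
The corrected recognition score can be sketched the same way, with dissimilar Lures standing in for traditionally new items (again an illustration under our reading, not the authors’ code):

```python
def corrected_recognition(old_to_target, n_target, old_to_dissimilar, n_dissimilar):
    """p('identical' | Target) - p('identical' | Dissimilar Lure):
    hit rate minus false-alarm rate; higher values imply more accurate
    recognition of old, identical expressions."""
    hit_rate = old_to_target / n_target
    false_alarm_rate = old_to_dissimilar / n_dissimilar
    return hit_rate - false_alarm_rate

# Hypothetical counts for one participant:
print(corrected_recognition(60, 80, 10, 48))  # 0.75 - 0.208 = ~0.54
```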

Results

Mnemonic discrimination

The first set of analyses evaluated how well the participants discriminated between small intensity differences between learning and test. Table 1 presents the average proportions of the three responses based on raw data (“identical (old)”, “high similarity (small change)”, and “low similarity (large change)”) for the three item types (Target, Similar Lure, Dissimilar Lure).

Table 1. Mean proportion of response alternatives (SD) for each item type.

Neutral to emotional facial expressions

A repeated-measures ANOVA was conducted on the LDIs for neutral faces transitioning to low-intensity emotions, with the type of emotional expression as the repeated factor with two levels (fearful vs. happy). The analysis revealed a main effect for neutral expressions transitioning to low-intensity emotion, Wilks’ Lambda = 0.50, F(1, 27) = 26.78, p < .001, η2 = .50, observed power .99. Follow-up pairwise comparisons revealed higher values for neutral-fearful (M = 0.24, SD = 0.14) than neutral-happy expressions (M = 0.12, SD = 0.15; Z = 3.79, p < .001, r = .64), indicating better mnemonic discrimination for learned neutral expressions becoming fearful rather than happy. Given this difference based on expression valence, our hypothesis 1a (that detection accuracy for neutral expressions transitioning to low-intensity happy vs. fearful would vary) was supported (Figure 4).
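
As an aside on the reported statistics: for a within-subject factor with two levels (one numerator degree of freedom), partial eta squared equals 1 minus Wilks’ Lambda, which is consistent with the values reported here and in the analyses below (our own check, not part of the original analysis):

```python
# Wilks' Lambda / eta-squared pairs as reported in the Results.
for wilks_lambda, eta_squared in [(0.50, .50), (0.77, .23), (0.86, .14)]:
    assert round(1 - wilks_lambda, 2) == eta_squared
```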

Figure 4. An illustration of accurate memory performance for the two memory measures. Mnemonic discrimination performance (left) reflects the participants’ ability to correctly distinguish between similar facial expressions; the facial expressions at learning were presented as neutral and became slightly more emotional (happy or fearful) at test. Traditional recognition accuracy (right) is shown for emotional expressions at learning and test, for both high (100%) and low (50%) intensity.

Emotional-to-emotional facial expressions

An ANOVA was conducted on the LDIs for high-to-low intensity changes with emotion as the within-subject (repeated) factor with two levels: happy (M = 0.15, SD = 0.17) vs. fearful (M = 0.18, SD = 0.18). The analysis revealed no effect (p = .48, observed power .01).

Another ANOVA was conducted on the LDIs for low-to-high intensity changes with emotion as the within-subject (repeated) factor with two levels: happy (M = 0.09, SD = 0.12) vs. fearful (M = 0.11, SD = 0.18). The analysis revealed no effect (p = .59, observed power .08). Hence, since accuracy did not vary with expression valence, our hypotheses 1b and 1c (that detection accuracy would vary for high-to-low and low-to-high intensity changes within emotional expressions) were not supported.

Traditional recognition

We also examined performance for large intensity differences between learning and test. Table 2 presents the average proportions of the responses.

Table 2. Mean proportion of performance for traditional recognition.

Neutral to emotional facial expressions

An ANOVA was conducted on traditional recognition accuracy for neutral expressions shown as high-intensity emotions (happy, fearful) at test. There was no main effect (p = .08, observed power .41), suggesting that manipulating emotion at retrieval did not affect the results significantly. Hence, our hypothesis 2a (regarding detection accuracy for neutral expressions transitioning to high-intensity emotion, happy versus fearful) was not supported.

Emotional-to-emotional facial expressions

We also analysed whether discrimination accuracy for an emotional expression presented at learning was affected differently when a different emotional expression was presented at test (for both high and low intensity). First, we analysed recognition performance for high-intensity expressions with emotional transition as the within-subject factor (happy to fearful, fearful to happy). The analysis revealed a main effect, Wilks’ Lambda = 0.77, F(1, 27) = 8.02, p = .009, η2 = .23, observed power .80. Accuracy for high-intensity expressions was greater for happy expressions presented as fearful at test than for fearful expressions presented as happy, t(27) = 2.83, p = .01, d = 0.6 (Figure 4). The effect on accuracy was coupled with an effect on hit rate, Wilks’ Lambda = 0.86, F(1, 27) = 4.52, p = .04, η2 = .14, observed power .54, with a greater value for happy than fearful, t(27) = 2.13, p = .04, d = 0.39. The false-alarm rate was also lower for happy expressions changing to fearful than for fearful changing to happy, F(1, 27) = 8.14, p = .01, observed power .80; z = −2.42, p = .02, r = 0.32.

A second repeated-measures ANOVA was conducted on recognition accuracy for low-intensity emotions changing valence between learning and test (happy to fearful, fearful to happy). The analysis revealed a main effect on accuracy, Wilks’ Lambda = 0.86, F(1, 27) = 4.47, p = .044, observed power .53, with higher values for happy expressions transitioning to fearful than for fearful changing to happy, t(27) = 2.11, p = .04, d = 0.37 (Figure 4). Marginally non-significant results were obtained for hit rate (p = .06) and false alarms (p = .09), with higher values for happy expressions changing to fearful. Taken together, our recognition hypotheses 2b and 2c (predicting that detection accuracy for emotional expressions transitioning to a different emotional expression would vary, for both high and low intensity) were supported.

Given that the mean age of the participants was higher than that of typical student samples, the repeated-measures ANOVAs were also conducted with age and gender as covariates to examine whether these affected the results significantly; they did not.

Discussion

A central finding of this study was that mnemonic discrimination between similar expressions was better for neutral expressions becoming fearful than for those becoming happy (hypothesis 1a). The second finding concerned traditional recognition and showed that participants remembered learned happy expressions better than fearful ones, for both high and low intensity (hypotheses 2b-c).

Our first finding is in line with reports from other studies showing that threat-relevant expressions capture attention and enhance sensory processing and memory accuracy more than non-threatening (e.g. happy) facial expressions. Our result suggests a differential impact of fearful and happy expressions on mnemonic discrimination of the same face. A reasonable explanation is that fear-related stimuli, even if they consist only of subtle fearful facial expressions, have particular behavioural relevance, as they could signal a threat to one’s safety and survival. Unfortunately, our study did not assess whether, and if so how, physical similarity contributed to the effect of emotional cues at retrieval, that is, the extent to which image similarity affected mnemonic discrimination. It is therefore possible that a greater structural similarity between images of neutral and fearful expressions than between neutral and happy expressions influenced the results.

Comparing our second finding (of detection accuracy in traditional recognition for emotional expressions changing into another emotional expression) to previous studies is more difficult, as few studies have used emotional expressions at both learning and test. Nonetheless, it is partly consistent with studies showing that happy faces are identified or recognised faster than angry or sad ones (e.g. Shimamura et al., 2006). Perhaps threatening stimuli presented at test draw attention to what motivated the negative emotion (e.g. why/what is this person afraid of?) rather than to the particular person who elicited the emotion. Happy expressions, in contrast, may direct attention to the particular person (source) who elicited the smile. To the extent that a smiling face communicates a social bond such as kinship and familiarity, it may be advantageous to remember the specific person manifesting it (e.g. Baudouin, Gilibert, Sansone, & Tiberghien, 2000).

Strengths and limitations

Strengths of the study include the recruitment of a non-student sample, allowing for generalisation to broader populations. The study also included new and validated morphed stimuli that allow the evaluation of different degrees of emotional similarity in facial expressions within the same face identity. This was an important feature, as the only previous study on the effect of emotion on mnemonic discrimination could not examine varying degrees of emotion within the same stimuli (Leal et al., 2014). On the other hand, our study also had a number of limitations. First, due to trial length, neutral expressions were never presented as dissimilar facial expressions during test. Second, the neutral expressions used in this study had been rated as somewhat negative. Although neutral expressions are often used as baseline conditions against which to compare the processing of emotional facial expressions, neutral faces may be perceived as slightly negative, especially when used alongside emotional expressions (Marusak, Zundel, Brown, Rabinak, & Thomason, 2017). In addition, emotion may, at least in part, have been read into the face by the perceiver. The study also did not examine whether there was greater structural similarity between some of the expressions used. Furthermore, the emotional category judgements at learning were not recorded; hence we were unable to evaluate whether high- and low-similarity items were equally well identified. Given these methodological factors, together with the relatively small effect sizes, the results should be interpreted with caution. It also bears mentioning that this study was conducted with non-students with a higher mean age than that of studies using students, which could explain some differences from those studies.

Conclusion

This study explored whether variations of facial emotional expressions within the same face influence the ability to differentiate between old and similar/dissimilar expressions in recognition memory. Our results suggest that small emotional signals at retrieval influence mnemonic discrimination depending on their valence, with an advantage for subtle fearful over subtle happy expressions. When the degree of change was large (dissimilar expressions), accurate discrimination was greater for learned happy than for learned fearful expressions. Our findings have implications for our understanding of how variations of emotional expressions influence recognition memory: small variations that originate from neutral expressions favour threat-related stimuli, whereas larger variations that originate from emotional expressions favour non-threat-related stimuli. Hence, there is an asymmetrical influence of emotion on mnemonic and traditional discrimination. These results have implications for social aspects of face processing, as they qualify prior findings supporting a happy-face advantage (Kirita & Endo, 1995). There may also be practical implications for eyewitness identification research (e.g. Lindsay, Mansour, Bertrand, Kalmet, & Melsom, 2011; Ryan & Schwartz, 2013): a suspect’s facial expression during the first encounter (the crime) may differ from that during the second encounter (identifying the suspect), which may influence recognition accuracy and lead to incorrect identifications. Future work may want to look more closely at individual differences in the effects of varying facial expressions on memory, for instance by evaluating whether memory for subtle expressional changes (e.g. from neutral to fearful or happy) differs according to traits such as anxiety and depression, where a bias for negative information may be expected (e.g. Becker, MacQueen, & Wojtowicz, 2009; Leal et al., 2014; Shelton & Kirwan, 2013). This may also be worth examining in relation to dissociation and insecure attachment, where hypersensitivity to neutral stimuli and a tuning away from more fearful stimuli might be predicted (DePrince & Freyd, 1999, 2001; He, Nanxin, & Tonggui, 2010; Liotti, 2004; Olsen & Beck, 2012).


Disclosure statement

No potential conflict of interest was reported by the authors.

References

  • Baudouin, J.-Y., Gilibert, D., Sansone, S., & Tiberghien, G. (2000). When the smile is a cue to familiarity. Memory, 8, 285–292. doi:10.1080/09658210050117717.
  • Becker, S., MacQueen, G., & Wojtowicz, J. M. (2009). Computational modeling and empirical studies of hippocampal neurogenesis-dependent memory: Effects of interference, stress and depression. Brain Research, 1299, 45–54. doi:10.1016/j.brainres.2009.07.095.
  • Bruce, V. (1982). Changing faces: Visual and non-visual coding processes in face recognition. British Journal of Psychology, 73, 105–116. doi:10.1111/j.2044-8295.1982.tb01795.x.
  • Bruce, V. (1994). Stability from variation: The case of face recognition the MD Vernon memorial lecture. The Quarterly Journal of Experimental Psychology Section A, 47, 5–28. doi:10.1080/14640749408401141.
  • Bukach, C. M., Gauthier, I., & Tarr, M. J. (2006). Beyond faces and modularity: The power of an expertise framework. Trends in Cognitive Sciences, 10, 159–166. doi:10.1016/j.tics.2006.02.004.
  • Ceccarini, F., & Caudek, C. (2013). Anger superiority effect: The importance of dynamic emotional facial expressions. Visual Cognition, 21(4), 498–540. doi: 10.1080/13506285.2013.807901
  • Chang, A., Murray, E., & Yassa, M. A. (2015). Mnemonic discrimination of similar face stimuli and a potential mechanism for the “other race” effect. Behavioral Neuroscience, 129, 666–672. doi:10.1037/bne0000090.
  • Chen, W., Liu, C. H., Li, H., Tong, K., Ren, N., & Fu, X. (2015). Facial expression at retrieval affects recognition of facial identity. Frontiers in Psychology, 6, 780. doi: 10.3389/fpsyg.2015.00780
  • Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum.
  • D’Argembeau, A., & Van der Linden, M. (2011). Influence of facial expression on memory for facial identity: Effects of visual features or emotional meaning? Emotion, 11, 199–202. doi: 10.1037/a0022592
  • DePrince, A. P., & Freyd, J. J. (1999). Dissociative tendencies, attention and memory. Psychological Science, 10(5), 449–452. doi: 10.1111/1467-9280.00185
  • DePrince, A. P., & Freyd, J. J. (2001). Memory and dissociative tendencies: The roles of attentional context and word meaning in a directed forgetting task. Journal of Trauma and Dissociation, 2, 67–82. doi: 10.1300/J229v02n02_06
  • Eastwood, J. D., Smilek, D., & Merikle, P. M. (2003). Negative facial expression captures attention and disrupts performance. Perception & Psychophysics, 65, 352–358. doi: 10.3758/BF03194566
  • He, J., Nanxin, L., & Tonggui, L. (2010). Adult attachment and incidental memory for emotional words. Interpersona: An International Journal on Personal Relationships, 1–20. doi: 10.5964/ijpr.v5isupp1.79
  • Hess, U., Blairy, S., & Kleck, R. E. (1997). The intensity of emotional facial expression and decoding accuracy. Journal of Nonverbal Behavior, 21, 241–257. doi: 10.1023/A:1024952730333
  • Hoffmann, H., Kessler, H., Eppel, T., Rukavina, S., & Traue, H. C. (2010). Expression intensity, gender and facial emotion recognition: Women recognize only subtle facial emotions better than men. Acta Psychologica, 135, 278–283. doi: 10.1016/j.actpsy.2010.07.012
  • Jackson, M. C., Linden, D. E. J., & Raymond, J. E. (2014). Angry expressions strengthen the encoding and maintenance of face identity representations in visual working memory. Cognition & Emotion, 28, 278–297. doi: 10.1080/02699931.2013.816655
  • Jackson, M. C., Wu, C. Y., Linden, D. E. J., & Raymond, J. E. (2009). Enhanced visual short-term memory for angry faces. Journal of Experimental Psychology: Human Perception and Performance, 35, 363–374. doi: 10.1037/a0013895
  • Jenkins, R., White, D., Van Montfort, X., & Burton, A. M. (2011). Variability in photos of the same face. Cognition, 121, 313–323. doi: 10.1016/j.cognition.2011.08.001
  • Kaufmann, J. M., & Schweinberger, S. R. (2004). Expression influences the recognition of familiar faces. Perception, 33, 399–408. doi: 10.1068/p5083
  • Keightley, M. L., Chiew, K. S., Anderson, J. A. E., & Grady, C. L. (2011). Neural correlates of recognition memory for emotional faces and scenes. Social Cognitive and Affective Neuroscience, 6, 24–37. doi: 10.1093/scan/nsq003
  • Keltner, D., Ekman, P., Gonzaga, G. C., & Beer, J. (2003). Facial expression of emotion. In R. J. Davidson, K. R. Scherer, & H. H. Goldsmith (Eds.), Handbook of affective sciences (pp. 415–432). Oxford: Oxford University Press.
  • Kirita, T., & Endo, M. (1995). Happy face advantage in recognizing facial expressions. Acta Psychologica, 89, 149–163. doi: 10.1016/0001-6918(94)00021-8
  • Kirwan, C. B., & Stark, S. M. (2007). Overcoming interference: An fMRI investigation of pattern separation in the medial temporal lobe. Learning and Memory, 14, 625–633. doi: 10.1101/lm.663507
  • Leal, S. L., Tighe, S. K., & Yassa, M. A. (2014). Asymmetric effects of emotion on mnemonic interference. Neurobiology of Learning and Memory, 111, 41–48. doi: 10.1016/j.nlm.2014.02.013
  • Leal, S. L., & Yassa, M. A. (2014). Effects of aging on mnemonic discrimination of emotional information. Behavioral Neuroscience, 128, 539–547. doi: 10.1037/bne0000011
  • Lindsay, R. C. L., Mansour, J. K., Bertrand, M. I., Kalmet, N., & Melsom, E. I. (2011). Face recognition in eyewitness memory. In A. J. Calder, G. Rhodes, M. H. Johnson, & J. V. Haxby (Eds.), The Oxford handbook of face perception (pp. 307–308). New York: Oxford University Press. doi: 10.1093/oxfordhb/9780199559053.013.0016
  • Liotti, G. (2004). Trauma, dissociation, and disorganized attachment: Three strands of a single braid. Psychotherapy: Theory, Research, Practice, Training, 41, 472–486. doi: 10.1037/0033-3204.41.4.472
  • Lorenzino, M., & Caudek, C. (2015). Task-irrelevant emotion facilitates face discrimination learning. Vision Research, 108. doi: 10.1016/j.visres.2015.01.007
  • Marusak, H. A., Zundel, C. G., Brown, S., Rabinak, C. A., & Thomason, M. E. (2017). Convergent behavioural and corticolimbic connectivity evidence of a negativity bias in children and adolescents. Social Cognitive and Affective Neuroscience, 517–525. doi: 10.1093/scan/nsw182
  • McKone, E., Kanwisher, N., & Duchaine, B. C. (2007). Can generic expertise explain special processing for faces? Trends in Cognitive Sciences, 11, 8–15. doi: 10.1016/j.tics.2006.11.002
  • Olsen, S. A., & Beck, J. G. (2012). The effects of dissociation on information processing for analogue trauma and neutral stimuli: A laboratory study. Journal of Anxiety Disorders, 26, 225–232. doi: 10.1016/j.janxdis.2011.11.003
  • Phelps, E. A., Ling, S., & Carrasco, M. (2006). Emotion facilitates perception and potentiates the perceptual benefits of attention. Psychological Science, 17(4), 292–299. doi: 10.1111/j.1467-9280.2006.01701.x
  • Posamentier, M. T., & Abdi, H. (2003). Processing faces and facial expressions. Neuropsychology Review, 13, 113–143. doi: 10.1023/A:1025519712569
  • Psychology Software Tools, Inc. [E-Prime 3.0]. (2016). Retrieved from http://www.pstnet.com
  • Richter-Levin, G., & Akirav, I. (2003). Emotional tagging of memory information – In the search for neural mechanisms. Brain Research Reviews, 43, 247–256. doi: 10.1016/j.brainresrev.2003.08.005
  • Righi, S., Marzi, T., Toscani, M., Baldassi, S., Ottonello, S., & Viggiano, M. P. (2012). Fearful expressions enhance recognition memory: Electrophysiological evidence. Acta Psychologica, 139, 7–18. doi: 10.1016/j.actpsy.2011.09.015
  • Rosenthal, R. (1994). Parametric measures of effect size. In H. Cooper, & L. V. Hedges (Eds.), The handbook of research synthesis (pp. 231–244). New York: Russell Sage Foundation.
  • Ryan, K. F., & Schwartz, N. Z. (2013). Face recognition in emotional scenes: Observers remember the eye shape but forget the nose. Perception, 42, 330–340. doi: 10.1068/p7359
  • Shelton, D. J., & Kirwan, C. B. (2013). A possible negative influence of depression on the ability to overcome memory interference. Behavioural Brain Research, 256, 20–26. doi: 10.1016/j.bbr.2013.08.016
  • Shimamura, A. P., Ross, J., & Bennett, H. (2006). Memory for facial expressions: The power of a smile. Psychonomic Bulletin & Review, 13, 217–222. doi: 10.3758/BF03193833
  • Snodgrass, J. G., & Corwin, J. (1988). Pragmatics of measuring recognition memory: Applications to dementia and amnesia. Journal of Experimental Psychology: General, 117, 34–50. doi: 10.1037/0096-3445.117.1.34
  • Stark, S. M., Yassa, M. A., Lacy, J. W., & Stark, C. E. L. (2013). A task to assess behavioral pattern separation (BPS) in humans: Data from healthy aging and mild cognitive impairment. Neuropsychologia, 51, 2442–2449. doi: 10.1016/j.neuropsychologia.2012.12.014
  • Stiernströmer, E. S., Wolgast, M., & Johansson, M. (2015). A database of morphed facial expressions of emotions. Lund Psychological Reports, 15, 1–20.
  • Weigelt, S., Koldewyn, K., Dilks, D. D., Balas, B., McKone, E., & Kanwisher, N. (2013). Domain-specific development of face memory but not face perception. Developmental Science, 17(1), 47–58. doi: 10.1111/desc.12089
  • Yassa, M., Mattfeld, A. T., Stark, S. M., & Stark, C. E. L. (2011). Age-related memory deficits linked to circuit-specific disruptions in the hippocampus. Proceedings of the National Academy of Sciences, 108, 8873–8878. doi: 10.1073/pnas.1101567108
  • Yassa, M. A., & Stark, C. E. L. (2011). Pattern separation in the hippocampus. Trends in Neurosciences, 34, 515–525. doi: 10.1016/j.tins.2011.06.006