Research Article

Discrimination thresholds for smiles in genuine versus blended facial expressions

Article: 1064586 | Received 27 Feb 2015, Accepted 16 Jun 2015, Published online: 17 Jul 2015

Abstract

Genuine smiles convey enjoyment or positive affect, whereas fake smiles conceal or leak negative feelings or motives (e.g. arrogance, contempt, embarrassment), or merely show affiliation or politeness. We investigated the minimum display time (i.e. threshold; ranging from 50 to 1,000 ms) that is necessary to distinguish a fake from a genuine smile. Variants of fake smiles were created by varying the type of non-happy (e.g. neutral, angry, sad, etc.) eyes in blended expressions with a smiling mouth. Participants judged whether faces conveyed happiness or not. Results showed that thresholds vary as a function of type of eyes: blended expressions with angry eyes are discriminated early (100 ms), followed by those with disgusted eyes, fearful, and sad (from 250 to 500 ms), surprised (750 ms), and neutral (from 750 to 1,000 ms) eyes. An important issue for further research is the extent to which such discrimination threshold differences depend on physical or affective factors.

Public Interest Statement

Facial expressions have critical communicative and adaptive functions for social interaction. Among the basic expressions of emotion, facial happiness is special for several reasons. Happy faces facilitate cooperation with, and influence on, other people, and also improve the expresser’s psychological and physiological well-being. The smile is normally associated with facial happiness and is by far the most frequent emotional expression in social settings. However, the smile is morphologically and functionally multifaceted, and can thus be used for multiple (even opposite) purposes. Accordingly, it is important to examine observers’ thresholds for discriminating between different types of smiles (depending on the accompanying eye expression), and the perceptual and cognitive factors accounting for such thresholds, which was the aim of the current study. This should be of interest to the general public as well as to researchers from different backgrounds.

Competing interests

The authors declare no competing interests.

1. Introduction

The distinction between genuine and fake smiles can be addressed in functional and morphological terms. Functionally, a genuine smile conveys feelings of enjoyment. In contrast, a fake smile conceals or leaks negative feelings (arrogance/dominance, sarcasm, and contempt, or nervousness, embarrassment, and appeasement, etc.), or it portrays mere social politeness devoid of affect (see Ambadar, Cohn, & Reed, 2009; Niedenthal, Mermillod, Maringer, & Hess, 2010). In fact, whereas genuine smiles promote cooperative behavior in observers (Johnston, Miles, & Macrae, 2010), or cause positive affective priming, that is, facilitation in the processing of subsequent positively valenced words (McLellan, Johnston, Dalrymple-Alford, & Porter, 2010; Miles & Johnston, 2007) and pictures (Calvo, Fernández-Martín, & Nummenmaa, 2012), posed or fake smiles do not.

Morphologically, faces with genuine smiles involve changes in two main areas (Ekman & Friesen, 1976): the mouth, with lip corners turned up and pulled backwards, due to contraction of the zygomaticus major muscle, often with a raised upper lip and exposed teeth; and the eye region, with contraction of the orbicularis oculi muscle, which lifts the cheeks, narrows the eye opening, and produces wrinkles around the eyes (called the Duchenne marker). The orbicularis is less subject to voluntary control than the zygomaticus, and therefore smiles are thought to represent genuine happiness only when the former is also engaged (Frank, Ekman, & Friesen, 1993; Soussignan, 2002). Although such a marker can appear both spontaneously and deliberately (Krumhuber & Manstead, 2009), the lack of it, or its replacement with negatively valenced expressive changes (e.g. a frown), is typically considered indicative of a fake smile.

In the current study, we aimed to investigate the threshold levels in the recognition of fake vs. genuine smiles, as a function of variants of expressive changes in the eye region. Prior research has shown discrimination between genuine and non-genuine smiles (Ambadar et al., 2009; Johnston et al., 2010; Krumhuber & Manstead, 2009; McLellan et al., 2010), but there is also confusion between them (Calvo, Gutiérrez-García, Avero, & Lundqvist, 2013; Okubo, Kobayashi, & Ishikawa, 2012). The degree to which a fake smile can be detected depends on the type of non-happy eyes accompanying a smiling mouth in a face: there is increasing difficulty in identifying a fake smile as such in the presence of angry vs. disgusted vs. sad vs. fearful vs. surprised vs. neutral eyes (Calvo et al., 2012, 2013). As an extension of prior research, the question addressed in the current study was: When, or at which threshold, are different fake smiles detected or distinguished from genuine smiles, depending on the type of non-happy eyes in a face with a smile?

Difficulties in discriminating genuine from fake smiles are hypothesized to be due to two major properties of the smiling mouth: visual saliency and categorical distinctiveness. First, visual saliency has been operationalized as a combination of physical image properties, such as local contrast, spatial orientation, and energy, in computational models of visual attention (Borji & Itti, 2013; Torralba, Oliva, Castelhano, & Henderson, 2006). Calvo and Nummenmaa (2008) found that the smiling mouth is, in fact, more salient than any other region of happy and non-happy faces. Consistently, in expression recognition tasks, the smiling mouth attracts more attention than any other region of emotional faces, as indicated by eye fixations (Beaudry, Roy-Charland, Perron, Cormier, & Tapp, 2014; Bombari et al., 2013; Calvo & Nummenmaa, 2008). Second, distinctiveness refers to the degree to which a facial feature is associated with a particular expressive category and not with others, and thus allows viewers to recognize an expression as belonging to that category. Changes in the eye region are more distinctive of angry and fearful expressions, disgust relies more on the mouth region, and sadness and surprise are similarly recognizable from both regions; the smiling mouth, however, is the most diagnostic feature of happy expressions (Calder, Young, Keane, & Dean, 2000; Calvo, Fernández-Martín, & Nummenmaa, 2014; Nusseck, Cunningham, Wallraven, & Bülthoff, 2008; Smith, Cottrell, Gosselin, & Schyns, 2005). Whereas facial features in the other expressions overlap to some extent across categories, the smile is uniquely associated with facial happiness.
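To make the notion of visual saliency concrete, the following Python sketch computes a crude local-contrast map for a face image. It is only an illustration under simplified assumptions: the models cited above combine contrast, orientation, and color channels across spatial scales, whereas this sketch keeps a single local-contrast channel; the file name, window size, and region coordinates are hypothetical and not taken from the studies cited.

```python
import numpy as np
from PIL import Image
from scipy.ndimage import uniform_filter

def local_contrast_saliency(path, window=16):
    """Crude saliency proxy: local luminance standard deviation.

    Full models combine contrast, orientation, and color across scales;
    this sketch keeps only the local-contrast component for illustration.
    """
    img = np.asarray(Image.open(path).convert("L"), dtype=float) / 255.0
    mean = uniform_filter(img, size=window)
    mean_sq = uniform_filter(img ** 2, size=window)
    std = np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))  # local SD = local contrast
    return std / std.max()  # normalize to [0, 1]

# Hypothetical usage: compare mean saliency in mouth vs. eye regions
# sal = local_contrast_saliency("happy_face.png")
# mouth_sal = sal[220:300, 80:180].mean()   # region coordinates are made up
# eyes_sal  = sal[100:160, 60:200].mean()
```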

Accordingly, the smiling mouth is both salient and distinctive of happy facial expressions (see also Becker & Srinivasan, 2014). These two properties probably account for the recognition advantage of happy faces, which are consistently categorized more accurately and faster than any face without a smile (Calvo & Lundqvist, 2008; Leppänen & Hietanen, 2004; Loughead, Gur, Elliott, & Gur, 2008; Milders, Sahraie, & Logan, 2008; Palermo & Coltheart, 2004; Stone & Valentine, 2007; Tottenham et al., 2009; see a review in Calvo & Nummenmaa, in press). Visual saliency would make the smiling mouth easily accessible to perception, thus securing early attentional processing of such a diagnostic feature of the happiness expression. In contrast, for non-happy faces, the lower saliency of the respective diagnostic features would allow for more attentional competition and interference. In addition, by being a single distinctive feature, the smile can be used as a shortcut for quick and accurate categorization of a face as happy (Adolphs, 2002; Calvo & Beltrán, 2014; Leppänen & Hietanen, 2007). In contrast, the recognition of non-happy expressions would require configural processing of particular combinations of facial features, and thus the process would be slower and more fallible.

From this review, we can conclude that the smile facilitates recognition of happy faces and their discrimination from non-happy faces, because the former have, and the latter lack, such a salient and distinctive feature. However, a smile combined with non-happy eyes (i.e. a fake-smile face) will probably be hard to discriminate from a truly happy face (with happy eyes, i.e. the Duchenne marker), as both share a smiling mouth. Presumably, a salient and distinctive smile initially captures attention and guides categorization similarly for faces with a genuine smile and those with a fake smile. The smiling mouth would overshadow the eye region, and thus override the processing of the eye expression (Calvo, Fernández-Martín, & Nummenmaa, 2013). As a consequence, a smiling mouth is hypothesized to bias viewers towards (wrongly) judging the face as happy even when the eyes are non-happy, or to slow down its rejection as “not happy,” and therefore to hamper accurate discrimination from genuine smiles.

To investigate discrimination thresholds for genuine vs. fake smiles, we used the composite paradigm (Calder et al., 2000; Leppänen & Hietanen, 2007; Tanaka, Kaiser, Butler, & Le Grand, 2012), with blended facial expressions (a smiling mouth but non-happy eyes) as stimuli. Different types of eyes served to establish types of fake smiles.¹ The bottom half of a happy face (with a smiling mouth) was fused with the top half (with the eye region) of an angry, sad, fearful, disgusted, surprised, or neutral face. The same smiling mouth was thus combined with different eye expressions, which allowed us to examine the role of the eyes. For comparison, we also used intact (non-composite), genuinely happy faces (smiling mouth and eyes) and genuinely non-happy faces (e.g. angry mouth and angry eyes). All these face stimuli were presented for recognition, with participants judging whether the faces conveyed happiness or not. Variations of stimulus display time (from 50 to 1,000 ms) served to determine recognition thresholds, that is, the shortest display time at which fake smiles are judged as different from genuine smiles.

2. Method

2.1. Participants

Ninety-six psychology undergraduates (between 18 and 25 years of age; 71 female) at La Laguna University participated in the main experiment. They gave informed consent and received course credit for their participation. The study was approved by the local ethics committee, in accordance with the WMA Helsinki Declaration 2008.

2.2. Stimuli

We selected 168 digitized photographs from the Karolinska Directed Emotional Faces (KDEF; Lundqvist, Flykt, & Öhman, 1998) stimulus set. The face stimuli portrayed 24 individuals (12 females: KDEF No. 01, 02, 07, 11, 14, 19, 20, 22, 26, 29, 31, 35; and 12 males: KDEF No. 03, 05, 06, 10, 11, 12, 22, 23, 24, 25, 31, 35), each posing seven basic expressions (neutral, happiness, anger, disgust, sadness, fear, and surprise). In addition to these faces with genuine expressions taken from the KDEF, we constructed six blended expressions for each of the 24 selected KDEF models, thus producing 144 new face stimuli. To this end, we combined the upper half of each non-happy face and the lower half of the happy face of the same individual, by cutting each face along a horizontal line through the bridge of the nose and smoothing the junction with Adobe® Photoshop® CS5. The following blended expressions were created: neutral eyes + happy smile (NeHa), angry eyes + happy smile (AnHa), disgusted eyes + happy smile (DiHa), fearful eyes + happy smile (FeHa), sad eyes + happy smile (SaHa), and surprised eyes + happy smile (SuHa) (see Figure 1). Non-facial areas (e.g. hair) were removed by applying an ellipsoidal mask. Each face subtended a visual angle of 8.4° (height) × 6.4° (width) at a 60-cm viewing distance.
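The original composites were produced manually in Photoshop, but the splicing-and-masking procedure described above can be sketched in code. The following Python sketch is a minimal approximation under stated assumptions: the file names, the cut row (split_y), the seam-smoothing width, and the ellipse proportions are all hypothetical, and the visual-angle helper simply illustrates the geometry behind the reported 8.4° × 6.4° at 60 cm.

```python
import math
from PIL import Image, ImageDraw, ImageFilter

def make_blend(top_path, bottom_path, split_y):
    """Paste the upper half (eyes) of one face onto the lower half (mouth) of another.

    split_y is the pixel row of the horizontal cut through the nose bridge.
    This only approximates the Photoshop procedure used for the actual stimuli.
    """
    top = Image.open(top_path).convert("RGB")
    bottom = Image.open(bottom_path).convert("RGB")
    w, h = bottom.size
    blend = bottom.copy()
    blend.paste(top.crop((0, 0, w, split_y)), (0, 0))
    # Crude smoothing of the junction (Photoshop blending in the original stimuli)
    seam = blend.crop((0, split_y - 4, w, split_y + 4)).filter(ImageFilter.GaussianBlur(2))
    blend.paste(seam, (0, split_y - 4))
    # Remove non-facial areas (hair, neck) with an ellipsoidal mask
    mask = Image.new("L", (w, h), 0)
    ImageDraw.Draw(mask).ellipse((w * 0.08, h * 0.04, w * 0.92, h * 0.96), fill=255)
    out = Image.new("RGB", (w, h), "black")
    out.paste(blend, (0, 0), mask)
    return out

def visual_angle(size_cm, distance_cm=60.0):
    """Visual angle (degrees) subtended by a stimulus at a given viewing distance."""
    return math.degrees(2 * math.atan(size_cm / (2 * distance_cm)))

# A face ~8.8 cm tall viewed at 60 cm subtends ~8.4 degrees, as reported above.
# Hypothetical usage: make_blend("AF01ANS.png", "AF01HAS.png", split_y=256)
```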

Figure 1. Sample of face stimuli with truly happy, truly non-happy, and blended expressions.

Notes: AnHa: angry eyes + happy mouth (smile; i.e. angry upper part of face with happy lower part of face); DiHa: disgusted eyes + smile; SaHa: sad eyes + smile; FeHa: fearful eyes + smile; SuHa: surprised eyes + smile; NeHa: neutral eyes + smile. For copyright reasons, a different face stimulus is shown in the figure instead of the original KDEF pictures. Permission to use this photo has been provided.

2.3. Genuine vs. non-genuine smiling faces

Genuine smiles were defined as those involving a smiling mouth and happy eyes (intact KDEF, truly happy faces), and fake smiles as those involving the same smiling mouth but non-happy eyes (composite faces with blended expressions). It must, nevertheless, be noted that all the KDEF facial expressions were originally posed by the models. Accordingly, to determine whether our face stimuli conveyed genuine happiness or not, we used three criteria. First, we chose the KDEF models that included AU6 (i.e. the Duchenne marker; cheek raiser by the orbicularis oculi muscle, causing wrinkles and “crow’s feet” around the eyes) in addition to AU12 (i.e. lip corner puller by the zygomaticus major muscle), according to the Facial Action Coding System (Ekman, Friesen, & Hager, 2002). All the faces with a smile also included AU25 and AU26, with an open mouth and exposed teeth. Second, in a different study (see Calvo & Fernández-Martín, 2013), the upper half of the face alone (i.e. the eye region, without the mouth) was presented for 150 ms, and participants were asked to respond whether the eye expression was happy or not. In spite of the short display, the eye region of the truly happy faces was correctly identified as happy by 81% of participants. Third, in another study (Calvo et al., 2012), the truly happy faces produced affective priming of the processing of pleasant scenes, whereas faces with the same smiling mouth but non-happy eyes did not. This allows us to argue that our selected genuine-smile faces convey positive affect, whereas the fake-smile faces do not.

2.4. Apparatus, procedure, and design

The stimuli were presented on a CRT monitor with a 100-Hz refresh rate. E-Prime software controlled stimulus presentation and response collection. Each participant received 312 experimental trials (24 for each of the seven genuine and six blended expression categories), presented in random order across four blocks. Each trial (see Figure 2) began with a central fixation circle for 500 ms, followed by a target face in the center of the screen. The face stimulus was displayed for 50, 100, 250, 500, 750, or 1,000 ms, followed by a mask with a question mark in the middle. The question mark was a prompt for participants to respond whether or not the face looked really happy by pressing one of two keys. The backward mask and the question mark remained on the screen until the participant responded. A Fourier phase-scrambled neutral face was used as the mask on all trials. The intertrial interval was 1,500 ms. The probability of responding that the face was happy, as well as the latency from the onset of the prompt to the onset of the response, was recorded. The experimental conditions were combined in a mixed factorial design, with type of facial expression (13) as a within-subjects factor and stimulus display time (6) as a between-subjects factor. Sixteen participants were randomly assigned to each display time condition.
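As a schematic of the design just described (the experiment itself was run in E-Prime, so this is only an illustrative outline, not the authors’ code), the following Python sketch builds the randomized 312-trial list for one participant in one of the six between-subjects display-time conditions; category labels and the seed are assumptions for illustration.

```python
import random
from dataclasses import dataclass

EXPRESSIONS = [  # 7 genuine + 6 blended categories = 13
    "Neutral", "Happy", "Angry", "Disgust", "Sad", "Fear", "Surprise",
    "NeHa", "AnHa", "DiHa", "FeHa", "SaHa", "SuHa",
]
MODELS = 24                                         # KDEF identities
DISPLAY_TIMES_MS = [50, 100, 250, 500, 750, 1000]   # between-subjects factor

@dataclass
class Trial:
    expression: str
    model: int
    display_ms: int

def build_trial_list(display_ms, seed=None):
    """312 trials: each of the 13 categories x 24 models, in random order."""
    rng = random.Random(seed)
    trials = [Trial(exp, m, display_ms) for exp in EXPRESSIONS for m in range(MODELS)]
    rng.shuffle(trials)
    return trials

# Schematic timing of one trial (values from the Method section):
#   fixation circle            500 ms
#   target face                trial.display_ms (50-1000 ms)
#   scrambled-face mask + "?"  until "happy" / "not happy" keypress
#   intertrial interval        1500 ms
trials = build_trial_list(display_ms=100, seed=1)
assert len(trials) == 312
```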

Figure 2. Sequence of events on each trial and stimulus size.

Note: Permission to use this photo has been provided.

3. Results

3.1. Probability of categorizing faces as “happy”

A 13 (type of expression) × 6 (display time) mixed-design ANOVA with Greenhouse-Geisser correction was conducted on the probability that faces were categorized as “happy” (see Figure 3 and Table 1). The effects of expression, F(12, 1080) = 362.92, p < .0001, ηp2 = .80, and display time, F(5, 90) = 99.25, p < .0001, ηp2 = .85, were qualified by an interaction, F(60, 1080) = 5.03, p < .0001, ηp2 = .20. To decompose the interaction, repeated-measures ANOVAs were conducted for each display time separately, either involving all 13 expressions as a factor or only the truly happy faces and the six blended expressions with a smile but non-happy eyes. In all cases, the statistical effect of expression was followed by post hoc multiple comparisons with Bonferroni corrections (p < .05) to control Type I error inflation. Importantly, to determine the threshold at which discrimination between genuine- and fake-smile expressions occurs, the probability of responding “happy” must be significantly higher for truly happy than for blended expressions at a particular display time or exposure level (see Figure 4). In Appendix A, the uncorrected pairwise contrasts for each exposure level, with the corresponding effect sizes (r and Cohen’s d) between each type of blended expression with a smile and the truly happy faces, are also reported.
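A minimal sketch, assuming a long-format data file with hypothetical column names (participant, display_ms, expression, p_happy), of how an analysis of this kind could be run with the pingouin Python package; the authors’ actual analysis software is not reported, and details such as the Greenhouse-Geisser handling or output column names may differ across package versions (pairwise_tests was named pairwise_ttests in older pingouin releases).

```python
import pandas as pd
import pingouin as pg

# Hypothetical long-format data: one row per participant x expression category,
# with the mean probability of responding "happy" as the dependent variable.
df = pd.read_csv("happy_response_probabilities.csv")

# 13 (expression, within-subjects) x 6 (display time, between-subjects) mixed ANOVA
aov = pg.mixed_anova(data=df, dv="p_happy", within="expression",
                     subject="participant", between="display_ms")
print(aov)

# Decompose the interaction: one repeated-measures ANOVA per display time,
# then Bonferroni-corrected pairwise comparisons against the truly happy faces
for t, sub in df.groupby("display_ms"):
    rm = pg.rm_anova(data=sub, dv="p_happy", within="expression",
                     subject="participant", correction=True)  # sphericity correction
    posthoc = pg.pairwise_tests(data=sub, dv="p_happy", within="expression",
                                subject="participant", padjust="bonf",
                                effsize="cohen")
    happy_contrasts = posthoc[(posthoc["A"] == "Happy") | (posthoc["B"] == "Happy")]
    print(f"{t} ms display condition:")
    print(happy_contrasts)
```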

Figure 3. Mean probability of responding that truly happy faces (with a smiling mouth and happy eyes), truly non-happy expressions (no smiling mouth and non-happy eyes; average scores for all six expressions), and blended expressions (with a smiling mouth but non-happy eyes; average scores for all six expressions) were “happy.”

Note: Across display time conditions, for each expression category, mean scores with a different letter are significantly different (after Bonferroni corrections for multiple comparisons); means sharing a letter are equivalent.

Table 1. Mean probability of responding “Happy” to a face, and RTs for correct responses (“Happy,” to truly happy faces; “Not Happy” to non-happy and to blended expressions), as a function of face category and type of expression (non-happy vs. blended vs. truly happy), averaged across exposure levels

Figure 4. Mean probability that each type of blended expression (with a smiling mouth but non-happy eyes) was judged as “happy” in each display condition, in relation to truly happy faces (with a smiling mouth and happy eyes).

Notes: Circles within the dotted-line boxes indicate that the corresponding blended expressions were discriminated from the truly happy faces. Labels preceded by an asterisk indicate the discrimination threshold for each blended expression. For the meaning of the acronyms (AnHa, etc.), see Figure 1.

In the 50-ms display condition, the statistical effect with all 13 expressions included in the analysis, F(12, 180) = 80.15, p < .0001, ηp2 = .84, revealed significant differences between the truly happy faces and the truly non-happy faces, but responses were equivalent for truly happy faces and all the blended expressions. When only the seven expressions with a smile (i.e. truly happy and blended) were analysed, the effect, F(6, 90) = 5.78, p < .001, ηp2 = .28, did not yield any significant difference after Bonferroni corrections were performed.

In the 100-ms display condition, the probability of responding “happy” was higher for truly happy faces than for AnHa expressions, both when all 13 expressions were included in the analysis, F(12, 180) = 81.92, p < .0001, ηp2 = .85, and when only the seven expressions with a smile were analysed, F(6, 90) = 11.05, p < .0001, ηp2 = .42, whereas there were no significant differences between the truly happy faces and all the other blended expressions after Bonferroni corrections.

In the 250-ms display condition, the probability of responding “happy” was higher for truly happy faces than for AnHa and DiHa expressions when all 13 expressions were analysed, F(12, 180) = 69.72, p < .0001, ηp2 = .82. When only the seven expressions with a smile were analysed, the effect, F(6, 90) = 11.54, p < .0001, ηp2 = .43, revealed that, in addition to AnHa and DiHa faces, also FeHa and SaHa expressions were distinguished from truly happy faces.

In the 500-ms display condition, the probability of responding “happy” was higher for truly happy faces than for AnHa, DiHa faces, as well as for FeHa, and SaHa expressions, both when all 13 expressions were analysed, F(12, 180) = 62.68, p < .0001, ηp2 = .81, and when only the seven expressions with a smile were analysed, F(6, 90) = 13.07, p < .0001, ηp2 = .47.

In the 750-ms display condition, the probability of responding “happy” was higher for truly happy faces than for AnHa, DiHa, FeHa, SaHa, and also SuHa expressions when all 13 expressions were combined, F(12, 180) = 48.73, p < .0001, ηp2 = .77. When only the seven expressions with a smile were analysed, the effect, F(6, 90) = 11.54, p < .0001, ηp2 = .43, revealed that all the blended expressions, including NeHa faces, were significantly different from the truly happy faces.

Finally, in the 1,000-ms display condition, the probability of responding “happy” was higher for truly happy faces than for all the blended expressions, both when all 13 expressions were analysed, F(12, 180) = 50.29, p < .0001, ηp2 = .77, and also when only the seven expressions with a smile were analysed, F(6, 90) = 31.57, p < .0001, ηp2 = .68.

3.2. Reaction times of correct responses

For the reaction times (RTs) of correct responses (i.e. “happy” responses for truly happy faces; “not happy” responses for truly non-happy faces and blended expressions; see Table 1), main effects of type of expression, F(12, 1080) = 61.73, p < .0001, ηp2 = .41, and display time, F(5, 90) = 3.33, p = .008, ηp2 = .16, appeared, with no interaction (F < 1). Latencies were shorter for the truly happy faces and all the truly non-happy faces than for all the blended expressions, which did not differ from each other. Latencies were longer in the 50- and 100-ms display conditions than in the 1,000-ms condition, with no significant differences between the other display conditions (M = 802 vs. 798 vs. 772 vs. 742 vs. 715 vs. 676 ms, for the 50- vs. 100- vs. 250- vs. 500- vs. 750- vs. 1,000-ms displays, respectively).

4. Discussion

Prior research has provided evidence that observers can differentiate genuine from non-genuine smiles, as indicated by judgments of whether they convey happiness or not (Ambadar et al., 2009; Krumhuber & Manstead, 2009), the degree of induced prosocial behavior (Johnston et al., 2010), positive affective priming (Calvo et al., 2012; McLellan et al., 2010; Miles & Johnston, 2007), facial mimicry (Slessor et al., 2014), and brain activity (McLellan, Wilcke, Johnston, Watts, & Miles, 2012). However, discrimination is limited, and non-genuine smiles are often accepted by viewers as showing facial happiness (Calvo et al., 2012; Okubo et al., 2012).

The tendency to judge a smiling-mouth face as happy—even in the absence of happy eyes—is consistent with the fact that the smile is both necessary and sufficient for categorization of faces as happy (Calder et al., 2000; Calvo et al., 2014). This probably occurs because people typically smile when they are happy, and thus the smile is taken as diagnostic of happiness, which leads viewers to infer that the expresser is happy, and makes happy faces the most easily recognized of all basic expressions (Nelson & Russell, 2013; Palermo & Coltheart, 2004; Tottenham et al., 2009; see Calvo & Nummenmaa, in press). However, importantly, diagnostic value can also wrongly bias viewers towards perception of happiness (as people often smile when they are not happy). In fact, a smiling mouth makes non-happy eyes look happy, or happier than when the same eyes appear in a face with no smile (Calvo & Fernández-Martín, 2013; Calvo et al., 2013).

Beyond the evidence regarding the relative difficulty of distinguishing fake from genuine smiles, the current study makes a major contribution: the amount of visual signal required for smile discrimination, that is, the discrimination threshold, varies significantly across smiling faces depending on the type of eye expression. More specifically, a blended expression with a smile but angry eyes was discriminated (as “not happy”) from a truly happy face (as “happy”) at a 100-ms display time (but not earlier); the threshold fell between 250 and 500 ms for expressions with disgusted, fearful, or sad eyes; at 750 ms for surprised eyes; and, finally, faces with a smiling mouth and neutral eyes needed between 750 ms and 1 s of exposure to be judged as different from truly happy faces. In prior research on fake smiles, a self-paced presentation mode with free inspection time was generally used, and only neutral eyes accompanying a smile were included. The present study adds to those using fixed display times (Calvo et al., 2012, Experiments 2–4; Miles & Johnston, 2007, Experiment 2; McLellan et al., 2010, Experiment 2) and variations of the type of eye expression (Calvo et al., 2012, 2013). The systematic combination of six display times and six eye expressions allowed us to establish rather precise differences in discrimination thresholds among fake smiles as a function of type of eyes.

Our finding that faces with a smiling mouth and neutral eyes (NeHa) need between 750 ms and 1 s of exposure to be distinguished from faces with a smiling mouth and happy eyes may seem at odds with those reported by Miles and Johnston (2007) and McLellan et al. (2010), who showed differences between genuine and posed smiles as early as 50 and 100 ms from face onset. Their genuine vs. posed happy face stimuli (albeit taken from a different face set) were comparable to our truly happy vs. NeHa expressions, respectively, in that both AU12 and AU6 were present (truly happy) or AU6 was absent (posed or NeHa). The empirical discrepancy can be explained as a function of task demands. Miles and Johnston (2007) and McLellan et al. (2010) used an implicit measure of emotional processing of facial expressions, that is, affective priming, which requires the extraction of a coarse impression about whether the expression is pleasant or unpleasant. In contrast, we used an explicit expression categorization task, with facial configurations involving multiple combinations of expressive features in the eyes and the mouth, which probably requires more refined discriminations. It is thus understandable that such a categorization task with multiple variants took longer than the dichotomous affective evaluation. In fact, it is possible that implicit affective judgments are faster than explicit verbal responses, and that affective evaluation actually occurs prior to semantic categorization (but see Nummenmaa, Hyönä, & Calvo, 2010). In line with this, Lieberman et al. (2007) found that verbally labeling facial expressions, compared with merely observing emotional faces, diminished activity in the amygdala and other limbic regions of the brain, thus suggesting that explicit categorization inhibits or delays emotional processing. This could explain why affective priming discrimination (McLellan et al., 2010; Miles & Johnston, 2007) was faster than the current explicit expression categorization.

The current findings concerning the different smile discrimination thresholds as a function of eye expression raise the important issue of the underlying factors. Two major factors can be considered: perceptual and affective. Regarding perceptual factors, we have argued that the smiling mouth is visually highly salient, which can explain why it attracts overt attention earlier or more than any other region of happy and non-happy faces (Beaudry et al., 2014; Bombari et al., 2013; Calvo & Nummenmaa, 2008). Due to its saliency, the smile could overshadow other face regions (including the eyes), which would consequently receive less attention. Extending this argument, we could predict that salient non-happy eyes would attract more attention, or resist attentional capture by the smile better, than less salient eyes, and that the blended expression would therefore be identified as “not happy” more easily. If so, saliency of the eye region should facilitate fake smile discrimination. Support for this hypothesis would require that angry eyes (in an otherwise smiling face) be more salient than disgusted, fearful, and sad eyes, which in turn should be more salient than surprised and neutral eyes. Against this prediction, however, prior research has shown that (a) the non-happy eyes are less salient in all the blended expressions with a smile than when the same eyes appear in their respective truly non-happy faces, and, (b) most importantly, there are no differences in eye region saliency among the blended expressions, or between the happy eyes of truly happy faces and the non-happy eyes of blended expressions (Calvo et al., 2012; Calvo et al., 2013; Calvo, Gutiérrez-García, et al., 2013). Thus, the high saliency of the smiling mouth (equivalent for all the blended expressions) probably overrides the contribution of minor saliency differences in the eye region. Accordingly, discrimination differences are unlikely to be due to perceptual factors in the eye region: the smiling mouth is so salient that perceptual factors in the non-happy eye region are minimized.

In contrast, several lines of prior research suggest that affective factors could account for the different discrimination thresholds for fake vs. genuine smiles. First, Calvo, Gutiérrez-García, et al. (2013) conducted a norming study on the same 13 expression categories and stimuli that were used in the current study. The faces were rated for affective valence (from unpleasantness to pleasantness, on a 1–9 scale). Unsurprisingly, the truly happy faces were rated as the most pleasant. Interestingly, pleasantness decreased for NeHa, followed by SuHa, followed by FeHa, SaHa, and DiHa faces, with AnHa faces being rated as the most unpleasant. This corresponds with the current threshold findings, with thresholds decreasing as blended expressions were more distant in valence from the truly happy faces. Second, Calvo et al. (2012) used an affective priming paradigm with faces (neutral, truly happy, NeHa, SaHa, and AnHa) as primes and visual scenes as probes. Relative to neutral primes, truly happy faces produced positive priming earlier (340 ms) than NeHa faces (550 ms; and also SaHa faces, to a lesser extent), whereas AnHa faces never produced such priming. This indicates that affective priming decreased with decreasing affective valence, and that the smile in the most unpleasant expressions (AnHa) did not convey any pleasantness. Third, in an event-related potentials (ERP) study, Calvo, Marrero, and Beltrán (2013) presented truly happy, NeHa, FeHa, and AnHa faces, and participants judged whether the faces looked happy or not. Neural activation patterns showed that AnHa—but not NeHa or FeHa—faces were discriminated from truly happy faces early (175–240 ms), with modulation of the P200 ERP component of emotional processing. This is consistent with the current threshold discrimination advantage for angry eyes, and suggests that affective processing underlies such an advantage.

Finally, there is the issue of the ecological validity of the current face stimuli with blended expressions, and therefore of the generalizability of the results. The composite faces that we used as “fake-smile” expressions are related to a variety of smiling faces in social contexts (see Calvo, Marrero, et al., 2013, Appendix A): AnHa (i.e. angry eyes with a smiling mouth) and DiHa faces look like sarcastic and contemptuous smiles; FeHa and SaHa faces resemble nervous or embarrassed smiles; and NeHa and SuHa faces seem to be variants of polite smiles. Nevertheless, some of these facial configurations might not portray entirely natural expressions. In any case, our systematic manipulation of the eye and mouth regions was a necessary experimental constraint, as we wanted to isolate the role of different types of eye expressions in faces with a smile. A certain degree of ecological validity of the blended expressions thus had to be sacrificed in the pursuit of internal validity.

5. Conclusions

Discrimination thresholds for genuine smiles in truly happy faces vs. fake smiles in blended expressions vary with the type of eye expression: fake smiles in a facial configuration with angry eyes are detected earliest (100-ms display), followed by those with disgusted, fearful, or sad eyes (between 250 and 500 ms), surprised eyes (750 ms), and neutral eyes (between 750 ms and 1 s). Angry eyes are highly resistant to the biasing influence of the smile, whereas neutral eyes are the most susceptible to being biased. Presumably, the high visual saliency and categorical distinctiveness of a smiling mouth—not only in truly happy faces, but also in blended expressions with non-happy eyes—reduce the role of the non-happy eyes and contribute to fake smiles being confused with genuinely happy expressions. Nevertheless, this occurs similarly for all types of eyes, and therefore visual saliency does not account for the discrimination differences. Rather, it is possible that the affective valence differences between the non-happy and the truly happy eyes account for the different smile discrimination thresholds across blended expressions with a fake smile.

Additional information

Funding

This research was supported by the Spanish Ministerio de Ciencia e Innovación [grant number PSI2009-07245] and the Spanish Ministerio de Economía y Competitividad [grant number PSI2014-54720-P].

Notes on contributors

Manuel G. Calvo

Aida Gutiérrez-García has completed a Master’s degree in Psychology and is preparing her PhD under the supervision of Professor Manuel G. Calvo. The authors’ joint interests concern cognition–emotion relationships, especially the processing of emotional facial expressions and visual scenes, on which they have co-authored several recent articles (e.g. in Emotion and the Journal of Nonverbal Behavior). The current study is part of a larger project on facial expression recognition using chronometric, eye-tracking, and electroencephalographic techniques. The reported study extends prior research on the perception of smiles, particularly the role of the saliency and distinctiveness of the smiling mouth and the eyes in correctly identifying genuine happy faces versus fake smiles. A major novelty and contribution of the current approach is the use of perceptual threshold techniques to determine when discrimination occurs.

Notes

1. The smiling mouths were exactly the same in the truly happy faces (with happy eyes) and the six blended expressions with non-happy eyes. When we refer to “fake smiles,” we mean fake-smile faces, because the property of being fake corresponds to the facial expression as a whole. What made a smiling face fake was that, within the whole configuration, a local feature such as the smiling mouth was not congruent with the non-happy eyes. Accordingly, the smile becomes fake within the gestalt configuration, although the smile itself is simply a “genuine smiling mouth” if considered in isolation. Relatedly, it must be noted that the viewers were actually judging the whole face—rather than the smiling mouth itself—as “happy” or “not happy.”

References

  • Adolphs, R. (2002). Recognizing emotion from facial expressions: Psychological and neurological mechanisms. Behavioral and Cognitive Neuroscience Reviews, 1, 21–62. doi:10.1177/1534582302001001003
  • Ambadar, Z., Cohn, J. F., & Reed, L. I. (2009). All smiles are not created equal: Morphology and timing of smiles perceived as amused, polite, and embarrassed/nervous. Journal of Nonverbal Behavior, 33, 17–34. doi:10.1007/s10919-008-0059-5
  • Beaudry, O., Roy-Charland, A., Perron, M., Cormier, I., & Tapp, R. (2014). Featural processing in recognition of emotional facial expressions. Cognition and Emotion, 28, 416–432.
  • Becker, D. V., & Srinivasan, N. (2014). The vividness of the happy face. Current Directions in Psychological Science, 23, 189–194. doi:10.1177/0963721414533702
  • Bombari, D., Schmid, P. C., Schmid-Mast, M., Birri, S., Mast, F. W., & Lobmaier, J. S. (2013). Emotion recognition: The role of featural and configural face information. The Quarterly Journal of Experimental Psychology, 66, 2426–2442. doi:10.1080/17470218.2013.789065
  • Borji, A., & Itti, L. (2013). State-of-the-art in visual attention modeling. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35, 185–207. doi:10.1109/TPAMI.2012.89
  • Calder, A. J., Young, A. W., Keane, J., & Dean, M. (2000). Configural information in facial expression perception. Journal of Experimental Psychology: Human Perception and Performance, 26, 527–551.
  • Calvo, M. G., & Beltrán, D. (2014). Brain lateralization of holistic versus analytic processing of emotional facial expressions. NeuroImage, 92, 237–247. doi:10.1016/j.neuroimage.2014.01.048
  • Calvo, M. G., & Fernández-Martín, A. (2013). Can the eyes reveal a person’s emotions? Biasing role of the mouth expression. Motivation and Emotion, 37, 202–211. doi:10.1007/s11031-012-9298-1
  • Calvo, M. G., Fernández-Martín, A., & Nummenmaa, L. (2012). Perceptual, categorical, and affective processing of ambiguous smiling facial expressions. Cognition, 125, 373–393. doi:10.1016/j.cognition.2012.07.021
  • Calvo, M. G., Fernández-Martín, A., & Nummenmaa, L. (2013). A smile biases the recognition of eye expressions: Configural projection from a salient mouth. The Quarterly Journal of Experimental Psychology, 66, 1159–1181. doi:10.1080/17470218.2012.732586
  • Calvo, M. G., Fernández-Martín, A., & Nummenmaa, L. (2014). Facial expression recognition in peripheral versus central vision: Role of the eyes and the mouth. Psychological Research, 78, 180–195. doi:10.1007/s00426-013-0492-x
  • Calvo, M. G., Gutiérrez-García, A., Avero, P., & Lundqvist, D. (2013). Attentional mechanisms in judging genuine and fake smiles: Eye-movement patterns. Emotion, 13, 792–802. doi:10.1037/a0032317
  • Calvo, M. G., & Lundqvist, D. (2008). Facial expressions of emotion (KDEF): Identification under different display-duration conditions. Behavior Research Methods, 40, 109–115. doi:10.3758/BRM.40.1.109
  • Calvo, M. G., Marrero, H., & Beltrán, D. (2013). When does the brain distinguish between genuine and ambiguous smiles? An ERP study. Brain and Cognition, 81, 237–246. doi:10.1016/j.bandc.2012.10.009
  • Calvo, M. G., & Nummenmaa, L. (2008). Detection of emotional faces: Salient physical features guide effective visual search. Journal of Experimental Psychology: General, 137, 471–494. doi:10.1037/a0012771
  • Calvo, M. G., & Nummenmaa, L. (in press). Perceptual and affective mechanisms in facial expression recognition: An integrative review. Cognition and Emotion.
  • Ekman, P., & Friesen, W. V. (1976). Pictures of facial affect. Palo Alto, CA: Consulting Psychologists Press.
  • Ekman, P., Friesen, W. V., & Hager, J. C. (2002). Facial action coding system. Investigator’s guide. Salt Lake City, UT: Human Face.
  • Frank, M. G., Ekman, P., & Friesen, W. V. (1993). Behavioral markers and recognizability of the smile of enjoyment. Journal of Personality and Social Psychology, 64, 83–93. doi:10.1037/0022-3514.64.1.83
  • Johnston, L., Miles, L., & Macrae, C. (2010). Why are you smiling at me? Social functions of enjoyment and non-enjoyment smiles. British Journal of Social Psychology, 49, 107–127. doi:10.1348/014466609X412476
  • Krumhuber, E. G., & Manstead, A. S. R. (2009). Can Duchenne smiles be feigned? New evidence on felt and false smiles. Emotion, 9, 807–820. doi:10.1037/a0017844
  • Leppänen, J., & Hietanen, J. K. (2004). Positive facial expressions are recognized faster than negative facial expressions, but why? Psychological Research, 69, 22–29. doi:10.1007/s00426-003-0157-2
  • Leppänen, J., & Hietanen, J. K. (2007). Is there more in a happy face than just a big smile? Visual Cognition, 15, 468–490. doi:10.1080/13506280600765333
  • Lieberman, M. D., Eisenberger, N. I., Crockett, M. J., Tom, S. M., Pfeifer, J. H., & Way, B. M. (2007). Putting feelings into words: Affect labeling disrupts amygdala activity in response to affective stimuli. Psychological Science, 18, 421–428. doi:10.1111/j.1467-9280.2007.01916.x
  • Loughead, J. M., Gur, R. C., Elliott, M., & Gur, R. E. (2008). Neural circuitry for accurate identification of facial emotions. Brain Research, 1194, 37–44. doi:10.1016/j.brainres.2007.10.105
  • Lundqvist, D., Flykt, A., & Öhman, A. (1998). The Karolinska Directed Emotional Faces—KDEF. Stockholm: CD-ROM from Department of Clinical Neuroscience, Psychology Section, Karolinska Institutet. ISBN 91-630-7164-9.
  • McLellan, T. L., Johnston, L., Dalrymple-Alford, J., & Porter, R. (2010). Sensitivity to genuine versus posed emotion specified in facial displays. Cognition and Emotion, 24, 1277–1292. doi:10.1080/02699930903306181
  • McLellan, T. L., Wilcke, J. C., Johnston, L., Watts, R., & Miles, L. K. (2012). Sensitivity to posed and genuine displays of happiness and sadness: A fMRI study. Neuroscience Letters, 531, 149–154. doi:10.1016/j.neulet.2012.10.039
  • Milders, M., Sahraie, A., & Logan, S. (2008). Minimum presentation time for masked facial expression discrimination. Cognition and Emotion, 22, 63–82. doi:10.1080/02699930701273849
  • Miles, L. K., & Johnston, L. (2007). Detecting happiness: Perceiver sensitivity to enjoyment and non-enjoyment smiles. Journal of Nonverbal Behavior, 31, 259–275. doi:10.1007/s10919-007-0036-4
  • Nelson, N. L., & Russell, J. A. (2013). Universality revisited. Emotion Review, 5, 8–15. doi:10.1177/1754073912457227
  • Niedenthal, P. M., Mermillod, M., Maringer, M., & Hess, U. (2010). The simulation of smiles (SIMS) model: Embodied simulation and the meaning of facial expression. Behavioral and Brain Sciences, 33, 417–433. doi:10.1017/S0140525X10000865
  • Nummenmaa, L., Hyönä, J., & Calvo, M. G. (2010). Semantic categorization precedes affective evaluation of visual scenes. Journal of Experimental Psychology: General, 139, 222–246. doi:10.1037/a0018858
  • Nusseck, M., Cunningham, D. W., Wallraven, C., & Bülthoff, H. H. (2008). The contribution of different facial regions to the recognition of conversational expressions. Journal of Vision, 8(8), 1–23. doi:10.1167/8.8.1
  • Okubo, M., Kobayashi, A., & Ishikawa, K. (2012). A fake smile thwarts cheater detection. Journal of Nonverbal Behavior, 36, 217–225. doi:10.1007/s10919-012-0134-9
  • Palermo, R., & Coltheart, M. (2004). Photographs of facial expression: Accuracy, response times, and ratings of intensity. Behavior Research Methods, Instruments, & Computers, 36, 634–638. doi:10.3758/BF03206544
  • Slessor, G., Bailey, P. E., Rendell, P. G., Ruffman, T., Henry, J. D., & Miles, L. K. (2014). Examining the time course of young and older adults’ mimicry of enjoyment and nonenjoyment smiles. Emotion, 14, 532–544. doi:10.1037/a0035825
  • Smith, M. L., Cottrell, G., Gosselin, F., & Schyns, P. G. (2005). Transmitting and decoding facial expressions. Psychological Science, 16, 184–189. doi:10.1111/psci.2005.16.issue-3
  • Soussignan, R. (2002). Duchenne smile, emotional experience, and autonomic reactivity: A test of the facial feedback hypothesis. Emotion, 2, 52–74. doi:10.1037/1528-3542.2.1.52
  • Stone, A., & Valentine, T. (2007). Angry and happy faces perceived without awareness: A comparison with the affective impact of masked famous faces. European Journal of Cognitive Psychology, 19, 161–186. doi:10.1080/09541440600616390
  • Tanaka, J. W., Kaiser, M., Butler, S., & Le Grand, R. (2012). Mixed emotions: Holistic and analytic perception of facial expressions. Cognition and Emotion, 26, 961–977. doi:10.1080/02699931.2011.630933
  • Torralba, A., Oliva, A., Castelhano, M. S., & Henderson, J. (2006). Contextual guidance of eye movements and attention in real-world scenes: The role of global features in object search. Psychological Review, 113, 766–786. doi:10.1037/0033-295X.113.4.766
  • Tottenham, N., Tanaka, J. W., Leon, A. C., McCarry, T., Nurse, M., Hare, T. A., … Nelson, C. (2009). The NimStim set of facial expressions: Judgments from untrained research participants. Psychiatry Research, 168, 242–249. doi:10.1016/j.psychres.2008.05.006

Appendix A.

Mean probability of responding “happy” to each type of expression under each display condition, and pairwise contrasts between truly happy faces and each type of blended expression with a smile but non-happy eyes, with the corresponding effect sizes (r and Cohen’s d).
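Since the appendix reports both r and Cohen’s d for each contrast, the two metrics can be interconverted with the standard formulas d = 2r / √(1 − r²) and r = d / √(d² + 4). The small Python sketch below shows this conversion; it assumes the common equal-n, independent-groups approximation, so the exact values the authors report for these within-subjects contrasts may differ slightly.

```python
import math

def d_from_r(r):
    """Cohen's d from correlation effect size r (equal-n approximation)."""
    return 2 * r / math.sqrt(1 - r ** 2)

def r_from_d(d):
    """Correlation effect size r from Cohen's d (equal-n approximation)."""
    return d / math.sqrt(d ** 2 + 4)

# Example: a conventionally 'large' d of 0.8 corresponds to r of about .37
print(round(r_from_d(0.8), 2))  # 0.37
```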