Research Article

Facial emotion detection in Vestibular Schwannoma patients with and without facial paresis

Pages 317-326 | Received 08 Jan 2020, Published online: 15 Apr 2021

ABSTRACT

This study investigates whether facial emotion detection accuracy differs in patients suffering from Vestibular Schwannoma (VS) as a result of their facial paresis. Forty-four VS patients, half of them with and half of them without a facial paresis, classified pictures of facial expressions as emotional or non-emotional. The visual information in the images was systematically manipulated by adding different levels of visual noise. The study had a mixed design with emotional expression (happy vs. angry) and visual noise level (10% to 80%) as repeated measures, and facial paresis (present vs. absent) and degree of facial dysfunction as between-subjects factors. Emotion detection accuracy declined as visual information declined, an effect that was stronger for angry than for happy expressions. Overall, emotion detection accuracy for happy and angry faces did not differ between VS patients with or without a facial paresis, although exploratory analyses suggest that the ability to recognize emotions in angry facial expressions was slightly more impaired in patients with facial paresis. The findings are discussed in the context of the effects of facial paresis on emotion detection, and the role of facial mimicry in particular as an important mechanism for facial emotion processing and understanding.

Introduction

The human face carries information that provides insight into people’s mental states. One profound piece of information concerns emotional states. Recognizing emotions and simulating them are vital in human social life. Newborns already show a preference for faces and face-like stimuli (Johnson, Citation2005), and they employ facial mimicry at a young age (e.g., Beall et al., Citation2008; Kaiser et al., Citation2017). Furthermore, the ability to simulate and mimic others’ facial expressions is argued to play a fundamental role in detecting and comprehending the emotional states of others (e.g., Bornemann et al., Citation2012; Neal & Chartrand, Citation2011; Niedenthal, Citation2007), and in supporting social interactions and social bonding (e.g., Fischer & Hess, Citation2016; Hess et al., Citation2016).

For most people, facial mimicry and the recognition of others’ facial emotional expressions come rather naturally and automatically (e.g., Dimberg et al., Citation2002). In rare cases, however, emotion processing of facial expressions can be severely impaired as a result of neural disorders (e.g., Cristinzio et al., Citation2007; Kumfor et al., Citation2014). Moreover, mimicking others’ facial expressions might be corrupted by facial dysfunction such as facial paresis. When patients with facial paresis are unable to mimic the expressions of others, then not only the expression of their own emotional state but also their understanding of the emotional states of others could be affected. Consistent with this idea, several studies show that patients with impaired facial functioning generally report lowered social and emotional functioning, as well as poorer mental health (e.g., Blom et al., Citation2020; Fu et al., Citation2011; Guntinas‐Lichius et al., Citation2007; Nellis et al., Citation2017; Van Swearingen et al., Citation1998). The association between facial functioning and socioemotional facets of quality of life thus gives reason to suggest that impaired facial functioning in patients affects specific aspects of their emotion processing.

In the present study, we explored this association between facial paresis and emotion recognition in more detail. Specifically, we examined a group of patients suffering from Vestibular Schwannoma (VS), also referred to as acoustic neuroma (Weinberger & Terris, Citation2015). VSs are benign unilateral tumors whose typical clinical symptoms are hearing loss on the affected side, tinnitus, and disequilibrium (e.g., Johnson & Lalwani, Citation2012; Weinberger & Terris, Citation2015). Because the tumor lies near the facial nerve, surgical removal can injure the facial nerve and hence cause facial paresis. Experimental studies on emotion processing in patients with facial paresis are rare, and the studies that have been conducted report differing results. While some studies suggest that impaired facial muscle movements due to Parkinson’s disease (Argaud et al., Citation2016) or locked-in syndrome (Pistoia et al., Citation2010) negatively affect emotion recognition or decoding accuracy, a study examining patients with Moebius syndrome – a condition resulting in facial paresis and impaired abduction of the eyes – reports no differences in facial emotion recognition accuracy between patients with a facial paresis and healthy controls (Rives Bogart & Matsumoto, Citation2010). This state of affairs raises doubt about the importance of facial functioning and facial mimicry in emotion processing. The present study therefore aims to add to this area of research by examining the ability to accurately detect emotional facial expressions in VS patients who do or do not suffer from facial paresis after surgical removal of the tumor.

Facial mimicry plays a more important role when emotional expressions are more difficult to detect. Impoverished visibility of emotional information diminishes classification accuracy for emotional stimuli, especially angry facial expressions (e.g., Du & Martinez, Citation2011). When visual information is scarce, one would thus expect facial mimicry to play a more crucial role. Consequently, people whose facial mimicry is limited by facial paresis might show reduced emotion detection when visual information is degraded. This concurs with the notion that emotional simulation only occurs when it adds new information about the other person’s emotional state (e.g., Winkielman et al., Citation2008, Citation2015). Consistent with this notion, recent research suggests that facial mimicry plays a more important role in emotion processing when emotions are more subtle, but is less vital when emotional expressions are easier to perceive. For example, participants who received Botox injections in their facial muscles showed reduced emotional experience only for less intense emotional video clips and not for more intense ones (Davis et al., Citation2010). Relatedly, a different study of Botox users (Baumeister et al., Citation2016) found impaired emotion categorization only for less intense emotional stimuli: Botox users rated slightly emotional sentences and facial expressions as less emotional, an effect that did not show for more intense emotional stimuli or for neutral stimuli. Building on these findings, VS patients with facial paresis might show reduced emotion detection accuracy, especially when visual noise gradually degrades the emotional expression in another person’s face.

The present study

The current study involved a unique group of patients with a Vestibular Schwannoma (VS). VS is a relatively rare disease, and facial paresis after surgical removal in VS patients is rarer still. In the Netherlands – where the current study took place – the prevalence was estimated at 15.5 persons per million in 2012 (Kleijwegt et al., Citation2016). Despite the low prevalence of VS, we were able to recruit 44 VS patients: one group that had a unilateral facial paresis after surgical removal of the VS, and another group without a facial paresis, constituting a matched VS control group. This study aims to further our knowledge by focusing on whether the presence of a facial paresis impairs the detection accuracy of emotional facial expressions at different levels of visibility (i.e., eight levels, amounting to 256 trials in total). We deemed it interesting and important to first explore the influence of visibility level for two clearly distinct facial expressions of different valence (happiness and anger). This enabled us to examine the role of visibility in emotion detection in detail, while keeping the task doable for our specific sample of participants by not further increasing the number of trials, and thus the burden placed on participants. Manipulating the precise level of visibility of the images as well as utilizing two types of emotional facial expressions thus provided us with a test of differences in impairment of emotion detection. Based on earlier research, we hypothesize that patients with a facial paresis are less accurate than patients without a facial paresis in detecting emotion in facial expressions, especially when face images are obscured by noise. Because emotion detection is more strongly impaired by visual noise for angry faces than for happy faces, the difference between VS patients with and without facial paresis might be expected to be more pronounced for angry faces.

Materials and methods

Study overview: Detecting emotion in faces with different levels of image visibility

Images of happy and angry facial expressions served as target stimuli, and images of neutral faces served as fillers. As a measure of emotion detection accuracy, participants were asked to indicate whether the image displayed an emotional expression or not (as in the emotion detection tasks employed by Goren & Wilson, Citation2006, and Smith et al., Citation2018). The study had a mixed design with emotional expression (two levels: happy vs. angry) and visual noise level (eight levels: 10%, 20%, 30%, 40%, 50%, 60%, 70%, and 80%) as repeated measures, and facial paresis (present vs. absent) as between-subjects factor.

Participants

Forty-four patients who had been diagnosed with VS participated in this study. Half of them had developed a unilateral facial paresis after removal of the VS; the other half had not had their VS removed and had not developed a facial paresis. A sensitivity analysis in G*Power 3.1 (α = .05, power = 80%, N = 44) for a repeated-measures ANOVA with a within-between interaction (i.e., including patient group as moderator) indicated that our experimental design could detect a small difference between the two groups, effect size f = .12. Due to a technical issue, the data of one participant were not recorded correctly, leaving a final sample of 43 patients. Permission for the study was granted by the Medical Ethics Committee of the Leiden University Medical Center. Participants provided written informed consent in accordance with the principles of the Declaration of Helsinki.
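For readers without access to G*Power, the sensitivity computation can be approximated numerically. The sketch below is ours, not the authors’: it assumes G*Power’s convention for repeated-measures within-between interactions, λ = f² · N · m · ε / (1 − ρ), with the default correlation ρ = .5 and sphericity ε = 1, and treats the 2 × 8 within-subject cells as m = 16 measurements. Under those assumptions it recovers a minimal detectable effect close to the reported f = .12.

```python
# Hedged sketch of a G*Power-style sensitivity analysis (not the authors'
# script). Assumes lambda = f^2 * N * m * eps / (1 - rho), with G*Power's
# defaults rho = .5 and eps = 1; exact values depend on those settings.
from scipy.stats import f as f_dist, ncf
from scipy.optimize import brentq

alpha, power, N, k, m, rho, eps = .05, .80, 44, 2, 16, .5, 1.0
df1 = (k - 1) * (m - 1)          # interaction numerator df
df2 = (N - k) * (m - 1)          # denominator df

def achieved_power(lam):
    crit = f_dist.ppf(1 - alpha, df1, df2)   # critical F under H0
    return 1 - ncf.cdf(crit, df1, df2, lam)  # power under H1

lam = brentq(lambda l: achieved_power(l) - power, 1e-6, 200)
f_detectable = (lam * (1 - rho) / (N * m * eps)) ** 0.5
print(f"minimal detectable effect size f = {f_detectable:.2f}")  # ~ .11-.12
```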

Distinctive properties of the participants

Twenty-one patients with a one-sided facial paresis participated (13 females, Mage = 54.00, SDage = 7.61; time since diagnosis M = 6.99 years, SD = 5.60). Of these, 13 pareses were left-sided and 8 right-sided. Twenty-two participants (14 females, Mage = 55.82, SDage = 7.13; time since diagnosis M = 5.76 years, SD = 3.82) with a VS but without facial paresis served as a control group. Of this group, 14 had a VS on the left side and 8 on the right side. The participants in the VS control group were matched as closely as possible to the facial paresis VS group on gender, age, side of the VS, and time elapsed since diagnosis. Facial functioning was graded with the House-Brackmann grading scale (HBG) (House, Citation1985), currently the most widely used and accepted scale to document the degree of facial paresis (Zandian et al., Citation2014). This scale comprises six levels of facial nerve function, with an HBG of 1 representing normal facial function and an HBG of 6 representing complete paralysis. HBG was scored both by the participants themselves and by the experimenter. Inter-rater reliability was high (r = .86, p < .001); hence, the average of the two scores was used in the analyses. As expected, patients with a facial paresis had a substantially higher average HBG score (M = 3.90, SD = 1.15) than patients without a facial paresis (M = 1.27, SD = .55), t(42) = 9.82, p < .001, d = 2.99, 95% CI [2.11; 3.86].

Participant recruitment and response rate

Some participants applied in response to a call for participants on an online forum for people with VS (the Dutch website for vestibular schwannomas: www.brughoektumor.nl). The remaining participants were invited by a letter from their treating physician explaining the study. In total, 44 of the 62 people who were approached via the forum or by their physician participated (71%), including the participant whose data were not recorded correctly.

Stimuli

Facial expression stimuli were created using images of four males and four females (from the Radboud Faces Database; Langner et al., Citation2010) portraying a happy, angry, or neutral expression. For each unique face, eight versions were created by introducing different levels of noise in Photoshop (10%, 20%, 30%, 40%, 50%, 60%, 70%, and 80% noise); see Figure 1 for an example. Faces were presented in grayscale against a gray background. The stimuli thus consisted of 64 happy facial expressions (8 per noise level), 64 angry facial expressions (8 per noise level), and 64 neutral facial expressions (8 per noise level). Only the emotional facial expressions were target stimuli; the neutral stimuli served as fillers.

Figure 1. Examples of a male happy face and a male angry face, at image noise levels from 10% to 80%

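The noise manipulation itself was done in Photoshop; the exact filter settings are not reported. Purely as an illustration, the following sketch produces comparable stimuli by replacing a given fraction of the pixels of a grayscale image with uniform random values (one plausible reading of “X% noise”); the file names are hypothetical.

```python
# A sketch (not the authors' Photoshop pipeline): replace a fraction of
# pixels with uniform random gray values to create 10%-80% noise versions.
# "face.png" is a hypothetical input file.
import numpy as np
from PIL import Image

rng = np.random.default_rng(seed=1)

def add_pixel_noise(img: Image.Image, fraction: float) -> Image.Image:
    """Replace `fraction` of the pixels with random grayscale values."""
    gray = np.asarray(img.convert("L"), dtype=np.uint8).copy()
    mask = rng.random(gray.shape) < fraction          # pixels to corrupt
    gray[mask] = rng.integers(0, 256, mask.sum())     # uniform noise
    return Image.fromarray(gray)

face = Image.open("face.png")                          # hypothetical stimulus
for pct in range(10, 90, 10):                          # 10% ... 80%
    add_pixel_noise(face, pct / 100).save(f"face_noise_{pct}.png")
```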

Procedure

On the day of the appointment, the experimenter visited the participants at their home, where the experiment was conducted on a laptop. Participants were told that they would see pictures of emotional and neutral facial expressions. They were instructed to indicate whether the face was emotional or non-emotional (neutral) by pressing the corresponding key on the keyboard. A response thus counted as correct when a happy or angry face was classified as emotional, and as incorrect when it was classified as non-emotional. It was further emphasized that they should respond both accurately and quickly.

The experiment started with 16 practice trials in which patients received on-screen feedback on their performance after each trial. In the actual experiment, no feedback was provided. Half of the trials were neutral faces; the other half were expressions of emotion (happy or angry). To keep the number of emotional and non-emotional faces equal without adding pictures of new actors, each neutral face was used twice. In total, the experiment consisted of 256 trials, presented in four blocks of 64 trials each. Emotional and non-emotional faces were presented randomly without replacement. After each block, participants could take a short break if needed. Each trial started with a blank screen (1000 ms), after which a fixation point appeared (for a randomized duration of 600, 700, 800, 900, or 1000 ms), followed by the image of the face (presented until it was classified as emotional or neutral), after which the next trial started with a blank screen. The experiment was self-paced. Accuracy (in percent) of emotion detection was the dependent variable.
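The presentation software is not stated in the paper. The sketch below therefore encodes only the trial-sequencing logic described above (256 trials, four blocks of 64, each neutral face used twice, jittered fixation durations); the stimulus identifiers and the presentation layer itself are placeholders.

```python
# Sketch of the trial structure described above (sequencing logic only;
# stimulus names and the presentation layer are hypothetical).
import random

ACTORS = [f"actor{i}" for i in range(1, 9)]             # 4 male, 4 female
NOISE = range(10, 90, 10)                               # 10% ... 80%

emotional = [(a, e, n) for a in ACTORS for e in ("happy", "angry") for n in NOISE]
neutral = [(a, "neutral", n) for a in ACTORS for n in NOISE] * 2  # each used twice
trials = emotional + neutral                            # 128 + 128 = 256 trials

random.shuffle(trials)                                  # random, without replacement
blocks = [trials[i:i + 64] for i in range(0, 256, 64)]  # four blocks of 64

for block in blocks:
    for actor, expression, noise in block:
        # blank screen: 1000 ms, then fixation for a jittered duration
        fixation_ms = random.choice([600, 700, 800, 900, 1000])
        # present the image until an 'emotional' vs. 'non-emotional' keypress,
        # then score the response (correct iff expression != "neutral")
        ...
```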

Statistical analyses

We first test the hypothesis that VS patients with and without facial paresis differ in emotion detection accuracy depending on the visibility of images of happy and angry facial expressions, using frequentist statistical tests in the form of ANOVAs and t-tests. Next, we perform an ANOVA with patients’ HBG as covariate to provide a more thorough view of the relationship between the degree of facial dysfunction in VS patients and their emotion detection accuracy. In addition to the frequentist tests, Bayesian analyses are performed to quantify the evidence for the hypotheses under investigation given the data. Bayes factors (BF) are reported, with a larger BF representing more evidence in the data for the hypothesis under consideration. Whenever sphericity was violated, Greenhouse-Geisser corrections were applied and adjusted degrees of freedom are reported.

Finally, to gain more specific insight into potential differences between the two VS patient groups, we conduct two exploratory analyses (a code sketch of the analysis plan follows below). First, we examine the pattern of emotion detection accuracy across the noise levels by inspecting the linear and quadratic trends for happy vs. angry faces between the two groups by means of a repeated-measures ANOVA. This analysis should provide a deeper understanding of the pattern of the effect of visual noise on emotion detection accuracy tested in the main analyses. Second, we use t-tests to examine whether emotion detection accuracy differs from chance (50%) for the two patient groups at each level of noise, separately for happy vs. angry faces.
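As an illustration of the frequentist part of this plan, a sketch using the pingouin package follows. It is not the authors’ script: the Bayesian inclusion factors reported below are typically computed in software such as JASP, pingouin’s mixed_anova supports only one within-subject factor (so the sketch analyzes each emotion separately, a simplification of the full 2 × 8 × 2 design), and the data file and column names are hypothetical.

```python
# Sketch (not the authors' analysis script): mixed ANOVA with pingouin.
# Hypothetical long-format data with columns: pid, group, emotion, noise, acc.
import pandas as pd
import pingouin as pg

df = pd.read_csv("accuracy_long.csv")   # hypothetical data file

for emotion, sub in df.groupby("emotion"):
    # noise (within) x group (between), separately per emotion
    aov = pg.mixed_anova(data=sub, dv="acc", within="noise",
                         subject="pid", between="group")
    print(emotion, "\n", aov[["Source", "F", "p-unc", "np2"]])
```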

Results

Emotion detection accuracy and presence vs. absence of facial paresis

To test whether facial paresis plays a role in emotion detection accuracy, a repeated-measures analysis was conducted with noise level of the image (10%–80%, in steps of 10%) and type of emotional expression (happy vs. angry) as within-subject factors, and facial paresis (present vs. absent) as between-subjects factor (see Notes 1 and 2).

First, this analysis yielded a large main effect of noise level: emotion detection accuracy decreased as the visual noise level of the image increased, F(3.21, 134.91) = 99.30, p < .001, ηp2 = .70. Moreover, a main effect of type of emotional expression was found, F(1,42) = 107.86, p < .001, ηp2 = .72: emotion detection accuracy was higher for expressions of happiness (M = 98.51, SD = 3.41) than for expressions of anger (M = 74.22, SD = 14.27). These two main effects were qualified by a strong interaction between the noise level of the image and the type of emotional expression, F(2.81, 115.28) = 65.36, p < .001, ηp2 = .61. As can be seen in Figure 2, accuracy decreased with increasing noise levels for both expressions, but the decline was only slight for happy expressions and much sharper for angry expressions. In line with this, a Bayesian analysis of variance indicated that the model including the interaction between visual noise and type of emotional expression explained the data very well compared to matched models not including this effect (BFincl = 1.263e+53).

Figure 2. The effect of visual noise per type of emotional expression for patients with Vestibular Schwannoma with and without facial paresis


Importantly, patients with and without facial paresis did not differ in their overall emotion detection accuracy, F(1,41) = 0.94, p = .337, ηp2 = .02. Furthermore, there was neither an interaction between noise level and facial paresis, F(7,287) = 0.77, p = .611, ηp2 = .02, nor between emotional expression and facial paresis, F(1,41) = 1.70, p = .200, ηp2 = .04. Lastly, the interaction described above between noise level and the emotion depicted in the image was not qualified by a further interaction with facial paresis, F(7,287) = 0.80, p = .591, ηp2 = .02. In line with this, a Bayesian analysis of variance indicated that neither the model including a main effect of facial paresis (BFincl = 0.27), nor the model including the interaction between visual noise and facial paresis (BFincl = 0.01), nor the model including the interaction between visual noise, type of emotional expression, and facial paresis (BFincl = 0.03) explained the data well compared to matched models not including these effects.

We conclude that these findings reveal strong classic effects on emotion detection accuracy for the current task, including the differential effects of happiness and anger. However, VS patients with and without facial paresis did not differ in their emotion detection accuracy.

Emotion detection accuracy and degree of facial dysfunction (HBG)

To examine whether the degree of facial dysfunction (HBG) was associated with emotion detection accuracy, a repeated-measures analysis was conducted with noise level of the image (10%–80%, in steps of 10%) and type of emotional expression (happy vs. angry) as within-subject factors, and the degree of facial dysfunction as measured by the HBG as a covariate.

HBG was not related to overall emotion detection accuracy, F(1,41) = 0.18, p = .672, ηp2 = .00. Second, there was no interaction between noise level and HBG, F(3.15, 129.23) = 0.58, p = .773, ηp2 = .01, nor between emotional expression and HBG, F(1,41) = 0.77, p = .387, ηp2 = .02. Lastly, the interaction between noise level and emotion was not qualified by a further interaction with HBG, F(7,287) = 0.90, p = .511, ηp2 = .02.

In line with this, a Bayesian analysis of variance indicated that the model including the interaction between visual noise and type of emotional expression again best explained the data compared to matched models not including this effect (BFincl = 7.311e+52). Neither the model including the main effect of HBG (BFincl = 0.19) nor the model including the interaction between visual noise and HBG (BFincl = 6.97e-5) or the model including the interaction between visual noise, type of emotional expression, and HBG (BFincl = 0.00) explained the data well compared to matched models not including these effects.

The degree of facial dysfunction was thus not associated with patients’ emotion detection accuracy.

Exploratory analysis 1: Emotion detection accuracy patterns

For exploratory purposes, we conducted two additional analyses. First, inspection of the pattern of emotion detection accuracy suggested specific trends across the noise levels. We therefore examined whether the linear and quadratic trends for happy vs. angry faces differed between the two groups. Accordingly, a repeated-measures ANOVA was conducted with noise level of the image (10%–80%, in steps of 10%) and type of emotional expression (happy vs. angry) as within-subject factors, and facial paresis (present vs. absent) as between-subjects factor. This analysis showed a large linear main effect of visual noise level, F(1,41) = 187.81, p < .001, ηp2 = .82; the quadratic effect was also significant but smaller, F(1,41) = 77.75, p < .001, ηp2 = .66.

The linear main effect of visual noise level interacted strongly with type of emotional expression, F(1,41) = 122.37, p < .001, ηp2 = .75, while the quadratic effect of visual noise level interacted with type of emotional expression to a lesser degree, F(1,41) = 30.15, p < .001, ηp2 = .42. Thus, while accuracy decreased with increasing noise levels for both expressions, detection accuracy declined only slightly for happy expressions and sharply for angry expressions. Lastly, the interaction between the linear effect of visual noise level and the emotion depicted in the image was not qualified by a further three-way interaction with facial paresis, F(1,41) = 0.03, p = .869, ηp2 = .00. As could be expected, no such interaction emerged for the quadratic effect of visual noise level either, F(1,41) = 0.58, p = .449, ηp2 = .01.

In short, the linear effect of visual noise described the observed emotion detection accuracy pattern well. With increasing levels of visual noise, overall emotion detection accuracy declined in a linear fashion, with detection accuracy declining strongly for expressions of anger and only slightly for expressions of happiness.
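One way to approximate such a trend decomposition outside the ANOVA framework is to fit linear and quadratic components to each participant’s accuracy-by-noise curve and compare the resulting coefficients between groups. The sketch below does exactly that, reusing the hypothetical long-format data frame from the earlier sketch; it is an approximation, not the authors’ exact contrast analysis.

```python
# Sketch of a per-subject trend decomposition (an approximation of the
# repeated-measures trend contrasts, not the authors' exact analysis).
import numpy as np
import pandas as pd
from scipy.stats import ttest_ind

df = pd.read_csv("accuracy_long.csv")        # hypothetical, as above

def trend_coefs(sub: pd.DataFrame) -> pd.Series:
    """Fit a quadratic polynomial to accuracy over noise for one subject."""
    sub = sub.sort_values("noise")
    quad, lin, _ = np.polyfit(sub["noise"], sub["acc"], deg=2)
    return pd.Series({"linear": lin, "quadratic": quad})

coefs = (df[df["emotion"] == "angry"]
         .groupby(["pid", "group"]).apply(trend_coefs).reset_index())

g1 = coefs[coefs["group"] == "paresis"]
g2 = coefs[coefs["group"] == "control"]
print(ttest_ind(g1["linear"], g2["linear"]))  # do linear slopes differ by group?
```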

Exploratory analysis 2: Emotion detection accuracy at chance level

Second, we analyzed whether emotion detection differed from chance (defined as a score of 50%) at each level of noise, separately for happy vs. angry faces. For each group, t-tests compared the score at each of the eight noise levels against chance, per emotion. As can be seen in Table 1, for both VS patients with and without facial paresis, recognition of emotion in facial expressions of happiness was generally very high and above chance at all visual noise levels. VS patients with facial paresis even showed 100% classification accuracy at three noise levels, and VS patients without facial paresis at one noise level.

Table 1. Facial paresis (present vs. absent) and classification accuracy of happy facial expressions testing difference from chance (50%)

A different pattern emerged for angry facial expressions. As can be seen in Table 2, emotion detection accuracy was significantly higher than chance at visual noise levels of 10% to 50% for both groups, whereas VS patients with and without facial paresis performed at, near, or below chance level (i.e., they remained uncertain whether a face showed emotion or not) at visual noise levels of 60% and higher. This suggests that the emotion displayed in expressions of anger is perceivable up to 50% visual noise. Interestingly, participants with a facial paresis appeared to be less accurate than participants without a facial paresis in detecting emotion in angry expressions when the emotion was perceivable (i.e., at visual noise levels of 10%–50%). We explored this apparent difference by means of independent-samples t-tests.

Table 2. Facial paresis (present vs. absent) and classification accuracy of angry facial expressions testing difference from chance (50%)

The average detection accuracy was calculated for angry expressions in which the emotion is perceivable (visual noise levels of 10%–50%) and for angry expressions in which it is not (visual noise levels of 60%–80%). Though participants without a facial paresis appeared to be more accurate (M = 91.4%, SD = 7.0) than participants with a facial paresis (M = 85.8%, SD = 14.4) in detecting emotion in angry expressions when the emotion was perceivable (visual noise levels of 10%–50%), this difference did not reach significance, t(42) = 1.74, p = .086, d = 0.53. A Bayesian independent-samples t-test showed that the data are 1.91 times more likely (BF+0 = 1.91) under a difference between the groups than under no difference.

Furthermore, participants without a facial paresis showed similar accuracy (M = 50.9%, SD = 26.2) to participants with a facial paresis (M = 48.4%, SD = 20.9) for angry expressions in which the emotion is not perceivable (visual noise levels of 60%–80%), t(42) = 0.35, p = .731, d = 0.11. A Bayesian independent-samples t-test showed that the data are indeed 3.20 times more likely (BF01 = 3.20) under a null effect than under a difference between the two groups for these stimuli.

Further inspection of Table 2 suggests that the differences mainly emerged at the 10%, 20%, and 30% levels of visual noise. We therefore conducted independent-samples t-tests comparing detection accuracy for angry facial expressions between the two participant groups at these three levels. The only significant difference between the two groups emerged for expressions with a visual noise level of 30%: participants with facial paresis were less accurate (88.1%) than those without facial paresis (96.7%), t(42) = 2.34, p = .028. No significant differences emerged at the 10% and 20% visual noise levels (both ps > .174).

The data pattern thus suggests that facial functioning may be relevant for detecting emotion in angry facial expressions when the expression is perceivable in the first place.
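The exploratory tests in this section could be reproduced along the following lines (again a sketch over the hypothetical data frame, not the authors’ script); pingouin’s ttest conveniently reports a Bayes factor (BF10) alongside the frequentist statistics.

```python
# Sketch of the exploratory tests (hypothetical column names as before):
# one-sample t-tests against chance (50%) per noise level, and a
# between-group comparison with a Bayes factor via pingouin.
import pandas as pd
import pingouin as pg
from scipy.stats import ttest_1samp

df = pd.read_csv("accuracy_long.csv")
angry = df[df["emotion"] == "angry"]

# accuracy vs. chance at each noise level, per group
for (group, noise), sub in angry.groupby(["group", "noise"]):
    t, p = ttest_1samp(sub["acc"], 50)
    print(f"{group} noise={noise}%: t={t:.2f}, p={p:.3f}")

# group difference for the perceivable range (10-50% noise), with BF10
perceivable = (angry[angry["noise"] <= 50]
               .groupby(["pid", "group"])["acc"].mean().reset_index())
res = pg.ttest(perceivable.loc[perceivable["group"] == "control", "acc"],
               perceivable.loc[perceivable["group"] == "paresis", "acc"])
print(res[["T", "p-val", "cohen-d", "BF10"]])
```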

Discussion

The goal of the present study was to examine the accuracy of detecting emotion in facial expressions at different levels of visibility, and potential disturbances of such emotion detection, in patients with a Vestibular Schwannoma (VS) with a facial paresis compared to VS patients without a facial paresis. Half of the VS patient sample had a unilateral facial paresis; the other half did not and thus served as a matched VS control group.

First, emotion detection accuracy diminished in a linear fashion with increasing visual noise levels for facial emotional expressions. This effect differed between expressions of happiness and anger: the accuracy of detecting angry facial expressions was affected more strongly by the visual noise level of the images than the accuracy of detecting happy facial expressions, showing a much sharper decline with increasing visual noise. These findings are in line with previous research on facial emotion processing, indicating that individuals are better at recognizing happiness than anger (e.g., Montagne et al., Citation2007), and that the amount of visual information available influences emotion detection, more so for angry than for happy facial expressions (e.g., Du & Martinez, Citation2011).

Furthermore, none of the effects described above was associated with the mere presence or absence of a facial paresis, nor were they related to the specific degree of facial functioning of the VS patients. All in all, VS patients with and without a facial paresis showed a similar pattern of emotion detection accuracy for facial expressions of happiness and anger, even when the images were highly impoverished. VS patients with and without facial paresis thus do not appear to differ in this facet of emotion processing.

Exploratory analyses suggest that facial paresis could possibly affect the processing of angry expressions. That is, patients with facial paresis seemed somewhat less accurate than patients without facial paresis in detecting emotion in angry expressions in which the emotion was still perceivable. Nevertheless, only for angry facial expressions with a visual noise level of 30% did this apparent difference between patients with and without facial paresis reach statistical significance. Emotion detection of angry facial expressions was at chance level at higher levels of visual noise, suggesting that in these cases the expressions were no longer perceivable for either patient group.

A possible explanation for these exploratory findings is twofold. First, the results reflect the previously noted difference in the impact of the amount of visual information on emotion detection in angry compared to happy facial expressions (Du & Martinez, Citation2011). This difference could increase the relevance of facial mimicry for detecting anger under differing levels of visual information, since facial mimicry is assumed to aid emotion understanding (e.g., Niedenthal, Citation2007) and is thought to be less relevant when emotion understanding is rather straightforward (e.g., Arnold & Winkielman, Citation2020). Second, it suggests that facial functioning – and, as such, the possibility of facial mimicry – might only be relevant when individuals can overtly perceive emotion in a facial expression to begin with. Under such circumstances, individuals with impaired facial functioning show somewhat reduced emotion detection for angry expressions. It follows that when the emotion in a facial expression is not perceivable to begin with, there is nothing to facially mimic either, resulting in similar emotion detection accuracy for individuals with and without facial paresis.

Whereas the exploratory analyses showed a few subtle differences between the two patient groups, we wish to stress that our main findings do not provide clear evidence that facial dysfunction hampers facial emotion processing. These general findings thus suggest that facial mimicry does not play a critical role in detecting emotion in facial expressions of anger and happiness, even when the images are highly impoverished. These results are in line with previous studies that found no direct association between emotion processing of facial expressions and impaired facial functioning in facial paresis patients (Rives Bogart & Matsumoto, Citation2010) or facial muscle activity in healthy participants (Blom et al., Citation2020).

Considering that the current study does provide a strong replication of the impact of reduced visual information on emotion perception in happy and angry facial expressions (e.g., Du & Martinez, Citation2011), the absence of strong facial paresis effects suggests that other processes play a more important role here. For example, a recent study showed that recognition of emotional facial expressions can be achieved via two routes, namely by relying on visual information or on (sensori)motor information, such as facial mimicry (De La Rosa et al., Citation2018). Considering our findings in light of that study would suggest that participants relied on visual information even when this information was highly reduced, rather than relying on sensorimotor information processing involved in simulating the facial expressions of others.

We would like to note that in the current study, we focused on examining the influence of facial functioning and visibility on emotion detection in two types of emotional facial expressions (happiness and anger). Future research could examine these factors further by broadening the types of emotional expressions. Moreover, broadening the type of measures used, for instance by asking individuals to classify emotional facial expressions with respect to the emotional intention of the other, could also be intriguing, considering that previous studies report increased relevance of facial mimicry when individuals are asked to understand the emotion in more detail (e.g., Hess & Fischer, Citation2014; Seibt et al., Citation2015).

In closing, the present experiment is one of the few experimental studies focusing on emotion processing in people with a facial paresis, and one of the first focusing on emotion processing in patients with a VS in particular. Manipulating the precise level of visibility of the image as well as utilizing two types of emotional facial expressions provided us with specific information about possible differences in impairment of emotion detection. Future research could explore emotion perception in facial paresis patients further, for example by using dynamic emotional stimuli (as addressed, for example, in Carr et al., Citation2014). This would provide more understanding of the relationship between facial dysfunction and emotion processing. Increased knowledge of emotion processing in VS patients with and without facial paresis is not only relevant for theory building on emotion processing; it is also important for informing health practitioners concerning the care they could provide facial paresis patients regarding their wellbeing.

Acknowledgments

This work was supported by the Netherlands Organization for Scientific Research Social Sciences under Grant number 464-10-010 (ORA Reference No. ORA-10-108) awarded to the last author. Permission for the study was granted by the Medical Ethics Committee of the Leiden University Medical Center. Protocol number: NL40223.058.12, principal investigator C.C. Wever.

Disclosure statement

No potential conflict of interest was reported by the authors.

Data availability statement

The data set is stored and can be requested from the corresponding author.

Additional information

Funding

This work was supported by the Netherlands Organization for Scientific Research Social Sciences under Grant number 464-10-010 [ORA Reference No. ORA-10-108].

Notes

1 The relevant effects did not differ based on participants’ sex.

2 Though emotion detection accuracy was our measure of interest, response times were also recorded. Average response times for happy and angry emotional expressions did not differ between participants with and without facial paresis.

References