Research Article

Automatic facial coding versus electromyography of mimicked, passive, and inhibited facial response to emotional faces

Pages 874-889 | Received 29 May 2020, Accepted 04 Mar 2021, Published online: 25 Mar 2021
 

ABSTRACT

Decoding someone's facial expressions provides insights into their emotional experience. Recently, Automatic Facial Coding (AFC) software has been developed to measure emotional facial expressions. Previous studies provided initial evidence for the sensitivity of such systems in detecting facial responses in study participants. In the present experiment, we set out to generalise these results to affective responses as they can occur in variable social interactions. We therefore presented facial expressions (happy, neutral, angry) and instructed participants (N = 64) either to actively mimic them (n = 21), to look at them passively (n = 21), or to inhibit their own facial reactions (n = 22). A video stream for AFC and an electromyogram (EMG) of the zygomaticus and corrugator muscles were recorded continuously. In the mimicking condition, both AFC and EMG differentiated well between facial responses to the different emotional pictures. In the passive viewing and inhibition conditions, AFC did not detect changes in facial expressions, whereas EMG remained highly sensitive. Although only EMG is sensitive when participants intend to conceal their facial reactions, these data extend previous findings that Automatic Facial Coding is a promising tool for the detection of intense facial reactions.
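To make the kind of comparison summarised above concrete, the sketch below simulates baseline-corrected zygomaticus EMG scores per trial and averages them per emotion condition. This is a minimal illustration only: the sampling rate, window lengths, effect sizes, and the scoring rule (mean stimulus amplitude minus mean baseline amplitude) are assumptions for demonstration, not the authors' actual recording or analysis pipeline.

```python
# Illustrative only: simulated, baseline-corrected zygomaticus EMG scores per
# emotion condition. All parameter values are assumptions, not study values.
import numpy as np

FS = 1000            # assumed sampling rate (Hz)
BASELINE_S = 1.0     # assumed 1 s pre-stimulus baseline
STIM_S = 2.0         # assumed 2 s stimulus window

def emg_response(trial):
    """Mean rectified EMG in the stimulus window minus the baseline mean."""
    split = int(BASELINE_S * FS)
    return trial[split:].mean() - trial[:split].mean()

rng = np.random.default_rng(0)
# Fake activation gains: zygomaticus (smiling muscle) rises for happy faces.
gain = {"happy": 3.0, "neutral": 0.0, "angry": -0.5}

for condition, g in gain.items():
    scores = []
    for _ in range(20):  # 20 simulated trials per condition
        baseline = rng.normal(5.0, 1.0, int(BASELINE_S * FS))
        stimulus = rng.normal(5.0 + g, 1.0, int(STIM_S * FS))
        scores.append(emg_response(np.concatenate([baseline, stimulus])))
    print(f"{condition:>8}: mean zygomaticus change = {np.mean(scores):+.2f} a.u.")
```

An analogous per-trial score could be computed from AFC output (e.g., a frame-wise "happy" evidence value averaged over the stimulus window), allowing the two measures to be compared condition by condition.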

Acknowledgements

Data from this study are permanently stored and available upon request at https://madata.bib.uni-mannheim.de/id/eprint/320.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Correction Statement

This article has been republished with minor changes. These changes do not impact the academic content of the article.

Notes

1 Adequate sample size was determined a priori using G*Power 3.1 (Faul et al., 2007). A sample size of 21 participants is needed to detect strong interaction effects of the factors Viewing Instruction and Emotional Expression in the present study, with a maximum alpha error of 5% and a beta error of 10%. Because gender is a large source of variation in facial expression, we decided to invite only female participants and to use only female models as stimulus material in order to enhance statistical power.
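As a rough cross-check of this power analysis, one could reproduce it in code. The sketch below uses statsmodels rather than G*Power and approximates the design as a one-way ANOVA with three groups; the mapping of "strong" to Cohen's f = 0.40 and the between-subjects approximation are my assumptions, so the resulting N is larger than the repeated-measures figure reported above.

```python
# Hedged re-computation of the a priori power analysis. G*Power's
# repeated-measures interaction test exploits correlations among repeated
# measures and therefore requires fewer participants than this simpler
# between-subjects approximation; treat the output as an upper bound.
from statsmodels.stats.power import FTestAnovaPower

n_total = FTestAnovaPower().solve_power(
    effect_size=0.40,  # assumed: "strong" interaction ~ Cohen's f = 0.40
    k_groups=3,        # mimic / passive viewing / inhibition
    alpha=0.05,        # maximum alpha error of 5%
    power=0.90,        # beta error of 10%
)
print(f"Approximate required total N (one-way ANOVA): {n_total:.0f}")
```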

2 KDEF picture numbers: AF01ANS, AF01HAS, AF01NES, AF06ANS, AF06NES, BF06HAS, AF07ANS, AF07HAS, AF07NES, AF08ANS, AF08HAS, AF08NES, AF09HAS, BF09ANS, AF09NES, AF11ANS, AF11HAS, AF11NES, AF13ANS, AF13HAS, AF13NES, AF14ANS, AF14HAS, AF14NES, BF15ANS, AF15HAS, AF15NES, AF16ANS, AF16HAS, AF16NES, AF17NES, BF17ANS, BF17HAS, AF19ANS, AF19HAS, AF19NES, AF20NES, AF20HAS, AF20NES, AF23ANS, AF23HAS, AF23NES, BF25ANS, AF25HAS, AF25NES, AF27ANS, AF27HAS, AF27NES, AF31ANS, AF31HAS, AF31NES, AF32ANS, AF32HAS, AF32NES, AF35ANS, AF35HAS, AF35NES.
