Research Article

Automatic facial coding versus electromyography of mimicked, passive, and inhibited facial response to emotional faces

Pages 874-889 | Received 29 May 2020, Accepted 04 Mar 2021, Published online: 25 Mar 2021
 

ABSTRACT

Decoding someone's facial expressions provides insight into his or her emotional experience. Recently, Automatic Facial Coding (AFC) software has been developed to provide measurements of emotional facial expressions. Previous studies provided initial evidence for the sensitivity of such systems to detect facial responses in study participants. In the present experiment, we set out to generalise these results to affective responses as they can occur in varying social interactions. Thus, we presented facial expressions (happy, neutral, angry) and instructed participants (N = 64) to either actively mimic them (n = 21), to look at them passively (n = 21), or to inhibit their own facial reactions (n = 22). A video stream for AFC and an electromyogram (EMG) of the zygomaticus and corrugator muscles were recorded continuously. In the mimicking condition, both AFC and EMG differentiated well between facial expressions in response to the different emotional pictures. In the passive viewing and inhibition conditions, AFC did not detect changes in facial expressions, whereas EMG remained highly sensitive. Although only EMG is sensitive when participants intend to conceal their facial reactions, these data extend previous findings that Automatic Facial Coding is a promising tool for the detection of intense facial reactions.
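As an illustration of how facial-EMG responses of this kind are typically scored (a generic sketch, not the authors' pipeline): rectified muscle activity in a post-stimulus window is expressed as change from a pre-stimulus baseline, per trial and muscle, before conditions are compared. All names and window lengths below are illustrative assumptions.

```python
import numpy as np

def emg_response(signal, sr, baseline_s=1.0, window_s=3.0):
    """Mean rectified EMG in the stimulus window minus the
    pre-stimulus baseline; the trace is assumed to start at
    baseline onset. Window lengths are illustrative."""
    b = int(baseline_s * sr)
    w = int((baseline_s + window_s) * sr)
    return np.abs(signal[b:w]).mean() - np.abs(signal[:b]).mean()

# Synthetic zygomaticus trace: noise, with extra activity after
# stimulus onset, as expected when a happy face is mimicked.
rng = np.random.default_rng(0)
sr = 1000                                   # Hz, assumed sampling rate
trial = rng.normal(0.0, 1.0, 4 * sr)
trial[sr:] += rng.normal(0.0, 2.0, 3 * sr)  # stimulus-locked activation
print(emg_response(trial, sr) > 0)          # True: clear response
```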

Acknowledgements

Data from this study are permanently stored and available upon request at https://madata.bib.uni-mannheim.de/id/eprint/320.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Correction Statement

This article has been republished with minor changes. These changes do not impact the academic content of the article.

Notes

1 Adequate sample size was determined a priori using G*Power 3.1 (Faul et al., 2007). A sample size of 21 participants is needed to detect strong interaction effects of the factors Viewing Instruction and Emotional Expression in the present study with a maximum alpha error of 5% and a beta error of 10%. Because gender is a large source of variation in facial expressions, we decided to invite only female participants and to use only female models as stimulus material in order to enhance statistical power.
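This kind of a-priori calculation can be approximated in code. A minimal sketch for the within-between interaction of a mixed ANOVA, using the noncentral-F convention G*Power applies to such designs; the effect size (f = 0.40, "strong") and the repeated-measures correlation (rho = 0.5) are assumptions not stated in the note, so the printed N need not match the reported figure exactly.

```python
from scipy import stats

def interaction_power(n_total, k_groups=3, m_measures=3,
                      f=0.40, rho=0.5, alpha=0.05):
    """Power of the Group x Measurement interaction in a mixed ANOVA,
    with G*Power's noncentrality convention for within-between
    interaction effects."""
    df1 = (k_groups - 1) * (m_measures - 1)
    df2 = (n_total - k_groups) * (m_measures - 1)
    lam = f**2 * n_total * m_measures / (1.0 - rho)  # noncentrality
    f_crit = stats.f.ppf(1.0 - alpha, df1, df2)
    return 1.0 - stats.ncf.cdf(f_crit, df1, df2, lam)

# Smallest total N reaching 90% power (beta error <= 10%)
n = 6
while interaction_power(n) < 0.90:
    n += 1
print(n, round(interaction_power(n), 3))
```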

2 KDEF picture numbers: AF01ANS, AF01HAS, AF01NES, AF06ANS, AF06NES, BF06HAS, AF07ANS, AF07HAS, AF07NES, AF08ANS, AF08HAS, AF08NES, AF09HAS, BF09ANS, AF09NES, AF11ANS, AF11HAS, AF11NES, AF13ANS, AF13HAS, AF13NES, AF14ANS, AF14HAS, AF14NES, BF15ANS, AF15HAS, AF15NES, AF16ANS, AF16HAS, AF16NES, AF17NES, BF17ANS, BF17HAS, AF19ANS, AF19HAS, AF19NES, AF20ANS, AF20HAS, AF20NES, AF23ANS, AF23HAS, AF23NES, BF25ANS, AF25HAS, AF25NES, AF27ANS, AF27HAS, AF27NES, AF31ANS, AF31HAS, AF31NES, AF32ANS, AF32HAS, AF32NES, AF35ANS, AF35HAS, AF35NES.
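For readers unfamiliar with the stimulus set, each KDEF ID encodes photo session, model gender and number, emotion, and camera angle (e.g. AF01ANS: session A, Female model 01, ANgry, Straight-on). A small illustrative helper, assuming the set's standard naming scheme; the function itself is not from the paper.

```python
# Emotion codes restricted to the three used in this study.
EMOTIONS = {"AN": "angry", "HA": "happy", "NE": "neutral"}

def parse_kdef(pic_id: str) -> dict:
    """Decode a KDEF picture ID such as 'AF01ANS'."""
    return {
        "session": pic_id[0],            # A or B photo session
        "gender": pic_id[1],             # F = female, M = male
        "model": int(pic_id[2:4]),       # model number
        "emotion": EMOTIONS[pic_id[4:6]],
        "angle": pic_id[6],              # S = straight-on
    }

print(parse_kdef("AF01ANS"))
# {'session': 'A', 'gender': 'F', 'model': 1,
#  'emotion': 'angry', 'angle': 'S'}
```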
