Abstract
Definitions and measures of discrete emotions are thorny issues in research on affect, owing to the multitude of emotion theories and approaches to this concept; there is no gold standard for measuring discrete emotions. The utility of measurement methods should be compared across multiple perspectives to allow some degree of cumulativeness. This study reported emotion data collected from a college sample (N = 113), using seven professionally produced videos as stimulus messages. Fear, anger, sadness, disgust, and happiness were measured with the self-report method and by analyzing recorded facial expressions with FaceReader™, a FACS-based computer program that automatically analyzes facial expressions for discrete emotions. Multilevel modeling analyses demonstrated initial evidence of correspondence between emotions measured with the two methods, and of convergent and discriminant validity for FaceReader™ as a method of measuring discrete emotions.
Disclosure Statement
No potential conflict of interest was reported by the author(s).
Correction Statement
This article has been republished with minor changes. These changes do not impact the academic content of the article.
Notes
1. Affectiva™ by Imotions (https://imotions.com/affectiva/) is a similar product.
2. OpenFace (https://cmusatyalab.github.io/openface/) is an open-source tool for facial landmark detection, head pose estimation, facial action unit recognition, and eye-gaze estimation. It does not code facial features into emotion scores. FaceReader and its competitor Affectiva™ both perform 1) facial recognition and 2) coding of facial features into emotions; OpenFace performs only the first. It is therefore not a comparable product.
3. The synopses of the stimulus messages are available: https://doi.org/10.17605/OSF.IO/JP2V3
4. It is an empirical question whether, and how far, it is valid to use only the peak intensity score from machine coding when vast amounts of data are available. We also retrieved the average scores over a 5-second period around the point when a particular emotion peaked. The differences from the peak scores obtained were minimal (at the second decimal place).
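The peak-versus-window comparison described in this note can be sketched as follows. This is a hedged illustration only: the function name, the 30 fps sampling rate, and the synthetic score series are assumptions for demonstration, not the study's actual FaceReader pipeline.

```python
import numpy as np

def peak_vs_window_mean(scores, fps=30, window_sec=5):
    """Return the peak frame-level emotion score and the mean score
    over a window_sec window centered on that peak."""
    scores = np.asarray(scores, dtype=float)
    peak_idx = int(np.argmax(scores))          # frame where the emotion peaks
    half = int(window_sec * fps) // 2          # half-window in frames
    lo = max(0, peak_idx - half)
    hi = min(len(scores), peak_idx + half + 1)
    return scores[peak_idx], scores[lo:hi].mean()

# Synthetic one-minute series at 30 fps: low baseline with a brief spike.
rng = np.random.default_rng(0)
scores = np.clip(rng.normal(0.1, 0.02, 30 * 60), 0.0, 1.0)
scores[900:960] += 0.6                         # a 2-second emotional peak

peak, window_mean = peak_vs_window_mean(scores)
```

Because the window mean averages the spike together with surrounding baseline frames, it is never larger than the peak itself; how much the two diverge depends on how brief the emotional episode is relative to the window.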
5. In the original analyses, sex and age were entered as control covariates. At the request of a reviewer, they were removed from the results reported here.
6. Due to the censored distributions, multilevel tobit regressions were also estimated to predict each of the self-report emotions. Fixed-effects tobit regression coefficients must be decomposed for interpretation (Long & Freese, Citation2006); they are not directly comparable to the parameter estimates from the mixed-effects models reported here. The pattern of significance, however, was identical to the multilevel modeling results.
7. This holds true for other machine coding methods that apply the FACS coding scheme.