Experimental Aging Research
An International Journal Devoted to the Scientific Study of the Aging Process
Volume 47, 2021 - Issue 5
Research Article

Age-related Differences in Expression Recognition of Faces with Direct and Averted Gaze Using Dynamic Stimuli

Pages 451-463 | Received 12 Nov 2019, Accepted 03 Mar 2021, Published online: 27 Mar 2021
 

ABSTRACT

Background: It is still an open question to what extent the ecological validity of face stimuli modulates age-related differences in the recognition of facial expressions, and to what extent eye gaze direction may play a role in this process. The present study tested whether age effects in facial expression recognition, also as a function of eye gaze direction, would be less pronounced in dynamic than in static face displays.

Method: Healthy younger and older adults were asked to recognize emotional expressions of faces with direct or averted eye gaze presented in static and dynamic format.

Results: While there were no overall differences between the age groups in facial expression recognition ability across emotions, analyses of individual expressions showed that age-related differences in the recognition of angry facial expressions were attenuated for dynamic compared to static stimuli.

Conclusion: Our findings indicate a moderating effect of dynamic vs. static stimulus format on age-related deficits in the identification of angry facial expressions, suggesting that older adults may be less disadvantaged when recognizing angry facial expressions in more naturalistic displays. Eye gaze direction did not further modulate this effect. Findings from this study qualify and extend previous research and theory on age-related differences in facial expression recognition and have practical implications for study design, supporting the use of dynamic face stimuli in aging research.

Acknowledgments

The authors are grateful to participants for their time. MZ was supported by the University of Queensland Development Fellowship.

Disclosure statement

Authors declare no conflict of interest.

Supplementary material

Supplemental data for this article can be accessed on the publisher’s website.

Notes

1. As a secondary aim in this study, we were interested in exploring the relationship between theory of mind (ToM) ability and emotion recognition. The rationale for this analysis, along with the methods, results, and discussion, is reported in the supplementary materials.

2. Our sample size was based on sample sizes used in comparable studies in the field of emotion and aging (Campbell, Murray, Atkinson, & Ruffman, 2015; Grainger et al., 2015). Also, although post-hoc power analyses are controversial, we conducted one using G*Power (Faul, Erdfelder, Lang, & Buchner, 2007) to determine whether our design had sufficient power to detect an age interaction on static vs. dynamic facial expression recognition. This analysis showed that for response time, the estimated power was above the recommended level of 0.80 (Cohen, 1988). For accuracy, however, based on the effect size of d = 0.47 that we observed in this study, a sample size of approximately 175 participants would have been needed to obtain power at the recommended level of 0.80. Thus, the observed results pertaining to accuracy must be interpreted with caution.
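For illustration only, a calculation of this kind can be approximated in Python with statsmodels. The sketch below treats the age comparison as a simple two-group independent-samples t-test at d = 0.47; it does not reproduce the authors' G*Power computation, which concerned an interaction term in a mixed design, so the resulting number will not match the figure of approximately 175 quoted above.

    # Approximate sample-size calculation for an independent-samples t-test
    # (a simplification of the mixed-design interaction tested in the study).
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    n_per_group = analysis.solve_power(effect_size=0.47, alpha=0.05, power=0.80)
    print(f"Approx. participants required per age group: {n_per_group:.0f}")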

3. These background measures were used as covariates in additional analyses to determine their impact on the results. None of the covariates showed a significant main effect on, or interaction with, task performance. Results are reported in the Supplementary material.

4. Applying Greenhouse-Geisser correction resulted in the same findings as reported.
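As a hedged illustration (the exact model terms are not reported in this note), a repeated-measures ANOVA with a Greenhouse-Geisser adjusted p-value can be obtained in Python with the pingouin library; the simulated data frame and column names below are hypothetical, not taken from the study.

    # Minimal sketch: one-way repeated-measures ANOVA with Greenhouse-Geisser
    # correction on simulated data (column names are illustrative only).
    import numpy as np
    import pandas as pd
    import pingouin as pg

    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "participant": np.repeat(np.arange(20), 3),
        "emotion": np.tile(["angry", "happy", "sad"], 20),
        "accuracy": rng.uniform(0.5, 1.0, 60),
    })

    # correction=True reports the Greenhouse-Geisser adjusted p-value
    # alongside the uncorrected one.
    aov = pg.rm_anova(dv="accuracy", within="emotion", subject="participant",
                      data=df, correction=True, detailed=True)
    print(aov)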

5. Note that RTs were not overall faster for dynamic than static stimuli and thus cannot have been driven solely by differences in presentation times across the static vs. dynamic task formats, but were (at least in part) a function of the different emotion expression displays. To further address differences in presentation times for static vs. dynamic stimuli, we normalized the RT data by dividing all RTs in the static condition by 4000 ms and all RTs in the dynamic condition by 3200 ms (the duration of image presentation in the static and dynamic formats, respectively). Re-analysis of the data with these normalized scores yielded comparable results. Additional analyses using log-transformed RTs also revealed comparable results.
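The two adjustments described in this note (proportional normalization by presentation duration, and a separate log transformation) can be sketched in a few lines of Python; the trial-level data and column names here are hypothetical, not taken from the study.

    # Hypothetical trial-level RT data; 4000 ms and 3200 ms are the presentation
    # durations for the static and dynamic formats reported in the note.
    import numpy as np
    import pandas as pd

    trials = pd.DataFrame({
        "format": ["static", "dynamic", "static", "dynamic"],
        "rt_ms": [2100.0, 1850.0, 2650.0, 2300.0],
    })
    durations = {"static": 4000.0, "dynamic": 3200.0}

    # RT expressed as a proportion of the available presentation time.
    trials["rt_prop"] = trials["rt_ms"] / trials["format"].map(durations)

    # Separate log transformation of the raw RTs, as in the additional analyses.
    trials["rt_log"] = np.log(trials["rt_ms"])
    print(trials)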
