Short Report

Can Preschoolers Recognize the Facial Expressions of People Wearing Masks and Sunglasses? Effects of Adding Voice Information

ABSTRACT

Early childhood is marked by significant developmental changes in the ability to recognize facial expressions. However, since the COVID-19 outbreak, people have been wearing masks more frequently during social interactions, which may hamper the recognition of facial expressions. This study examines whether preschoolers recognize facial expressions better on uncovered faces than on faces partially covered by masks or sunglasses, and whether recognition improves when voice information is added. The participants were 27 Japanese preschoolers (11 boys and 16 girls) aged 3–5 years. They were presented with two groups of facial expression stimuli: uncovered faces and faces partially covered by a mask or sunglasses. A two-factor within-participant analysis of variance was conducted on the number of correct facial-expression responses in each trial. The children recognized the expressions of uncovered faces significantly better than those of faces with masks or sunglasses. When voice information was added, they recognized facial expressions equally well in all conditions. Therefore, partially covered faces interfere with preschoolers’ recognition of facial expressions, and voice information aids facial expression recognition.

Introduction

In social interactions, people often try to read others’ thoughts and emotions, and convey their feelings to communicate smoothly. Facial expressions are particularly easy for children to read; they develop the ability to read these cues earlier than voices, situations, or body movements (Nelson & Russell, Citation2011; Quam & Swingley, Citation2012). Approximately half of all children aged 2 years can recognize the facial expressions of happiness, sadness, anger, and fear, and most can do so by the age of 5 years (Pons, Harris, & de Rosnay, Citation2004). Thus, early childhood (mainly 2–5 years of age) is marked by significant developmental changes in the ability to recognize facial expressions (Widen & Russell, Citation2008).

However, we cannot always see another person’s entire face. Since the COVID-19 outbreak, people have been wearing masks more frequently. Although mask-wearing is effective in preventing the transmission of infectious diseases, masks may hamper the recognition of facial expressions. When worn properly, a mask hides the lower part of the face, including the nose and mouth (60–70% of the entire face), making facial expression recognition difficult (Freud, Stajduhar, Rosenbaum, Avidan, & Ganel, Citation2020). Moreover, as facial expression recognition relies on information taken from the entire face (Farah, Wilson, Drain, & Tanaka, Citation1998; Maurer, Le Grand, & Mondloch, Citation2002), masks qualitatively alter how the partial face is recognized (Freud, Stajduhar, Rosenbaum, Avidan, & Ganel, Citation2020).

How, then, do people recognize facial expressions when only part of the face is visible? The eyes are the most noticeable facial features (Grossmann, Citation2017), and people initially recognize others’ facial expressions by looking at their eyes (Eisenbarth & Alpers, Citation2011; Scheller, Büchel, & Gamer, Citation2012). However, the eyes alone are insufficient, and attention shifts to other features, such as the mouth and eyebrows, to obtain additional information (Leitzke & Pollak, Citation2016). Not all emotions are perceived similarly, and some parts of the face are more informative depending on the type of emotion displayed (e.g., the eyes for sadness and anger, and the mouth for happiness). However, each emotion is judged based on a synthesis of information obtained from various facial features (Bombari et al., Citation2013; Eisenbarth & Alpers, Citation2011). Thus, adults make judgments based on information gathered from the visible portions of a partially covered face; however, information regarding how this process manifests in preschoolers is scarce. Several studies have shown that children can identify basic emotions from the eyes alone by the age of 3 years (Franco et al., Citation2014) and from the eyes or mouth alone by 5–7 years of age (Gagnon, Gosselin, & Maassarani, Citation2014).

When a mask or shawl obscures the mouth, pleasure and fear, characteristically expressed via the mouth, become difficult to read (Bombari et al., Citation2013; Eisenbarth & Alpers, Citation2011). Ruba and Pollak (Citation2020) examined the level of emotion recognition from faces without coverings, with masks, and with sunglasses in American children aged 7 to 13 years. No difference was found between the recognition of faces covered with masks and sunglasses; however, while the children responded to all conditions with above-chance accuracy, recognizing expressions on covered faces was more difficult than those on uncovered faces. A study conducted in Germany on adults aged 18–87 years showed similar results (Carbon, Citation2020). Thus, mask-wearing appears to impair facial expression recognition, regardless of age.

Based on these results, Gori, Schiatti, and Amadeo (Citation2021) examined how well preschoolers, who are still developing the ability to recognize facial expressions, can read them when faces are covered with masks, and whether facial expression recognition with and without masks differs among Italian toddlers (3–5 years), children (6–8 years), and adults (18–30 years). They found that masks decreased the percentage of correct responses at all ages, with the largest decrease for toddlers; this indicates that facial expression recognition develops throughout childhood and that preschoolers are more susceptible to the influence of masks on their ability to recognize facial expressions. Performance in the no-mask condition was approximately 70% for toddlers and children, meaning that they failed on approximately 30% of trials even without masks. Consequently, when children failed on masked trials, it remains unclear how much of the failure is attributable to the mask itself. To isolate the mask effect, tasks that children can perform near ceiling without masks are preferable.

Moreover, cultural differences affect how people distinguish facial expressions. Yuki, Maddux, and Masuda (Citation2007) found that Japanese people tend to interpret emotions based on others’ eyes, whereas American people do so based on others’ mouths. Therefore, when only part of the face is visible, there may be cultural differences in facial expression recognition. For example, Japanese people may experience greater difficulty than American people in interpreting emotions from faces lacking eye information. Jack, Blais, Scheepers, Schyns, and Caldara (Citation2009) identified that East Asians use a culture-specific decoding strategy that is inadequate for distinguishing the facial expressions of fear and disgust: East Asians persistently fixate on the eye region of others’ faces, whereas Western Caucasians fixate across the face.

Furthermore, children also exhibit cultural strategies of facial expression recognition. Senju, Vernetti, Kikuchi, Akechi, and Hasegawa (Citation2013) reported that Japanese and British children aged 1–7 years show culture-specific eye movement patterns on face scanning. Geangu et al. (Citation2016) found that Japanese and British 7-month-old infants rely on a cultural strategy to discriminate facial expressions. Moreover, Haensel, Ishikawa, Itakura, Smith, and Senju (Citation2020) suggested that Japanese and British 10- and 16-month-old infants show similar cultural tendencies on face scanning as adults. Thus, cultural tendencies regarding facial expression recognition might be consistent across age groups.

Li, Liu, Li, Qian, and Dai (Citation2020) reported that 63% of Japanese people wore face masks in public during the COVID-19 pandemic, whereas North American and European authorities discouraged healthy people from wearing face masks in public. Nakayachi, Ozaki, Shibata, and Yokoi (Citation2020) suggested that, when wearing face masks, Japanese people feel that they are conforming to social norms and are less anxious. Therefore, more Japanese people wear face masks in public than people in the West. In contrast, while the rate of wearing sunglasses in public in Japan is considerably low (Ng & Ikeda, Citation2011), sunglasses are the most commonly used sun-protection measure among Americans and Europeans (Seité, Del Marmol, Moyal, & Friedman, Citation2017). These cultural differences in daily life and cognitive tendencies might affect facial expression recognition with masks and sunglasses. Specifically, Japanese children might have more difficulty recognizing others’ facial expressions with sunglasses than with masks. Therefore, investigating how Japanese children recognize the facial expressions of faces covered with masks and sunglasses is crucial.

People’s emotions can still be read even if their facial expressions cannot be seen, because emotions are not expressed through a single channel; facial expressions, voice, and posture all play vital roles in emotion recognition (Planalp, Defrancisco, & Rutherford, Citation1996). When judging others’ emotions, we use the varied information they express (Schirmer & Adolphs, Citation2017). Visual and auditory information are frequently used, suggesting that voice information provides important cues for emotion recognition (Mehrabian, Citation1986). Vroomen and de Gelder (Citation2000) found that people use both visual and auditory information even when the two are inconsistent and they are instructed to ignore one of them. Quam and Swingley (Citation2012) found that children aged 4–5 years can interpret exaggerated, stereotypically happy or sad pitch cues.

Regarding cultural differences, Japanese people tend to inhibit their facial expressions (Matsumoto, Takeuchi, Andayani, Kouznetsova, & Krupp, Citation1998; Yuki, Maddux, & Masuda, Citation2007) and focus on auditory information (Tanaka et al., Citation2010). Tanaka et al. (Citation2010) found that the effect of to-be-ignored voice information on facial judgments was larger in Japanese samples than in Dutch samples. Ishii, Reyes, and Kitayama (Citation2003) reported that Japanese people showed greater difficulty ignoring vocal emotional tone than verbal content, whereas American people showed greater difficulty ignoring verbal content than vocal emotional tone. Hence, Japanese people are more attuned to vocal processing in the multisensory perception of emotions.

Against this background, it is clear that masks and sunglasses affect facial expression recognition in Western countries (cf. Gori, Schiatti, & Amadeo, Citation2021; Ruba & Pollak, Citation2020), where people have cultural tendencies to focus on others’ mouths in facial expression recognition and are accustomed to wearing sunglasses in public. However, we have little knowledge regarding how masks and sunglasses affect facial expression recognition in countries where people have cultural tendencies to focus on others’ eyes and are accustomed to wearing masks in public. Moreover, it remains unclear what information helps us recognize others’ emotions when the entire face is not visible. In this study, we recruited Japanese preschoolers as participants to investigate how masks and sunglasses affect their facial expression recognition. In addition, we presented vocal cues with facial stimuli to determine whether participants use voice information as a cue to understand others’ emotions when they cannot rely on the entire face. We hypothesized that Japanese preschoolers have greater difficulty recognizing the facial expressions of people wearing masks or sunglasses compared with those of people wearing neither (Hypothesis 1A). Regarding cultural background, we hypothesized that Japanese preschoolers can recognize the facial expressions of people wearing masks more easily than those of people wearing sunglasses (Hypothesis 1B). Further, based on Quam and Swingley (Citation2012), we hypothesized that presenting voice information with facial expressions facilitates Japanese preschoolers’ recognition of the facial expressions of people wearing masks or sunglasses (Hypothesis 2).

Methods

Transparency and openness

In this section, we report our sample size determination, any data exclusions, manipulations, and measures, according to the Journal Article Reporting Standards (Kazak, Citation2018). Data were analyzed using SPSS version 28. The design and analysis of this study were not pre-registered.

Participants

Prior to data collection, we conducted an a priori power analysis (G*Power; Faul, Erdfelder, Lang, & Buchner, Citation2007). It showed that 25 participants were needed to detect a medium-to-large within-participant effect using a repeated measures analysis of variance with f = 0.35, power 1 − β = .80, two-tailed α = .05, and an average correlation of .5 among the repeated measures. Therefore, we recruited 27 Japanese preschoolers (mean age 67.48 months, range 48–70 months; 11 boys and 16 girls) attending a university-affiliated kindergarten.
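For readers unfamiliar with G*Power, the core of its within-factors calculation is the noncentrality parameter of the F distribution, λ = f² · N · m / (1 − ρ), where m is the number of repeated measurements and ρ the average correlation among them (Faul, Erdfelder, Lang, & Buchner, Citation2007). The sketch below is illustrative only: the choice m = 3 (the three covering types) is our assumption, as the authors’ exact G*Power settings beyond f, α, power, and ρ are not reported.

```python
# Illustrative only: the noncentrality parameter used by G*Power's
# "ANOVA: repeated measures, within factors" procedure (Faul et al., 2007).
# m = 3 (covering types) is an assumption, not a reported setting.

def gpower_lambda(f: float, n: int, m: int, rho: float) -> float:
    """Noncentrality parameter: lambda = f^2 * N * m / (1 - rho)."""
    return f ** 2 * n * m / (1 - rho)

# Noncentrality implied by the recruited sample under these assumptions.
lam = gpower_lambda(f=0.35, n=25, m=3, rho=0.5)
print(round(lam, 3))  # → 18.375
```

G*Power then compares the noncentral F distribution with this λ against the critical F value at α = .05 to obtain achieved power; larger λ (larger N, larger f, or higher ρ) yields higher power.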

Materials

To examine the participants’ ability to recognize facial expressions, we prepared facial expression and voice stimuli, presented on a Windows 10 laptop.

Facial expression stimuli

We used pictures of four Japanese adults, two men and two women, showing four emotions: happiness, sadness, anger, and surprise. These four emotions were assumed to be recognizable by preschoolers based on previous findings regarding children’s difficulty in reading expressions of fear (Ruba & Pollak, Citation2020). Stimuli from one person of each sex were used to confirm the recognition of the emotion words, and the stimuli of the other person of each sex were used for the trials. We prepared 24 stimuli for each sex (four emotions in three covering types: uncovered, mask, and sunglasses). The masks, clothing, and backgrounds were presented in the same neutral white color. Furthermore, the photos were taken indoors under natural light. For the photos used in the mask trials, the models wore masks; for the sunglasses trials, sunglasses were added digitally to the photos with no facial covering. All the photos were square-shaped, displaying the model from head to neck.

Voice stimuli

For the voice stimuli, we used “ohayou” (“good morning” in Japanese) uttered by one male and one female university student, different from the models of the image stimuli, using the same four emotions as those depicted in the facial expression photos. This phrase was chosen for its neutrality and lack of emotional connotations. Moreover, it can be uttered with various emotions and is familiar to preschoolers. The recordings were made in a quiet environment where the models were asked to think of each emotion and record the audio, which was subsequently edited to 1.5 seconds before use.

For the facial expression photos and voices, a preliminary survey was conducted among 12 university students, and stimuli receiving more than 75% correct responses were used. Additionally, as the preliminary survey showed no difference between how the voice stimuli were heard depending on whether a mask was worn during the voice recording, voice recordings without a mask were used for all covering types.

The stimuli were created using PowerPoint to present photos of the four emotions as one set (Figure 1). When presenting each set of four photos, each photo was highlighted individually in succession. In the voice-added condition, the program was designed to play the voice stimulus corresponding to the facial expression simultaneously with the target image highlighted for each stimulus type. Additionally, the stimuli were presented randomly.

Figure 1. Facial expression stimuli used in the study.


Procedure

The surveys were conducted individually. The experimenter and the participant sat across the desk facing each other. After rapport building, the confirmation, practice, and main trials were conducted.

Confirmation trial

The confirmation trial was conducted to ensure that participants understood the emotion-related words and their meanings. On the computer screen, four pictures representing four unmasked facial expressions were presented individually, and each participant was asked to respond freely to the question “How do you think this person is feeling?” If a participant could not answer correctly, they would have been provided with a prompt. However, all participants answered correctly.

Practice trial

The practice trial was conducted to familiarize children with responding during the main trial. Four photos of emotionally expressive faces were presented on one screen, and the participants were told, “You will see four images of faces in order, sometimes with a voice. You will be asked to choose a face with a voice and a happy face, so watch and listen carefully.” Thereafter, the images were presented in order for two seconds each. When one of the four image stimuli was presented, a neutral voice saying “ohayou” was played. Afterward, the four photos were presented again, and the following questions were asked: (i) “From which face did you hear the voice?” and (ii) “Which face looks happy?” For each question, the children were asked to select one answer by pointing to it.

The same procedure was followed for all three trials (uncovered, mask, and sunglasses covering types); if there was no response or an incorrect response, the correct answer would have been revealed to inform participants how to answer. However, all participants answered correctly.

Main trial

The main trial was conducted similarly to the practice trial. Before presenting the stimuli, participants were instructed, “You will see four faces individually, and when you have seen all four, you will choose a happy (sad, angry, or surprised) face, so watch carefully until the end.” In the voice-added condition, four voice stimuli of “ohayou” with four different emotional cues were used; that is, each image stimulus was presented together with a matching emotional voice stimulus. After the four stimuli were presented, the four photos were presented again, and the participants were asked, “Which of these faces is a happy (sad, angry, or surprised) face?” The facial expression stimuli presented were matched to the participants’ sex. There were 12 voiceless trials (four emotions × three covering types) and 12 voice-added trials, presented in a random order. For each covering type, the cumulative score ranged from 0 to 4; 1 point was awarded for each correct response and 0 for each incorrect response.
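The scoring scheme above amounts to a simple tally of correct choices per covering type. The sketch below is a hypothetical illustration of that tally; the trial records and field names are invented, not the authors’ actual coding sheet:

```python
# Hypothetical scoring sketch: 1 point per correct response, summed per
# covering type, yielding a 0-4 score for each of the three types.
from collections import defaultdict

trials = [
    # (covering_type, target_emotion, chosen_emotion) -- invented records
    ("uncovered", "happy", "happy"),
    ("uncovered", "sad", "sad"),
    ("mask", "angry", "angry"),
    ("mask", "surprised", "sad"),
    ("sunglasses", "happy", "happy"),
    ("sunglasses", "angry", "surprised"),
]

scores = defaultdict(int)
for covering, target, chosen in trials:
    scores[covering] += int(target == chosen)

print(dict(scores))  # → {'uncovered': 2, 'mask': 1, 'sunglasses': 1}
```

In the actual design each covering type contributes four trials per voice condition, so each participant receives a 0–4 score per covering type in both the voiceless and voice-added conditions.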

Ethical considerations

Two of the authors obtained approval from the Ethics Committee of Research Involving Human Subjects at their respective institutions before conducting the study. We obtained informed consent from the person in charge of the kindergarten rather than from the participants’ parents because the university-affiliated kindergarten is also a research field for the institution, and the Ethics Committee of Research Involving Human Subjects at the authors’ institution specified that informed consent can only be obtained from the person in charge. Additionally, we obtained informed consent from the participants.

Results

We checked for a repetition effect because participants watched the same facial stimuli 24 times; however, no significant repetition or order effects were found. Therefore, all participants were included in the final data analysis.

Figure 2 shows the average percentage of correct responses for each covering type × voice condition. A one-sample t-test against chance level (25%) was conducted for each condition; the percentage of correct responses was significantly higher than chance in all conditions (p < 0.001).
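With four response options, chance accuracy is 25%, and each condition’s mean accuracy is tested against that value with a one-sample t-test. A minimal sketch of that computation, using invented per-participant accuracy values (not the study’s data):

```python
# Hypothetical one-sample t statistic against chance (25% with 4 options).
# The accuracy values below are invented, not the study's data.
import math
from statistics import mean, stdev

def t_vs_chance(scores, chance=0.25):
    """t = (mean - chance) / (sd / sqrt(n)) over per-participant accuracies."""
    n = len(scores)
    return (mean(scores) - chance) / (stdev(scores) / math.sqrt(n))

acc = [0.75, 1.0, 0.75, 1.0, 0.5, 0.75, 1.0, 0.75]  # invented
print(round(t_vs_chance(acc), 2))  # → 9.0
```

The resulting t statistic (df = n − 1) is then compared against the t distribution to obtain the p-value; near-ceiling accuracies like those reported here produce very large t values, hence p < 0.001 throughout.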

Figure 2. Average percentage of correct responses for stimulus type with and without voice.


Furthermore, a two-factor within-participant analysis of variance was conducted on the number of correct responses. The main effects of stimulus type (F (2, 52) = 13.04, p < 0.001, ηp2 = .33) and voice (F (1, 26) = 18.09, p < 0.001, ηp2 = .41) were significant. The interaction (F (2, 52) = 7.82, p = 0.01, ηp2 = .23) was also significant. Multiple comparisons revealed that the number of correct responses was significantly higher in the uncovered than the sunglasses trial (p < 0.001), the uncovered than the mask trial (p = 0.03), and the mask than the sunglasses trial (p = 0.01). Regarding the interaction, a simple main effect test revealed a significant difference for covering type only in the voiceless condition (F (2, 104) = 20.80, p < 0.001, ηp2 = .44). Multiple comparisons revealed significant differences between covering types: the number of correct responses was significantly higher in the uncovered (M = 3.93, SD = .07) than the sunglasses trial (M = 3.19, SD = .10) (p < 0.001), the uncovered than the mask trial (M = 3.70, SD = .12) (p = 0.02), and the mask than the sunglasses trial (p = 0.01). In contrast, no significant differences were observed (uncovered: M = 4.00, SD = .00, mask: M = 3.93, SD = .05, sunglasses: M = 3.78, SD = .10) in the voice-added condition (F (2, 104) = 1.84, p = 0.17, ηp2 = .07). Furthermore, no differences were observed in the percentage of correct responses for each emotion.

Discussion

In this study, we compared Japanese children’s ability to recognize facial expressions when other peoples’ faces were uncovered, covered with masks, or covered with sunglasses. Additionally, we examined whether the paired presentation of voice stimuli facilitated facial expression recognition. In the voiceless condition, facial expression recognition was significantly lower when the model was wearing a mask compared with no face covering. Likewise, facial expression recognition was significantly lower when the model wore sunglasses compared with no face covering. Furthermore, participants could recognize emotions significantly more accurately when the model wore a mask compared with sunglasses. Thus, our results support Hypothesis 1A and suggest that preschoolers more accurately recognize facial expressions of people who are wearing masks compared with sunglasses (Hypothesis 1B).

Our results are consistent with those of previous studies (Carbon, Citation2020; Gori, Schiatti, & Amadeo, Citation2021; Ruba & Pollak, Citation2020) as facial expression recognition decreased when the other person wore a mask or sunglasses, thereby indicating that emotional judgments are made by observing multiple parts of the face.

Moreover, others’ emotions in daily life are not judged solely from static facial expressions, as in the facial photos used here; movements, sounds, situations, and contexts are also considered (Schirmer & Adolphs, Citation2017). In this study, no significant difference was observed in facial expression recognition among the uncovered, mask, and sunglasses conditions when voice stimuli were added (100% correct response rate), thus supporting Hypothesis 2. As multiple pieces of information are integrated to recognize others’ emotions (Schirmer & Adolphs, Citation2017), even if parts of the face are covered by a mask or sunglasses, the remaining visible and audible information is sufficient to compensate and allows the emotion to be inferred. Several studies have shown that facial expressions and voice interact holistically in emotion recognition (de Gelder & Bertelson, Citation2003; de Gelder, Morris, & Dolan, Citation2005). Similarly, this study found that voice information affects Japanese children’s facial expression recognition.

We found that recognizing the facial expressions of those with masks was significantly more difficult than those without a covering. However, the correct response rate was approximately 95% even when the facial expressions were difficult to read. This may be due to differences in the experimental design compared with related studies (e.g., Gori, Schiatti, & Amadeo, Citation2021; Ruba & Pollak, Citation2020). In this study, participants selected the instructed face from the stimuli comprising four facial expressions, whereas when Ruba and Pollak (Citation2020) showed facial expression stimuli to the children, they were prompted to identify the emotion on the face by selecting one of six labels. Gori, Schiatti, and Amadeo (Citation2021) also showed facial expression stimuli to the children, who answered by selecting one of five labels. We used fewer labels than in previous related studies; therefore, it might have been easier for the participants in this study to recognize facial expressions, compared with previous studies.

Additionally, the higher performance of the participants in this study than in Ruba and Pollak (Citation2020) might be due to an other-race effect: the facial expressions of people from one’s own race are easier to recognize than those of other races (Kelly et al., Citation2007). Kelly et al. (Citation2007) suggested that this effect on face recognition emerges by 6 months of age. In this study, all participants were Japanese, and all the stimuli comprised Japanese faces, whereas the participants in Ruba and Pollak (Citation2020) included Black, Caucasian, and multi-racial children, and the stimuli comprised only Caucasian faces. Furthermore, cultural tendencies might affect participants’ performance. Cultural studies have suggested that Asian (including Japanese) people have a construal of the self as interdependent and characterize Asian culture as collectivist (Markus & Kitayama, Citation1991; Matsumoto, Takeuchi, Andayani, Kouznetsova, & Krupp, Citation1998). Furthermore, Asian people have a cultural tendency to respect the values of others and to value harmonious relationships. Therefore, Japanese people tend to control and suppress expressions of emotions to avoid imposing their emotions on others (Weiss, Thomas, Schick, Reyes, & Contractor, Citation2022). Moreover, Fogel, Toda, and Kawai (Citation1988) found that Japanese mothers are more likely to punctuate their facial expressions than American mothers when communicating with infants. Thus, Japanese people must pay careful attention to others’ facial expressions because of the Japanese tendency to display subtle facial expressions in daily life. In this study, we used stimuli comprising exaggerated facial expressions. Owing to their exaggerated nature, it might have been easier for participants to discriminate between others’ facial expressions in the experimental situation than in their daily lives.

In this study, a comparison between partially covering the face stimuli with masks and sunglasses showed that reading facial expressions was more difficult with sunglasses. This result contradicts Ruba and Pollak’s (Citation2020) study, which found no difference in facial expression recognition between the two conditions. Japanese people pay attention to others’ eyes when recognizing facial expressions (Grossmann, Citation2017; Jack, Blais, Scheepers, Schyns, & Caldara, Citation2009; Yuki, Maddux, & Masuda, Citation2007); therefore, recognizing others’ facial expressions when their eyes are covered by sunglasses is difficult. Moreover, daily life experience plays a crucial role in the development of face processing (Kelly et al., Citation2007). In this study, we collected the data in November 2021, when the participants had seen many people wearing masks in public owing to the COVID-19 pandemic. However, the participants had seen fewer people wearing sunglasses in public in Japan (e.g., Ng & Ikeda, Citation2011). Thus, Japanese participants in this study could recognize facial expressions with masks more easily than those with sunglasses.

In conclusion, although the Japanese preschoolers had a high percentage of correct responses when recognizing the facial expressions of people wearing masks and sunglasses, this result was lower than that for facial expression recognition of people with no covering. Furthermore, the additional voice information facilitated the understanding of facial expressions of people wearing masks and sunglasses.

The study has the following limitations. The facial expressions used may not have been representative of natural facial expressions in everyday situations. The photo stimuli were created by asking the models to intentionally recall an emotion; they were not facial expressions expressed in natural situations. Therefore, the facial expressions in the stimuli could be exaggerated or different from those usually expressed. Furthermore, this study used only four emotions, happiness, sadness, anger, and surprise, to keep the task easy and focus on the effect of facial covering. However, in daily life, we express more emotions, such as fear and disgust. Each emotion has different facial expression characteristics, and masks and sunglasses might affect these characteristics in different ways. Therefore, a more detailed study examining the differences in facial expression recognition using photos taken in everyday situations, or photos with ambiguous or varied facial expressions, should be conducted.

In this study, there were no substantial differences in participants’ ages, and even 3-year-olds could identify the facial expressions in the masked condition. However, masks may make it difficult for younger children to recognize facial expressions. Therefore, the effects of masks and sunglasses on younger children’s ability to recognize facial expressions must be examined.

Although previous research has focused on cultural differences in facial expression recognition (e.g., Yuki, Maddux, & Masuda, Citation2007), cultural differences in facial expression recognition with masks and sunglasses remain unclear. Furthermore, it remains unclear whether the effect of auditory information on facial expression recognition is specific to Japanese culture. Masuda et al. (Citation2008) found that Japanese people observe emotions through others’ facial expressions, as inseparable from social context, whereas American people see them as individual feelings. Thus, Japanese people might be sensitive to not only others’ facial expressions but also other contextual information (including voice information). Therefore, future studies should conduct cross-cultural examinations regarding the voice effect on facial expression recognition with faces partially covered by masks and sunglasses.

Ethics approval

All procedures used in this research were approved by the Ethics Committee of Research Involving Human Subjects of Shizuoka University (approval number: 21–33).

Open Scholarship

This article has earned the Center for Open Science badge for Open Data. The data are openly accessible at https://doi.org/10.1080/15248372.2023.2207665

Acknowledgments

We would like to thank Editage (www.editage.com) for English language editing.

Disclosure statement

No potential conflict of interest was reported by the authors.

Data availability statement

All data have been made publicly available at the OSF and can be accessed at https://osf.io/gfbtk/. The analysis code and research materials for this study are available and can be accessed upon reasonable request by emailing the corresponding author.

Additional information

Funding

This work was supported by JSPS KAKENHI (Grant Number: 20K14155).

References

  • Bombari, D., Schmid, P. C., Schmid Mast, M., Birri, S., Mast, F. W., & Lobmaier, J. S. (2013). Emotion recognition: The role of featural and configural face information. The Quarterly Journal of Experimental Psychology, 66(12), 2426–2442. doi:10.1080/17470218.2013.789065
  • Carbon, C. C. (2020). Wearing face masks strongly confuses counterparts in reading emotions. Frontiers in Psychology, 11, 566886. doi:10.3389/fpsyg.2020.566886
  • de Gelder, B., & Bertelson, P. (2003). Multisensory integration, perception and ecological validity. Trends in Cognitive Sciences, 7(10), 460–467. doi:10.1016/j.tics.2003.08.014
  • de Gelder, B., Morris, J. S., & Dolan, R. J. (2005). Unconscious fear influences emotional awareness of faces and voices. Proceedings of the National Academy of Sciences of the United States of America, 102(51), 18682–18687. doi:10.1073/pnas.0509179102
  • Eisenbarth, H., & Alpers, G. W. (2011). Happy mouth and sad eyes: Scanning emotional facial expressions. Emotion, 11(4), 860–865. doi:10.1037/a0022758
  • Farah, M. J., Wilson, K. D., Drain, M., & Tanaka, J. N. (1998). What is “special” about face perception? Psychological Review, 105(3), 482–498. doi:10.1037/0033-295X.105.3.482
  • Faul, F., Erdfelder, E., Lang, A. G., & Buchner, A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39(2), 175–191. doi:10.3758/bf03193146
  • Fogel, A., Toda, S., & Kawai, M. (1988). Mother–infant face-to-face interaction in Japan and the United States: A laboratory comparison using 3-month-old infants. Developmental Psychology, 24(3), 398–406. doi:10.1037/0012-1649.24.3.398
  • Franco, F., Itakura, S., Pomorska, K., Abramowski, A., Nikaido, K., & Dimitriou, D. (2014). Can children with autism read emotions from the eyes? The eyes test revisited. Research in Developmental Disabilities, 35(5), 1015–1026. doi:10.1016/j.ridd.2014.01.037
  • Freud, E., Stajduhar, A., Rosenbaum, R. S., Avidan, G., & Ganel, T. (2020). The COVID-19 pandemic masks the way people perceive faces. Scientific Reports, 10(1), 22344. doi:10.1038/s41598-020-78986-9
  • Gagnon, M., Gosselin, P., & Maassarani, R. (2014). Children’s ability to recognize emotions from partial and complete facial expressions. The Journal of Genetic Psychology, 175(5–6), 416–430. doi:10.1080/00221325.2014.941322
  • Geangu, E., Ichikawa, H., Lao, J., Kanazawa, S., Yamaguchi, M. K., Caldara, R., & Turati, C. (2016). Culture shapes 7-month-olds’ perceptual strategies in discriminating facial expressions of emotion. Current Biology, 26(14), R663–R664. doi:10.1016/j.cub.2016.05.072
  • Gori, M., Schiatti, L., & Amadeo, M. B. (2021). Masking emotions: Face masks impair how we read emotions. Frontiers in Psychology, 12, 669432. doi:10.3389/fpsyg.2021.669432
  • Grossmann, T. (2017). The eyes as windows into other minds. Perspectives on Psychological Science, 12(1), 107–121. doi:10.1177/1745691616654457
  • Haensel, J. X., Ishikawa, M., Itakura, S., Smith, T. J., & Senju, A. (2020). Cultural influences on face scanning are consistent across infancy and adulthood. Infant Behavior & Development, 61, 101503. doi:10.1016/j.infbeh.2020.101503
  • Ishii, K., Reyes, J. A., & Kitayama, S. (2003). Spontaneous attention to word content versus emotional tone: Differences among three cultures. Psychological Science, 14(1), 39–46. doi:10.1111/1467-9280.01416
  • Jack, R. E., Blais, C., Scheepers, C., Schyns, P. G., & Caldara, R. (2009). Cultural confusions show that facial expressions are not universal. Current Biology, 19(18), 1543–1548. doi:10.1016/j.cub.2009.07.051
  • Kazak, A. E. (2018). Editorial: Journal article reporting standards. American Psychologist, 73(1), 1–2. doi:10.1037/amp0000263
  • Kelly, D. J., Quinn, P. C., Slater, A. M., Lee, K., Ge, L., & Pascalis, O. (2007). The other-race effect develops during infancy: Evidence of perceptual narrowing. Psychological Science, 18(12), 1084–1089. doi:10.1111/j.1467-9280.2007.02029.x
  • Leitzke, B. T., & Pollak, S. D. (2016). Developmental changes in the primacy of facial cues for emotion recognition. Developmental Psychology, 52(4), 572–581. doi:10.1037/a0040067
  • Li, T., Liu, Y., Li, M., Qian, X., & Dai, S. Y. (2020). Mask or no mask for COVID-19: A public health and market study. PloS One, 15(8), e0237691. doi:10.1371/journal.pone.0237691
  • Markus, H. R., & Kitayama, S. (1991). Culture and the self: Implications for cognition, emotion and motivation. Psychological Review, 98(2), 224–253. doi:10.1037/0033-295X.98.2.224
  • Masuda, T., Ellsworth, P. C., Mesquita, B., Leu, J., Tanida, S., & Van de Veerdonk, E. (2008). Placing the face in context: Cultural differences in the perception of facial emotion. Journal of Personality and Social Psychology, 94(3), 365–381. doi:10.1037/0022-3514.94.3.365
  • Matsumoto, D., Takeuchi, S., Andayani, S., Kouznetsova, N., & Krupp, D. (1998). The contribution of individualism vs. collectivism to cross-national differences in display rules. Asian Journal of Social Psychology, 1(2), 147–165. doi:10.1111/1467-839X.00010
  • Maurer, D., Le Grand, R., & Mondloch, C. J. (2002). The many faces of configural processing. Trends in Cognitive Sciences, 6(6), 255–260. doi:10.1016/S1364-6613(02)01903-4
  • Mehrabian, A. (1968). Communication without words. Psychology Today, 2(4), 53–55.
  • Nakayachi, K., Ozaki, T., Shibata, Y., & Yokoi, R. (2020). Why do Japanese people use masks against COVID-19, even though masks are unlikely to offer protection from infection? Frontiers in Psychology, 11, 1918. doi:10.3389/fpsyg.2020.01918
  • Nelson, N. L., & Russell, J. A. (2011). Preschoolers’ use of dynamic facial, bodily, and speech cues to emotion. Journal of Experimental Child Psychology, 110(1), 52–61. doi:10.1016/j.jecp.2011.03.014
  • Ng, W., & Ikeda, S. (2011). Use of sun-protective items by Japanese pedestrians: A cross-sectional observational study. Archives of Dermatology, 147(10), 1167–1170. doi:10.1001/archdermatol.2011.236
  • Planalp, S., Defrancisco, V. L., & Rutherford, D. (1996). Varieties of cues to emotion in naturally occurring situations. Cognition & Emotion, 10(2), 137–154. doi:10.1080/026999396380303
  • Pons, F., Harris, P. L., & de Rosnay, M. (2004). Emotion comprehension between 3 and 11 years: Developmental periods and hierarchical organization. The European Journal of Developmental Psychology, 1(2), 127–152. doi:10.1080/17405620344000022
  • Quam, C., & Swingley, D. (2012). Development in children’s interpretation of pitch cues to emotions. Child Development, 83(1), 236–250. doi:10.1111/j.1467-8624.2011.01700.x
  • Ruba, A. L., & Pollak, S. D. (2020). Children’s emotion inferences from masked faces: Implications for social interactions during COVID-19. PloS One, 15(12), e0243708. doi:10.1371/journal.pone.0243708
  • Scheller, E., Büchel, C., & Gamer, M. (2012). Diagnostic features of emotional expressions are processed preferentially. PloS One, 7(7), e41792. doi:10.1371/journal.pone.0041792
  • Schirmer, A., & Adolphs, R. (2017). Emotion perception from face, voice, and touch: Comparisons and convergence. Trends in Cognitive Sciences, 21(3), 216–228. doi:10.1016/j.tics.2017.01.001
  • Seité, S., Del Marmol, V., Moyal, D., & Friedman, A. J. (2017). Public primary and secondary skin cancer prevention, perceptions and knowledge: An international cross-sectional survey. Journal of the European Academy of Dermatology and Venereology, 31(5), 815–820. doi:10.1111/jdv.14104
  • Senju, A., Vernetti, A., Kikuchi, Y., Akechi, H., & Hasegawa, T. (2013). Cultural modulation of face and gaze scanning in young children. PloS One, 8(8), e74017. doi:10.1371/journal.pone.0074017
  • Tanaka, A., Koizumi, A., Imai, H., Hiramatsu, S., Hiramoto, E., & de Gelder, B. (2010). I feel your voice: Cultural differences in the multisensory perception of emotion. Psychological Science, 21(9), 1259–1262. doi:10.1177/0956797610380698
  • Vroomen, J., & de Gelder, B. (2000). Sound enhances visual perception: Cross-modal effects of auditory organization on vision. Journal of Experimental Psychology: Human Perception and Performance, 26(5), 1583–1590. doi:10.1037/0096-1523.26.5.1583
  • Weiss, N. H., Thomas, E. D., Schick, M. R., Reyes, M. E., & Contractor, A. A. (2022). Racial and ethnic differences in emotion regulation: A systematic review. Journal of Clinical Psychology, 78(5), 785–808. doi:10.1002/jclp.23284
  • Widen, S. C., & Russell, J. A. (2008). Children acquire emotion categories gradually. Cognitive Development, 23(2), 291–312. doi:10.1016/j.cogdev.2008.01.002
  • Yuki, M., Maddux, W. W., & Masuda, T. (2007). Are the windows to the soul the same in the East and West? Cultural differences in using the eyes and mouth as cues to recognize emotions in Japan and the United States. Journal of Experimental Social Psychology, 43(2), 303–311. doi:10.1016/j.jesp.2006.02.004