
The role of motion and intensity in deaf children’s recognition of real human facial expressions of emotion

Pages 102-115 | Received 21 Dec 2016, Accepted 24 Jan 2017, Published online: 14 Feb 2017

ABSTRACT

There is substantial evidence to suggest that deafness is associated with delays in emotion understanding, which have been attributed to delays in language acquisition and reduced opportunities to converse. However, studies addressing the ability to recognise facial expressions of emotion have produced equivocal findings. The two experiments presented here attempt to clarify emotion recognition in deaf children by considering two aspects: the role of motion and the role of intensity. In Study 1, 26 deaf children were compared to 26 age-matched hearing controls on a computerised facial emotion recognition task involving static and dynamic expressions of six emotions. Eighteen of the deaf children and 18 age-matched hearing controls additionally took part in Study 2, involving the presentation of the same six emotions at varying intensities. Study 1 showed that deaf children’s emotion recognition was better in the dynamic than in the static condition, whereas the hearing children showed no difference in performance between the two conditions. In Study 2, the deaf children performed no differently from the hearing controls, showing improved recognition rates with increasing levels of intensity. With the exception of disgust, no differences in individual emotions were found. These findings highlight the importance of using ecologically valid stimuli to assess emotion recognition.

Typically developing children learn about emotions in a linguistic and social context, usually through interactions with siblings and friends (Taumoepeau & Ruffman, 2008), and by discussing or overhearing discussions about emotional experiences with their parents (Symons, 2004). One important element of emotion understanding is the ability to accurately label emotional facial expressions, a skill that influences social interaction and academic attainment (Greenberg & Kusché, 1993). This ability has been shown to develop from the age of 4 or 5, and is gradually refined through both age and experience, to levels of ability commensurate with those of adults (Widen, 2013; Widen & Russell, 2003).

The ability to acquire labels for emotional expressions may differentiate deaf children from hearing children in their learning of emotions, based on their level of access to a shared language. In addition to having reduced access to spoken language, the majority of deaf children are born to hearing parents who are not fluent in a sign language, which creates fewer opportunities for incidental learning and communication about their own and others’ experiences of emotion (Morgan et al., 2014; Rieffe, Netten, Broekhof, & Veiga, 2015). For a late-signing deaf child, fluency in sign language is not typically achieved until primary school. In addition, it can take many years to reach the level of lip-reading required to take part in fluent conversation, even for children with cochlear implants (CIs) (Kyle, Campbell, Mohammed, Coleman, & MacSweeney, 2013), and receptive and expressive language skills remain behind age-appropriate levels for many children (Niparko et al., 2010). Several studies have shown that, in general, deaf children of hearing parents have poorer emotion understanding than hearing children, such as in the ability to attribute emotions to story characters (Gray, Hosie, Russell, Scott, & Hunter, 2007), and poorer regulation of their emotions (Rieffe, 2012). It remains unclear whether reduced opportunities to talk about emotions, and a dependence on visual cues in the absence of vocal cues, impact the emotion recognition abilities of deaf children of hearing parents (e.g. Hosie, Gray, Russell, Scott, & Hunter, 1998; Ludlow, Heaton, Rosset, Hills, & Deruelle, 2010).

Emotion recognition in deaf children

The relatively few studies testing deaf children’s facial emotion recognition of the six “basic” emotions (i.e. happiness, sadness, anger, fear, disgust, and surprise) have involved a broad age range of children, from pre-schoolers to adolescents, and results from these studies are inconclusive. For example, moderate-profoundly deaf preschool children, using both hearing aids (HAs) and CIs, were shown to have difficulty in emotion recognition in both visual and auditory domains (Most & Michaelis, 2012; Wang, Su, Fang, & Zhou, 2011). In addition, 3–4-year-old deaf children with CIs were poorer than their hearing peers at labelling emotional facial expressions in cartoons (Wiefferink, Rieffe, Ketelaar, De Raeve, & Frijns, 2013). These findings may suggest that deaf children have a delay in emotion recognition in the early years. However, a recent study by Laugen, Jacobsen, Rieffe, and Wichstrøm (2016) found comparable emotion recognition performance in 4–5-year-old hearing and deaf children with mild–severe hearing loss, who may have better auditory access to conversations about emotions. It is possible that some deaf children are disadvantaged in developing emotion recognition abilities because emotion vocalisations and/or rich exposure to language are important for learning about emotions.

There is some evidence that emotion recognition delays extend beyond the preschool period for moderate-profoundly deaf children. For example, Sidera, Amadó, and Martínez (2016) recently found 3–8-year-old deaf children to be relatively poorer than hearing controls in their ability to match emotion words to facial expressions of emotion portrayed in cartoons, but only for the emotions fear, disgust and surprise. These results revealed that linguistic skills were related to emotion recognition, even when controlling for age. Furthermore, Dyck, Farrugia, Shochet, and Holmes-Brown (2004) tested a group of children and adolescents (aged 6–18 years) and found that the deaf participants were poorer than their hearing peers at recognising pictures of emotional facial expressions. However, there were no group differences when covarying for verbal ability, suggesting that language was more important than age in predicting emotion recognition. Ludlow et al. (2010) also found that age did not predict performance in a group of 6–16-year-old deaf children. In this study, an emotion recognition task comprising human and cartoon faces was used, and deaf children performed poorly relative to both chronological and mental age-matched hearing controls.

In contrast, other studies have shown no evidence of differences in emotion recognition between deaf and hearing controls. Ziv, Most, and Cohen (2013) found that both 5–7-year-old native signing deaf children (i.e. born to deaf parents and so exposed to sign language from birth), and deaf children with CIs, performed similarly to hearing children in recognising emotional facial expressions shown in photographs. In addition, Hosie et al. (1998) found that a group of 6–12-year-old profoundly deaf children showed comparable performance to hearing children in both matching and labelling intense facial expressions of emotion. Results also revealed that the performance of both groups of children improved with age. Hosie and colleagues propose that deaf children may be able to capitalise sufficiently on environmental inputs to learn about emotion expressions, and that visual and contextual cues, rather than language, may aid emotion recognition. However, it is difficult to determine the precise role of language in their emotion recognition ability, as the authors did not include a measure of language ability. Other studies with deaf children wearing CIs have also shown that older children and adolescents performed similarly to their hearing peers (7–17-year-olds; Hopyan-Misakyan, Gordon, Dennis, & Papsin, 2009; Most & Aviner, 2009). While CIs might improve access to conversations about emotions, it is important to note that CIs remain highly variable in their effectiveness in improving access to spoken language (Niparko et al., 2010), and many deaf children still use HAs as their hearing amplification device.

The variability of previous findings suggests that further research is necessary to clarify emotion recognition abilities in school-age deaf children. An important factor to consider, in addition to age, access to language and language ability, is the type of stimuli used to present facial expressions of emotion. A number of studies in which poorer performance in deaf children has been observed have used cartoons or still photographs (e.g. Ludlow et al., 2010; Sidera et al., 2016; Wiefferink et al., 2013). For example, Ludlow et al. (2010) found that both deaf and hearing children were able to recognise emotions in real human faces better than in cartoons. It can be argued that cartoons and static stimuli are less ecologically valid (Ambadar, Schooler, & Cohn, 2005), particularly because in everyday life, encounters with emotional facial expressions are dynamic and of varying levels of intensity.

Dynamic presentation of emotions

A recent development in emotion recognition research is the use of dynamic stimuli – moving facial expressions – of real human faces. Real-life facial expressions are dynamic and they indicate moment-to-moment changes in emotional states (Sato & Yoshikawa, 2004). During social interactions, our emotions involve a great deal of movement; therefore, it is expected that we learn to discriminate and interpret emotional displays through these interchanges. In comparison, static images largely correspond to clear, distinct and identifiable peaks of socially meaningful movements (Atkinson, Dittrich, Gemmell, & Young, 2004; Manstead, Fischer, & Jakobs, 1999).

Research with adults has suggested a dynamic advantage in the recognition of facial expressions of emotion (Krumhuber, Kappas, & Manstead, 2013). Importantly, a number of neuroimaging studies have revealed higher brain activity in regions linked to the processing of social (superior temporal sulci) and emotion-relevant information (amygdalae) when viewing dynamic rather than static expressive faces (Kessler et al., 2011; Kilts, Egan, Gideon, Ely, & Hoffman, 2003). In labelling emotional faces, studies using subtle expressions have shown an advantage for dynamic over static presentation (Ambadar et al., 2005; Wehrle, Kaiser, Schmidt, & Scherer, 2000); however, some adult studies using intense prototypical expressions have shown no difference (Bould & Morris, 2008; Wehrle et al., 2000). A recent study carried out with typically developing children suggests the same lack of a dynamic advantage when using intense expressions (Widen & Russell, 2015).

Facial expressions experienced in everyday life vary from fleeting and subtle to more expressive and intense. Intensity can be defined as the relative degree of movement, away from a neutral expression, of the muscles that are activated in a particular facial expression of emotion (Hess, Blairy, & Kleck, 1997). The intensity of the facial expression of happiness, for example, can be signified by the extent of identifiable activity in the zygomaticus major and orbicularis oculi muscles moving away from their relaxed states (Ekman & O'Sullivan, 1991). Importantly, intensity has been shown to mediate accuracy in typically developing children’s identification of emotion expressions: the ability to recognise the negative emotions at subtle levels of intensity continues to develop into adulthood (Gao & Maurer, 2010; Herba, Landau, Russell, Ecker, & Phillips, 2006). It is assumed that intense affective facial expressions have larger movements, making them easier to recognise (Montagne, Kessels, De Haan, & Perrett, 2007). This effect was clearly demonstrated using a morphing technique to display facial emotion expressions at four levels of intensity (35%, 50%, 75% and 100%; Montirosso, Peverelli, Frigerio, Crespi, & Borgatti, 2010).
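As an illustration of what the graded intensities in such morphing studies amount to, the short Python sketch below blends a neutral and an apex image linearly. It is a generic illustration of the technique, not the procedure used to build the stimuli reported in the studies cited above, and the array names are placeholders.

```python
# A minimal sketch of the linear-morphing idea behind graded-intensity stimuli:
# an expression at fractional intensity alpha is a weighted blend of a neutral
# image (or landmark set) and the full-apex image. The arrays here are toy
# placeholders, not stimuli from any of the cited studies.
import numpy as np

def morph(neutral: np.ndarray, apex: np.ndarray, alpha: float) -> np.ndarray:
    """Return the expression at intensity alpha (0 = neutral, 1 = full apex)."""
    return (1.0 - alpha) * neutral + alpha * apex

neutral_face = np.zeros((2, 2))   # toy "neutral" image
apex_face = np.ones((2, 2))       # toy "apex" image
print(morph(neutral_face, apex_face, 0.75))   # a 75%-intensity blend
```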

The present study

The use of static and dynamic images has not been directly compared in deaf populations, and it is plausible that dynamic faces may provide a compensatory effect for deaf children. In the absence of auditory input, the development of visual attention is different in deaf individuals, and a greater sensitivity to motion appears to be one notable consequence. For instance, neuroimaging and electrophysiological studies have shown increased activation in motion-sensitive areas in deaf individuals when monitoring motion flow fields to detect velocity changes in peripheral stimuli (Armstrong, Neville, Hillyard, & Mitchell, 2002). Moreover, in addition to perceiving emotional facial expressions, deaf children using sign language also depend heavily on dynamic cues to distinguish between linguistic markers (e.g. a why-question face) and emotion expressions (e.g. anger), which share similar features (i.e. brow furrowing). These similar configural features can be confused in a static snapshot, whereas a dynamic presentation delineates the distinct visuo-spatial changes over time (Grossman & Kegl, 2007). In addition, as deaf children have fewer opportunities to discuss emotions, they may consequently have less well-developed internal representations, which form by hearing emotion words embedded in conversations and by seeing dynamic facial movements within a rich set of contextual cues to an emotion (Russell & Widen, 2002). Dynamic faces may therefore be an advantage for deaf children, as temporal cues that are not provided by static images may help disambiguate emotion expressions.

The following two studies aimed to use more dynamic, life-like displays to clarify deaf children’s emotion recognition ability in middle childhood (i.e. 6–12 years): a period in which gradual improvement in the recognition of the basic facial expressions of emotion has been observed in typically developing children (Widen, 2013), particularly in the more demanding task of identifying emotions of low intensity (Herba et al., 2006). A language measure was also included, as language has been found to relate to emotion recognition ability in deaf (e.g. Dyck et al., 2004) and in typically developing populations (Beck, Kumschick, Eid, & Klann-Delius, 2012).

More specifically, Study 1 aimed to compare deaf and hearing children’s recognition of the six basic emotions in static and dynamic displays, to test the hypothesis that dynamic facial expressions would enhance emotion recognition compared to static displays in deaf children; this effect was predicted to be smaller in hearing children. Study 2 examined the role of intensity in facial emotion recognition in deaf children using the more challenging task of identifying low-intensity emotional facial expressions. It was hypothesised that deaf children might be poorer than hearing children on this more complex task, as some previous research has suggested that deaf children have delays in emotion recognition.

Study 1: emotion recognition of deaf and hearing children in real dynamic faces

The main aim of Study 1 was to investigate deaf and hearing children’s ability to label the six basic emotions – happiness, sadness, anger, disgust, fear and surprise – presented in static and dynamic displays. The human faces were selected from the Amsterdam Dynamic Facial Expression Set (ADFES; Van der Schalk, Hawk, Fischer, & Doosje, 2011), chosen because the set was carefully developed to represent the prototypes of the universal emotion signals based on the Facial Action Coding System (Ekman, Friesen, & Hager, 1978). To our knowledge, it is the only standardised set of faces that contains filmed, natural expressions of real people. It was expected that dynamic displays of emotions would facilitate deaf children’s recognition compared to static displays.

Participants

Fifty-two children participated in Study 1. The 26 deaf children (15 female) were aged between 6 years 6 months and 12 years 0 months (mean age = 9 years; SD = 1 year 6 months). Each attended one of five mainstream schools, with special units for hearing-impaired children, across the East of England. Deaf children were included if they had pre-lingual hearing loss of either a moderate-to-severe level (>60 dB; N = 20) or a profound level (>90 dB; N = 6) in their better ear. None of the deaf children had any known concomitant disorders such as autism, attention deficit disorder or cerebral palsy.

Twenty-four deaf children preferred to communicate in SSE (Sign Supported English: spoken English supported with British Sign Language (BSL) signs), and two preferred to communicate in BSL. Eighteen of the children had family members with sign language (BSL) skills: the majority signed at a basic level (Level 1 or below; N = 13) and five signed at an intermediate level (Level 2/3). None of the children had a deaf parent. All children received auditory amplification or CIs and used these devices during testing. Fifteen deaf children wore HAs and 11 wore CIs. Nine of the deaf children with CIs were bilaterally implanted and the majority of CI wearers were implanted late (>2 years; N = 7).

A group of 26 children (13 female) with normal hearing, matched on gender and chronological age (CA), were recruited as controls from four local primary schools in the East of England. The hearing control children were aged between 6 years 3 months and 11 years 8 months (mean age = 9 years; SD = 1 year 5 months). None of the children presented with developmental or psychological disorders. Non-verbal IQ scores were obtained for both groups using the Raven Coloured Progressive Matrices test (RCPM; Raven, Court, & Raven, 1990). Table 1 displays the mean CA and RCPM scores for both groups. There was no significant difference in mean CA between deaf and hearing control children; however, the hearing control children had significantly higher non-verbal IQ scores than the deaf children, t(50) = −2.5, p = .02 (Table 1).

Table 1. Details of the participants of Study 1: means (SDs) and ranges.

Verbal ability was measured using the BSL Receptive Language Skills Test in deaf children (Herman, Holmes, & Woll, 1999), and the British Picture Vocabulary Scale III (BPVS) in hearing children (Dunn & Dunn, 2009). The BSL Receptive Language Skills Test is an equivalent test of language ability for the deaf population and is thought to be more reflective of deaf children’s overall language ability (Jackson, 2001). The receptive test involves a vocabulary check followed by a measure of receptive BSL ability. The BSL task is similar in format to the BPVS in that both require the child to choose, from an array of four pictures, the one that best describes the word or sign sequence they have just heard or seen. It also provides standardised scores within a similar range to the BPVS. The deaf children were also tested on their lip-reading ability using the Craig Revised Lip-reading Ability Inventory (Updike, Rasmussen, Arndt, & German, 1992). This inventory tests word and sentence recognition to ascertain the communication level of deaf children. The test includes a word test, used to record selected phonemes, and a sentence test to measure lip-reading for more intricate language patterns. Means and standard deviations for the language and communication measures are also displayed in Table 1.

Ethical statement

The Anglia Ruskin University Research Ethics Sub-Committee approved this project. Informed, written consent was obtained from parents for all children to participate. Each study was explained to the children prior to beginning the testing session in the appropriate language (BSL, SSE or English). Written consent was obtained from all children before the start of the testing session.

Materials

The stimuli comprised 30 unedited videos of 5 actors (three male) portraying the six basic emotions (happiness, sadness, anger, disgust, fear and surprise), each video lasting five seconds. The static stimuli were frozen snapshots of the 30 videos taken at the highest intensity of the actors’ expressions. The videos and static images were selected from the previously standardised ADFES (Van der Schalk et al., 2011). All faces were presented in colour, in full-face presentation, and were cropped at the neckline (Figure 1).

Figure 1. Real human face stimuli displaying prototypical emotions (from top left to right) happiness, sadness, anger, disgust, fear and surprise. Source: Van der Schalk, Hawk, Fischer, Doosje.


Procedure

Participants were tested individually and seated in front of a computer screen. They were asked to categorise the emotions presented as happiness, sadness, anger, disgust, fear or surprise. In order to ensure that participants understood the test instructions, three training trials with feedback preceded the test trials. The test consisted of two blocks (one static and one dynamic) each containing 30 trials, showing five of each of the six emotions. The order in which the trials were presented was randomised across the children and the order of presentation (static or dynamic) was counterbalanced. Each stimulus was presented on the screen one at a time in a random order. The videos lasted for five seconds beginning from a neutral pose to the apex of the expressed emotion, and the static faces were also each presented for five seconds. A prompt card with the emotion words and iconic faces was shown to participants before beginning the task to show the response options. The children were required to press the space bar to either proceed to the next static image or begin each dynamic video clip, and were asked to identify whether the facial expression showed happiness, sadness, anger, fear, disgust or surprise. Verbal or signed responses were recorded by the experimenter and different lexical forms of the target emotion word were accepted as correct as well as a number of synonyms (e.g. “cross” and “furious” for anger; “yucky” for disgust; “frightened” and “scared” for fear; and “shocked” for surprise). A score of 1 was awarded for a correct response and a score of 0 if incorrect. The children had a short break between the blocks if necessary.
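To make the response-coding rule concrete, the following Python sketch scores a verbal or signed response against the target emotion, accepting the lexical variants and synonyms listed above. The function and dictionary are illustrative, not the authors’ actual coding procedure, and the entries for happiness and sadness (and any word not quoted in the text) are assumptions added for the example.

```python
# Illustrative scoring sketch (not the authors' code): 1 point for the target
# emotion word, an accepted lexical variant, or one of the synonyms listed in
# the Procedure; 0 otherwise. Words not quoted in the text (e.g. "glad",
# "unhappy") are assumptions for the example.
SYNONYMS = {
    "happiness": {"happiness", "happy", "glad"},
    "sadness": {"sadness", "sad", "unhappy"},
    "anger": {"anger", "angry", "cross", "furious"},
    "disgust": {"disgust", "disgusted", "yucky"},
    "fear": {"fear", "afraid", "frightened", "scared"},
    "surprise": {"surprise", "surprised", "shocked"},
}

def score_response(response: str, target: str) -> int:
    """Return 1 if the child's response counts as correct for the target emotion."""
    return int(response.strip().lower() in SYNONYMS[target])

# Example: "scared" on a fear trial is correct; "sad" on a fear trial is not.
assert score_response("Scared", "fear") == 1
assert score_response("sad", "fear") == 0
```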

Design

The study had a 2 (Group: Deaf vs. CA matched hearing controls) × 2 (Motion: static vs. dynamic) × 6 (Emotion: happiness, sadness, anger, disgust, fear and surprise) mixed-model design.

Results

Table 2 displays the mean, SD and percentage of errors for each Emotion (happiness, sadness, anger, disgust, fear, surprise) and level of Motion (static or dynamic) for deaf and hearing children. A repeated measures ANCOVA, including Age (see Note 1) as a covariate, revealed a significant effect of Age on overall emotion recognition errors, F(1, 49) = 17.77, MSE = 2.27, p = .02, η² = .10. There was a non-significant main effect of Group, F(1, 49) = 2.54, MSE = 2.27, p = .12, η² = .05, but a significant main effect of Motion, F(1, 49) = 4.42, MSE = .43, p = .04, η² = .04. There was also a significant main effect of Emotion, F(5, 245) = 3.67, MSE = 1.71, p = .003, η² = .30. Happiness was identified most accurately, followed by anger, sadness, surprise, disgust and then fear. Post-hoc tests with Bonferroni corrections showed that happiness was recognised significantly more accurately than all other emotions (p < .001; Table 2). Fear and disgust were recognised significantly less accurately than the other expressions, although error scores for fear and disgust did not differ significantly from one another (Table 2).

Table 2. Mean and standard deviation, and percentage of errors in identifying Emotions by Group and Motion (Maximum 5).

Results also revealed a significant interaction between Age and Motion, F(1, 49) = 5.5, MSE = .43, p = .02, η² = .10, as well as a significant Group × Motion interaction, F(1, 49) = 6.4, MSE = .43, p = .02, η² = .12, and an Emotion × Group interaction, F(5, 245) = 4.36, MSE = 1.71, p < .001, η² = .08. These interactions were analysed post-hoc with Bonferroni corrections. Results for the Group × Motion interaction revealed that deaf children (M = .90, SD = .58; 18%) made more errors than hearing children (M = .57, SD = .44; 11%) in the static presentation, t(50) = 2.34, p = .02, Cohen’s d = .65, but not in the dynamic presentation (Deaf: M = .71, SD = .52; 14%; Hearing: M = .63, SD = .44; 13%), t(50) = .58, p = .57, Cohen’s d = .16. Importantly, deaf children made significantly fewer errors in the dynamic presentation than in the static presentation, t(25) = 2.24, p = .03, Cohen’s d = .36, whereas there was no difference between static and dynamic presentation for hearing children, t(25) = .88, p = .39, Cohen’s d = .13. The Emotion × Group results showed that the hearing children (M = 1.65, SD = 2.26; 17%) made significantly fewer errors than deaf children (M = 3.96, SD = 3.61; 40%) in recognising disgust, t(50) = 2.77, p = .008, Cohen’s d = .89. Finally, neither the Emotion × Motion interaction, F(5, 245) = .73, MSE = .41, p = .61, η² = .02, nor the Emotion × Motion × Group interaction, F(5, 245) = 1.12, MSE = .41, p = .35, η² = .02, was significant. Age did not moderate any further effects (all F < 1.61, all ps > .16).
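For readers who wish to see how such follow-up comparisons are computed, the sketch below (Python, with randomly generated placeholder scores rather than the study data) runs the Group × Motion follow-ups: an independent-samples t-test between groups on the static block, a paired t-test within the deaf group comparing static and dynamic blocks, and a pooled-SD Cohen’s d for the between-group contrast. All variable names and values are hypothetical.

```python
# Minimal sketch of the Group x Motion follow-up comparisons, assuming per-child
# mean error scores are available as NumPy arrays (placeholder data, not the
# study's).
import numpy as np
from scipy import stats

def cohens_d_independent(a, b):
    """Cohen's d for two independent samples, using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(((na - 1) * a.std(ddof=1) ** 2 +
                         (nb - 1) * b.std(ddof=1) ** 2) / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled_sd

rng = np.random.default_rng(0)
deaf_static = rng.normal(0.90, 0.58, 26)      # hypothetical per-child mean errors
hearing_static = rng.normal(0.57, 0.44, 26)
deaf_dynamic = rng.normal(0.71, 0.52, 26)

# Between groups on the static block (independent samples)
t_between, p_between = stats.ttest_ind(deaf_static, hearing_static)
d_between = cohens_d_independent(deaf_static, hearing_static)

# Within the deaf group: static vs. dynamic (paired samples)
t_within, p_within = stats.ttest_rel(deaf_static, deaf_dynamic)

print(f"static, deaf vs hearing: t = {t_between:.2f}, p = {p_between:.3f}, d = {d_between:.2f}")
print(f"deaf, static vs dynamic: t = {t_within:.2f}, p = {p_within:.3f}")
```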

Deaf children’s better performance in labelling dynamic over static images suggests a compensatory role for motion in emotion recognition for deaf children. With the use of ecologically valid stimuli – dynamic presentations of emotions in real human faces – deaf children are able to identify most of the basic emotions at a similar level to hearing children (Hosie et al., 1998).

Study 2: the effect of intensity of expression on emotion recognition in deaf children

The main aim of Study 2 was to explore the role of the intensity of emotion in deaf children’s emotion recognition ability. Analysing the recognition of emotions at different levels of intensity made it possible to investigate the subtle and brief emotional displays that mimic everyday real-life interactions. Deaf children’s experience of non-verbal communication may be indicative of how they respond to levels of expressiveness as measured by intensity. It could be that deaf children are more habituated to animated expressions when communicating emotions due to experience with sign language (Goldstein, Sexton, & Feldman, 2000; Koester, Papousek, & Smith-Gray, 2000), whereas hearing children may have become accustomed to interpreting subtle cues from emotional facial expressions alongside auditory ones, which may be more challenging for deaf children. It was expected that deaf children would perform worse than hearing controls at the lower levels of intensity on this more demanding task.

Participants

Thirty-two children in total participated in Study 2. Eighteen deaf children (10 male) aged between 6 years 11 months and 11 years 6 months (mean age = 9 years and 2 months; SD = 1 year 4 months) took part, all of whom had participated in Study 1. Twelve were moderate-severely deaf and six were profoundly deaf in their better ear. Seventeen of the deaf children preferred to communicate in SSE and one preferred BSL. Twelve of the children had a family member who could sign in BSL: eight family members signed at a basic level (Level 1 or below) and four at an intermediate level (Level 2/3). None of the children had a deaf parent. All children received auditory amplification or CIs. Ten deaf children had HAs and eight had CIs. Seven of those with CIs were bilaterally implanted, and the majority were late implanted (>2 years; N = 15).

A group of 18 hearing controls (10 male), matched for CA and non-verbal IQ (RCPM), were recruited from the same schools as in Study 1. The hearing children were between 6 years 9 months and 11 years 5 months of age (mean age = 9 years, 1 month; SD = 10 months). Table 3 displays the means and standard deviations of CA, non-verbal ability (RCPM) and the language and communication measures. The children were tested on the same language and communication measures as in Study 1.

Table 3. Details of the participants of Study 2: Means (SDs) and ranges.

Materials

To create the experimental stimuli, dynamic clips of human faces were again selected from the ADFES (Van der Schalk et al., 2011). Three sets of video clips of the six emotions (happiness, sadness, anger, disgust, fear and surprise), portrayed by four of the actors, were selected. The selected videos were examined carefully frame-by-frame and edited into clips representing four levels of emotion intensity: 0–25% (from neutral to 25% expression), 25–50%, 50–75% and 75–100% (from 75% to the apex of the emotional expression), using the software VirtualDub (VirtualDub.org; see Figure 2 for an example of the sadness stimuli). In total, 72 dynamic video clips were created, each lasting 500 ms: three sets of the six basic emotions at four levels of intensity. The set of video clips was piloted on undergraduate students (N = 29), who were asked to categorise the emotion and rate the intensity of each clip on a scale of 1–5: from 1 – not at all intense to 5 – extremely intense. Participants were able to accurately identify all emotions by the 50–75% level, and ratings of intensity increased accordingly with each increasing level of intensity. See Table S1 for accuracy percentages and intensity ratings.
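As an illustration of this editing step, the sketch below uses Python and moviepy (the 1.x API) in place of the VirtualDub editing described above, cutting the neutral-to-apex span of a clip into four equal-duration windows corresponding to the 0–25%, 25–50%, 50–75% and 75–100% intensity levels. The onset and apex times, file names and output format are assumptions for the example, not details taken from the paper.

```python
# Illustrative sketch: split the onset->apex portion of an expression video into
# four equal intensity windows. Uses moviepy rather than VirtualDub; file names
# and timings are assumptions for the example.
from moviepy.editor import VideoFileClip

def cut_intensity_windows(path, onset, apex, out_prefix):
    """Write four equal-duration segments spanning onset->apex of the clip."""
    clip = VideoFileClip(path)
    window = (apex - onset) / 4.0          # e.g. a 2 s onset->apex span gives 500 ms windows
    for i in range(4):
        start = onset + i * window
        segment = clip.subclip(start, start + window)
        segment.write_videofile(f"{out_prefix}_q{i + 1}.mp4", audio=False)

# Hypothetical usage: the expression begins at 1.0 s and peaks at 3.0 s.
# cut_intensity_windows("actor01_sadness.mp4", onset=1.0, apex=3.0, out_prefix="sadness")
```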

Figure 2. Static screenshots of the apex of the dynamic clips showing sadness at four levels of increasing intensity (from left to right, 0%, 25%, 75% and 100%). Source: Van der Schalk, Hawk, Fischer, Doosje.


Procedure

Instructions were given using SSE or BSL, explaining to the child that the task involved watching video clips of actors pulling facial expressions of emotion and telling the experimenter whether each face showed happiness, sadness, anger, fear, disgust or surprise. Once the child was attending to the screen, the experimenter started the first video clip. Each clip appeared on the screen for 500 ms and then disappeared, followed by the emotion words appearing in a randomised order (anger, surprise, sadness, disgust, fear and happiness). After watching the video clip, the child needed to select the emotion by telling the experimenter or by clicking on the appropriate emotion word. Older children clicked on the emotion word independently and were asked to simultaneously say aloud, or sign, the emotion word, to ensure reading errors were not made. Three practice trials were first given to ensure that the children had understood the task. Breaks between trials were taken if necessary. The procedure took approximately 15 minutes.

Design

The study used a 2 × 6 × 4 mixed-factor design, with one between-participants factor (Group: Deaf vs. CA-matched hearing controls) and two within-participants factors: Emotion (happiness, sadness, anger, disgust, fear and surprise) and Intensity (0–25%, 25–50%, 50–75% and 75–100%). The number of errors in identifying each emotion was analysed as the dependent variable.

Results

A repeated measures analysis of variance (see Note 2) revealed a non-significant main effect of Group, F(1, 34) = .20, MSE = 1.32, p = .66, η² = .006, suggesting that overall, deaf children’s performance on emotion recognition was not significantly different from that of the hearing control children. As expected, there was a main effect of Intensity, F(3, 102) = 326.31, MSE = .50, p < .001, η² = .91. Further post-hoc analysis with Bonferroni corrections revealed that each level of intensity was significantly different from all the other levels (all p < .003; means, standard deviations and percentages of errors are shown in Table 4). A significant main effect of Emotion was also present, F(5, 170) = 44.47, MSE = .77, p < .001, η² = .57. Further analysis showed that significantly fewer errors were made in recognising happiness, and significantly more errors in recognising fear, than for all other emotions (ps < .001; Table 4).

Table 4. Mean and standard deviation, and percentage of errors in identifying Emotions by Group and Level of Intensity (Maximum 3).

There was no significant Intensity × Group interaction, F(3, 102) = 1.48, MSE = .47, p = .23, η² = .04, suggesting that deaf and hearing children showed the same pattern of recognising emotions increasingly better as the level of intensity increased. Deaf children did not show a poorer level of performance relative to hearing controls at lower levels of intensity, as predicted. Results revealed a significant Emotion × Intensity interaction, F(15, 510) = 9.07, MSE = .60, p < .001, η² = .21. Further analysis of this interaction revealed that happiness, anger, disgust and surprise were recognised significantly better at the 25–50% level than at the 0–25% level, whereas fear was recognised significantly better at the 50–75% level than at the 0–25% level (ps < .001; Table 4). There was a significant Emotion × Group interaction, F(5, 170) = 2.80, MSE = .77, p = .02, η² = .08. As in Study 1, post-hoc analysis with Bonferroni corrections revealed that deaf children (M = 1.5, SD = .79; 50%) made significantly more errors than hearing children (M = 1.07, SD = .41; 36%) only for disgust, t(34) = 2.05, p = .05, Cohen’s d = .68. There was no significant Emotion × Group × Intensity interaction, F(15, 510) = 1.2, MSE = .60, p = .27, η² = .03.

Factors predicting performance in deaf and hearing children

Further analysis was conducted to investigate the relationships between age, language (the BSL receptive test for deaf children and the BPVS for hearing children) and the overall errors made by each group in each study. For the deaf children, lip-reading ability was additionally considered. In Study 1, none of the factors correlated with the deaf children’s error scores (ps > .05). For the hearing children, there was a significant moderate negative correlation between total errors and age in Study 1, r(24) = −.40, p = .05. In Study 2, none of the factors correlated with either the deaf or the hearing children’s error scores (ps > .05).
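A minimal sketch of this correlational step, assuming per-child totals are available as arrays (the values below are randomly generated placeholders, not the study data): Pearson’s r between age and total errors, reported with n − 2 degrees of freedom as in the text.

```python
# Sketch of the correlational analysis with placeholder data: Pearson's r
# between age (in months) and total error scores for a group of 26 children.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
age_months = rng.integers(75, 141, 26)      # roughly 6;3 to 11;8, placeholder values
total_errors = rng.integers(0, 15, 26)      # total errors across the 60 trials, placeholder

r, p = stats.pearsonr(age_months, total_errors)
print(f"r({len(age_months) - 2}) = {r:.2f}, p = {p:.3f}")
```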

Heterogeneity of deaf children

Possible differences in performance based on level of hearing loss (moderate-severe vs. profound) and on the type of hearing device used (CI vs. HA) were examined by comparing percentage error scores in each study; no significant difference in CA or non-verbal IQ emerged for either pair of subgroups in either study. For Study 1, there was no significant difference between the mean number of errors made by deaf children with a moderate-severe level of deafness (N = 20; M = 4.25, SD = 3.65; 17%) and those made by deaf children with a profound level of deafness (N = 6; M = 3.0, SD = 3.58; 13%), t(24) = −1.0, p = .33, Cohen’s d = .48. However, deaf children with a CI (N = 11; M = 2.45, SD = 3.64; 11%) made significantly fewer errors than children with a HA (N = 15; M = 5.07, SD = 3.26; 20%), t(24) = 2.48, p = .02, Cohen’s d = .95.

Similarly, for Study 2, there was no significant difference between the mean percentage of errors made by deaf children with a moderate-severe level of deafness (N = 12; M = 1.24, SD = .30; 41%) and those made by deaf children with a profound level of deafness (N = 6; M = 1.06, SD = .14; 35%), t(16) = −1.32, p = .21, Cohen’s d = .57. Deaf children with a CI (N = 8; M = 1.05, SD = .13; 35%) made fewer errors than children with a HA (N = 10; M = 1.29, SD = .31; 43%); although the difference was only marginally significant, the effect size was large, t(16) = 2.05, p = .06, Cohen’s d = 1.01.

Discussion

The studies presented in this paper were the first to investigate the effect of motion on deaf children’s emotion identification, and to clarify whether deaf children have difficulty in emotion recognition relative to their hearing peers in middle childhood (i.e. 6–12-year-olds). The main finding was that deaf children were poorer at recognising static images relative to hearing controls, even when controlling for age, but there were no group differences in recognising dynamic images. Importantly, the performance of the two groups did not differ significantly even when movement was presented at low levels of intensity. This suggests that dynamic stimuli improve emotion recognition performance relative to static stimuli, even with a minimal amount of movement.

The interaction between motion and group in Study 1 suggests that motion is an important factor for emotion recognition in deaf children but not necessarily for hearing children. This non-uniform effect of motion is consistent with a previous finding in typically developing children that, when static images show clear, exaggerated facial expressions posed by human actors, dynamic information does not improve emotion recognition (Widen & Russell, 2015). These findings do, however, suggest a role for motion in emotion recognition for deaf children. While motion may not always be essential for emotion recognition, the supplementary movement pattern in dynamic images could provide further cues to disambiguate emotional expressions for deaf children. Furthermore, dynamic facial expressions may be especially helpful for deaf children as they are particularly reliant on visual cues for both emotion and linguistic markers in their communication (Corina et al., 2007).

Deaf children’s difficulty in recognising emotion in static faces could be attributed to a delay in forming internal representations of emotions due to fewer opportunities to discuss them, including overhearing (or “over-seeing”) conversations with emotional content (Rieffe et al., 2015). In addition, it is important to consider that other channels through which children learn about emotions, such as storybooks and television, are often presented in static and/or cartoon format. Therefore, it may be necessary for parents and teachers to ensure that deaf children understand the emotional content presented in cartoon pictures or photographs.

There were no significant differences between deaf and hearing children at any of the intensity levels in Study 2, with both groups showing increasing accuracy with increasing levels of intensity. The ability to recognise facial expressions of emotion at low intensity develops over a more extended period in hearing children (Gao & Maurer, 2010; Montirosso et al., 2010) and has been shown to be a more sensitive measure for detecting emotion recognition deficits in other clinical populations (e.g. Autism Spectrum Disorder; Law-Smith, Montagne, Perrett, Gill, & Gallagher, 2010). Considering the increased task demands in this study, these findings do not support the hypothesis that deaf children show poorer performance in emotion recognition than hearing children at lower intensities. The dynamic advantage found in Study 1 extends to the findings of Study 2, showing that deaf children are able to recognise the basic emotion expressions when presented in a dynamic form, even at low levels of intensity.

Although not an explicit aim of the current studies, differences across specific emotions were noted in the deaf and hearing children. In both studies, the deaf and hearing children showed a similar pattern of accuracy for each emotion, most accurately identifying happiness, followed by sadness, anger and surprise. In addition, both the deaf and hearing groups made the most errors in labelling the more complex emotions of disgust and fear, consistent with previous studies in deaf (Hosie et al., 1998) and hearing children (Widen & Russell, 2003). In both studies, deaf children were significantly poorer at recognising disgust than hearing controls. Deaf children may have a less well-developed concept of disgust as a result of differences in the opportunity to discuss and overhear conversations about emotions, impacting their emotion recognition (Widen & Russell, 2013). Disgust is similar to anger in intensity and has similar perceptual features, such as furrowing of the brow and raising of the upper lip, arguably making it difficult to disambiguate. Disgust is often the last emotion for typically developing children to accurately label, and so it is logical that this emotion would pose the most difficulty for deaf children (Widen, 2013).

In contrast to some previous studies, no relationship was found between language ability and emotion recognition (e.g. Dyck et al., 2004; Sidera et al., 2016). This may indicate that visual and contextual cues are sufficient for deaf children to recognise emotions when ecologically valid stimuli are used. The deaf children’s relative difficulty in recognising disgust may be the result of the visual similarity of disgust and anger. However, it is important to highlight that only one language measure was included in the current study, and therefore it is difficult to decipher the precise role that language played in their emotion recognition ability. Considering that the order of accuracy in deaf children’s labelling of emotions matches that of typically developing children, this can be taken as evidence for the role of socialisation in the gradual emergence of the ability to accurately categorise emotions (Widen, 2013). While deaf children’s BSL receptive vocabulary might not directly relate to their emotion recognition performance, differences in early discourse with parents about emotions, for example, may be more explicitly related. For instance, younger native signing deaf children (i.e. 5–7-year-olds) with early access to language have been shown to have emotion recognition comparable to their hearing peers (Ziv et al., 2013).

The current finding that deaf children perform similarly to their hearing peers in middle childhood is consistent with the studies of Hosie et al. (1998), Hopyan-Misakyan et al. (2009) and Most and Aviner (2009), but contrasts with the findings of Dyck et al. (2004), Ludlow et al. (2010) and Sidera et al. (2016). In addition to the extra social information that our stimuli provided by using real, dynamic human faces, it is noteworthy that the children in our study were within the normal range in their non-verbal intellectual ability. For example, while the emotion recognition performance of the deaf children in Ludlow et al.’s study was poorer than that of both age-matched and mental age-matched controls, it remains possible that the deaf children’s poor non-verbal ability (2 SDs below the mean) may partly account for the group differences in that study. Although non-verbal intellectual ability does not predict emotion recognition performance in typically developing populations, it has been shown to predict performance in clinical populations (e.g. autism; Buitelaar, van der Wees, Swaab-Barneveld, & van der Gaag, 1999). An increasing number of deaf children are being born with additional needs, such as a general cognitive or developmental delay (Oghalai et al., 2012), and it may be that they are particularly susceptible to delays in emotion recognition.

These studies have some limitations. There is no measure that allows a direct comparison of language performance between the deaf and hearing groups. A common language measure would allow more explicit comparisons to be drawn, yet measures that can assess both signed and spoken languages do not currently exist (Haug & Mann, 2008). Such measures may prove crucial in determining the role language plays in deaf children’s emotion understanding, and whether conversing with others can lead to improvements in emotion recognition, as has previously been indicated (e.g. Peterson & Siegal, 1995; Weisel & Bar-Lev, 1992). Finally, a larger sample is needed to address the effects of hearing amplification in the future, as the current studies’ findings were suggestive of better performance by deaf children with CIs than by deaf children with HAs. Some previous studies have also found that the performance of deaf children with CIs was similar to that of hearing children (Hopyan-Misakyan et al., 2009; Most & Aviner, 2009). These findings might suggest that CIs give better access to sound and therefore greater access to conversations about emotions. However, it is important to highlight that there is wide variability in the effectiveness of CIs for deaf individuals (Niparko et al., 2010).

While the results of these studies suggest that there is less difference between deaf and hearing children’s emotion recognition than previous studies have indicated, further clarification is needed by examining the impact of reduced intensity in human faces, as this better reflects what typically occurs in day-to-day life. It is plausible that improvements in cognitive skills, such as perspective taking, could aid the development of understanding the meaning of subtle emotion expressions (Choudhury, Blakemore, & Charman, 2006). For example, in a recent study, Ketelaar, Wiefferink, Frijns, Broekhof, and Rieffe (2015) found that 4–5-year-old deaf children with CIs displayed moral emotions (shame, guilt and surprise) to a lesser degree than hearing children. Therefore, future studies could explore the role that intensity plays in the recognition of these more complex emotions. Given that self-conscious emotions require an understanding of social norms and an awareness of others, deaf children of hearing parents could face delays in developing these skills into middle childhood (Heerey, Keltner, & Capps, 2003).

The current findings are encouraging in showing similarities in emotion recognition between deaf and hearing children when more ecologically valid stimuli are used. These studies add to the evidence that dynamic displays of real facial expressions of emotion are important for assessing emotion recognition ability in clinical populations in which deficits have been identified with static images. Importantly, the present studies indicate that differences in emotion recognition between deaf and hearing children may vary as a result of stimulus type, which could partly explain the mixed findings of previous research. Deaf children may be disadvantaged on tasks less reflective of real-life stimuli. It is therefore important for further studies to consider how the stimuli used in research might impact emotion processing within deaf populations, for whom facial behaviours serve both grammatical and emotion-cuing functions.

Supplemental material

Supplementary_Material.docx


Acknowledgements

We are very grateful to all the children, parents and schools who participated in this research. We are also thankful to the developers of the ADFES for giving us permission to use their images and video clips.

Disclosure statement

No potential conflict of interest was reported by the authors.

Additional information

Funding

This research was funded by an Anglia Ruskin University (Cambridge, UK) research bursary, and their support is greatly appreciated. The support of the Economic and Social Research Council (ESRC) is also gratefully acknowledged. Anna Jones was supported by the ESRC Deafness, Cognition and Language Research Centre (DCAL) [grant number RES-620-28-0002].

Notes

1. Although there was a significant difference in non-verbal IQ scores between the hearing and deaf children (Table 1), there was no significant relationship between overall errors in emotion recognition and non-verbal IQ scores (p > .05), so non-verbal IQ was not included as a covariate.

2. Neither age nor non-verbal IQ was added as a covariate, as neither was related to total error scores (ps > .05).

References

  • Ambadar, Z., Schooler, J. W., & Cohn, J. F. (2005). Deciphering the enigmatic face: The importance of facial dynamics in interpreting subtle facial expressions. Psychological Science, 16(5), 403–410. doi: 10.1111/j.0956-7976.2005.01548.x
  • Armstrong, B. A., Neville, H. J., Hillyard, S. A., & Mitchell, T. V. (2002). Auditory deprivation affects processing of motion, but not color. Cognitive Brain Research, 14(3), 422–434. doi: 10.1016/S0926-6410(02)00211-2
  • Atkinson, A. P., Dittrich, W. H., Gemmell, A. J., & Young, A. W. (2004). Emotion perception from dynamic and static body expressions in point-light and full-light displays. Perception, 33, 717–746. doi: 10.1068/p5096
  • Beck, L., Kumschick, I. R., Eid, M., & Klann-Delius, G. (2012). Relationship between language competence and emotional competence in middle childhood. Emotion, 12(3), 503–514. doi: 10.1037/a0026320
  • Bould, E., & Morris, N. (2008). Role of motion signals in recognizing subtle facial expressions of emotion. British Journal of Psychology, 99(2), 167–189. doi: 10.1348/000712607X206702
  • Buitelaar, J. K., van der Wees, M., Swaab-Barneveld, H., & van der Gaag, R. (1999). Verbal memory and performance IQ predict theory of mind and emotion recognition ability in children with autistic spectrum disorders and in psychiatric control children. Journal of Child Psychology and Psychiatry, 40(6), 869–881. doi: 10.1111/1469-7610.00505
  • Choudhury, S., Blakemore, S. J., & Charman, T. (2006). Social cognitive development during adolescence. Social Cognitive and Affective Neuroscience, 1(3), 165–174. doi: 10.1093/scan/nsl024
  • Corina, D., Chiu, Y., Knapp, H., Greenwald, R., San Jose-Robertson, L., & Braun, A. (2007). Neural correlates of human action observation in hearing and deaf subjects. Brain Research, 1152, 111–129. doi: 10.1016/j.brainres.2007.03.054
  • Dunn, L. M., & Dunn, D. M. (2009). The British Picture Vocabulary Scale: Third edition. London: GL Assessment Limited.
  • Dyck, M. J., Farrugia, C., Shochet, I. M., & Holmes-Brown, M. (2004). Emotion recognition/understanding ability in hearing or vision-impaired children: Do sounds, sights, or words make the difference? Journal of Child Psychology and Psychiatry, 45(4), 789–800. doi: 10.1111/j.1469-7610.2004.00272.x
  • Ekman, P., Friesen, W. V., & Hager, J. C. (1978). Facial action coding system. Salt Lake City, UT: A Human Face.
  • Ekman, P., & O'Sullivan, M. (1991). Facial expression: Methods, means, and moues. In R. S. Feldman & E. Bernard (Eds.), Fundamentals of nonverbal behavior. Studies in emotion & social interaction (pp. 163–199). New York, NY: Cambridge University Press.
  • Gao, X., & Maurer, D. (2010). A happy story: Developmental changes in children’s sensitivity to facial expressions of varying intensities. Journal of Experimental Child Psychology, 107(2), 67–86. doi: 10.1016/j.jecp.2010.05.003
  • Goldstein, N. E., Sexton, J., & Feldman, R. S. (2000). Encoding of facial expressions of emotion and knowledge of American sign language. Journal of Applied Social Psychology, 30(1), 67–76. doi: 10.1111/j.1559-1816.2000.tb02305.x
  • Gray, C., Hosie, J., Russell, P., Scott, C., & Hunter, N. (2007). Attribution of emotions to story characters by severely and profoundly deaf children. Journal of Developmental and Physical Disabilities, 19(2), 145–159. doi: 10.1007/s10882-006-9029-1
  • Greenberg, M. T., & Kusché, C. A. (1993). Promoting social and emotional development in deaf children. The PATHS project. Seattle: University of Washington Press.
  • Grossman, R. B., & Kegl, J. (2007). Moving faces: Categorization of dynamic facial expressions in American sign language by deaf and hearing participants. Journal of Nonverbal Behavior, 31(1), 23–38. doi: 10.1007/s10919-006-0022-2
  • Haug, T., & Mann, W. (2008). Adapting tests of sign language assessments for other sign languages-a review of linguistic, cultural and psychometric problems. Journal of Deaf Studies and Deaf Education, 13(1), 138–147. doi: 10.1093/deafed/enm027
  • Heerey, E. A., Keltner, D., & Capps, L. M. (2003). Making sense of self-conscious emotion: Linking theory of mind and emotion in children with autism. Emotion, 3(4), 394–400. doi: 10.1037/1528-3542.3.4.394
  • Herba, C. M., Landau, S., Russell, T., Ecker, C., & Phillips, M. L. (2006). The development of emotion-processing in children: Effects of age, emotion, and intensity. Journal of Child Psychology and Psychiatry, 47(11), 1098–1106. doi: 10.1111/j.1469-7610.2006.01652.x
  • Herman, R., Holmes, S., & Woll, B. (1999). Assessing British Sign Language development: Receptive Skills Test. Coleford: Forest Bookshop.
  • Hess, U., Blairy, S., & Kleck, R. E. (1997). The intensity of emotional facial expressions and decoding accuracy. Journal of Nonverbal Behavior, 21(4), 241–257. doi: 10.1023/A:1024952730333
  • Hopyan-Misakyan, T. M., Gordon, K. A., Dennis, M., & Papsin, B. C. (2009). Recognition of affective speech prosody and facial affect in deaf children with unilateral right cochlear implants. Child Neuropsychology, 15(2), 136–146. doi: 10.1080/09297040802403682
  • Hosie, J., Gray, C., Russell, P., Scott, C., & Hunter, N. (1998). The matching of facial expressions by deaf and hearing children and their production and comprehension of emotion labels. Motivation and Emotion, 22(4), 293–313. doi: 10.1023/A:1021352323157
  • Jackson, A. L. (2001). Language facility and theory of mind development in deaf children. Journal of Deaf Studies and Deaf Education, 6(3), 161–176. doi: 10.1093/deafed/6.3.161
  • Kessler, H., Doyen-Waldecker, C., Hofer, C., Hoffmann, H., Traue, H. C., & Abler, B. (2011). Neural correlates of the perception of dynamic versus static facial expressions of emotion. GMS Psycho-Social-Medicine, 8, 1–8.
  • Ketelaar, L., Wiefferink, C. H., Frijns, J. H., Broekhof, E., & Rieffe, C. (2015). Preliminary findings on associations between moral emotions and social behavior in young children with normal hearing and with cochlear implants. European Child & Adolescent Psychiatry, 24(11), 1369–1380. doi: 10.1007/s00787-015-0688-2
  • Kilts, C. D., Egan, G., Gideon, D. A., Ely, T. D., & Hoffman, J. M. (2003). Dissociable neural pathways are involved in the recognition of emotion in static and dynamic facial expressions. Neuroimage, 18(1), 156–168. doi: 10.1006/nimg.2002.1323
  • Koester, L., Papousek, H., & Smith-Gray, S. (2000). Intuitive parenting, communication, and interaction with deaf infants. In P. E. Spencer, C. J. Erting, & M. Marschark (Eds.), The deaf child in the family and at school (pp. 55–71). Mahwah, NJ: Psychology Press.
  • Krumhuber, E. G., Kappas, A., & Manstead, A. S. (2013). Effects of dynamic aspects of facial expressions: A review. Emotion Review, 5(1), 41–46. doi: 10.1177/1754073912451349
  • Kyle, F. E., Campbell, R., Mohammed, T., Coleman, M., & MacSweeney, M. (2013). Speechreading development in deaf and hearing children: Introducing the test of child speechreading. Journal of Speech Language and Hearing Research, 56(2), 416–426. doi: 10.1044/1092-4388(2012/12-0039)
  • Laugen, N. J., Jacobsen, K. H., Rieffe, C., & Wichstrøm, L. (2016). Emotion understanding in preschool children with mild-to-severe hearing loss. Journal of Deaf Studies and Deaf Education, 21(3), 259–267. doi: 10.1093/deafed/enw005
  • Law-Smith, M. J., Montagne, B., Perrett, D. I., Gill, M., & Gallagher, L. (2010). Detecting subtle facial emotion recognition deficits in high-functioning autism using dynamic stimuli of varying intensities. Neuropsychologia, 48(9), 2777–2781. doi: 10.1016/j.neuropsychologia.2010.03.008
  • Ludlow, A., Heaton, P., Rosset, D., Hills, P., & Deruelle, C. (2010). Emotion recognition in children with profound and severe deafness: Do they have a deficit in perceptual processing? Journal of Clinical and Experimental Neuropsychology, 32(9), 923–928. doi: 10.1080/13803391003596447
  • Manstead, A. S., Fischer, A. H., & Jakobs, E. B. (1999). The social and emotional functions of facial displays. In P. Philippot, R. S. Feldman, & E. J. Coats (Eds.), The social context of nonverbal behavior (pp. 287–314). Cambridge: Cambridge University Press.
  • Montagne, B., Kessels, R. P., De Haan, E. H., & Perrett, D. I. (2007). The emotion recognition task: A paradigm to measure the perception of facial emotional expressions at different intensities. Perceptual and Motor Skills, 104(2), 589–598. doi: 10.2466/pms.104.2.589-598
  • Montirosso, R., Peverelli, M., Frigerio, E., Crespi, M., & Borgatti, R. (2010). The development of dynamic facial expression recognition at different intensities in 4-to 18-year-olds. Social Development, 19(1), 71–92. doi: 10.1111/j.1467-9507.2008.00527.x
  • Morgan, G., Meristo, M., Mann, W., Hjelmquist, E., Surian, L., & Siegal, M. (2014). Mental state language and quality of conversational experience in deaf and hearing children. Cognitive Development, 29, 41–49. doi: 10.1016/j.cogdev.2013.10.002
  • Most, T., & Aviner, C. (2009). Auditory, visual, and auditory–visual perception of emotions by individuals with cochlear implants, hearing aids, and normal hearing. Journal of Deaf Studies and Deaf Education, 14(4), 449–464. doi: 10.1093/deafed/enp007
  • Most, T., & Michaelis, H. (2012). Auditory, visual, and auditory–visual perceptions of emotions by young children with hearing loss versus children with normal hearing. Journal of Speech Language and Hearing Research, 55(4), 1148–1162. doi: 10.1044/1092-4388(2011/11-0060)
  • Niparko, J. K., Tobey, E. A., Thal, D. J., Eisenberg, L. S., Wang, N. Y., Quittner, A., & Fink, N. E. (2010). Spoken language development in children following cochlear implantation. The Journal of the American Medical Association, 303(15), 1498–1506. doi: 10.1001/jama.2010.451
  • Oghalai, J. S., Caudle, S. E., Bentley, B., Abaya, H., Lin, J., Baker, D., … Winzelberg, J. (2012). Cognitive outcomes and familial stress after cochlear implantation in deaf children with and without developmental delays. Otology & Neurotology, 33(6), 947–956.
  • Peterson, C. C., & Siegal, M. (1995). Deafness, conversation and theory of mind. Journal of Child Psychology and Psychiatry, 36(3), 459–474. doi: 10.1111/j.1469-7610.1995.tb01303.x
  • Raven, J., Court, J. H., & Raven, J. (1990). Coloured Progressive Matrices. Manual for Raven’s Progressive Matrices and Vocabulary Scales. Oxford: Oxford Psychologists Press.
  • Rieffe, C. (2012). Awareness and regulation of emotions in deaf children. British Journal of Developmental Psychology, 30(4), 477–492. doi: 10.1111/j.2044-835X.2011.02057.x
  • Rieffe, C., Netten, A. P., Broekhof, E., & Veiga, E. (2015). The role of the environment in children’s emotion socialization. In H. Knoors & M. Marschark (Eds.), Educating deaf learners: Creating a global evidence base (pp. 369–399). New York, NY: Oxford University Press.
  • Russell, J. A., & Widen, S. C. (2002). Words versus faces in evoking preschool children’s knowledge of the causes of emotions. International Journal of Behavioral Development, 26(2), 97–103. doi: 10.1080/01650250042000582
  • Sato, W., & Yoshikawa, S. (2004). The dynamic aspects of emotional facial expressions. Cognition and Emotion, 18(5), 701–710. doi: 10.1080/02699930341000176
  • Sidera, F., Amadó, A., & Martínez, L. (2016). Influences on facial emotion recognition in deaf children. Journal of Deaf Studies and Deaf Education. doi: 10.1093/deafed/enw072.
  • Symons, D. K. (2004). Mental state discourse, theory of mind and the internalization of self-other understanding. Developmental Review, 24, 159–188. doi: 10.1016/j.dr.2004.03.001
  • Taumoepeau, M., & Ruffman, T. (2008). Stepping stones to others’ minds: Maternal talk relates to child mental state language and emotion understanding. Child Development, 79, 284–302. doi: 10.1111/j.1467-8624.2007.01126.x
  • Updike, C. D., Rasmussen, J. M., Arndt, R., & German, C. (1992). Revised Craig Lipreading Inventory. Perceptual and Motor Skills, 74(1), 267–277. doi: 10.2466/pms.1992.74.1.267
  • Van der Schalk, J., Hawk, S. T., Fischer, A. H., & Doosje, B. (2011). Moving faces, looking places: Validation of the Amsterdam Dynamic Facial Expression Set (ADFES). Emotion, 11(4), 907–920. doi: 10.1037/a0023853
  • Wang, Y., Su, Y., Fang, P., & Zhou, Q. (2011). Facial expression recognition: Can preschoolers with cochlear implants and hearing aids catch it? Research in Developmental Disabilities, 32(6), 2583–2588. doi: 10.1016/j.ridd.2011.06.019
  • Wehrle, T., Kaiser, S., Schmidt, S., & Scherer, K. R. (2000). Studying the dynamics of emotional expression using synthesized facial muscle movements. Journal of Personality and Social Psychology, 78(1), 105–119. doi: 10.1037/0022-3514.78.1.105
  • Weisel, A., & Bar-Lev, H. (1992). Role taking ability, nonverbal sensitivity, language and social adjustment of deaf adolescents. Educational Psychology, 12(1), 3–13. doi: 10.1080/0144341920120101
  • Widen, S. C. (2013). Children’s interpretation of facial expressions: The long path from valence-based to specific discrete categories. Emotion Review, 5(1), 72–77. doi: 10.1177/1754073912451492
  • Widen, S. C., & Russell, J. A. (2003). A closer look at preschoolers’ freely produced labels for facial expressions. Developmental Psychology, 39(1), 114–128. doi: 10.1037/0012-1649.39.1.114
  • Widen, S. C., & Russell, J. A. (2013). Children’s recognition of disgust in others. Psychological Bulletin, 139(2), 271–299. doi: 10.1037/a0031640
  • Widen, S. C., & Russell, J. A. (2015). Do dynamic facial expressions convey emotions to children better than do static ones? Journal of Cognition and Development, 16(5), 802–811. doi: 10.1080/15248372.2014.916295
  • Wiefferink, C. H., Rieffe, C., Ketelaar, L., De Raeve, L., & Frijns, J. H. (2013). Emotion understanding in deaf children with a cochlear implant. Journal of Deaf Studies and Deaf Education, 18(2), 175–186. doi: 10.1093/deafed/ens042
  • Ziv, M., Most, T., & Cohen, S. (2013). Understanding of emotions and false beliefs among hearing children versus deaf children. Journal of Deaf Studies and Deaf Education, 18(2), 161–174. doi: 10.1093/deafed/ens073