Brief Articles

Children across cultures respond emotionally to the acoustic environment

Pages 1144-1152 | Received 12 Jul 2022, Accepted 08 Jun 2023, Published online: 20 Jun 2023
 

ABSTRACT

Among human and non-human animals, the ability to respond rapidly to biologically significant events in the environment is essential for survival and development. Research has confirmed that adult human listeners respond emotionally to environmental sounds by relying on the same acoustic cues that signal emotionality in speech prosody and music. However, it is unknown whether young children also respond emotionally to environmental sounds. Here, we report that changes in the pitch, rate (i.e. playback speed), and intensity (i.e. amplitude) of environmental sounds trigger emotional responses in 3- to 6-year-old American and Chinese children across four sound types: human actions, animal calls, machinery, and natural phenomena such as wind and waves. Children’s responses did not differ across the four sound types but developed with age – a finding observed in both American and Chinese children. Thus, the ability to respond emotionally to non-linguistic, non-musical environmental sounds is evident at three years of age – the age at which the ability to decode emotional prosody in language and music emerges. We argue that general mechanisms supporting the decoding of emotional prosody are engaged by all sounds, as reflected in emotional responses to non-linguistic acoustic input such as music and environmental sounds.

Acknowledgement

We thank the children who participated in this research and Lingyan Zhang for preparing the visual stimuli.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Data accessibility statement

The datasets supporting this article are available in Supplementary Information (SI).

Correction Statement

This article has been corrected with minor changes. These changes do not impact the academic content of the article.

Notes

1 Minimum sample sizes for each condition (i.e., intensity, rate, pitch) (n = 52) were established with a power analysis in G*Power, assuming a medium expected effect size (f = .25), power (1 − β) of .90, and a repeated-measures ANOVA (within-between interaction) with four groups (2 culture groups [American, Chinese] × 2 age groups [4-, 5-year-olds]) and four measures (the four types of environmental sounds). This minimum sample size is consistent with previous research on children’s sensitivity to emotion in music and speech (Dalla Bella et al., 2001; Ma et al., 2022; Morton & Trehub, 2001; Mote, 2011; Nawrot, 2003; Quam & Swingley, 2012). To maximise statistical power, we permitted modest increases to sample sizes according to the recruitment capacity of each group, but capped the total sample across groups at 200 participants to avoid p-hacking.
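For readers who wish to check the calculation, the following is a minimal sketch of a G*Power-style power computation for the within-between interaction of a repeated-measures ANOVA. The significance level (α = .05) and the correlation among repeated measures (ρ = .5, G*Power's default) are assumptions not stated in the note, so the resulting n may differ somewhat from the reported n = 52.

    # Minimal sketch of a G*Power-style sample-size search for the
    # within-between interaction of a repeated-measures ANOVA.
    # Assumptions (not stated in the note): alpha = .05, correlation
    # among repeated measures rho = .5, nonsphericity correction = 1.
    from scipy.stats import f as f_dist, ncf

    def interaction_power(n_total, f_effect, k_groups, m_measures,
                          alpha=0.05, rho=0.5, epsilon=1.0):
        """Power of the within-between interaction test (G*Power conventions)."""
        lam = f_effect**2 * n_total * m_measures * epsilon / (1 - rho)  # noncentrality
        df1 = (k_groups - 1) * (m_measures - 1) * epsilon               # numerator df
        df2 = (n_total - k_groups) * (m_measures - 1) * epsilon         # denominator df
        f_crit = f_dist.ppf(1 - alpha, df1, df2)
        return ncf.sf(f_crit, df1, df2, lam)                            # P(F' > F_crit)

    def min_n(f_effect=0.25, k_groups=4, m_measures=4, target_power=0.90):
        """Smallest total sample size reaching the target power."""
        n = k_groups + 1
        while interaction_power(n, f_effect, k_groups, m_measures) < target_power:
            n += 1
        return n

    print(min_n())  # total n per condition under the assumptions above

The sample size this returns is sensitive to the assumed correlation among repeated measures; with a lower assumed correlation, the required n rises toward the reported value.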

2 As in Ma et al. (2022), each experiment began with three practice trials that familiarised children with the task demands, that is, pointing at the face that matched the second sound. Children had to respond correctly on all three trials in order to continue with the experiment. All participants met this criterion, suggesting they had no difficulty performing the task and no bias toward either the happy or the sad face (see SI).

3 To determine whether children’s responses differed between trials where the happy faces were the target and trials where the sad faces were the target, a chi-square analysis was conducted on all 3200 trials (American children: 1568 trials [98 children × 16 trials]). Children’s responses did not differ between the two types of trials (χ² = 1.01, p = .31). Among the 1600 trials where the sad faces were the target, children offered 1171 target responses; among the 1600 trials where the happy faces were the target, children offered 1196 target responses.
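The reported test can be reconstructed from the counts given in the note. The sketch below assumes a 2 × 2 table of trial type (happy-target vs. sad-target) by response (target vs. non-target) and no continuity correction; the non-target counts are simply the complements of the reported target counts out of 1600 trials each.

    # Minimal sketch of the chi-square comparison, assuming a 2 x 2 table
    # (trial type x response) and no Yates continuity correction.
    from scipy.stats import chi2_contingency

    table = [
        [1196, 1600 - 1196],  # happy-target trials: target vs. non-target responses
        [1171, 1600 - 1171],  # sad-target trials:  target vs. non-target responses
    ]
    chi2, p, dof, expected = chi2_contingency(table, correction=False)
    print(round(chi2, 2), round(p, 2))  # approximately 1.01 and .31, as reported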

Additional information

Funding

P.Z. is supported by the National Natural Science Foundation of China (U20B2062).
