
Processing emotions in sounds: cross-domain aftereffects of vocal utterances and musical sounds

Pages 1610-1626 | Received 02 Dec 2015, Accepted 24 Oct 2016, Published online: 16 Nov 2016
 

ABSTRACT

Nonlinguistic signals in the voice and musical instruments play a critical role in communicating emotion. Although previous research suggests a common mechanism for emotion processing in music and speech, the precise relationship between the two domains remains unclear owing to the paucity of direct evidence. By applying the adaptation paradigm developed by Bestelmeyer, Rouger, DeBruine, and Belin [2010. Auditory adaptation in vocal affect perception. Cognition, 117(2), 217–223. doi:10.1016/j.cognition.2010.08.008], this study shows cross-domain aftereffects from vocal to musical sounds. Participants heard an angry or fearful sound four times, followed by a test sound, and judged whether the test sound was angry or fearful. Results show cross-domain aftereffects in one direction – from vocal utterances to musical sounds, not vice versa. This effect occurred primarily for angry vocal sounds. It is argued that the relationship between vocal and musical sounds is unidirectional: emotion processing of vocal sounds encompasses musical sounds, but not vice versa.
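
For illustration only, the trial structure described above amounts to four adaptor presentations followed by one test sound and a binary judgment. The run_trial helper and its callbacks in the sketch below are hypothetical stand-ins, not the authors' experiment code.

```python
# Hypothetical sketch of one adaptation trial: the adaptor plays four times,
# then a morphed test sound, then an "angry" vs. "fearful" judgment is
# collected. play_sound and get_response are stand-in callbacks, not the
# authors' experiment code.
def run_trial(adaptor_sound, test_sound, play_sound, get_response):
    for _ in range(4):            # adaptor presented four times
        play_sound(adaptor_sound)
    play_sound(test_sound)        # morphed test stimulus
    return get_response(("angry", "fearful"))

# Stub demo; a real experiment would use actual audio playback and key input.
response = run_trial(
    "angry_voice.wav", "test_morph.wav",
    play_sound=lambda f: print("playing", f),
    get_response=lambda options: options[0],
)
print("response:", response)
```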

Disclosure statement

No potential conflict of interest was reported by the authors.

Notes

1. Experiments 1c and 1d were run separately from 1a and 1b and did not have equal sample sizes. Experiments 1a and 1b tested adaptation with the same stimulus type (e.g. adapt to voice, test on voice), whereas Experiments 1c and 1d examined cross-domain adaptation between voice and instrumental sounds (e.g. adapt to an instrumental sound, test on a voice sound). Because Experiment 1c was the first cross-domain adaptation study, we increased the number of participants to examine whether cross-domain adaptation would be present. Additional analyses were run with reduced sample sizes for Experiment 1d (n = 36, to equal Exp. 1c, and n = 20) and for Experiment 1c (n = 20); the results did not change the overall interpretation of the data.

2. The baseline phase of each experiment (Exp. 1a–1d and 2a–2d) functioned as a stimulus emotion evaluation. In the baseline phase, participants listened to each sound stimulus (all morphed sound stimuli, steps 1–7, including those used as adaptors in the adaptation phase) and judged whether it sounded angry or fearful. The emotion of the sound stimuli can therefore be evaluated by examining the baseline phase of each experiment; see the baseline phase of Figures 2–9.
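
As a concrete illustration of how baseline judgments can serve as a stimulus evaluation, the sketch below fits a logistic psychometric function to the proportion of "angry" responses across the seven morph steps. The response values and the fear-to-anger ordering of the continuum are assumptions for illustration; the actual baseline data are reported in Figures 2–9.

```python
# Sketch: summarizing baseline-phase judgments as a psychometric function.
# The response proportions below are invented example values, not the
# article's data (see Figures 2-9 for the reported baseline results).
import numpy as np
from scipy.optimize import curve_fit

def logistic(step, pse, slope):
    """P("angry") as a logistic function of morph step."""
    return 1.0 / (1.0 + np.exp(-slope * (step - pse)))

steps = np.arange(1, 8)  # morph continuum, step 1 (fearful) to step 7 (angry)
prop_angry = np.array([0.05, 0.10, 0.30, 0.55, 0.75, 0.90, 0.97])

(pse, slope), _ = curve_fit(logistic, steps, prop_angry, p0=[4.0, 1.0])
print(f"Point of subjective equality: step {pse:.2f} (slope {slope:.2f})")
```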

3. In addition to the analysis in the main text, we performed a two-way ANOVA (3 (baseline, anger, fear) × 7 (morph steps 1–7)) with morph step (7 levels) as a within-subjects factor. We found a two-way interaction effect in Experiments 1a, 1c, and 1d, and the directions of these interaction effects were all consistent with the main effects described in the results for all experiments.
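
A minimal sketch of an ANOVA of this form follows, assuming long-format data with hypothetical column names (participant, adaptor, step, prop_angry) and treating both factors as within-subjects; this is an illustration, not the authors' analysis code.

```python
# Sketch of the 3 (adaptor: baseline, anger, fear) x 7 (morph step) ANOVA
# described in Note 3. Column names and the synthetic data are hypothetical;
# both factors are treated as within-subjects here. If adaptor condition were
# between-subjects, pingouin's mixed_anova would apply instead.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)

# Long format: one row per participant x adaptor condition x morph step,
# with the proportion of "angry" responses as the dependent variable.
rows = [
    {"participant": p, "adaptor": a, "step": s,
     "prop_angry": rng.uniform()}           # placeholder responses
    for p in range(1, 21)
    for a in ("baseline", "anger", "fear")
    for s in range(1, 8)
]
df = pd.DataFrame(rows)

# Two-way repeated-measures ANOVA, morph step (7 levels) within-subjects.
aov = pg.rm_anova(data=df, dv="prop_angry",
                  within=["adaptor", "step"], subject="participant")
print(aov)
```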

