Original Articles

Processing emotions in sounds: cross-domain aftereffects of vocal utterances and musical sounds

Pages 1610-1626 | Received 02 Dec 2015, Accepted 24 Oct 2016, Published online: 16 Nov 2016
 

ABSTRACT

Nonlinguistic signals in the voice and musical instruments play a critical role in communicating emotion. Although previous research suggests a common mechanism for emotion processing in music and speech, the precise relationship between the two domains is unclear due to the paucity of direct evidence. By applying the adaptation paradigm developed by Bestelmeyer, Rouger, DeBruine, and Belin [2010. Auditory adaptation in vocal affect perception. Cognition, 117(2), 217–223. doi:10.1016/j.cognition.2010.08.008], this study shows cross-domain aftereffects from vocal to musical sounds. Participants heard an angry or fearful sound four times, followed by a test sound, and judged whether the test sound was angry or fearful. Results show cross-domain aftereffects in one direction – from vocal utterances to musical sounds, but not vice versa. This effect occurred primarily for angry vocal sounds. It is argued that there is a unidirectional relationship between vocal and musical sounds, in which emotion processing of vocal sounds encompasses musical sounds but not vice versa.

Disclosure statement

No potential conflict of interest was reported by the authors.

Notes

1. Experiments 1c and 1d were run separately from 1a and 1b and did not have equal sample sizes. Experiments 1a and 1b tested adaptation with the same stimulus type (e.g. adapt to voice, test on voice), whereas Experiments 1c and 1d examined cross-domain adaptation between voice and instrumental sounds (e.g. adapt to an instrumental sound and test on a voice sound). Because Experiment 1c was the first cross-domain adaptation study, we increased the number of participants to examine whether cross-domain adaptation would be present. Additional analyses with reduced sample sizes for Experiment 1d (n = 36, to equal Exp. 1c, and n = 20) and a reduced sample size for Experiment 1c (n = 20) were run; the results did not change the overall interpretation of the data.

2. The baseline phase of each experiment (Exp. 1a–1d and 2a–2d) functioned as a stimulus emotion evaluation. In the baseline phase, participants listened to each sound stimulus (all morphed sound stimuli, steps 1–7, including those used as adaptors in the adaptation phase) and judged whether it sounded angry or fearful. The emotion of the sound stimuli can therefore be evaluated by examining the baseline phase of each experiment; see the baseline phase in Figures 2–9.

3. In addition to the analysis in the main text, we performed a two-way ANOVA (3 (adaptor: baseline, anger, fear) × 7 (morphing step: 1–7)) with morphing step (7 levels) as a within-subjects factor. We found a two-way interaction effect in Experiments 1a, 1c, and 1d, and the directions of these interaction effects were all consistent with the main effects described in the results for all experiments.
