Original Articles

Fast and unintentional evaluation of emotional sounds: evidence from brief segment ratings and the affective Simon task

Pages 312-324 | Received 28 Oct 2014, Accepted 16 Oct 2015, Published online: 20 Nov 2015
ABSTRACT

In the present study, we raised the question of whether valence information of natural emotional sounds can be extracted rapidly and unintentionally. In a first experiment, we collected explicit valence ratings of brief natural sound segments. Results showed that sound segments of 400 and 600 ms duration—and, with some limitation, even sound segments as short as 200 ms—are evaluated reliably. In a second experiment, we introduced an auditory version of the affective Simon task to assess automatic (i.e. unintentional and fast) evaluations of sound valence. The pattern of results indicates that affective information of natural emotional sounds can be extracted rapidly (i.e. after an exposure of only a few hundred milliseconds) and in an unintentional fashion.

Acknowledgements

The authors thank Ullrich Ecker for his helpful comments and Thorid Römer for assistance in data collection.

Notes

1. Despite its relative neglect compared with visual affective research (which encompasses hundreds of published studies), there are remarkable attempts to investigate sound evaluation, which should be mentioned: There are studies on preferential processing of conditioned valence of sounds (Bröckelmann et al., Citation2011, Citation2013; Folyi, Liesefeld, & Wentura, Citation2015), on functional magnetic resonance imaging and electrophysiological correlates of complex emotional sounds such as environmental sounds, emotional vocalizations, and music (e.g. Czigler, Cox, Gyimesi, & Horváth, Citation2007; Grandjean et al., Citation2005; Koelsch, Fritz, von Cramon, Müller, & Friederici, Citation2006; Mitchell, Elliott, Barry, Cruttenden, & Woodruff, Citation2003; Sander, Frome, & Scheich, Citation2007; Sander & Scheich, Citation2001; Sauter & Eimer, Citation2010; Scott, Sauter, & McGettigan, Citation2009; Shinkareva et al., Citation2014), on identifying non-symbolic, low-level acoustic features that contribute to the evaluation of a wide range of sounds by using the approach of computational modelling (e.g. Weninger, Eyben, Schuller, Mortillaro, & Scherer, Citation2013), and on multisensory integration of emotional information (e.g. Dolan, Morris, & de Gelder, Citation2001; Pourtois, de Gelder, Bol, & Crommelinck, Citation2005).

2. Sample size was determined by considerations about the reliability of mean ratings (see Materials).

3. All correlations are associated with p < .001. However, due to the multimodal distribution of the norm ratings, inferential statistics might be biased. Thus, the correlations should primarily be taken as a descriptive index of the correspondence between the brief segment ratings and the full ratings.

4. Alternatively, we conducted a 3 (valence) × 2 (animacy category: animate vs. inanimate) × 3 (duration) MANOVA. All effects reported below are essentially the same in this analysis. Additionally, there were significant effects involving animacy. However, for the sake of succinctness, and because these effects are rather uninteresting due to their ambiguity (i.e. they might reflect better discriminability of one category relative to the other, or they might reflect a response bias), we report only the reduced analysis.

Additional information

Funding

The present research was conducted within the International Research Training Group “Adaptive Minds” supported by the German Research Foundation [GRK 1457].

