References
- Anikin, A. (2019). Soundgen: An open-source tool for synthesizing nonverbal vocalizations. Behavior Research Methods, 51(2), 778–792. https://doi.org/10.3758/s13428-018-1095-7
- Anikin, A., & Persson, T. (2017). Non-linguistic vocalizations from online amateur videos for emotion research: A validated corpus. Behavior Research Methods, 49(2), 758–771. https://doi.org/10.3758/s13428-016-0736-y
- Arnal, L. H., Flinker, A., Kleinschmidt, A., Giraud, A. L., & Poeppel, D. (2015). Human screams occupy a privileged niche in the communication soundscape. Current Biology, 25(15), 2051–2056. https://doi.org/10.1016/j.cub.2015.06.043
- August, P. V., & Anderson, J. G. (1987). Mammal sounds and motivation-structural rules: A test of the hypothesis. Journal of Mammalogy, 68(1), 1–9. https://doi.org/10.2307/1381039
- Ball, M., & Bruck, D. (2004, October 1–3). The salience of fire alarm signals for sleeping individuals: A novel approach to signal design. Proceedings of the Third Human Behaviour in Fire Conference (pp. 303–314), Belfast, Northern Ireland, UK.
- Bao, S. (2015). Perceptual learning in the developing auditory cortex. European Journal of Neuroscience, 41(5), 718–724. https://doi.org/10.1111/ejn.12826
- Briefer, E. F. (2012). Vocal expression of emotions in mammals: Mechanisms of production and evidence. Journal of Zoology, 288(1), 1–20. https://doi.org/10.1111/j.1469-7998.2012.00920.x
- Bürkner, P. C. (2017). brms: An R package for Bayesian multilevel models using Stan. Journal of Statistical Software, 80(1), 1–28. https://doi.org/10.18637/jss.v080.i01
- Davila Ross, M., Owren, M. J., & Zimmermann, E. (2009). Reconstructing the evolution of laughter in great apes and humans. Current Biology, 19(13), 1106–1111. https://doi.org/10.1016/j.cub.2009.05.028
- Filippi, P., Congdon, J. V., Hoang, J., Bowling, D. L., Reber, S. A., Pašukonis, A., Hoeschele, M., Ocklenburg, S., de Boer, B., Sturdy, C. B., Newen, A., & Güntürkün, O. (2017). Humans recognize emotional arousal in vocalizations across all classes of terrestrial vertebrates: Evidence for acoustic universals. Proceedings of the Royal Society B: Biological Sciences, 284(1859), Article 20170990. https://doi.org/10.1098/rspb.2017.0990
- Fischer, J., Kitchen, D. M., Seyfarth, R. M., & Cheney, D. L. (2004). Baboon loud calls advertise male quality: Acoustic features and their relation to rank, age, and exhaustion. Behavioral Ecology and Sociobiology, 56(2), 140–148. https://doi.org/10.1007/s00265-003-0739-4
- Fischer, J., Wadewitz, P., & Hammerschmidt, K. (2017). Structural variability and communicative complexity in acoustic communication. Animal Behaviour, 134, 229–237. https://doi.org/10.1016/j.anbehav.2016.06.012
- Fitch, W. T. (2018). The biology and evolution of speech: A comparative analysis. Annual Review of Linguistics, 4, 255–279. https://doi.org/10.1146/annurev-linguistics-011817-045748
- Fitch, W. T., Neubauer, J., & Herzel, H. (2002). Calls out of chaos: The adaptive significance of nonlinear phenomena in mammalian vocal production. Animal Behaviour, 63(3), 407–418. https://doi.org/10.1006/anbe.2001.1912
- Foote, J. (2000, July). Automatic audio segmentation using a measure of audio novelty. Proceedings of the IEEE International Conference on Multimedia and Expo (pp. 452–455). IEEE.
- Fuller, R. C., Houle, D., & Travis, J. (2005). Sensory bias as an explanation for the evolution of mate preferences. The American Naturalist, 166(4), 437–446. https://doi.org/10.1086/444443
- Gobl, C., & Ní Chasaide, A. (2010). Voice source variation and its communicative functions. In W. J. Hardcastle, J. Laver, & F. E. Gibbon (Eds.), The handbook of phonetic sciences (2nd ed., pp. 378–423). Wiley-Blackwell.
- Hamilton-Fletcher, G., Pisanski, K., Reby, D., Stefańczyk, M., Ward, J., & Sorokowska, A. (2018). The role of visual experience in the emergence of cross-modal correspondences. Cognition, 175, 114–121. https://doi.org/10.1016/j.cognition.2018.02.023
- Huang, N., & Elhilali, M. (2017). Auditory salience using natural soundscapes. The Journal of the Acoustical Society of America, 141(3), 2163–2176. https://doi.org/10.1121/1.4979055
- Jégh-Czinege, N., Faragó, T., & Pongrácz, P. (2019). A bark of its own kind: The acoustics of ‘annoying’ dog barks suggests a specific attention-evoking effect for humans. Bioacoustics. Advance online publication. https://doi.org/10.1080/09524622.2019.1576147
- Karp, D., Manser, M. B., Wiley, E. M., & Townsend, S. W. (2014). Nonlinearities in meerkat alarm calls prevent receivers from habituating. Ethology, 120(2), 189–196. https://doi.org/10.1111/eth.12195
- Kaya, E. M., & Elhilali, M. (2014). Investigating bottom-up auditory attention. Frontiers in Human Neuroscience, 8, 327. https://doi.org/10.3389/fnhum.2014.00327
- Kaya, E. M., & Elhilali, M. (2017). Modelling auditory attention. Philosophical Transactions of the Royal Society B: Biological Sciences, 372(1714), Article 20160101. https://doi.org/10.1098/rstb.2016.0101
- Kayser, C., Petkov, C. I., Lippert, M., & Logothetis, N. K. (2005). Mechanisms for allocating auditory attention: An auditory saliency map. Current Biology, 15(21), 1943–1947. https://doi.org/10.1016/j.cub.2005.09.040
- Kelley, K., Maxwell, S. E., & Rausch, J. R. (2003). Obtaining power or obtaining precision: Delineating methods of sample-size planning. Evaluation & the Health Professions, 26(3), 258–287. https://doi.org/10.1177/0163278703255242
- Kim, K., Lin, K. H., Walther, D. B., Hasegawa-Johnson, M. A., & Huang, T. S. (2014). Automatic detection of auditory salience with optimized linear filters derived from human annotation. Pattern Recognition Letters, 38, 78–85. https://doi.org/10.1016/j.patrec.2013.11.010
- Köppl, C. (2009). Evolution of sound localisation in land vertebrates. Current Biology, 19(15), R635–R639. https://doi.org/10.1016/j.cub.2009.05.035
- Lassalle, A., Pigat, D., O’Reilly, H., Berggen, S., Fridenson-Hayo, S., Tal, S., Elfström, S., Råde, A., Golan, O., Bölte, S., Baron-Cohen, S., & Lundqvist, D. (2019). The EU-emotion voice database. Behavior Research Methods, 51(2), 493–506. https://doi.org/10.3758/s13428-018-1048-1
- LeDoux, J. (2012). Rethinking the emotional brain. Neuron, 73(4), 653–676. https://doi.org/10.1016/j.neuron.2012.02.004
- Ligges, U., Krey, S., Mersmann, O., & Schnackenberg, S. (2018). tuneR: Analysis of music and speech. https://CRAN.R-project.org/package=tuneR
- Lima, C. F., Castro, S. L., & Scott, S. K. (2013). When voices get emotional: A corpus of nonverbal vocalizations for research on emotion processing. Behavior Research Methods, 45(4), 1234–1245. https://doi.org/10.3758/s13428-013-0324-3
- Lingle, S., Wyman, M. T., Kotrba, R., Teichroeb, L. J., & Romanow, C. A. (2012). What makes a cry a cry? A review of infant distress vocalizations. Current Zoology, 58(5), 698–726. https://doi.org/10.1093/czoolo/58.5.698
- Ma, W., & Thompson, W. F. (2015). Human emotions track changes in the acoustic environment. Proceedings of the National Academy of Sciences, 112(47), 14563–14568. https://doi.org/10.1073/pnas.1515087112
- Manser, M. B. (2001). The acoustic structure of suricates’ alarm calls varies with predator type and the level of response urgency. Proceedings of the Royal Society of London. Series B: Biological Sciences, 268(1483), 2315–2324. https://doi.org/10.1098/rspb.2001.1773
- Maurage, P., Joassin, F., Philippot, P., & Campanella, S. (2007). A validated battery of vocal emotional expressions. Neuropsychological Trends, 2(1), 63–74.
- McElreath, R. (2018). Statistical rethinking: A Bayesian course with examples in R and Stan. Chapman and Hall/CRC.
- Morton, E. S. (1977). On the occurrence and significance of motivation-structural rules in some bird and mammal sounds. The American Naturalist, 111(981), 855–869. https://doi.org/10.1086/283219
- Neuhoff, J. G. (2018). Adaptive biases in visual and auditory looming perception. In T. L. Hubbard (Ed.), Spatial biases in perception and cognition (pp. 180–190). Cambridge University Press.
- Ohala, J. J. (1984). An ethological perspective on common cross-language utilization of F₀ of voice. Phonetica, 41(1), 1–16. https://doi.org/10.1159/000261706
- Reby, D., & Charlton, B. D. (2012). Attention grabbing in red deer sexual calls. Animal Cognition, 15(2), 265–270. https://doi.org/10.1007/s10071-011-0451-0
- Ron, S. R. (2008). The evolution of female mate choice for complex calls in túngara frogs. Animal Behaviour, 76(6), 1783–1794. https://doi.org/10.1016/j.anbehav.2008.07.024
- Russell, J. A. (1980). A circumplex model of affect. Journal of Personality and Social Psychology, 39(6), 1161–1178. https://doi.org/10.1037/h0077714
- Ryan, M. J., & Cummings, M. E. (2013). Perceptual biases and mate choice. Annual Review of Ecology, Evolution, and Systematics, 44, 437–459. https://doi.org/10.1146/annurev-ecolsys-110512-135901
- Scherer, K. R. (1986). Vocal affect expression: A review and a model for future research. Psychological Bulletin, 99(2), 143–165. https://doi.org/10.1037/0033-2909.99.2.143
- Schröder, M., Cowie, R., Douglas-Cowie, E., Westerdijk, M., & Gielen, S. (2001, September 3–7). Acoustic correlates of emotion dimensions in view of speech synthesis. Proceedings of Eurospeech 2001, Aalborg, Denmark.
- Singh, N. C., & Theunissen, F. E. (2003). Modulation spectra of natural sounds and ethological theories of auditory processing. The Journal of the Acoustical Society of America, 114(6), 3394–3411. https://doi.org/10.1121/1.1624067
- Smith, E. C., & Lewicki, M. S. (2006). Efficient auditory coding. Nature, 439(7079), 978–982. https://doi.org/10.1038/nature04485
- Southwell, R., Baumann, A., Gal, C., Barascud, N., Friston, K., & Chait, M. (2017). Is predictability salient? A study of attentional capture by auditory patterns. Philosophical Transactions of the Royal Society B: Biological Sciences, 372(1714), Article 20160105. https://doi.org/10.1098/rstb.2016.0105
- Spence, C. (2011). Crossmodal correspondences: A tutorial review. Attention, Perception, & Psychophysics, 73(4), 971–995. https://doi.org/10.3758/s13414-010-0073-7
- Stebbins, W. C. (1980). The evolution of hearing in the mammals. In A. Popper & R. Fay (Eds.), Comparative studies of hearing in vertebrates (pp. 421–436). Springer.
- Tajadura-Jiménez, A., Väljamäe, A., Asutay, E., & Västfjäll, D. (2010). Embodied auditory perception: The emotional impact of approaching and receding sound sources. Emotion, 10(2), 216–229. https://doi.org/10.1037/a0018422
- Theunissen, F. E., & Elie, J. E. (2014). Neural processing of natural sounds. Nature Reviews Neuroscience, 15(6), 355–366. https://doi.org/10.1038/nrn3731
- Tinbergen, N. (1963). On aims and methods of ethology. Zeitschrift für Tierpsychologie, 20(4), 410–433. https://doi.org/10.1111/j.1439-0310.1963.tb01161.x
- Vachon, F., Labonté, K., & Marsh, J. E. (2017). Attentional capture by deviant sounds: A noncontingent form of auditory distraction? Journal of Experimental Psychology: Learning, Memory, and Cognition, 43(4), 622–634. https://doi.org/10.1037/xlm0000330
- Woolley, S. M., Fremouw, T. E., Hsu, A., & Theunissen, F. E. (2005). Tuning for spectro-temporal modulations as a mechanism for auditory discrimination of natural sounds. Nature Neuroscience, 8(10), 1371–1379. https://doi.org/10.1038/nn1536
- Zhao, S., Yum, N. W., Benjamin, L., Benhamou, E., Furukawa, S., Dick, F., & Chait, M. (2018). Rapid ocular responses are a robust marker for bottom-up driven auditory salience. bioRxiv, Article 498485. https://doi.org/10.1101/498485