References
- Anderson, S. F., Kelley, K., & Maxwell, S. E. (2017). Sample-size planning for more accurate statistical power: A method adjusting sample effect sizes for publication bias and uncertainty. Psychological Science, 28(11), 1547–1562. https://doi.org/10.1177/0956797617723724
- Barrett, L. F., Mesquita, B., & Gendron, M. (2011). Context in emotion perception. Current Directions in Psychological Science, 20(5), 286–290. https://doi.org/10.1177/0963721411422522
- Bermant, R. I., & Welch, R. B. (1976). Effect of degree of separation of visual-auditory stimulus and eye position upon spatial interaction of vision and audition. Perceptual and Motor Skills, 43(2), 487–493. https://doi.org/10.2466/pms.1976.43.2.487
- Boersma, P., & Weenink, D. (2009). Praat: Doing phonetics by computer (Version 5.1.05) [Computer program]. Retrieved May 1, 2009, from http://www.praat.org/
- Bruyer, R., & Brysbaert, M. (2011). Combining speed and accuracy in cognitive psychology: Is the inverse efficiency score (IES) a better dependent variable than the mean reaction time (RT) and the percentage of errors (PE)? Psychologica Belgica, 51(1), 5–13. https://doi.org/10.5334/pb-51-1-5
- Calvert, G. A., & Thesen, T. (2004). Multisensory integration: Methodological approaches and emerging principles in the human brain. Journal of Physiology-Paris, 98(1-3), 191–205. https://doi.org/10.1016/j.jphysparis.2004.03.018
- Chen, Y.-C., & Spence, C. (2017). Assessing the role of the ‘unity assumption’ on multisensory integration: A review. Frontiers in Psychology, 8, 445. https://doi.org/10.3389/fpsyg.2017.00445
- Choi, I., Lee, J.-Y., & Lee, S.-H. (2018). Bottom-up and top-down modulation of multisensory integration. Current Opinion in Neurobiology, 52, 115–122. https://doi.org/10.1016/j.conb.2018.05.002
- Colavita, F. B. (1974). Human sensory dominance. Attention, Perception, & Psychophysics, 16(2), 409–412. https://doi.org/10.3758/BF03203962
- Collignon, O., Girard, S., Gosselin, F., Roy, S., Saint-Amour, D., Lassonde, M., & Lepore, F. (2008). Audio-visual integration of emotion expression. Brain Research, 1242, 126–135. https://doi.org/10.1016/j.brainres.2008.04.023
- de Gelder, B., & Bertelson, P. (2003). Multisensory integration, perception and ecological validity. Trends in Cognitive Sciences, 7(10), 460–467. https://doi.org/10.1016/j.tics.2003.08.014
- de Gelder, B., Böcker, K. B., Tuomainen, J., Hensen, M., & Vroomen, J. (1999). The combined perception of emotion from voice and face: Early interaction revealed by human electric brain responses. Neuroscience Letters, 260(2), 133–136. https://doi.org/10.1016/S0304-3940(98)00963-X
- de Gelder, B., & Vroomen, J. (2000). The perception of emotions by ear and by eye. Cognition & Emotion, 14(3), 289–311. https://doi.org/10.1080/026999300378824
- Doehrmann, O., & Naumer, M. J. (2008). Semantics and the multisensory brain: How meaning modulates processes of audio-visual integration. Brain Research, 1242, 136–150. https://doi.org/10.1016/j.brainres.2008.03.071
- Dolan, R. J., Morris, J. S., & de Gelder, B. (2001). Crossmodal binding of fear in voice and face. Proceedings of the National Academy of Sciences, 98(17), 10006–10010. https://doi.org/10.1073/pnas.171288598
- Ethofer, T., Pourtois, G., & Wildgruber, D. (2006). Investigating audiovisual integration of emotional signals in the human brain. Progress in Brain Research, 156, 345–361. https://doi.org/10.1016/s0079-6123(06)56019-4
- Föcker, J., Gondan, M., & Röder, B. (2011). Preattentive processing of audio-visual emotional signals. Acta Psychologica, 137(1), 36–47. https://doi.org/10.1016/j.actpsy.2011.02.004
- Föcker, J., & Röder, B. (2019). Event-related potentials reveal evidence for late integration of emotional prosody and facial expression in dynamic stimuli: An ERP study. Multisensory Research, 32(6), 473–497. https://doi.org/10.1163/22134808-20191332
- Gao, C., Wedell, D. H., Green, J. J., Jia, X., Mao, X., Guo, C., & Shinkareva, S. V. (2018). Temporal dynamics of audiovisual affective processing. Biological Psychology, 139, 59–72. https://doi.org/10.1016/j.biopsycho.2018.10.001
- Gao, C., Wedell, D. H., Kim, J., Weber, C. E., & Shinkareva, S. V. (2018). Modelling audiovisual integration of affect from videos and music. Cognition and Emotion, 32(3), 516–529. https://doi.org/10.1080/02699931.2017.1320979
- Hietanen, J., Leppänen, J., Illi, M., & Surakka, V. (2004). Evidence for the integration of audiovisual emotional information at the perceptual level of processing. European Journal of Cognitive Psychology, 16(6), 769–790. https://doi.org/10.1080/09541440340000330
- Holmes, N. P. (2007). The law of inverse effectiveness in neurons and behaviour: Multisensory integration versus normal variability. Neuropsychologia, 45(14), 3340–3345. https://doi.org/10.1016/j.neuropsychologia.2007.05.025
- Howard, I. P., & Templeton, W. B. (1966). Human spatial orientation. Wiley.
- Jack, C. E., & Thurlow, W. R. (1973). Effects of degree of visual association and angle of displacement on the “ventriloquism” effect. Perceptual and Motor Skills, 37(3), 967–979. https://doi.org/10.1177/003151257303700360
- Jia, X., Gao, C., Wang, Y., Han, M., Cui, L., & Guo, C. (2019). Emotional arousal influences remembrance of goal-relevant stimuli. Emotion, 20(8), 1357–1368. https://doi.org/10.1037/emo0000657
- Klasen, M., Chen, Y.-H., & Mathiak, K. (2012). Multisensory emotions: Perception, combination and underlying neural processes. Reviews in the Neurosciences, 23(4), 381. https://doi.org/10.1515/revneuro-2012-0040
- Lakens, D. (2013). Calculating and reporting effect sizes to facilitate cumulative science: A practical primer for t-tests and ANOVAs. Frontiers in Psychology, 4, 863. https://doi.org/10.3389/fpsyg.2013.00863
- Laurienti, P. J., Kraft, R. A., Maldjian, J. A., Burdette, J. H., & Wallace, M. T. (2004). Semantic congruence is a critical factor in multisensory behavioral performance. Experimental Brain Research, 158(4), 405–414. https://doi.org/10.1007/s00221-004-1913-2
- Livingstone, S. R., & Russo, F. A. (2018). The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS): A dynamic, multimodal set of facial and vocal expressions in North American English. PLoS ONE, 13(5), e0196391. https://doi.org/10.1371/journal.pone.0196391
- Margiotoudi, K., Kelly, S., & Vatakis, A. (2014). Audiovisual temporal integration of speech and gesture. Procedia - Social and Behavioral Sciences, 126, 154–155. https://doi.org/10.1016/j.sbspro.2014.02.351
- Mather, M., & Sutherland, M. R. (2011). Arousal-biased competition in perception and memory. Perspectives on Psychological Science, 6(2), 114–133. https://doi.org/10.1177/1745691611400234
- Meredith, M. A., & Stein, B. E. (1983). Interactions among converging sensory inputs in the superior colliculus. Science, 221(4608), 389–391. https://doi.org/10.1126/science.6867718
- Molholm, S., Ritter, W., Javitt, D. C., & Foxe, J. J. (2004). Multisensory visual–auditory object recognition in humans: A high-density electrical mapping study. Cerebral Cortex, 14(4), 452–465. https://doi.org/10.1093/cercor/bhh007
- Parise, C., & Spence, C. (2013). Audiovisual cross-modal correspondences in the general population. In J. Simner & E. Hubbard (Eds.), The Oxford handbook of synaesthesia (pp. 790–815). Oxford University Press.
- Paulmann, S., & Pell, M. D. (2011). Is there an advantage for recognizing multi-modal emotional stimuli? Motivation & Emotion, 35(2), 192–201. https://doi.org/10.1007/s11031-011-9206-0
- Petrini, K., McAleer, P., & Pollick, F. (2010). Audiovisual integration of emotional signals from music improvisation does not depend on temporal correspondence. Brain Research, 1323, 139–148. https://doi.org/10.1016/j.brainres.2010.02.012
- Posner, M. I., Nissen, M. J., & Klein, R. M. (1976). Visual dominance: An information-processing account of its origins and significance. Psychological Review, 83(2), 157. https://doi.org/10.1037/0033-295X.83.2.157
- Shams, L., & Seitz, A. R. (2008). Benefits of multisensory learning. Trends in Cognitive Sciences, 12(11), 411–417. https://doi.org/10.1016/j.tics.2008.07.006
- Spence, C. (2009). Explaining the Colavita visual dominance effect. Progress in Brain Research, 176, 245–258. https://doi.org/10.1016/S0079-6123(09)17615-X
- Spence, C. (2011). Crossmodal correspondences: A tutorial review. Attention, Perception, & Psychophysics, 73(4), 971–995. https://doi.org/10.3758/s13414-010-0073-7
- Spence, C., & Deroy, O. (2013). How automatic are crossmodal correspondences? Consciousness and Cognition, 22(1), 245–260. https://doi.org/10.1016/j.concog.2012.12.006
- Stein, B. E., & Stanford, T. R. (2008). Multisensory integration: Current issues from the perspective of the single neuron. Nature Reviews Neuroscience, 9(4), 255–266. https://doi.org/10.1038/nrn2331
- Takagi, S., Hiramatsu, S., Tabei, K., & Tanaka, A. (2015). Multisensory perception of the six basic emotions is modulated by attentional instruction and unattended modality. Frontiers in Integrative Neuroscience, 9, 1. https://doi.org/10.3389/fnint.2015.00001
- Talsma, D., Senkowski, D., Soto-Faraco, S., & Woldorff, M. G. (2010). The multifaceted interplay between attention and multisensory integration. Trends in Cognitive Sciences, 14(9), 400–410. https://doi.org/10.1016/j.tics.2010.06.008
- Tanaka, A., Koizumi, A., Imai, H., Hiramatsu, S., Hiramoto, E., & de Gelder, B. (2010). I feel your voice: Cultural differences in the multisensory perception of emotion. Psychological Science, 21(9), 1259–1262. https://doi.org/10.1177/0956797610380698
- Townsend, J. T., & Ashby, F. G. (1983). Stochastic modeling of elementary psychological processes. Cambridge University Press.
- Van den Stock, J., Grèzes, J., & de Gelder, B. (2008). Human and animal sounds influence recognition of body language. Brain Research, 1242, 185–190. https://doi.org/10.1016/j.brainres.2008.05.040
- Vatakis, A., Ghazanfar, A. A., & Spence, C. (2008). Facilitation of multisensory integration by the “unity effect” reveals that speech is special. Journal of Vision, 8(9), 14. https://doi.org/10.1167/8.9.14
- Vatakis, A., & Spence, C. (2007). Crossmodal binding: Evaluating the “unity assumption” using audiovisual speech stimuli. Perception & Psychophysics, 69(5), 744–756. https://doi.org/10.3758/BF03193776
- Vroomen, J., Driver, J., & de Gelder, B. (2001). Is cross-modal integration of emotional expressions independent of attentional resources? Cognitive, Affective, & Behavioral Neuroscience, 1(4), 382–387. https://doi.org/10.3758/CABN.1.4.382
- Welch, R. B., & Warren, D. H. (1980). Immediate perceptual response to intersensory discrepancy. Psychological Bulletin, 88(3), 638–667. https://doi.org/10.1037/0033-2909.88.3.638
- Whelan, R. (2008). Effective analysis of reaction time data. The Psychological Record, 58(3), 475–482. https://doi.org/10.1007/BF03395630