References
- Alais, D., & Burr, D. (2004). The ventriloquist effect results from near-optimal bimodal integration. Current Biology, 14(3), 257–262. doi: 10.1016/j.cub.2004.01.029
- Barsics, C., & Brédart, S. (2012). Recalling semantic information about newly learned faces and voices. Memory, 20(5), 527–534. doi: 10.1080/09658211.2012.683012
- Barton, J. J., & Corrow, S. L. (2016). Recognizing and identifying people: A neuropsychological review. Cortex, 75, 132–150. doi: 10.1016/j.cortex.2015.11.023
- Belin, P., Fecteau, S., & Bedard, C. (2004). Thinking the voice: Neural correlates of voice perception. Trends in Cognitive Sciences, 8(3), 129–135. doi: 10.1016/j.tics.2004.01.008
- Bernstein, L. E., Auer, E. T., & Takayanagi, S. (2004). Auditory speech detection in noise enhanced by lipreading. Speech Communication, 44(1), 5–18. doi: 10.1016/j.specom.2004.10.011
- Blank, H., Anwander, A., & von Kriegstein, K. (2011). Direct structural connections between voice- and face-recognition areas. Journal of Neuroscience, 31(36), 12906–12915. doi: 10.1523/JNEUROSCI.2091-11.2011
- Blank, H., Kiebel, S. J., & von Kriegstein, K. (2015). How the human brain exchanges information across sensory modalities to recognize other people. Human Brain Mapping, 36(1), 324–339. doi: 10.1002/hbm.22631
- Blank, H., Wieland, N., & von Kriegstein, K. (2014). Person recognition and the brain: Merging evidence from patients and healthy individuals. Neuroscience & Biobehavioral Reviews, 47, 717–734. doi: 10.1016/j.neubiorev.2014.10.022
- Bruce, V., & Young, A. (1986). Understanding face recognition. British Journal of Psychology, 77, 305–327. doi: 10.1111/j.2044-8295.1986.tb02199.x
- Bruyer, R. (1990). La reconnaissance des visages [Face recognition]. Paris: Delachaux & Niestlé.
- Bülthoff, I., & Newell, F. N. (2017). Crossmodal priming of unfamiliar faces supports early interactions between voices and faces in person perception. Visual Cognition, 17, 1–18. doi: 10.1080/13506285.2017.1290729
- Burton, A., Bruce, V., & Hancock, P. (1999). From pixels to people: A model of familiar face recognition. Cognitive Science, 23, 1–31. doi: 10.1207/s15516709cog2301_1
- Burton, A. M., & Bruce, V. (1993). Naming faces and naming names: Exploring an interactive activation model of person recognition. Memory, 1(4), 457–480. doi: 10.1080/09658219308258248
- Burton, A. M., Bruce, V., & Johnston, R. A. (1990). Understanding face recognition with an interactive activation model. British Journal of Psychology, 81(3), 361–380. doi: 10.1111/j.2044-8295.1990.tb02367.x
- Campanella, S., & Belin, P. (2007). Integrating face and voice in person perception. Trends in Cognitive Sciences, 11(12), 535–543. doi: 10.1016/j.tics.2007.10.001
- Crosse, M. J., Di Liberto, G. M., & Lalor, E. C. (2016). Eye can hear clearly now: Inverse effectiveness in natural audiovisual speech processing relies on long-term crossmodal temporal integration. Journal of Neuroscience, 36(38), 9888–9895. doi: 10.1523/JNEUROSCI.1396-16.2016
- Diederich, A., & Colonius, H. (2004). Bimodal and trimodal multisensory enhancement: Effects of stimulus onset and intensity on reaction time. Perception & Psychophysics, 66(8), 1388–1404. doi: 10.3758/BF03195006
- Ellis, A. W., Young, A. W., Flude, B. M., & Hay, D. C. (1987). Repetition priming of face recognition. The Quarterly Journal of Experimental Psychology Section A, 39(2), 193–210. doi: 10.1080/14640748708401784
- Ellis, H. D., Jones, D. M., & Mosdell, N. (1997). Intra- and inter-modal repetition priming of familiar faces and voices. British Journal of Psychology, 88(Pt 1), 143–156. doi: 10.1111/j.2044-8295.1997.tb02625.x
- Ernst, M. O., & Bülthoff, H. H. (2004). Merging the senses into a robust percept. Trends in Cognitive Sciences, 8(4), 162–169. doi: 10.1016/j.tics.2004.02.002
- Gainotti, G. (2014). Cognitive models of familiar people recognition and hemispheric asymmetries. Frontiers in Bioscience, E6, 148–158. doi: 10.2741/E698
- Grant, K. W., & Seitz, P.-F. (2000). The use of visible speech cues for improving auditory detection of spoken sentences. The Journal of the Acoustical Society of America, 108(3), 1197–1208. doi: 10.1121/1.1288668
- Hecht, D., & Reiner, M. (2009). Sensory dominance in combinations of audio, visual and haptic stimuli. Experimental Brain Research, 193(2), 307–314. doi: 10.1007/s00221-008-1626-z
- Hecht, D., Reiner, M., & Karni, A. (2008). Enhancement of response times to bi- and tri-modal sensory stimuli during active movements. Experimental Brain Research, 185(4), 655–665. doi: 10.1007/s00221-007-1191-x
- Hoffman, P., Jones, R. W., & Ralph, M. A. (2012). The degraded concept representation system in semantic dementia: Damage to pan-modal hub, then visual spoke. Brain, 135(Pt 12), 3770–3780. doi: 10.1093/brain/aws282
- Jefferies, E. (2013). The neural basis of semantic cognition: Converging evidence from neuropsychology, neuroimaging and TMS. Cortex, 49(3), 611–625. doi: 10.1016/j.cortex.2012.10.008
- Kamachi, M., Hill, H., Lander, K., & Vatikiotis-Bateson, E. (2003). 'Putting the face to the voice': Matching identity across modality. Current Biology, 13(19), 1709–1714. doi: 10.1016/j.cub.2003.09.005
- Lachs, L., & Pisoni, D. B. (2004). Crossmodal source identification in speech perception. Ecological Psychology, 16(3), 159–187. doi: 10.1207/s15326969eco1603_1
- Latinus, M., & Belin, P. (2011). Human voice perception. Current Biology, 21(4), R143–R145. doi: 10.1016/j.cub.2010.12.033
- O’Mahony, C., & Newell, F. N. (2012). Integration of faces and voices, but not faces and names, in person recognition. British Journal of Psychology, 103(1), 73–82. doi: 10.1111/j.2044-8295.2011.02044.x
- Ross, L. A., Saint-Amour, D., Leavitt, V. M., Javitt, D. C., & Foxe, J. J. (2007). Do you see what I am saying? Exploring visual enhancement of speech comprehension in noisy environments. Cerebral Cortex, 17(5), 1147–1153. doi: 10.1093/cercor/bhl024
- Schweinberger, S. R., Herholz, A., & Sommer, W. (1997). Recognizing famous voices: Influence of stimulus duration and different types of retrieval cues. Journal of Speech, Language, and Hearing Research, 40(2), 453–463. doi: 10.1044/jslhr.4002.453
- Schweinberger, S. R., Herholz, A., & Stief, V. (1997). Auditory long term memory: Repetition priming of voice recognition. The Quarterly Journal of Experimental Psychology Section A, 50(3), 498–517. doi: 10.1080/027249897391991
- Schweinberger, S. R., Kloth, N., & Robertson, D. M. (2011). Hearing facial identities: Brain correlates of face–voice integration in person identification. Cortex, 47(9), 1026–1037. doi: 10.1016/j.cortex.2010.11.011
- Schweinberger, S. R., Robertson, D., & Kaufmann, J. M. (2007). Hearing facial identities. The Quarterly Journal of Experimental Psychology, 60(10), 1446–1456. doi: 10.1080/17470210601063589
- Sheffert, S. M., & Olson, E. (2004). Audiovisual speech facilitates voice learning. Perception & Psychophysics, 66(2), 352–362. doi: 10.3758/BF03194884
- Snowden, J. S., Thompson, J. C., & Neary, D. (2012). Famous people knowledge and the right and left temporal lobes. Behavioural Neurology, 25(1), 35–44. doi: 10.3233/BEN-2012-0347
- Sumby, W. H., & Pollack, I. (1954). Visual contribution to speech intelligibility in noise. The Journal of the Acoustical Society of America, 26(2), 212–215. doi: 10.1121/1.1907309
- von Kriegstein, K. (2012). A multisensory perspective on human auditory communication. In M. M. Murray & M. T. Wallace (Eds.), The neural bases of multisensory processes. Boca Raton, FL: CRC Press/Taylor & Francis.
- von Kriegstein, K., Dogan, Ö., Grüter, M., Giraud, A.-L., Kell, C. A., Grüter, T., … Kiebel, S. J. (2008). Simulation of talking faces in the human brain improves auditory speech recognition. Proceedings of the National Academy of Sciences, 105(18), 6747–6752. doi: 10.1073/pnas.0710826105
- von Kriegstein, K., & Giraud, A.-L. (2006). Implicit multisensory associations influence voice recognition. PLoS Biology, 4(10), 1809–1820. doi: 10.1371/journal.pbio.0040326
- von Kriegstein, K., Kleinschmidt, A., Sterzer, P., & Giraud, A.-L. (2005). Interaction of face and voice areas during speaker recognition. Journal of Cognitive Neuroscience, 17(3), 367–376. doi: 10.1162/0898929053279577
- Young, A. W., Flude, B. M., Hellawell, D. J., & Ellis, A. W. (1994). The nature of semantic priming effects in the recognition of familiar people. British Journal of Psychology, 85(Pt 3), 393–411. doi: 10.1111/j.2044-8295.1994.tb02531.x
- Young, A. W., Hellawell, D., & De Haan, E. H. (1988). Cross-domain semantic priming in normal subjects and a prosopagnosic patient. The Quarterly Journal of Experimental Psychology Section A, 40(3), 561–580. doi: 10.1080/02724988843000087