139 Views · 127 CrossRef citations to date · 0 Altmetric
Original Articles

Crossmodal integration in the identification of consonant segments

Pages 647-677 | Received 01 Jan 1991, Published online: 29 May 2007


Read on this site (5)

Daniel C. Hyde, Ross Flom & Chris L. Porter. (2016) Behavioral and Neural Foundations of Multisensory Face-Voice Perception in Infancy. Developmental Neuropsychology 41:5-8, pages 273-292.
Nicholas Altieri & Daniel Hudock. (2016) Normative data on audiovisual speech integration using sentence recognition and capacity measures. International Journal of Audiology 55:4, pages 206-214.
Nicholas Altieri & Daniel Hudock. (2014) Assessing variability in audiovisual speech integration skills using capacity and accuracy measures. International Journal of Audiology 53:10, pages 710-718.
Lorin Lachs & David B. Pisoni. (2004) Crossmodal Source Identification in Speech Perception. Ecological Psychology 16:3, pages 159-187.
Eugen Diesch. (1995) Left and Right Hemifield Advantages of Fusions and Combinations in Audiovisual Speech Perception. The Quarterly Journal of Experimental Psychology Section A 48:2, pages 320-333.

Articles from other publishers (122)

Sandeep A. Phatak, Danielle J. Zion & Ken W. Grant. (2023) Consonant Perception in Connected Syllables Spoken at a Conversational Syllabic Rate. Trends in Hearing 27, pages 233121652311566.
Iliza M. Butera, Ryan A. Stevenson, René H. Gifford & Mark T. Wallace. (2023) Visually biased Perception in Cochlear Implant Users: A Study of the McGurk and Sound-Induced Flash Illusions. Trends in Hearing 27.
Lynne E. Bernstein, Nicole Jordan, Edward T. Auer & Silvio P. Eberhardt. (2022) Lipreading: A Review of Its Continuing Importance for Speech Recognition With an Acquired Hearing Loss and Possibilities for Effective Training. American Journal of Audiology 31:2, pages 453-469.
Joel Myerson, Nancy Tye-Murray, Brent Spehar, Sandra Hale & Mitchell Sommers. (2021) Predicting Audiovisual Word Recognition in Noisy Situations: Toward Precision Audiology. Ear & Hearing 42:6, pages 1656-1667.
Yangjun Ou, Zhenzhong Chen & Feng Wu. (2021) Multimodal Local-Global Attention Network for Affective Video Content Analysis. IEEE Transactions on Circuits and Systems for Video Technology 31:5, pages 1901-1914.
Mitchell S. Sommers. (2021) The Handbook of Speech Perception, pages 517-539.
Hao Lu, Martin F. McKinney, Tao Zhang & Andrew J. Oxenham. (2021) Investigating age, hearing loss, and background noise effects on speaker-targeted head and eye movements in three-way conversations. The Journal of the Acoustical Society of America 149:3, pages 1889-1900.
John Plass, David Brang, Satoru Suzuki & Marcia Grabowecky. (2020) Vision perceptually restores auditory spectral dynamics in speech. Proceedings of the National Academy of Sciences 117:29, pages 16920-16927.
Joshua G. W. Bernstein, Jonathan H. Venezia & Ken W. Grant. (2020) Auditory and auditory-visual frequency-band importance functions for consonant recognition. The Journal of the Acoustical Society of America 147:5, pages 3712-3727.
Gavin M. Bidelman, Lauren Sigley & Gwyneth A. Lewis. (2019) Acoustic noise and vision differentially warp the auditory categorization of speech. The Journal of the Acoustical Society of America 146:1, pages 60-70.
Steven Greenberg & Thomas U. Christiansen. (2019) The perceptual flow of phonetic information. Attention, Perception, & Psychophysics 81:4, pages 884-896.
Kaylah Lalonde & Lynne A. Werner. (2019) Perception of incongruent audiovisual English consonants. PLOS ONE 14:3, pages e0213588.
Ken W. Grant & Joshua G. W. Bernstein. (2019) Multisensory Processes, pages 33-57.
Laura Getz, Elke Nordeen, Sarah Vrabic & Joseph Toscano. (2017) Modeling the Development of Audiovisual Cue Integration in Speech Perception. Brain Sciences 7:12, pages 32.
Nicholas Altieri. (2017) Systems Factorial Technology, pages 177-198.
Nicholas Altieri, Jennifer J. Lentz, James T. Townsend & Michael J. Wenger. (2016) The McGurk effect: An investigation of attentional capacity employing response times. Attention, Perception, & Psychophysics 78:6, pages 1712-1727.
Paula C. Stacey, Pádraig T. Kitterick, Saffron D. Morris & Christian J. Sumner. (2016) The contribution of visual information to the perception of speech in noise with and without informative temporal fine structure. Hearing Research 336, pages 17-28.
Rachel Ostrand, Sheila E. Blumstein, Victor S. Ferreira & James L. Morgan. (2016) What you see isn’t always what you get: Auditory word signals trump consciously perceived words in lexical access. Cognition 151, pages 96-107.
Vidya Krull & Larry E. Humes. (2016) Text as a Supplement to Speech in Young and Older Adults. Ear & Hearing 37:2, pages 164-176.
Agnès C. Léger, Charlotte M. Reed, Joseph G. Desloge, Jayaganesh Swaminathan & Louis D. Braida. (2015) Consonant identification in noise using Hilbert-transform temporal fine-structure speech and recovered-envelope speech for listeners with normal and impaired hearing. The Journal of the Acoustical Society of America 138:1, pages 389-403.
Jonathan E. Peelle & Mitchell S. Sommers. (2015) Prediction and constraint in audiovisual speech perception. Cortex 68, pages 169-181.
Tobias S. Andersen. (2015) The early maximum likelihood estimation model of audiovisual integration in speech perception. The Journal of the Acoustical Society of America 137:5, pages 2884-2891.
Kaylah Lalonde & Rachael Frush Holt. (2015) Preschoolers Benefit From Visually Salient Speech Cues. Journal of Speech, Language, and Hearing Research 58:1, pages 135-150.
Agnès C. Léger, Joseph G. Desloge, Louis D. Braida & Jayaganesh Swaminathan. (2015) The role of recovered envelope cues in the identification of temporal-fine-structure speech for hearing-impaired listeners. The Journal of the Acoustical Society of America 137:1, pages 505-508.
Julia Strand, Allison Cooperman, Jonathon Rowe & Andrea Simenstad. (2014) Individual Differences in Susceptibility to the McGurk Effect: Links With Lipreading and Detecting Audiovisual Incongruity. Journal of Speech, Language, and Hearing Research 57:6, pages 2322-2331.
Nicholas Altieri & Daniel Hudock. (2014) Hearing impairment and audiovisual speech integration ability: a case study report. Frontiers in Psychology 5.
Kaoru Sekiyama, Takahiro Soshi & Shinichi Sakamoto. (2014) Enhanced audiovisual integration with aging in speech perception: a heightened McGurk effect in older adults. Frontiers in Psychology 5.
Jayaganesh Swaminathan, Charlotte M. Reed, Joseph G. Desloge, Louis D. Braida & Lorraine A. Delhorne. (2014) Consonant identification using temporal fine structure and recovered envelope cues. The Journal of the Acoustical Society of America 135:4, pages 2078-2090.
Matthieu Dubois, David Poeppel & Denis G. Pelli. (2013) Seeing and Hearing a Word: Combining Eye and Ear Is More Efficient than Combining the Parts of a Word. PLoS ONE 8:5, pages e64803.
Hynek Hermansky. (2013) Multistream Recognition of Speech: Dealing With Unknown Unknowns. Proceedings of the IEEE 101:5, pages 1076-1088.
Ross Flom. (2013) Integrating Face and Voice in Person Perception, pages 71-93.
Christophe Micheyl & Andrew J. Oxenham. (2012) Comparing models of the combined-stimulation advantage for speech recognition. The Journal of the Acoustical Society of America 131:5, pages 3970-3980.
Thomas U. Christiansen & Steven Greenberg. (2012) Perceptual Confusions Among Consonants, Revisited—Cross-Spectral Integration of Phonetic-Feature Information and Consonant Recognition. IEEE Transactions on Audio, Speech, and Language Processing 20:1, pages 147-161.
Virginie van Wassenhove & Charles E. Schroeder. (2012) The Human Auditory Cortex, pages 295-331.
Fabien Seldran, Christophe Micheyl, Eric Truy, Christian Berger-Vachon, Hung Thai-Van & Stéphane Gallego. (2011) A model-based analysis of the “combined-stimulation advantage”. Hearing Research 282:1-2, pages 252-264.
Deniz Başkent & Danny Bazo. (2011) Audiovisual Asynchrony Detection and Speech Intelligibility in Noise With Moderate to Severe Sensorineural Hearing Impairment. Ear & Hearing 32:5, pages 582-592.
Ying-Yee Kong & Louis D. Braida. (2011) Cross-Frequency Integration for Consonant and Vowel Identification in Bimodal Hearing. Journal of Speech, Language, and Hearing Research 54:3, pages 959-980.
Daniel C. Hyde, Blake L. Jones, Ross Flom & Chris L. Porter. (2011) Neural signatures of face-voice synchrony in 5-month-old human infants. Developmental Psychobiology 53:4, pages 359-370.
E. Courtenay Wilson, Charlotte M. Reed & Louis D. Braida. (2010) Integration of auditory and vibrotactile stimuli: Effects of frequency. The Journal of the Acoustical Society of America 127:5, pages 3044-3059.
Jean-Luc Schwartz. (2010) A reanalysis of McGurk data suggests that audiovisual fusion in speech perception is subject-dependent. The Journal of the Acoustical Society of America 127:3, pages 1584-1594.
Elad Sagi, Ted A. Meyer, Adam R. Kaiser, Su Wooi Teoh & Mario A. Svirsky. (2010) A mathematical model of vowel identification by users of cochlear implants. The Journal of the Acoustical Society of America 127:2, pages 1069-1083.
Jeesun Kim, Chris Davis & Christopher Groot. (2009) Speech identification in noise: Contribution of temporal, spectral, and visual speech cues. The Journal of the Acoustical Society of America 126:6, pages 3246-3257.
Michael Pilling. (2009) Auditory Event-Related Potentials (ERPs) in Audiovisual Speech Perception. Journal of Speech, Language, and Hearing Research 52:4, pages 1073-1081.
Joshua G. W. Bernstein & Ken W. Grant. (2009) Auditory and auditory-visual intelligibility of speech in fluctuating maskers for normal-hearing and hearing-impaired listeners. The Journal of the Acoustical Society of America 125:5, pages 3358-3372.
Wei Ji Ma, Xiang Zhou, Lars A. Ross, John J. Foxe & Lucas C. Parra. (2009) Lip-Reading Aids Word Recognition Most in Moderate Noise: A Bayesian Explanation Using High-Dimensional Feature Space. PLoS ONE 4:3, pages e4638.
E. Courtenay Wilson, Charlotte M. Reed & Louis D. Braida. (2009) Integration of auditory and vibrotactile stimuli: Effects of phase and stimulus-onset asynchrony. The Journal of the Acoustical Society of America 126:4, pages 1960.
Kota Hattori & Paul Iverson. (2009) English /r/-/l/ category assimilation by Japanese adults: Individual differences and the link to identification accuracy. The Journal of the Acoustical Society of America 125:1, pages 469-479.
K. von Kriegstein, O. Dogan, M. Gruter, A.-L. Giraud, C. A. Kell, T. Gruter, A. Kleinschmidt & S. J. Kiebel. (2008) Simulation of talking faces in the human brain improves auditory speech recognition. Proceedings of the National Academy of Sciences 105:18, pages 6747-6752.
Sheetal Desai, Ginger Stickney & Fan-Gang Zeng. (2008) Auditory-visual speech perception in normal-hearing and cochlear-implant listeners. The Journal of the Acoustical Society of America 123:1, pages 428-440.
Nancy Tye-Murray, Mitchell Sommers & Brent Spehar. (2016) Auditory and Visual Lexical Neighborhoods in Audiovisual Speech Perception. Trends in Amplification 11:4, pages 233-241.
Nancy Tye-Murray, Mitchell S. Sommers & Brent Spehar. (2007) Audiovisual Integration and Lipreading Abilities of Older Adults with Normal and Impaired Hearing. Ear & Hearing 28:5, pages 656-668.
Ken W. Grant, Jennifer B. Tufts & Steven Greenberg. (2007) Integration efficiency for speech perception within and across sensory modalities by normal-hearing and hearing-impaired individuals. The Journal of the Acoustical Society of America 121:2, pages 1164-1176.
Hanfeng Yuan, Charlotte M. Reed & Nathaniel I. Durlach. (2005) Tactual display of consonant voicing as a supplement to lipreading. The Journal of the Acoustical Society of America 118:2, pages 1003-1015.
Jürgen M Kaufmann & Stefan R Schweinberger. (2016) Speaker Variations Influence Speechreading Speed for Dynamic Faces. Perception 34:5, pages 595-610.
Hanfeng Yuan, Charlotte M. Reed & Nathaniel I. Durlach. (2004) Envelope-onset asynchrony as a cue to voicing in initial English consonants. The Journal of the Acoustical Society of America 116:5, pages 3156-3167.
Sascha Fagel & Caroline Clemens. (2004) An articulation model for audiovisual speech synthesis—Determination, adjustment, evaluation. Speech Communication 44:1-4, pages 141-154.
Diane Ronan, Ann K. Dix, Phalguni Shah & Louis D. Braida. (2004) Integration across frequency bands for consonant identification. The Journal of the Acoustical Society of America 116:3, pages 1749-1762.
Lorin Lachs & David B. Pisoni. (2004) Specification of cross-modal source information in isolated kinematic displays of speech. The Journal of the Acoustical Society of America 116:1, pages 507-518.
Emily Buss, Joseph W. Hall III & John H. Grose. (2004) Spectral integration of synchronous and asynchronous cues to consonant identification. The Journal of the Acoustical Society of America 115:5, pages 2278-2285.
Jeesun Kim & Chris Davis. (2016) Hearing Foreign Voices: Does Knowing What is Said Affect Visual-Masked-Speech Detection?. Perception 32:1, pages 111-120.
Kathleen M. Cienkowski & Arlene Earley Carney. (2002) Auditory-Visual Speech Perception and Aging. Ear and Hearing 23:5, pages 439-449.
Ken W. Grant. (2002) Measures of auditory-visual integration for speech understanding: A theoretical perspective (L). The Journal of the Acoustical Society of America 112:1, pages 30-33.
Jean-Pierre Gagné, Anne-Josée Rochette & Monique Charest. (2002) Auditory, visual and audiovisual clear speech. Speech Communication 37:3-4, pages 213-230.
Brian E. Walden, Kenneth W. Grant & Mary T. Cord. (2001) Effects of Amplification and Speechreading on Consonant Recognition by Persons with Impaired Hearing. Ear and Hearing 22:4, pages 333-341.
Lorin Lachs, David B. Pisoni & Karen Iler Kirk. (2001) Use of Audiovisual Information in Speech Perception by Prelingually Deaf Children with Cochlear Implants: A First Report. Ear and Hearing 22:3, pages 236-251.
Maroula S. Bratakos, Charlotte M. Reed, Lorraine A. Delhorne & Gail Denesvich. (2001) A Single-Band Envelope Cue as a Supplement to Speechreading of Segmentals: A Comparison of Auditory versus Tactual Presentation. Ear and Hearing 22:3, pages 225-235.
Ken W. Grant. (2001) The effect of speechreading on masked detection thresholds for filtered speech. The Journal of the Acoustical Society of America 109:5, pages 2272-2275.
Ken W. Grant & Philip-Franz Seitz. (2000) The use of visible speech cues for improving auditory detection of spoken sentences. The Journal of the Acoustical Society of America 108:3, pages 1197-1208.
Dominic W. Massaro & Michael M. Cohen. (2000) Tests of auditory–visual integration efficiency within the framework of the fuzzy logical model of perception. The Journal of the Acoustical Society of America 108:2, pages 784-789.
S. Dupont & J. Luettin. (2000) Audio-visual speech modeling for continuous speech recognition. IEEE Transactions on Multimedia 2:3, pages 141-151.
Keith Johnson, Elizabeth A Strand & Mariapaola D'Imperio. (1999) Auditory–visual integration of talker gender in vowel perception. Journal of Phonetics 27:4, pages 359-384.
Ken W Grant. (1999) Hearing by Eye II: Advances in the Psychology of Speechreading and Auditory–Visual Speech, edited by Ruth Campbell, Barbara Dodd, and Denis Burnham. Trends in Cognitive Sciences 3:8, pages 319-320.
Dominic W. Massaro & Michael M. Cohen. (1999) Speech Perception in Perceivers With Hearing Loss. Journal of Speech, Language, and Hearing Research 42:1, pages 21-41.
P. Teissier, J. Robert-Ribes, J.-L. Schwartz & A. Guerin-Dugue. (1999) Comparing models for audiovisual fusion in a noisy-vowel recognition task. IEEE Transactions on Speech and Audio Processing 7:6, pages 629-642.
Ken W. Grant & Philip F. Seitz. (1998) Measures of auditory–visual integration in nonsense syllables and sentences. The Journal of the Acoustical Society of America 104:4, pages 2438-2450.
Paul Iverson, Lynne E. Bernstein & Edward T. Auer Jr. (1998) Modeling the interaction of phonemic intelligibility and lexical structure in audiovisual word recognition. Speech Communication 26:1-2, pages 45-63.
Rosalie M. Uchanski & Louis D. Braida. (1998) Effects of token variability on our ability to distinguish between vowels. Perception & Psychophysics 60:4, pages 533-543.
Ken W. Grant, Brian E. Walden & Philip F. Seitz. (1998) Auditory-visual speech recognition by hearing-impaired subjects: Consonant recognition, sentence recognition, and auditory-visual integration. The Journal of the Acoustical Society of America 103:5, pages 2677-2690.
Juergen Luettin & Stéphane Dupont. (1998) Computer Vision — ECCV’98, pages 657-673.
Karen S. Helfer. (1997) Auditory and Auditory-Visual Perception of Clear and Conversational Speech. Journal of Speech, Language, and Hearing Research 40:2, pages 432-443.
Alex Waibel, Minh Tue Vo, Paul Duchnowski & Stefan Manke. (1996) Multimodal interfaces. Artificial Intelligence Review 10:3-4, pages 299-319.
Ken W. Grant & Brian E. Walden. (1996) Spectral Distribution of Prosodic Information. Journal of Speech, Language, and Hearing Research 39:2, pages 228-238.
Dominic W. Massaro & Michael M. Cohen. (1996) Perceiving speech from inverted faces. Perception & Psychophysics 58:7, pages 1047-1065.
K. G. Munhall, P. Gribble, L. Sacco & M. Ward. (1996) Temporal constraints on the McGurk effect. Perception & Psychophysics 58:3, pages 351-362.
Alex Waibel, Minh Tue Vo, Paul Duchnowski & Stefan Manke. (1996) Integration of Natural Language and Vision Processing, pages 299-319.
J.B. Allen. (1994) How do humans process and recognize speech?. IEEE Transactions on Speech and Audio Processing 2:4, pages 567-577.
Dominic W. Massaro & Michael M. Cohen. (1993) Perceiving asynchronous bimodal speech in consonant-vowel and vowel syllables. Speech Communication 13:1-2, pages 127-134.
Dominic W. Massaro, Michael M. Cohen & Antoinette T. Gesi. (1993) Long-term training, transfer, and retention in learning to lipread. Perception & Psychophysics 53:5, pages 549-562.
Brian E. Walden, Debra A. Busacco & Allen A. Montgomery. (1993) Benefit From Visual Cues in Auditory-Visual Speech Recognition by Middle-Aged and Elderly Persons. Journal of Speech, Language, and Hearing Research 36:2, pages 431-436.
Gregory R. Lockhead. (2011) Constancy in a changing world. Behavioral and Brain Sciences 15:3, pages 587-600.
Richard M. Warren. (2011) Relation of sensory scales to physical scales. Behavioral and Brain Sciences 15:3, pages 586-587.
Mark Wagner. (2011) Keeping the bath water along with the baby: Context effects represent a challenge, not a mortal wound, to the body of psychophysics. Behavioral and Brain Sciences 15:3, pages 585-586.
J. van Brakel. (2011) Ceteris paribus laws. Behavioral and Brain Sciences 15:3, pages 584-585.
Michel Treisman. (2011) Do we scale “objects” or isolated sensory dimensions?. Behavioral and Brain Sciences 15:3, pages 581-584.
Robert Teghtsoonian. (2011) Selecting one attribute for judgment is not an act of stupidity. Behavioral and Brain Sciences 15:3, pages 580-581.
Bruce Schneider. (2011) Should the psychophysical model be rejected?. Behavioral and Brain Sciences 15:3, pages 579-580.
Kenneth H. Norwich. (2011) Context effects in the entropic theory of perception. Behavioral and Brain Sciences 15:3, pages 578-579.
Keith K. Niall. (2011) The evident object of inquiry. Behavioral and Brain Sciences 15:3, pages 578-578.
John S. Monahan. (2011) Attributes or objects: A paradigm shift in psychophysics. Behavioral and Brain Sciences 15:3, pages 577-577.
Robert D. Melara. (2011) How important are dimensions to perception?. Behavioral and Brain Sciences 15:3, pages 576-577.
Sergio C. Masin. (2011) Psychophysics and quantitative perceptual laws. Behavioral and Brain Sciences 15:3, pages 575-576.
Lawrence E. Marks. (2011) The perplexing plurality of psychophysical processes. Behavioral and Brain Sciences 15:3, pages 574-575.
Neil A. Macmillan. (2011) Covert converging operations for multidimensional psychophysics. Behavioral and Brain Sciences 15:3, pages 573-574.
Donald Laming. (2011) Two categories of contextual variable in perception. Behavioral and Brain Sciences 15:3, pages 572-573.
Lester E. Krueger. (2011) Will the real stimulus please step forward?. Behavioral and Brain Sciences 15:3, pages 570-572.
Donald L. King. (2011) Context effects: Pervasiveness and analysis. Behavioral and Brain Sciences 15:3, pages 570-570.
Peter R. Killeen. (2011) Psychophysics: Plus ça change …. Behavioral and Brain Sciences 15:3, pages 569-569.
Robert A. M. Gregson. (2011) Walking in a psychophysical dustbowl creates a dustcloud. Behavioral and Brain Sciences 15:3, pages 568-569.
Richard L. Gregory. (2011) Scales falling from the eyes?. Behavioral and Brain Sciences 15:3, pages 567-568.
George A. Gescheider. (2011) The complexity and importance of the psychophysical scaling of sensory attributes. Behavioral and Brain Sciences 15:3, pages 567-567.
Hannes Eisler. (2011) Psychophysical invariance, perceptual invariance and the physicalistic trap. Behavioral and Brain Sciences 15:3, pages 566-567.
Ehtibar N. Dzhafarov. (2011) Can brightness be related to luminance by a meaningful function?. Behavioral and Brain Sciences 15:3, pages 565-566.
Thomas R. Corwin. (2011) The determinants of perceived brightness are complicated, but not hopelessly so. Behavioral and Brain Sciences 15:3, pages 564-565.
Stanley Coren. (2011) Psychophysical scaling: Context and illusion. Behavioral and Brain Sciences 15:3, pages 563-564.
Marc Brysbaert. (2011) Accounting for an old inconsistency in the psychophysics of Plateau and Delboeuf. Behavioral and Brain Sciences 15:3, pages 562-563.
Gunnar Borg. (2011) Psychophysical scaling: To describe relations or to uncover a law?. Behavioral and Brain Sciences 15:3, pages 561-562.
Claude Bonnet. (2011) Psychophysical scaling within an information processing approach?. Behavioral and Brain Sciences 15:3, pages 560-561.
Stanley J. Bolanowski. (2011) Lockhead's view of scaling: Something's fishy here. Behavioral and Brain Sciences 15:3, pages 560-560.
Norman H. Anderson. (2011) Integration psychophysics is not traditional psychophysics. Behavioral and Brain Sciences 15:3, pages 559-560.
Daniel Algom. (2011) Perception, apperception and psychophysics. Behavioral and Brain Sciences 15:3, pages 558-559.
Gregory R. Lockhead. (2011) Psychophysical scaling: Judgments of attributes or objects?. Behavioral and Brain Sciences 15:3, pages 543-558.
Charlotte M. Reed, William M. Rabinowitz, Nathaniel I. Durlach, Lorraine A. Delhorne, Louis D. Braida, Joseph C. Pemberton, Brian D. Mulcahey & Deborah L. Washington. (1992) Analytic Study of the Tadoma Method. Journal of Speech, Language, and Hearing Research 35:2, pages 450-465.
