References
- Ahrlich, M. 2013. "Optimierung und Evaluation des Oldenburger Satztests mit weiblicher Sprecherin und Untersuchung des Effekts des Sprechers auf die Sprachverständlichkeit [Optimization and Evaluation of the Female OLSA and Investigation of the Speaker's Effects on Speech Intelligibility]." Bachelor's thesis.
- Akeroyd, M. A., S. Arlinger, R. A. Bentler, A. Boothroyd, N. Dillier, W. A. Dreschler, J. P. Gagne, et al. 2015. “International Collegium of Rehabilitative Audiology (ICRA) Recommendations for the Construction of Multilingual Speech Tests: ICRA Working Group on Multilingual Speech Tests.” International Journal of Audiology 54 (Suppl 2): 17–22. doi:https://doi.org/10.3109/14992027.2015.1030513.
- Auer, Edward T., and Lynne E. Bernstein. 2007. “Enhanced Visual Speech Perception in Individuals with Early-Onset Hearing Impairment.” Journal of Speech, Language, and Hearing Research 50 (5): 1157–1165. doi:https://doi.org/10.1044/1092-4388(2007/080).
- Baskent, D., and D. Bazo. 2011. “Audiovisual Asynchrony Detection and Speech Intelligibility in Noise with Moderate to Severe Sensorineural Hearing Impairment.” Ear and Hearing 32 (5): 582–592. doi:https://doi.org/10.1097/AUD.0b013e31820fca23.
- Bench, J., N. Daly, J. Doyle, and C. Lind. 1995. “Choosing Talkers for the BKB/a Speechreading Test: A Procedure with Observations on Talker Age and Gender.” British Journal of Audiology 29 (3): 172–187. doi:https://doi.org/10.3109/03005369509086594.
- Bernstein, L. E., E. T. Auer, Jr, and S. Takayanagi. 2004. “Auditory Speech Detection in Noise Enhanced by Lipreading.” Speech Communication 44 (1–4): 5–18. doi:https://doi.org/10.1016/j.specom.2004.10.011.
- Brand, T., and B. Kollmeier. 2002. “Efficient Adaptive Procedures for Threshold and Concurrent Slope Estimates for Psychophysics and Speech Intelligibility Tests.” The Journal of the Acoustical Society of America 111 (6): 2801–2810. doi:https://doi.org/10.1121/1.1479152.
- Brand, T., S. Kissner, T. Jürgens, D. Berg, and B. Kollmeier. 2011. "Adaptive Algorithmen zur Bestimmung der 80%-Sprachverständlichkeitsschwelle [Adaptive Algorithms for Determining the 80% Speech Intelligibility Threshold]." Jahrestagung der Deutschen Gesellschaft für Audiologie, Jena 14: 4.
- Bronkhorst, A. W., T. Brand, and K. Wagener. 2002. "Evaluation of Context Effects in Sentence Recognition." The Journal of the Acoustical Society of America 111 (6): 2874–2886. doi:https://doi.org/10.1121/1.1458025.
- Corthals, P., B. Vinck, E. D. Vel, and P. V. Cauwenberge. 1997. “Audiovisual Speech Reception in Noise and Self-Perceived Hearing Disability in Sensorineural Hearing Loss.” Audiology 36 (1): 46–56. doi:https://doi.org/10.3109/00206099709071960.
- Devesse, A., A. Dudek, A. van Wieringen, and J. Wouters. 2018. “Speech Intelligibility of Virtual Humans.” International Journal of Audiology 57 (12): 914–922. doi:https://doi.org/10.1080/14992027.2018.1511922.
- Duchnowski, P., D. S. Lum, J. C. Krause, M. G. Sexton, M. S. Bratakos, and L. D. Braida. 2000. “Development of Speechreading Supplements Based on Automatic Speech Recognition.” IEEE Transactions on Biomedical Engineering 47 (4): 487–496. doi:https://doi.org/10.1109/10.828148.
- Fernandez-Lopez, A., and F. M. Sukno. 2017. "Automatic Viseme Vocabulary Construction to Enhance Continuous Lip-Reading." In Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2017), Vol. 5, pp. 52–63.
- Gieseler, A., S. Rosemann, M. Tahden, and K. Wagener. 2020. “Linking Audiovisual Integration to Audiovisual Speech Recognition in Noise.” Preprint. 1: 1–48. doi:https://doi.org/10.31219/osf.io/46caf.
- Grant, K. W. 2002. “Measures of Auditory-Visual Integration for Speech Understanding: A Theoretical Perspective (L).” The Journal of the Acoustical Society of America 112 (1): 30–33. doi:https://doi.org/10.1121/1.1482076.
- Grant, K. W., V. van Wassenhove, and D. Poeppel. 2003. "Discrimination of Auditory-Visual Synchrony." In AVSP 2003 – International Conference on Audio-Visual Speech Processing.
- Grimm, G., G. Llorach, M. M. Hendrikse, and V. Hohmann. 2019. "Audio-Visual Stimuli for the Evaluation of Speech-Enhancing Algorithms." In Proceedings of the 23rd International Congress on Acoustics, Aachen.
- Hochmuth, S., T. Brand, M. A. Zokoll, F. Z. Castro, N. Wardenga, and B. Kollmeier. 2012. “A Spanish Matrix Sentence Test for Assessing Speech Reception Thresholds in Noise.” International Journal of Audiology 51 (7): 536–544. doi:https://doi.org/10.3109/14992027.2012.670731.
- Hochmuth, S., T. Jürgens, T. Brand, and B. Kollmeier. 2015. "Talker- and Language-Specific Effects on Speech Intelligibility in Noise Assessed with Bilingual Talkers: Which Language is More Robust against Noise and Reverberation?" International Journal of Audiology 54 (Suppl. 2): 23–34. doi:https://doi.org/10.3109/14992027.2015.1088174.
- Jamaluddin, S. A. 2016. “Development and Evaluation of the Digit Triplet and Auditory-Visual Matrix Sentence Tests in Malay.” Doctoral thesis.
- Jamaludin, A., J. S. Chung, and A. Zisserman. 2019. “You Said That?: Synthesising Talking Faces from Audio.” International Journal of Computer Vision 127 (11–12): 1713–1767. doi:https://doi.org/10.1007/s11263-019-01150-y.
- Kollmeier, B., A. Warzybok, S. Hochmuth, M. A. Zokoll, V. Uslar, T. Brand, and K. C. Wagener. 2015. “The Multilingual Matrix Test: Principles, Applications, and Comparison across Languages: A Review.” International Journal of Audiology 54 (Suppl. 2): 3–16. doi:https://doi.org/10.3109/14992027.2015.1020971.
- Lander, K., and R. Davies. 2008. “Does Face Familiarity Influence Speechreadability?” Quarterly Journal of Experimental Psychology 61 (7): 961–967. doi:https://doi.org/10.1080/17470210801908476.
- Lidestam, B., S. Moradi, R. Pettersson, and T. Ricklefs. 2014. “Audiovisual Training is Better than Auditory-Only Training for Auditory-Only Speech-in-Noise Identification.” The Journal of the Acoustical Society of America 136 (2): EL142–EL147. doi:https://doi.org/10.1121/1.4890200.
- Llorach, G., and V. Hohmann. 2019. "Word Error and Confusion Patterns in an Audiovisual German Matrix Sentence Test (OLSA)." In Proceedings of the 23rd International Congress on Acoustics, Aachen.
- Llorach, G., F. Kirschner, G. Grimm, and V. Hohmann. 2020. "Video Recordings for the Female German Matrix Sentence Test (OLSA)." Zenodo. doi:https://doi.org/10.5281/zenodo.3673062.
- Llorach, G., G. Grimm, M. M. Hendrikse, and V. Hohmann. 2018, October. "Towards Realistic Immersive Audiovisual Simulations for Hearing Research: Capture, Virtual Scenes and Reproduction." In Proceedings of the 2018 Workshop on Audio-Visual Scene Understanding for Immersive Multimedia. London: ACM, pp. 33–40.
- MacLeod, A., and Q. Summerfield. 1987. “Quantifying the Contribution of Vision to Speech Perception in Noise.” British Journal of Audiology 21 (2): 131–141. doi:https://doi.org/10.3109/03005368709077786.
- Nuesse, T., B. Wiercinski, T. Brand, and I. Holube. 2019. “Measuring Speech Recognition with a Matrix Test Using Synthetic Speech.” Trends in Hearing 23: 2331216519862982. doi:https://doi.org/10.1177/2331216519862982.
- Puglisi, G. E., A. Astolfi, N. Prodi, C. Visentin, A. Warzybok, S. Hochmuth, and B. Kollmeier. 2014. “Construction and First Evaluation of the Italian Matrix Sentence Test for the Assessment of Speech Intelligibility in Noise.” In Forum Acusticum 2014. Lyon, France: European Acoustics Association, EAA, pp. 1–5.
- Sakoe, H., and S. Chiba. 1978. “Dynamic Programming Algorithm Optimization for Spoken Word Recognition.” IEEE Transactions on Acoustics, Speech, and Signal Processing 26 (1): 43–49. doi:https://doi.org/10.1109/TASSP.1978.1163055.
- Sanchez Lopez, R., F. Bianchi, M. Fereczkowski, S. Santurette, and T. Dau. 2018. "Data-Driven Approach for Auditory Profiling and Characterization of Individual Hearing Loss." Trends in Hearing 22: 2331216518807400. doi:https://doi.org/10.1177/2331216518807400.
- Schreitmüller, S., M. Frenken, L. Bentz, M. Ortmann, M. Walger, and H. Meister. 2018. "Validating a Method to Assess Lipreading, Audiovisual Gain, and Integration during Speech Reception with Cochlear-Implanted and Normal-Hearing Subjects Using a Talking Head." Ear and Hearing 39 (3): 503–516. doi:https://doi.org/10.1097/AUD.0000000000000502.
- Schubotz, W., T. Brand, B. Kollmeier, and S. D. Ewert. 2016. “Monaural Speech Intelligibility and Detection in Maskers with Varying Amounts of Spectro-Temporal Speech Features.” The Journal of the Acoustical Society of America 140 (1): 524–540. doi:https://doi.org/10.1121/1.4955079.
- Smoorenburg, G. F. 1992. “Speech Reception in Quiet and in Noisy Conditions by Individuals with Noise‐Induced Hearing Loss in Relation to Their Tone Audiogram.” The Journal of the Acoustical Society of America 91 (1): 421–437. doi:https://doi.org/10.1121/1.402729.
- Souza, P. E., K. T. Boike, K. Witherell, and K. Tremblay. 2007. “Prediction of Speech Recognition from Audibility in Older Listeners with Hearing Loss: Effects of Age, Amplification, and Background Noise.” Journal of the American Academy of Audiology 18 (1): 54–65. doi:https://doi.org/10.3766/jaaa.18.1.5.
- Sumby, W. H., and I. Pollack. 1954. “Visual Contribution to Speech Intelligibility in Noise.” The Journal of the Acoustical Society of America 26 (2): 212–215. doi:https://doi.org/10.1121/1.1907309.
- Summerfield, Q. 1992. “Lipreading and Audio-Visual Speech Perception.” Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences 335 (1273): 71–78.
- Talbott, R. E., and V. D. Larson. 1983. “Research Needs in Speech Audiometry.” Seminars in Hearing 4 (03): 299–308. doi:https://doi.org/10.1055/s-0028-1091432.
- Taylor, S. L., M. Mahler, B. J. Theobald, and I. Matthews. 2012, July. "Dynamic Units of Visual Speech." In Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Computer Animation. New York, NY: Eurographics Association, pp. 275–284.
- Taylor, S., T. Kim, Y. Yue, M. Mahler, J. Krahe, A. G. Rodriguez, J. Hodgins, and I. Matthews. 2017. “A Deep Learning Approach for Generalized Speech Animation.” ACM Transactions on Graphics 36 (4): 1–11. doi:https://doi.org/10.1145/3072959.3073699.
- Trounson, R. H. 2012. "Development of the UC Auditory-Visual Matrix Sentence Test." Master's thesis.
- Tye-Murray, N., M. S. Sommers, and B. Spehar. 2007. “The Effects of Age and Gender on Lipreading Abilities.” Journal of the American Academy of Audiology 18 (10): 883–892. doi:https://doi.org/10.3766/jaaa.18.10.7.
- Van de Rijt, L. P. H., A. Roye, E. A. M. Mylanus, A. J. van Opstal, and M. M. van Wanrooij. 2019. “The Principle of Inverse Effectiveness in Audiovisual Speech Perception.” Frontiers in Human Neuroscience 13: 335. doi:https://doi.org/10.3389/fnhum.2019.00335.
- Wagener, K. C., and T. Brand. 2005. "Sentence Intelligibility in Noise for Listeners with Normal Hearing and Hearing Impairment: Influence of Measurement Procedure and Masking Parameters." International Journal of Audiology 44 (3): 144–156. doi:https://doi.org/10.1080/14992020500057517.
- Wagener, K., S. Hochmuth, M. Ahrlich, M. Zokoll, and B. Kollmeier. 2014. "Der weibliche Oldenburger Satztest [The Female Version of the Oldenburg Sentence Test]." In Proceedings of the 17th Jahrestagung der Deutschen Gesellschaft für Audiologie, Oldenburg, Germany.
- Wagener, K., T. Brand, and B. Kollmeier. 1999. "Entwicklung und Evaluation eines Satztests für die deutsche Sprache I–III: Design, Optimierung und Evaluation des Oldenburger Satztests [Development and Evaluation of a Sentence Test for the German Language I–III: Design, Optimization, and Evaluation of the Oldenburg Sentence Test]." Zeitschrift für Audiologie (ZfA) 38 (1–3): 4–15.
- Woodhouse, L., L. Hickson, and B. Dodd. 2009. "Review of Visual Speech Perception by Hearing and Hearing-Impaired People: Clinical Implications." International Journal of Language & Communication Disorders 44 (3): 253–270. doi:https://doi.org/10.1080/13682820802090281.