
Development and evaluation of video recordings for the OLSA matrix sentence test

Pages 311-321 | Received 19 Feb 2020, Accepted 07 May 2021, Published online: 10 Jun 2021

Abstract

Objective

The aim was to create and validate an audiovisual version of the German matrix sentence test (MST) that reuses the existing audio-only speech material.

Design

Video recordings were made and dubbed with the audio of the existing German MST. The current study evaluates the MST in conditions spanning audio and visual modalities, speech in quiet and in noise, and open-set and closed-set response formats.

Sample

One female talker recorded repetitions of the German MST sentences. Twenty-eight young normal-hearing participants completed the evaluation study.

Results

The audiovisual benefit in quiet was 7.0 dB in sound pressure level (SPL). In noise, the audiovisual benefit was 4.9 dB in signal-to-noise ratio (SNR). Speechreading scores for visual-only sentences ranged from 0% to 84% speech reception (mean = 50%). Audiovisual speech reception thresholds (SRTs) had a larger standard deviation than audio-only SRTs and improved progressively with the number of lists performed. The final video recordings are openly available.
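
For clarity, the audiovisual benefit reported here is presumably the difference between the audio-only and audiovisual SRTs measured in the same condition; this follows common practice in audiovisual speech research and is not spelled out in the abstract itself:

\[
\text{AV benefit} = \mathrm{SRT}_{\text{audio-only}} - \mathrm{SRT}_{\text{audiovisual}}
\]

For example, purely hypothetical SRTs of −7.0 dB SNR (audio-only) and −11.9 dB SNR (audiovisual) would correspond to the reported 4.9 dB benefit in noise.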

Conclusions

The video material achieved results similar to those reported in the literature in terms of gross speech intelligibility, despite the inherent asynchronies of dubbing. Due to ceiling effects, adaptive procedures targeting 80% intelligibility should be used, and at least one or two training lists should be performed.
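
To illustrate the recommendation above, the sketch below shows a generic weighted up-down staircase that converges on 80% intelligibility by adjusting the SNR after each sentence. This is a minimal, assumption-laden example, not the adaptive procedure used in the study; the callback, step sizes, trial scoring and stopping rule are hypothetical.

# Minimal sketch of a weighted up-down staircase targeting 80% correct.
# NOTE: generic illustration only, not the OLSA/MST adaptive procedure;
# step sizes, scoring and the stopping rule are assumptions.

def run_staircase(present_trial, n_trials=20, start_snr_db=0.0,
                  target=0.8, down_step_db=1.0):
    """Adjust the SNR trial by trial so performance converges on `target`.

    `present_trial(snr_db)` is a hypothetical callback that presents one
    sentence at the given SNR and returns True if it was repeated correctly.
    A weighted up-down rule converges at proportion correct p when
    up_step / down_step = p / (1 - p), i.e. 4:1 for p = 0.8.
    """
    up_step_db = down_step_db * target / (1.0 - target)  # 4.0 dB for 80%
    snr_db = start_snr_db
    track = []
    for _ in range(n_trials):
        correct = present_trial(snr_db)
        track.append((snr_db, correct))
        # Harder (lower SNR) after a correct response, easier after an error.
        snr_db += -down_step_db if correct else up_step_db
    # Simple SRT estimate: mean SNR over the second half of the track.
    half = [snr for snr, _ in track[len(track) // 2:]]
    return sum(half) / len(half)

In practice, matrix tests typically score the five words of each sentence rather than the whole sentence, so a word-proportion-based level rule would replace the binary correct/incorrect update used here.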

Acknowledgements

The authors thank the Media Technology and Production of the CvO University of Oldenburg for helping with the recordings. Special thanks to Anja Gieseler for giving feedback on the evaluation procedures and the manuscript and to Bernd T. Meyer for counselling on the video selection metric.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work received funding from the EU’s H2020 research and innovation program under the Marie Sklodowska-Curie Actions GA 675324 (ENRICH), from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – project number 352015383 (SFB 1330 B1 and C4) and from the European Regional Development Fund – Project “Innovation network for integrated, binaural hearing system technology (VIBHear)”.