ABSTRACT
After countless petitions and complaints from end users, live subtitling quality is slowly attracting the attention of broadcasters, regulators, the subtitling industry and scholars working in Media Accessibility. These stakeholders share an interest in providing better live subtitles, but assessing their quality is a thorny issue. Although quality studies are still scarce, the research undertaken so far has proven valuable in identifying the weaknesses of live subtitles in several countries. This article presents the main findings of the pilot project that preceded the first national quality assessment in Spain, which is currently underway. By focusing on this case study, we will argue that live subtitling quality research may fulfil an additional purpose: serving as a didactic tool in respeaking courses. In this paper we will outline the quality assessment method that we followed, discuss what its main results reveal about the accuracy, speed and latency of our samples, and describe how these data may be brought to the classroom to fine-tune respeakers’ training under a Performance Analysis approach.
Disclosure statement
No potential conflict of interest was reported by the authors.
Notes
1. Throughout this paper we will use the term ‘live subtitles’ to refer exclusively to ‘intralingual live subtitles’, that is, live subtitles delivered in the same language as the programme that they accompany.
2. The NER model classifies subtitles according to their quality as follows: excellent (if they show an average accuracy rate over 99.5%), very good (99% – 99.5% accuracy rate), good (98.5% – 99% accuracy rate), acceptable (98% – 98.5% accuracy rate) and substandard (below 98% accuracy rate).
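The banding described in this note can be sketched as a simple lookup. This is an illustrative fragment only, not part of the NER model itself; the note leaves the boundary values ambiguous, so here each threshold is assumed to belong to the lower band (e.g. exactly 99.5% counts as ‘very good’, matching the ‘over 99.5%’ wording for ‘excellent’):

```python
def classify_ner_accuracy(accuracy_rate: float) -> str:
    """Map a NER accuracy rate (in %) to the quality bands listed in note 2.

    Assumption: a rate exactly on a threshold falls into the lower band,
    since 'excellent' is defined as strictly over 99.5%.
    """
    if accuracy_rate > 99.5:
        return "excellent"
    elif accuracy_rate > 99.0:
        return "very good"
    elif accuracy_rate > 98.5:
        return "good"
    elif accuracy_rate >= 98.0:
        return "acceptable"
    else:
        return "substandard"
```

For instance, a sample with a 99.2% accuracy rate would be classed as ‘very good’, while one at 97.8% would be ‘substandard’.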
3. The average subtitling speeds for the entire sample and for each genre were estimated following the traditional approach, that is, by dividing the total number of words in the subtitles by the exposure time of those subtitles.
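The traditional calculation in this note amounts to the following sketch, which assumes speed is expressed in words per minute (wpm) and exposure time is measured in seconds; the function name and units are ours, for illustration:

```python
def average_subtitling_speed(total_words: int, exposure_time_seconds: float) -> float:
    """Average subtitling speed in words per minute (wpm):
    total number of words divided by the subtitles' exposure time.
    """
    if exposure_time_seconds <= 0:
        raise ValueError("exposure time must be positive")
    # Convert words-per-second into words-per-minute.
    return total_words * 60.0 / exposure_time_seconds
```

For example, 1,800 words displayed over 600 seconds of exposure time yields an average speed of 180 wpm.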