
Reception of game subtitles: an empirical study

 

Abstract

Subtitling practices in game localisation remain, to date, largely unexplored, and the existing standards as widely applied to subtitling for TV, DVD and cinema have not been adopted by the game industry. This can pose an accessibility barrier for deaf and hard of hearing players as there are, on occasion, no subtitles provided for audio game components, such as sound effects. This article presents a small-scale exploratory study concerning the reception of subtitles in video games by means of user tests with eye tracking technology and a questionnaire. The purpose of the study was to determine what type of subtitles would be most suitable for video games, given their interactive and ludic nature, based not only on users’ preferences but also on quantitative data obtained with eye tracking technology. The article also highlights the need for the development of best practice and standards in subtitling for this emerging digital medium, which would enhance game accessibility and the gaming experience not only for deaf and hard of hearing players but for all players.

Acknowledgements

This research is supported by a post-doctoral grant from the Catalan Government (Ref. 2008 BP B 00074) and by the Spanish Ministry of Economy and Competitiveness project no. FFI2012-39056-C02-01, Subtitling for the deaf and hard of hearing and audio description: new formats, as well as by the Catalan Government funds 2014SGR27 and the European project Hbb4All (FP7 CIP-ICT-PSP.2013.5.1 # 621014).

Disclosure statement

No potential conflict of interest was reported by the authors.

Notes

1. In the game industry, the term voice-over is used to refer to the recording of the script of a game by voice actors.

2. See, for example, Benecke (Citation2004), Braun (Citation2007), Greening and Rolph (Citation2007), Jiménez Hurtado (Citation2007), Díaz-Cintas, Orero, and Remael (Citation2007), Orero (Citation2007), Remael and Neves (Citation2007), Díaz-Cintas, Neves, and Matamala (Citation2010), Matamala and Orero (Citation2010), Maszerowska, Matamala, and Orero (Citation2014).

3. For a detailed account of the different applications of eye tracking research, see Duchowski (Citation2007).

4. The DTV4All project (TCP IP 224994) ran from 2008 to 2011 with the aim of facilitating the provision of access services on digital television across the European Union. More information is available from http://www.psp-dtv4all.org/. The DTV4ALL project has been followed by the European Commission funded Hbb4ALL, which started in 2014 with the aim of making hybrid broadcast broadband TV accessible for all users by customising accessibility services according to personal preferences, as well as researching how users process subtitles on different devices, such as second screens. For more information, see the project website http://www.hbb4all.eu/.

5. See, for example, Arnáiz (Citation2008, forthcoming) and Szarkowska et al. (Citation2013) for SDH and Romero-Fresco (Citation2010, Citation2012) for respoken subtitles.

6. It is beyond the scope of this article to analyse in detail the different experiments cited here. For a detailed critical overview of existing eye tracking studies on AVT, see Szarkowska et al. (Citation2013) and Kruger, Szarkowska, and Krejtz (Citation2015).

7. While 25 subjects is not a high figure compared to larger-scale eye-tracking studies on subtitling with 40 to 70 participants (e.g. Caffrey Citation2008; Romero-Fresco Citation2012; Szarkowska et al. Citation2013; Kruger, Hefer, and Matthew Citation2013), there are other interesting mid-sized experiments, ranging from 20 to 39 participants, that provide insights into how subtitles are processed (e.g. Künzli and Ehrensberger-Dow Citation2011; Di Giovanni Citation2014; Kruger and Steyn Citation2014), and others with five (Romero-Fresco Citation2010) to 19 participants (e.g. Perego et al. Citation2010; Caffrey Citation2012; Moran Citation2012; Ghia Citation2012). Some of those studies also had high attrition rates due to low tracking rates, such as Di Giovanni (Citation2014), where six sets of data out of 30 had to be discarded due to inaccurate fixations, or Iriarte et al.’s study (2012), where three sets out of 17 had to be excluded due to the high sensitivity of the eye tracker to body movements. With regard to the participants, it should also be mentioned that none of them came from the field of Translation Studies, as this could have affected their preferences and use of subtitles, and they would not have constituted a neutral group.

8. An eye tracking rate of 80% has been applied as a quality threshold in previous studies, such as Kruger, Hefer, and Matthew (Citation2013).

9. The author would like to thank all the participants, as well as the associations for the deaf ACCAP and FESOCA, and Kristian Lara Escudero, a sign language interpreter who helped gather users for the test and interpreted during them. A special thank you is also due to Anna Vilaró for helping set up the experiment with the eye tracker.

10. The author would like to thank students Agnès Borrás, Alfons Mallol, Roger Plana and Laura Ruíz, as well as lecturers Jordi Arnal, Enric Martí, and Pere Nolla, for collaborating in this project.

11. Due to the unique interactive and dynamic nature of the video game stimulus, which required that each scene under analysis be cut manually for each participant, it was decided that five minutes would be enough to obtain sufficient data to enable further research. In addition, there are several interesting previous experiments with audiovisual material of shorter or equal length, such as those by Orero and Vilaró (Citation2012), Ghia (Citation2012), Moran (Citation2012) and Di Giovanni (Citation2014), ranging from 1.5 to five minutes.

Additional information

Notes on contributors

Carme Mangiron

Carme Mangiron, PhD, is a member of the research group TransMedia Catalonia and a postdoctoral fellow at the Department of Translation, Interpreting and East Asian Studies at the Universitat Autònoma de Barcelona. She is the Chair of the MA in Audiovisual Translation at UAB and has over 10 years’ experience teaching game localisation. She has extensive experience as a translator, specialising in software and game localisation. Her research interests include game localisation and game accessibility. She is co-author of Game Localization: Translating for the Global Digital Entertainment Industry (O’Hagan and Mangiron Citation2013) and one of the editors of Fun for All: Translation and Accessibility Practices in Video Games (Mangiron, Orero, and O’Hagan 2014). She is also one of the main organisers of the Fun for All: Translation and Accessibility in Video Games and Virtual Worlds Conference, which started in 2010 and runs every two years.
