Abstract
The value of thinking aloud in usability tests depends on the content of the users’ verbalizations. We investigated moderated and unmoderated users’ verbalizations during relaxed thinking aloud (i.e., verbalization at Levels 1–3). Verbalizations of user experience were frequent and mostly relevant to the identification of usability issues. Explanations and redesign proposals were also mostly relevant, but infrequent. The relevance of verbalizations of user experience, explanations, and redesign proposals showed the value of relaxed thinking aloud but did not clarify the trade-off between rich verbalizations and test reactivity. Action descriptions and system observations—two verbalization categories consistent with both relaxed and classic thinking aloud—were frequent but mainly of low relevance. Across all categories, verbalizations with positive or negative valence were more often relevant than those without valence. Finally, moderated and unmoderated users made largely similar verbalizations, the main difference being a higher percentage of high-relevance verbalizations by unmoderated users.
ACKNOWLEDGEMENTS
We are grateful to Vanessa Goedhart Henriksen from brugertest.nu for screening the test participants for their ability to think aloud and to Birna Dahl from Snitker for conducting the moderated test sessions. We thank Annika Olsen for transcribing the participants’ verbalizations. In the interest of full disclosure, we note that at the time of the study, the third author was an intern at Snitker. Special thanks are due to the test participants.
Additional information
Notes on contributors
Morten Hertzum
Morten Hertzum is Professor of Information Science at the University of Copenhagen. His research interests include human–computer interaction, usability, computer-supported cooperative work, information seeking, and medical informatics. He is co-editor of the book Situated Design Methods (MIT Press, 2014) and has published a series of papers about usability evaluation methods.
Pia Borlund
Pia Borlund is Professor of Information Science (with special obligations) at the University of Copenhagen. Her research interests lie within interactive information retrieval, human–computer interaction, and information seeking (behavior). She is the originator of the IIR evaluation model, which uses simulated work task situations as a central instrument of testing.
Kristina B. Kristoffersen
Kristina B. Kristoffersen holds a Master of Science in IT, Digital Design, and Communication from the IT University of Copenhagen. Her specialty is usability and user-centered design. She works at the company Usertribe, where she designs remote thinking-aloud tests, analyzes user videos, and is responsible for daily production.