Original Articles

What Do Thinking-Aloud Participants Say? A Comparison of Moderated and Unmoderated Usability Sessions

Morten Hertzum, Pia Borlund & Kristina B. Kristoffersen
Pages 557-570 | Published online: 26 Aug 2015
 

Abstract

The value of thinking aloud in usability tests depends on the content of the users’ verbalizations. We investigated moderated and unmoderated users’ verbalizations during relaxed thinking aloud (i.e., verbalization at Levels 1–3). Verbalizations of user experience were frequent and mostly relevant to the identification of usability issues. Explanations and redesign proposals were also mostly relevant, but infrequent. The relevance of verbalizations of user experience, explanations, and redesign proposals showed the value of relaxed thinking aloud but did not clarify the trade-off between rich verbalizations and test reactivity. Action descriptions and system observations—two verbalization categories consistent with both relaxed and classic thinking aloud—were frequent but mainly of low relevance. Across all categories, verbalizations with positive or negative valence were more often relevant than those without valence. Finally, moderated and unmoderated users made largely similar verbalizations; the main difference was a higher percentage of high-relevance verbalizations among unmoderated users.

ACKNOWLEDGEMENTS

We are grateful to Vanessa Goedhart Henriksen from brugertest.nu for screening the test participants for their ability to think aloud and to Birna Dahl from Snitker for conducting the moderated test sessions. We thank Annika Olsen for transcribing the participants’ verbalizations. In the interest of full disclosure, we note that at the time of the study, the third author was an intern at Snitker. Special thanks are due to the test participants.

Additional information

Notes on contributors

Morten Hertzum

Morten Hertzum is Professor of Information Science at the University of Copenhagen. His research interests include human–computer interaction, usability, computer-supported cooperative work, information seeking, and medical informatics. He is co-editor of the book Situated Design Methods (MIT Press, 2014) and has published a series of papers about usability evaluation methods.

Pia Borlund

Pia Borlund is Professor of Information Science (with special obligations) at the University of Copenhagen. Her research interests lie within interactive information retrieval, human–computer interaction, and information seeking (behavior). She is the originator of the IIR evaluation model, which uses simulated work task situations as a central instrument of testing.

Kristina B. Kristoffersen

Kristina B. Kristoffersen holds a Master of Science in IT, Digital Design, and Communication from the IT University of Copenhagen. Her specialty is usability and user-centered design. She works in the company Usertribe, where she designs remote thinking-aloud tests, analyzes user videos, and is responsible for the daily production.

