
What Do Thinking-Aloud Participants Say? A Comparison of Moderated and Unmoderated Usability Sessions

Abstract

The value of thinking aloud in usability tests depends on the content of the users’ verbalizations. We investigated moderated and unmoderated users’ verbalizations during relaxed thinking aloud (i.e., verbalization at Levels 1–3). Verbalizations of user experience were frequent and mostly relevant to the identification of usability issues. Explanations and redesign proposals were also mostly relevant, but infrequent. The relevance of verbalizations of user experience, explanations, and redesign proposals showed the value of relaxed thinking aloud but did not clarify the trade-off between rich verbalizations and test reactivity. Action descriptions and system observations—two verbalization categories consistent with both relaxed and classic thinking aloud—were frequent but mainly of low relevance. Across all verbalizations, those with positive or negative valence were more often relevant than those without valence. Finally, moderated and unmoderated users made largely similar verbalizations, the main difference being a higher percentage of high-relevance verbalizations by unmoderated users.

ACKNOWLEDGEMENTS

We are grateful to Vanessa Goedhart Henriksen from brugertest.nu for screening the test participants for their ability to think aloud and to Birna Dahl from Snitker for conducting the moderated test sessions. We thank Annika Olsen for transcribing the participants’ verbalizations. In the interest of full disclosure, we note that at the time of the study, the third author was an intern at Snitker. Special thanks are due to the test participants.

Additional information

Notes on contributors

Morten Hertzum

Morten Hertzum is Professor of Information Science at University of Copenhagen. His research interests include human–computer interaction, usability, computer supported cooperative work, information seeking, and medical informatics. He is co-editor of the book Situated Design Methods (MIT Press, 2014) and has published a series of papers about usability evaluation methods.

Pia Borlund

Pia Borlund is Professor of Information Science (with special obligations) at the University of Copenhagen. Her research interests lie within interactive information retrieval, human–computer interaction, and information seeking (behavior). She is the originator of the IIR evaluation model, which uses simulated work task situations as a central instrument of testing.

Kristina B. Kristoffersen

Kristina B. Kristoffersen holds a Master of Science in IT, Digital Design, and Communication from the IT University of Copenhagen. She specializes in usability and user-centered design. She works at the company Usertribe, where she designs remote thinking-aloud tests, analyzes user videos, and oversees daily production.
