
The Expertise Effect on Web Accessibility Evaluation Methods

Pages 246-283 | Published online: 02 Sep 2011
 

Abstract

Web accessibility means that disabled people can effectively perceive, understand, navigate, and interact with the web. Web accessibility evaluation methods are needed to validate the accessibility of web pages, yet the role of subjectivity and of expertise in such methods has not previously been studied. This article investigates the effect of expertise in web accessibility evaluation methods by conducting a Barrier Walkthrough (BW) study with 19 expert and 57 nonexpert judges. The BW method is an evaluation method that can be used to manually assess the accessibility of web pages for different user groups, such as motor-impaired, low-vision, blind, and mobile users.

Our results show that expertise matters: although its effect varies with the metric used to measure quality, the level of expertise is an important factor in the quality of accessibility evaluations of web pages. In brief, when pages are evaluated by nonexperts, we observe a drop in validity and reliability. We also observe a negative monotonic relationship between the number of judges and reproducibility: more evaluators mean more diverse outputs. Beyond five experts reproducibility stabilizes, but this is not the case with nonexperts. The ability to detect all the problems increases with the number of judges: with 3 experts all problems can be found, whereas 14 nonexperts are needed to reach the same level. Even though our data show that experts rated pages differently, the difference is quite small. Finally, compared to nonexperts, experts spent much less time (with smaller variability among them), were significantly more confident, and rated themselves as more productive. The article discusses practical implications regarding how BW results should be interpreted, how to recruit evaluators, and what happens when more than one evaluator is hired.
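To make the evaluator-count findings concrete, the following Python sketch applies the classic problem-discovery model, 1 - (1 - p)^n (Nielsen & Landauer, 1993), which gives the expected fraction of problems found by n independent evaluators. The model itself and the per-evaluator detection probabilities used here are illustrative assumptions, not figures taken from the study.

# Illustrative sketch only: the discovery model and the detection
# probabilities p are assumptions, not results from this article.
def fraction_found(p, n):
    """Expected fraction of problems found by n independent evaluators,
    each detecting any given problem with probability p."""
    return 1.0 - (1.0 - p) ** n

# Hypothetical rates chosen so that 3 experts and 14 nonexperts both
# reach roughly 99% discovery, echoing the abstract's headline numbers.
for label, p, n in [("experts", 0.80, 3), ("nonexperts", 0.30, 14)]:
    print(f"{label:10s} p={p:.2f} n={n:2d} -> {fraction_found(p, n):.1%}")

Under these hypothetical rates, both panels reach about 99% expected discovery, which shows how a large expert/nonexpert gap in individual detection ability can translate into the 3-versus-14 panel sizes reported above.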

Supplemental materials are available for this article. Go to the publisher's online edition of Human–Computer Interaction for statistical details and additional measures for this article.

Notes

1. Available from the publisher's online edition of Human–Computer Interaction.

Acknowledgments and Experimental Data. We thank all the experts that we contacted, and especially those who contributed to this study. Many thanks also to the students of the course “Progettazione di siti web 2008–2009” and those graduate students at the Human Centred Web (HCW) Lab who spent a lot of time performing the evaluation. We also thank the reviewers for their extremely detailed and helpful reviews. Data of this study can be found at the HCW Lab data repository, http://wel-eprints.cs.manchester.ac.uk/114/. Statistical processing was done with R (R Development Core Team, 2010).

HCI Editorial Record. First manuscript received August 28, 2009. Revision received April 29, 2010. Accepted by Clayton Lewis. Final manuscript received April 29, 2011. — Editor

