Abstract
On simple Web pages, the text-to-speech translation provided by a screen reader works relatively well. This is not the case for more sophisticated ‘Web 2.0’ pages, in which many interactive visual features, such as tickers, tabs, auto-suggest lists, calendars and slideshows currently remain inaccessible. Determining how to present these in audio is challenging in general, but may be particularly so for certain groups, such as people with congenital or early-onset blindness, as they are not necessarily familiar with the visual interaction metaphors that are involved. This article describes an evaluation of an audio Web browser designed using a novel approach, whereby visual content is translated to audio using algorithms derived from observing how sighted users interact with it. Both quantitative and qualitative measures showed that all participants, irrespective of the onset of their visual impairment, preferred the visual interaction-based audio mappings. Participants liked the fact that the mappings made the dynamic content truly accessible, rather than merely available to those who could find it, as is presently the case. The results indicate that this ‘visual-centred’ mapping approach may prove to be a suitable technique for translating complex visual content to audio, even for users with early-onset visual disabilities.
Acknowledgements
The SASWAT project is funded by the Engineering and Physical Sciences Research Council, reference: EP/E062954/1.
Notes
3. All the content used in the evaluation can be seen on the study website: http://hcw.cs.manchester.ac.uk/research/saswat/experiments/hotels/simple/intro.php. A demonstration of the browser can be seen at http://www.youtube.com/user/HumanCentredWeb#p/u/0/HTtOGJl5t4s.
4. The technical term for this type of dynamic content is ‘carousel’; however, the more familiar term ‘slideshow’ was used in the evaluation.