
Reading comprehension by people with chronic aphasia: A comparison of three levels of visuographic contextual support

Pages 1053-1064 | Received 23 Jul 2008, Accepted 07 Nov 2008, Published online: 04 Dec 2010

Abstract

Background: People with aphasia often have concomitant reading comprehension deficits that interfere with their full participation in leisure and social activities involving written text comprehension.

Aims: The purpose of this investigation was to explore the impact of three levels of visuographic support—(a) high‐context photographs, (b) low‐context photographs, and (c) no photographs—on the reading comprehension of narratives by people with chronic aphasia.

Methods & Procedures: Participants were seven adults with chronic aphasia and concomitant reading comprehension deficits. Participants read three narratives, each presented with a different level of visuographic support. Using a repeated measures design, the researchers examined (a) reading comprehension response accuracy (measured in number of correct responses), (b) response time (measured in seconds), and (c) the participants' perceptions of image helpfulness.

Outcomes & Results: Data analysis revealed that the participants demonstrated significantly increased response accuracy when either type of visuographic support was present. Participants demonstrated significantly faster response times in the no‐photographs condition than in the high‐ and low‐context conditions. Although not analysed for statistical significance, evaluation of descriptive statistics regarding participants' perception data supported the notion that pictures were helpful and tasks were easier when either type of visuographic support was present.

Conclusions: Continued research is necessary to delineate the most efficient way to present visuographic supports to people with aphasia during reading comprehension tasks.

Notes

This project formed part of Aimee Dietz's dissertation research. The authors thank Sarah Wallace and Kristy Weissling (Visual Scene Displays Research Team at the University of Nebraska – Lincoln) and Joan Erickson and Sharon Evans (of the first author's dissertation committee) for their assistance. Thanks also to Kelli Evans for her assistance with the reliability measures.

This research was performed in part with support from the Barkley Trust and under Grant #H114#980026 from the National Institute on Disability and Rehabilitation Research (NIDRR), US Department of Education. The opinions expressed in this publication are those of the authors and do not necessarily reflect those of NIDRR or the Department of Education.
