
Treatment for improving discourse in aphasia: a systematic review and synthesis of the evidence base

Pages 1125-1167 | Received 11 Nov 2019, Accepted 03 May 2020, Published online: 01 Jun 2020
 

ABSTRACT

Background: Improved discourse production is a priority for all key stakeholders in aphasia rehabilitation. A Cochrane review of randomised controlled trials (RCTs) for aphasia found speech and language therapy treatment to be effective for improving the ability to communicate in everyday interaction. However, this large-scale review did not focus exclusively on treatment for discourse production and did not include other treatment research designs. Thus, the extent of the evidence base addressing discourse interventions is currently unclear.

Objective: The present study undertakes the first systematic review of research on treatment for discourse production in aphasia. It appraises the quality of the evidence base, characterises the methods used for measuring outcomes, and describes discourse treatments in terms of both content and efficacy.

Design: Scopus, Medline, and Embase databases were searched, yielding 334 records. Twenty-five studies (reporting on 127 participants) met inclusion criteria and were reviewed against the following research questions: What is the quality of the study designs used? How complete is the intervention reporting? What is the range, type, and content of outcome measures used? What is the range, type, and content of discourse treatments reported to date? Are discourse treatments efficacious?

Results: Seven of the 25 studies met the criteria for quality review, with 3 RCTs scoring moderately well and 3 (of 4) case studies scoring in the moderate-to-low range. Most studies had adequate levels of completeness of treatment reporting, with 3 scoring highly. There were 514 different outcome measures reported across the 25 studies, with measures of words-in-discourse the most common. Studies were grouped into six treatment categories: “word production in discourse”, “sentence production in discourse”, “discourse macrostructure”, “discourse scripts”, “multi-level”, and “no consensus”. Twenty-two studies reported post-treatment gains, most commonly noted in word production. Changes in sentence production and discourse macrostructure were present but infrequently assessed.

Conclusions: Discourse treatment is an emerging field of research. Despite limitations in the evidence base, there are clear positive signs that discourse treatment is efficacious. There is emerging evidence for beneficial effects on word and sentence production in discourse, for improved discourse macrostructure, and for treatments working at multiple levels of language. To strengthen the evidence in this field and improve outcomes for people with aphasia, we need more discourse treatment research using an explicit theoretical rationale, high-quality study designs, more complete reporting, and agreed treatment and assessment methods.

Acknowledgements

This research is funded by The Stroke Association, grant number TSA2017/01.

Disclosure statement

The authors have no conflicts of interest to declare.

Notes

1. A study in which a number of similar people are randomly assigned to 2 (or more) groups to test a specific drug, treatment, or other intervention. One group (the experimental group) has the intervention being tested, the other (the comparison or control group) has an alternative intervention, a dummy intervention (placebo), or no intervention at all. The groups are followed up to see how effective the experimental intervention was. Outcomes are measured at specific times and any difference in response between the groups is assessed statistically. This method is also used to reduce bias. (NICE, Citation2018).

2. Non-agreements for the RoBiNT scale were for sampling of behaviour (n = 4 studies); dependent variable (n = 3); design with control (n = 3); generalisation (n = 2); raw data record (n = 2); and a single non-agreement each for the items setting, interrater reliability, and data analysis. The non-agreements were generally due to different interpretations of the scoring criteria or to differences in interpreting the information reported in the studies. For example, for the item “sampling of behaviour”, the judges differed in their interpretation of what constituted a datapoint; and for the item “dependent variables”, the judges differed in their interpretation of what constituted an operational definition of a target behaviour and the clarity/precision of the method of measurement.

3. The term “standardised” here indicates that a test or test battery is commercially available, has a standard form of administration, and has published normative and/or clinical data available.

4. One study (Penn & Beecham, 1992) had a multilingual participant for whom scores were reported in 4 languages. We have reported only the counts for measures reported in English, so as not to distort the findings.

5. Consensus could not be reached on categorising these two studies into one of the treatment groups. Although they describe technological support for language at multiple levels (word, sentence, discourse macrostructure), we could not agree on whether there was therapeutic activity at each level. The decision was therefore made to leave them uncategorised.

Additional information

Funding

This work was supported by the Stroke Association [TSA2017/01].
