Original Articles

Rating the methodological quality of single-subject designs and n-of-1 trials: Introducing the Single-Case Experimental Design (SCED) Scale

Pages 385-401 | Received 01 Oct 2007, Published online: 25 Jun 2008
 

Abstract

Rating scales that assess methodological quality of clinical trials provide a means to critically appraise the literature. Scales are currently available to rate randomised and non-randomised controlled trials, but there are none that assess single-subject designs. The Single-Case Experimental Design (SCED) Scale was developed for this purpose and evaluated for reliability. Six clinical researchers who were trained and experienced in rating methodological quality of clinical trials developed the scale and participated in reliability studies. The SCED Scale is an 11-item rating scale for single-subject designs, of which 10 items are used to assess methodological quality and use of statistical analysis. The scale was developed and refined over a 3-year period. Content validity was addressed by identifying items to reduce the main sources of bias in single-case methodology as stipulated by authorities in the field, which were empirically tested against 85 published reports. Inter-rater reliability was assessed using a random sample of 20/312 single-subject reports archived in the Psychological Database of Brain Impairment Treatment Efficacy (PsycBITE™). Inter-rater reliability for the total score was excellent, both for individual raters (overall ICC = 0.84; 95% confidence interval 0.73–0.92) and for consensus ratings between pairs of raters (overall ICC = 0.88; 95% confidence interval 0.78–0.95). Item reliability was fair to excellent for consensus ratings between pairs of raters (range κ = 0.48 to 1.00). The results were replicated with two independent novice raters who were trained in the use of the scale (ICC = 0.88, 95% confidence interval 0.73–0.95). The SCED Scale thus provides a brief and valid evaluation of methodological quality of single-subject designs, with the total score demonstrating excellent inter-rater reliability using both individual and consensus ratings. Items from the scale can also be used as a checklist in the design, reporting and critical appraisal of single-subject designs, thereby helping to improve standards of single-case methodology.
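The abstract's headline statistics are intraclass correlation coefficients (ICCs) for agreement between raters on the SCED total score. As a rough illustration of how such a coefficient is computed (this is not the authors' analysis; the `icc_2_1` function and the rating matrix below are invented for illustration), a two-way random-effects, absolute-agreement, single-rater ICC can be sketched from the standard ANOVA mean squares:

```python
import numpy as np

def icc_2_1(ratings: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    `ratings` is an (n subjects x k raters) matrix of scores. Mean squares
    follow the usual two-way ANOVA decomposition (Shrout & Fleiss form).
    """
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject (report) means
    col_means = ratings.mean(axis=0)   # per-rater means

    msr = k * ((row_means - grand) ** 2).sum() / (n - 1)  # between subjects
    msc = n * ((col_means - grand) ** 2).sum() / (k - 1)  # between raters
    resid = ratings - row_means[:, None] - col_means[None, :] + grand
    mse = (resid ** 2).sum() / ((n - 1) * (k - 1))        # residual error

    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical SCED total scores (0-10) from two raters for six reports.
scores = np.array([[8, 8], [5, 6], [9, 9], [3, 4], [7, 7], [6, 6]])
print(round(icc_2_1(scores), 2))  # → 0.96
```

Values near 1.0 indicate that raters agree almost perfectly in absolute terms, which is the sense in which the reported total-score ICCs of 0.84–0.88 are described as excellent.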

This project was funded in part by the Motor Accidents Authority of New South Wales, Australia, and supported by the library resources of the Royal Rehabilitation Centre Sydney and the University of Sydney. The authors warmly acknowledge the significant contributions of Vanessa Aird and Amanda Lane-Brown in providing methodological ratings for Study 2.

Notes

1 PsycBITE™ (Tate et al., 2004) is modelled on the Physiotherapy Evidence Database (PEDro; Herbert, Moseley, & Sherrington, 1998/99). It is an interactive, electronic database (http://www.psycbite.com) that contains all published reports (meeting five selection criteria) which provide empirical data on the effectiveness of interventions to treat the psychological consequences of acquired brain impairment.

