EDITORIAL

Submitting a Systematic Review

Pages 209-213 | Published online: 10 Jul 2009

Evidence-based pediatric physical and occupational therapy practice involves finding, appraising, sharing, and applying research. Keeping abreast of research is a challenge for busy practitioners, especially in light of the increasing number of studies published each year. The mission of Physical & Occupational Therapy in Pediatrics is to provide pediatric therapists with the latest clinical research, knowledge, and practical applications. The systematic review is a means of providing readers with evidence in a format that is accessible and manageable. This editorial describes the systematic review and provides guidelines for preparing and submitting one to Physical & Occupational Therapy in Pediatrics.

WHAT IS A SYSTEMATIC REVIEW?

A systematic review is a summary of research that addresses a focused question. Systematic reviews involve retrospective appraisal of original research (a “study of studies”). All relevant research is appraised in an effort to determine the overall evidence for the question. The word “systematic” is intended to communicate that rigorous methods are used to identify, select, appraise, and synthesize the results of several studies (Cook, Mulrow, & Haynes, 1997). These methods distinguish the systematic review from the traditional “narrative review,” which relies more heavily on the author's judgment and interpretation (Herbert, Jamtvedt, Mead, & Hagen, 2005).

WHAT IS A META-ANALYSIS?

Meta-analysis refers to a systematic review in which statistical methods are used to combine the results of two or more studies to determine the overall effect. Meta-analysis can only be performed on studies that used reliable and valid measures and some type of inferential statistic (e.g., t-test, analysis of variance). Effect size and odds ratio statistics are often used to report the results of a meta-analysis. Effect size is the mean difference in the outcomes of interest between the experimental and control groups expressed in terms of the common standard deviation. The d-index or a related statistic is used to measure effect size. Cohen (1988) provided the following guidelines for interpretation of the d-index: d = 0.2 represents a small effect size, d = 0.5 a medium effect size, and d = 0.8 a large effect size. The odds ratio compares the odds of an event occurring in one group with the odds of its occurring in another. An odds ratio of 1.0 indicates that subjects in the experimental and control groups had the same odds of achieving a particular outcome. Miser (2000) suggested that an odds ratio of 2.0 or greater represents a strong effect. An odds ratio of 2.0 indicates that the odds of attaining the outcome of interest were twice as high for subjects who received the intervention as for subjects who did not.
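To make these two statistics concrete, the brief sketch below (in Python) computes a d-index from group means using a pooled standard deviation and an odds ratio from event counts. It is a minimal illustration with invented numbers; the pooled-standard-deviation formula shown is one common choice and is not taken from any of the sources cited here.

    # A minimal sketch (illustrative numbers only) of the two statistics described above.
    from math import sqrt

    def cohens_d(mean_exp, mean_ctrl, sd_exp, sd_ctrl, n_exp, n_ctrl):
        """Mean difference between groups divided by a pooled (common) standard deviation."""
        pooled_sd = sqrt(((n_exp - 1) * sd_exp**2 + (n_ctrl - 1) * sd_ctrl**2)
                         / (n_exp + n_ctrl - 2))
        return (mean_exp - mean_ctrl) / pooled_sd

    def odds_ratio(events_exp, no_events_exp, events_ctrl, no_events_ctrl):
        """Odds of the outcome in the experimental group divided by the odds in the control group."""
        return (events_exp / no_events_exp) / (events_ctrl / no_events_ctrl)

    # Invented values for illustration only.
    d = cohens_d(mean_exp=52.0, mean_ctrl=47.0, sd_exp=10.0, sd_ctrl=9.0, n_exp=20, n_ctrl=20)
    or_value = odds_ratio(events_exp=15, no_events_exp=5, events_ctrl=9, no_events_ctrl=11)

    print(f"d = {d:.2f}")          # about 0.53: a medium effect by Cohen's guidelines
    print(f"OR = {or_value:.2f}")  # about 3.67: above the 2.0 that Miser suggested indicates a strong effect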

Meeting requirements for statistical analysis is a necessary but not sufficient condition for meta-analysis. In many areas of childhood disability, research is characterized by extreme variability in subject characteristics, interventions, and outcomes measured, which precludes meaningful interpretation of statistical analysis. When meta-analysis is either not possible or not appropriate, findings are summarized qualitatively. The simplest method is to report the proportion of studies with positive findings; for example, a review might report that in six of the nine studies reviewed, children who received the intervention had a greater change in the outcome of interest than children in the control group. A simple proportion does not account for the internal validity or quality of each study, including design, sample size, how subjects were selected and assigned to each group, whether the intervention was provided in the intended manner, and the reliability of outcome measures.
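As a simple illustration of this kind of summary, the sketch below tallies the proportion of positive findings across a hypothetical set of appraisal results; as noted above, such a count says nothing about the quality of the individual studies.

    # Hypothetical appraisal results: True if a study favored the intervention group.
    # A simple proportion like this ignores design, sample size, and measurement quality.
    findings = [True, True, False, True, True, False, True, False, True]  # nine studies

    positive = sum(findings)
    print(f"{positive} of {len(findings)} studies reported positive findings "
          f"({positive / len(findings):.0%})")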

Level of evidence and strength of recommendation are additional approaches to qualitative analysis. Several systems are used to classify level of evidence. All are based on the assumption that a randomized controlled trial with a large sample provides the strongest evidence of a cause-and-effect relationship (i.e., that the intervention caused the outcome). The Oxford Centre for Evidence-Based Medicine Levels of Evidence (Oxford Centre for Evidence-Based Medicine, 2001) is a five-level system. The result of a systematic review or a randomized clinical trial with a narrow confidence interval is considered level 1 evidence; expert opinion or basic research provides level 5 evidence. The Oxford Centre for Evidence-Based Medicine has also developed criteria to rate the strength of a recommendation based on the overall evidence from a systematic review. A grade of A is assigned when studies provide consistent level 1 evidence, whereas a grade of D is assigned when results represent level 5 evidence or when findings across studies are inconsistent or inconclusive.

PERFORMING A SYSTEMATIC REVIEW

The following steps are recommended when performing a systematic review. The steps are intended to provide standards for identification, selection, and appraisal of studies, as well as synthesis of results, thereby producing an objective, high-quality review:

  1. Formulate a focused question. Sackett, Straus, Richardson, Rosenberg, and Haynes (2000) proposed that a question about the effects of an intervention should include four parts (forming the mnemonic PICO): patient or problem, intervention or management strategy, comparative intervention (not relevant for all questions), and outcome. A brief sketch following this list illustrates how a PICO question and the inclusion criteria described in step 3 can be made explicit.

  2. Perform a comprehensive literature search. The literature search should be rigorous to ensure that research relevant to the question is identified. A comprehensive search typically includes several databases, synonyms and alternate terms for key words, multiple key word combinations, and other sources of evidence, such as the reference lists of the studies reviewed.

  3. Select studies for inclusion in the review. Not all studies identified by a literature search are appropriate for inclusion in a systematic review. Based on the question, criteria are defined for inclusion and exclusion of studies. Criteria might pertain to characteristics of the subjects, intervention, or outcomes of interest.

  4. Appraise each study. The quality of each study included in a systematic review should be appraised, because the appraisal is important for determining the level of evidence and strength of recommendation. The Centre for Evidence-Based Physiotherapy has developed the PEDro scale for rating the quality of randomized controlled trials (Maher, Sherrington, Herbert, Moseley, & Elkins, 2003).

  5. Synthesize and analyze results. As previously discussed, analysis of the results of a systematic review may be quantitative or qualitative. Systematic reviews that use descriptive methods of analysis are published in Physical & Occupational Therapy in Pediatrics, provided criteria for steps 1 to 4 are met.

  6. Discuss the implications for practice. Thoughtful discussion of the implications of the results is essential to enable pediatric therapists to apply evidence to practice. The term “clinical bottom line” refers to the process of summarizing the results in a manner that facilitates application to pediatric therapy practice.
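The sketch below, referenced in step 1, shows one way the focused question and selection criteria might be written down before screening begins. Every element (the question, the criteria, and the retrieved studies) is hypothetical and serves only to illustrate applying predefined criteria consistently.

    # A minimal, entirely hypothetical sketch of steps 1 and 3: stating a PICO
    # question and applying predefined inclusion criteria to retrieved studies.

    pico = {
        "patient_or_problem": "children with cerebral palsy, ages 4 to 12",  # hypothetical
        "intervention": "task-oriented physical therapy",                    # hypothetical
        "comparison": "usual care",                                          # not relevant for all questions
        "outcome": "gross motor function",                                   # hypothetical
    }

    # Explicit criteria, defined in advance from the question above.
    inclusion_criteria = {
        "designs": {"randomized controlled trial", "controlled clinical trial"},
        "min_sample_size": 10,
    }

    # Records identified by the (hypothetical) literature search.
    retrieved = [
        {"id": "Study A", "design": "randomized controlled trial", "n": 24},
        {"id": "Study B", "design": "case report", "n": 1},
        {"id": "Study C", "design": "controlled clinical trial", "n": 8},
    ]

    def meets_criteria(study):
        """Apply the predefined inclusion criteria to a single study record."""
        return (study["design"] in inclusion_criteria["designs"]
                and study["n"] >= inclusion_criteria["min_sample_size"])

    included = [s["id"] for s in retrieved if meets_criteria(s)]
    excluded = [s["id"] for s in retrieved if not meets_criteria(s)]
    print("Included:", included)  # ['Study A']
    print("Excluded:", excluded)  # ['Study B', 'Study C']; exclusion reasons should be documented

In a published review the criteria are reported in the Methods section and exclusion reasons are documented, but the principle of applying them uniformly to every retrieved study is the same.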

PREPARING A SYSTEMATIC REVIEW FOR PUBLICATION

The steps for performing a systematic review lend themselves to the format for manuscripts submitted to Physical & Occupational Therapy in Pediatrics. The Introduction section should state the question and indicate why the topic is pertinent to pediatric therapy practice. The Methods section begins with a detailed description of how the literature search was performed, including the date(s) on which the search was conducted; sufficient information should be provided to enable the reader to replicate the search. Next, the criteria for inclusion of studies in the review are presented, and the procedures used to appraise the quality of each study are described. Figures and tables are an efficient way to summarize important information, enabling readers to compare and contrast studies, and references should be provided for all studies reviewed. The Data Analysis subsection describes how the results were synthesized and analyzed; statistics and their assumptions should be explained. Steps 2 to 4 distinguish the systematic review from the narrative review, and evidence that each step was performed in a thorough and unbiased manner is an important consideration for publication.

The Results section should present findings that address the question or aim of the review, and statistics should be interpreted for readers. The Discussion section includes appraisal of the strengths and limitations of the review and the implications for pediatric therapy practice.

The editors and editorial board of Physical & Occupational Therapy in Pediatrics encourage publication of systematic reviews as a means of providing readers with current evidence in a format that facilitates application to practice. Topics for systematic reviews include methods of service delivery, specific interventions, determinants of outcomes, and characteristics and abilities of children with a particular condition. A systematic review of the self-concept of children with cerebral palsy by Dunn, Shields, Taylor, and Dodd (2007) and a systematic review of determinants of participation in leisure activities by children and youth with cerebral palsy by Shikako-Thomas, Majnemer, Law, and Lach (2008) in the current issue are examples of reviews recently published in Physical & Occupational Therapy in Pediatrics.

REFERENCES

  • Cohen J. Statistical power analysis for the behavioral sciences, 2nd ed. Erlbaum, Hillsdale, NJ 1988
  • Cook D. J., Mulrow C. D., Haynes R. B. Systematic reviews: Critical links in the great chain of evidence. Annals of Internal Medicine 1997; 126(5): 376–380
  • Dunn N., Shields N., Taylor N. F., Dodd K. J. A systematic review of the self-concept of children with cerebral palsy and perceptions of parents and teachers. Physical & Occupational Therapy in Pediatrics 2007; 27(3): 55–71
  • Herbert R., Jamtvedt G., Mead J., Hagen K. B. Practical evidence-based physiotherapy. Elsevier, Philadelphia 2005
  • Maher C. G., Sherrington C., Herbert R. D., Moseley A. M., Elkins M. Reliability of the PEDro scale for rating quality of randomized controlled trials. Physical Therapy 2003; 83: 713–721
  • Miser W. F. Applying a meta-analysis to daily clinical practice. In: Geyman J. P., Deyo R. A., Ramsey S. D. (Eds.), Evidence-based clinical practice. Butterworth-Heinemann, Boston 2000
  • Oxford Centre for Evidence-Based Medicine. Oxford Centre for Evidence-Based Medicine levels of evidence (May 2001). Retrieved March 28, 2008, from www.cebm.net/levels_of_evidence.asp
  • Sackett D. L., Straus S. E., Richardson W. S., Rosenberg W., Haynes R. B. Evidence-based medicine: How to practice and teach EBM, 2nd ed. Churchill Livingstone, Edinburgh 2000
  • Shikako-Thomas K., Majnemer A., Law M., Lach L. Determinants of participation in leisure activities in children and youth with cerebral palsy: A systematic review. Physical & Occupational Therapy in Pediatrics 2008; 28(3)
