SPOTLIGHT SERIES: ANIMAL ASSISTED INTERVENTIONS IN SPECIAL POPULATIONS

Strategies to improve the evidence base of animal-assisted interventions

Pages 150-164 | Published online: 23 Feb 2017

ABSTRACT

Animal-assisted interventions (AAIs) represent an extraordinary opportunity to improve mental health, given the special role that animals (pets) occupy in everyday life and the general public's receptivity to animals. To enhance the evidence base for AAIs, four recommendations are discussed: (a) conduct more well-designed studies, including the use of a broader array of methodologies; (b) make explicit “small theories” about precisely why and how human-animal interaction changes health and well-being; (c) increase laboratory-based studies of processes likely to underlie therapeutic applications; and (d) develop a strategic plan to guide AAI research, a means of implementing that plan, and ways of monitoring progress to ensure that implementation has an impact on AAI research. There is enormous potential for AAIs to overcome many obstacles of more traditional physical and mental health services, but developing that potential depends on establishing a stronger evidence base.

Notes

1. In this article, I use “animals” to refer to nonhuman animals, even though the preferred terminology in psychological research underscores the distinction by referring to “human and non-human animals.”

2. Effect size (ES) refers to the magnitude of the difference between two (or more) conditions or groups and is expressed in standard deviation units. For the case in which there are two groups in the study, effect size equals the difference between means divided by the standard deviation:

$$ES = \frac{m_1 - m_2}{s}$$

where $m_1$ and $m_2$ are the sample means for the two groups or conditions (e.g., treatment and control groups), and $s$ is the pooled standard deviation for these groups.
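As a concrete illustration of this formula, the short Python sketch below computes the effect size for two hypothetical groups; the function name and the sample scores are illustrative assumptions, not data reported in the article.

    import math

    def cohens_d(group1, group2):
        # Effect size (ES): difference between group means divided by the
        # pooled standard deviation, as defined in Note 2.
        m1 = sum(group1) / len(group1)
        m2 = sum(group2) / len(group2)
        # Sample variances (n - 1 in the denominator).
        v1 = sum((x - m1) ** 2 for x in group1) / (len(group1) - 1)
        v2 = sum((x - m2) ** 2 for x in group2) / (len(group2) - 1)
        # Pooled standard deviation across the two groups.
        s = math.sqrt(((len(group1) - 1) * v1 + (len(group2) - 1) * v2)
                      / (len(group1) + len(group2) - 2))
        return (m1 - m2) / s

    # Hypothetical treatment and control scores (illustrative values only).
    treatment = [12, 15, 14, 10, 13]
    control = [9, 11, 10, 8, 12]
    print(cohens_d(treatment, control))  # ES in standard deviation units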

3. A point to note in passing: A randomized controlled trial (RCT) alone does not guarantee that inferences can be drawn about the effects of treatment or that a particular component (e.g., use of an animal) was the basis for the change. In some RCTs, single, nonvalidated measures are used; improper statistics are reported (e.g., no direct statistical comparisons between groups, only examination of within-group change); and control groups are included that are unsuitable for drawing inferences about whether the presence of an animal was needed to explain any effects. In one effort to conduct a meta-analysis of AAI, the authors noted they could not complete the analyses in light of the paucity and poor quality of the studies (Kamioka et al., Citation2014).

4. Much of my research has consisted of RCTs to develop interventions for children referred for extremes of aggressive and antisocial behavior (Kazdin, in press). These studies have required protracted funding and approximately 5–7 years to complete.

5. “Qualitative” as a term occasionally is used to refer to case studies or unsystematic anecdotal narrative accounts (as might appear in a novel, a good book, or a docudrama). Qualitative research has descriptive features, but the methodology is a huge departure from such casual narratives because of its systematic, empirical, and scientific approach and its reliance on core tenets of science (e.g., replicability, reliability and validity, and developing and testing theory) (see Kazdin, Citation2017).

6. The futility of exhortations to improve methodology is nicely illustrated by the following example. Approximately six decades ago, a seminal review of psychological research concluded that, as a rule, studies do not have sufficient statistical power to detect small-to-medium effects when such effects really exist (Cohen, Citation1962). That is, most studies are likely to show no statistically significant differences even if there really are differences in the world. The problem has since been documented in many areas of biomedical research well beyond psychology, and the conclusions remain the same (Abdullah, Davis, Fabricant, Baldwin, & Namdari, Citation2015; Schutz, Je, Richards, & Choueiri, Citation2012; Tsang, Colley, & Lynd, Citation2009). Studies as a rule do not have sufficient power to detect small and medium effects (e.g., Bakker, van Dijk, & Wicherts, Citation2012; Kazdin & Bass, Citation1989; Maxwell, Citation2004). The exhortations did not seem to make a difference. What has had an impact is that some funding agencies and some journals require a statement of statistical power in grant applications or journal submissions. However, most research is not funded, and most journals do not impose this requirement.
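To make the scale of the power problem concrete, the sketch below uses the standard normal approximation to estimate how many participants per group a two-group comparison needs at 80% power; the function and the specific numbers are illustrative assumptions, not figures drawn from the studies cited above.

    from statistics import NormalDist

    def n_per_group(effect_size, alpha=0.05, power=0.80):
        # Approximate participants needed per group for a two-sample test,
        # using the normal approximation n = 2 * ((z_alpha + z_beta) / ES)^2.
        z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-tailed criterion
        z_beta = NormalDist().inv_cdf(power)
        return 2 * ((z_alpha + z_beta) / effect_size) ** 2

    # Small (0.2), medium (0.5), and large (0.8) effects, in Cohen's terms:
    for es in (0.2, 0.5, 0.8):
        print(es, round(n_per_group(es)))  # roughly 392, 63, and 25 per group

The jump in required sample size from medium to small effects illustrates why, as noted above, underpowered studies can fail to detect real differences.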
