
Mathematical expression and sampling issues of treatment contrasts: Beyond significance testing and meta-analysis to clinically useful research synthesis

Pages 58-75 | Received 31 Jan 2016, Accepted 01 Aug 2016, Published online: 01 Sep 2016

References

  • Abelson, R. P. (1997). A retrospective on the significance test ban of 1999 (If there were no significance tests, they would have to be invented). In L. A. Harlow, S. A. Mulaik, & J. H. Steiger (Eds.), What if there were no significance tests? (pp. 117–141). Mahwah, NJ: Erlbaum.
  • Adolf, J., Schuurman, N. K., Borkenau, P., Borsboom, D., & Dolan, C. V. (2014). Measurement invariance within and between individuals: A distinct problem in testing the equivalence of intra- and inter-individual model structures. Frontiers in Psychology, 5, 883. doi: 10.3389/fpsyg.2014.00883
  • Aiken, L. S., & West, S. G. (1991). Multiple regression: Testing and interpreting interactions. Thousand Oaks, CA: Sage.
  • Alexe, G., Alexe, S., Bonates, T. O., & Kogan, A. (2007). Logical analysis of data – the vision of Peter L. Hammer. Annals of Mathematics and Artificial Intelligence, 49, 265–312. doi: 10.1007/s10472-007-9065-2
  • Andersson, G. (1999). The role of meta-analysis in the significance test controversy. European Psychologist, 4, 75–82. doi: 10.1027//1016-9040.4.2.75
  • van Assen, M. A. L. M., van Aert, R. C. M., & Wicherts, J. M. (2015). Meta-analysis using effect size distributions of only statistically significant studies. Psychological Methods, 20, 293–309. doi: 10.1037/met0000025
  • Bakker, M., van Dijk, A., & Wicherts, J. M. (2012). The rules of the game called psychological science. Perspectives on Psychological Science, 7, 543–554. doi: 10.1177/1745691612459060
  • Balluerka, N., Gómez, J., & Hidalgo, D. (2005). The controversy over null hypothesis significance testing revisited. Methodology, 1, 55–70. doi: 10.1027/1614-1881.1.2.55
  • Barth, J., Munder, T., Gerger, H., Nüesch, E., Trelle, S., Znoj, H., … Cuijpers, P. (2013). Comparative efficacy of seven psychotherapeutic interventions for patients with depression: A network meta-analysis. PLoS Medicine. doi: 10.1371/journal.pmed.1001454
  • Baskin, T. W., Tierney, S. C., Minami, T., & Wampold, B. E. (2003). Establishing specificity in psychotherapy: A meta-analysis of structural equivalence of placebo controls. Journal of Consulting and Clinical Psychology, 71, 973–979. doi: 10.1037/0022-006X.71.6.973
  • Bauer, D. J., & Curran, P. J. (2005). Probing interactions in fixed and multilevel regression: Inferential and graphical techniques. Multivariate Behavioral Research, 40, 373–400. doi: 10.1207/s15327906mbr4003_5
  • Benjamini, Y., & Braun, H. (2002). John W. Tukey’s contributions to multiple comparisons. The Annals of Statistics, 30, 1576–1594. doi: 10.1214/aos/1043351247
  • Bennett, D. J. (1998). Randomness. Cambridge, MA: Harvard University Press.
  • Berger, V. W. (2005). Selection bias and covariate imbalances in RCTs. New York, NY: Wiley.
  • Berger, V. W. (2006). A review of methods for ensuring the comparability of comparison groups in randomized clinical trials. Reviews on Recent Clinical Trials, 1, 81–86. doi: 10.2174/157488706775246139
  • Beyth-Marom, R., Fidler, F., & Cumming, G. (2008). Statistical cognition: Towards evidence based practice in statistics and statistics education. Statistics Education Research Journal, 7, 20–39.
  • Böhnke, J. R., & Croudace, T. J. (2015). Factors of psychological distress: Clinical value, measurement substance, and methodological artefacts. Social Psychiatry and Psychiatric Epidemiology, 50, 515–524. doi: 10.1007/s00127-015-1022-5
  • Borenstein, M. (2009). Effect sizes for continuous data. In H. Cooper, L. V. Hedges, & J. C. Valentine (Eds.), The handbook of research synthesis and meta-analysis (2nd ed., pp. 221–235). New York, NY: Russell Sage Foundation.
  • Borenstein, M., Hedges, L. V., Higgins, J. P. T., & Rothstein, H. R. (2009). Introduction to meta-analysis. New York, NY: Wiley.
  • Borman, G. D., & Grigg, J. A. (2009). Visual and narrative interpretation. In H. Cooper, L. V. Hedges, & J. C. Valentine (Eds.), The handbook of research synthesis and meta-analysis (2nd ed., pp. 497–519). New York, NY: Russell Sage Foundation.
  • Borsboom, D. (2008). Latent variable theory. Measurement, 6, 25–53.
  • Brown, G. S. (1957). Probability and scientific inference. London: Longmans, Green.
  • Brown, S. D., Furrow, D., Hill, D. F., Gable, J. C., Porter, L. P., & Jacobs, W. J. (2014). A duty to describe: Better the devil you know than the devil you don’t. Perspectives on Psychological Science, 9, 626–640. doi: 10.1177/1745691614551749
  • Chow, S. L. (1996). Statistical significance: Rationale, validity, and utility. Beverly Hills, CA: Sage.
  • Coburn, K. M. & Vevea, J. L. (2015). Publication bias as a function of study characteristics. Psychological Methods, 20, 310–330.
  • Cooper, H., & Koenka, A. C. (2012). The overview of reviews: Unique challenges and opportunities when research syntheses are the principle elements of new integrative scholarship. American Psychologist, 67, 446–462. doi: 10.1037/a0027119
  • Cronbach, L. J. (1982). Prudent aspirations for social inquiry. In W. H. Kruskal, (Ed.), The social sciences: Their nature and uses (pp. 61–81). Chicago, IL: The University of Chicago Press.
  • Cumming, G. (2010). Replication, prep, and confidence intervals: Comment prompted by Iverson, Wagenmakers, and Lee (2010); Lecoutre, Lecoutre, and Poitevineau (2010); and Maraun and Gabriel (2010). Psychological Methods, 15, 192–198. doi: 10.1037/a0019521
  • Cumming, G. (2014). The new statistics: Why and how? Psychological Science, 25, 7–29. doi: 10.1177/0956797613504966
  • Eddy, K. T., Dutra, L., Bradley, R., & Westen, D. (2004). A multidimensional meta-analysis of psychotherapy and pharmacotherapy for obsessive-compulsive disorder. Clinical Psychology Review, 24, 1011–1030. doi: 10.1016/j.cpr.2004.08.004
  • Feller, W. (1968). An introduction to probability theory and its applications. Vol. I. (3rd ed.). New York, NY: Wiley.
  • Ferguson, C. J., & Brannick, M. T. (2012). Publication bias in psychological science: Prevalence, methods for identifying and controlling, and implications for the use of meta-analyses. Psychological Methods, 17, 120–128. doi: 10.1037/a0024445
  • Fisher, R. A. (1966). The design of experiments (8th ed.). London: Oliver & Boyd.
  • Fleiss, J. L., & Berlin, J. A. (2009). Effect sizes for dichotomous data. In H. Cooper, L. V. Hedges, & J. C. Valentine (Eds.), The handbook of research synthesis and meta-analysis (2nd ed., pp. 237–253). New York, NY: Russell Sage Foundation.
  • Freedman, D. A. (2010). Statistical models and causal inference: A dialogue with the social sciences. New York, NY: Cambridge University Press.
  • Fritz, C. O., Morris, P. E., & Richler, J. L. (2012). Effect size estimates: Current use, calculations, and interpretation. Journal of Experimental Psychology: General, 141, 2–18. doi: 10.1037/a0024338
  • Glass, G. V. (2000). Meta-analysis at 25. Paper presented to Office of Special Education Programs Research Project Directors’ Conference, U. S. Department of Education. Washington DC, July 15, 1999. Retrieved May 15, 2016, from www.gvglass.info/papers/meta25.html
  • Greenland, S., Robins, J. M., & Pearl, J. (1999). Confounding and collapsibility in causal inference. Statistical Science, 14, 29–46. doi: 10.1214/ss/1009211805
  • Hacking, I. (1965). Logic of statistical inference. Cambridge: Cambridge University Press.
  • Hedges, L. V. (1982). Estimation of effect size from a series of independent experiments. Psychological Bulletin, 92, 490–499. doi: 10.1037/0033-2909.92.2.490
  • Higgins, J. P. T., Jackson, D., Barrett, J. K., Lu, G., Ades, A. E., & White, I. R. (2012). Consistency and inconsistency in network meta-analysis: Concepts and models for multi-arm studies. Research Synthesis Methods, 3, 98–110. doi: 10.1002/jrsm.1044
  • Hogben, L. (1957). Statistical theory: The relationship of probability, credibility, and error. London: Allen & Unwin.
  • Horing, B., Weimer, K., Muth, E. R., & Enck, P. (2014). Prediction of placebo responses: A systematic review of the literature. Frontiers in Psychology, 5, 1079. doi:10.3389/fpsyg.2014.01079
  • Howard, K. I., Krause, M. S., & Orlinsky, D. (1986). The attrition dilemma: Toward a new strategy for psychotherapy research. Journal of Consulting and Clinical Psychology, 54, 106–110. doi: 10.1037/0022-006X.54.1.106
  • Howard, K. I., Krause, M. S., & Vessey, J. (1994). Analyzing clinical trial data: The problem of outcome overlap. Psychotherapy: Theory, Research, Practice, Training, 31, 302–307. doi: 10.1037/h0090213
  • Hunter, J. E., & Schmidt, F. L. (2004). Methods of meta-analysis (2nd ed.). Thousand Oaks, CA: Sage.
  • Hussong, A. M., Curran, P. J., & Bauer, D. J. (2013). Integrative data analysis in clinical psychology research. Annual Review of Clinical Psychology, 9, 61–89. doi: 10.1146/annurev-clinpsy-050212-185522
  • Ioannidis, J. P. A. (2011). Meta-research: The art of getting it wrong. Research Synthesis Methods, 1, 169–184. doi: 10.1002/jrsm.19
  • Ioannidis, J. P. A. (2014). How to make more published research true. PLoS Medicine, 11(10), 1–6. doi: 10.1371/journal.pmed.1001747
  • Kanji, G. K. (1993). 100 statistical tests. London: Sage.
  • Kelley, K., & Preacher, K. J. (2012). On effect size. Psychological Methods, 17, 137–152. doi: 10.1037/a0028086
  • Kish, L. (1965). Survey sampling. New York, NY: Wiley.
  • Kliem, S., Kröger, C., & Kosfelder, J. (2010). Dialectical behavior therapy for borderline personality disorder: A meta-analysis using mixed-effects modeling. Journal of Consulting and Clinical Psychology, 78, 936–951. doi: 10.1037/a0021015
  • Kline, R. B. (2011). Principles and practice of structural equation modelling (3rd ed.). New York, NY: Guilford.
  • Kline, R. B. (2013). Beyond significance testing: Statistics reform in the behavioral sciences (2nd ed.). Washington, DC: APA.
  • Kraemer, H. C., & Gibbons, R. D. (2009). Why does the randomized clinical trial methodology so often mislead clinical decision making? Focus on moderators and mediators of treatment. Psychiatric Annals, 39, 736–745. doi: 10.3928/00485713-20090625-06
  • Krause, M. S. (2005). How the psychotherapy research community must work toward measurement validity and why. Journal of Clinical Psychology, 61, 269–283. doi: 10.1002/jclp.20020
  • Krause, M. S. (2010). Trying to discover sufficient condition causes. Methodology, 6, 59–70. doi: 10.1027/1614-2241/a000007
  • Krause, M. S. (2011). Significance testing and clinical trials. Psychotherapy, 48, 217–222 & 234–236. doi: 10.1037/a0022088
  • Krause, M. S. (2012). Measurement validity is fundamentally a matter of definition, not correlation. Review of General Psychology, 16, 391–400. doi: 10.1037/a0027701
  • Krause, M. S. (2013a). The incompatibility of achieving a fully specified linear model with assuming that residual dependent-variable variance is random. Quality & Quantity, 47, 3201–3204. doi: 10.1007/s11135-012-9712-5
  • Krause, M. S. (2013b). The data analytic implications of human psychology’s dimensions being ordinally scaled. Review of General Psychology, 17, 318–325. doi: 10.1037/a0032292
  • Krause, M. S. (2016). Case sampling for psychotherapy practice, theory, and policy guidance: Qualities and quantities. Psychotherapy Research, 26, 530–544. doi: 10.1080/10503307.2015.1051161
  • Krause, M. S., & Howard, K. I. (2002). The Linear Model is a very special case: How to explore data for their full clinical implications. Psychotherapy Research, 12, 475–490. doi: 10.1093/ptr/12.4.475
  • Krause, M. S., & Howard, K. I. (2003). What random assignment does and does not do. Journal of Clinical Psychology, 59, 751–766. doi: 10.1002/jclp.10170
  • Krause, M. S., Howard, K. I., & Lutz, W. (1998). Exploring individual change. Journal of Consulting and Clinical Psychology, 66, 838–845. doi: 10.1037/0022-006X.66.5.838
  • Krause, M. S., & Lutz, W. (2006). How we really ought to be comparing treatments for clinical purposes. Psychotherapy: Theory, Research, Practice, Training, 43, 359–361. doi: 10.1037/0033-3204.43.3.359
  • Krause, M. S., & Lutz, W. (2009a). What should be used for baselines against which to compare treatments’ effectiveness? Psychotherapy Research, 19, 358–367. doi: 10.1080/10503300902926539
  • Krause, M. S., & Lutz, W. (2009b). Process transforms inputs to determine outcomes: Therapists are responsible for managing process. Clinical Psychology: Science and Practice, 16, 73–81.
  • Krause, M. S., Lutz, W., & Böhnke, J. R. (2011). The role of sampling in clinical trial design. Psychotherapy Research, 21, 243–251. doi: 10.1080/10503307.2010.549520
  • Krause, M. S., Lutz, W., & Saunders, S. M. (2007). Empirically certified treatments or therapists: The issue of separability. Psychotherapy: Theory, Research, Practice, Training, 44, 347–353. doi: 10.1037/0033-3204.44.3.347
  • Kruschke, J. K. (2013). Bayesian estimation supersedes the t test. Journal of Experimental Psychology: General, 142, 573–603. doi: 10.1037/a0029146
  • Laska, K. M., Gurman, A. S., & Wampold, B. E. (2014). Expanding the lens of evidence-based practice in psychotherapy: A common factors perspective. Psychotherapy, 51, 467–481. doi: 10.1037/a0034332
  • Lau, J., Ioannidis, J. P. A., & Schmid, C. H. (1998). Summing up evidence: One answer is not always enough. The Lancet, 351, 123–127. doi: 10.1016/S0140-6736(97)08468-7
  • Light, R. J., & Pillemer, D. B. (1984). Summing up: The science of reviewing research. Cambridge, MA: Harvard University Press.
  • Little, R. J., D’Agostino, R., Cohen, M. L., Dickersin, K., Emerson, S. S., Farrar, J. T., … Stern, H. (2012). The prevention and treatment of missing data in clinical trials. New England Journal of Medicine, 367, 1355–1360. doi: 10.1056/NEJMsr1203730
  • Lutz, W., Leach, C., Barkham, M., Lucock, M., Stiles, W. B., Evans, C., … Iveson, S. (2005). Predicting change for individual psychotherapy clients on the basis of their nearest neighbors. Journal of Consulting and Clinical Psychology, 73, 904–913. doi: 10.1037/0022-006X.73.5.904
  • de Maat, S., Dekker, J., Schoevers, R., & de Jonghe, F. (2007). The effectiveness of long-term psychotherapy: Methodological research issues. Psychotherapy Research, 17, 59–65. doi: 10.1080/10503300600607605
  • Maltz, M. D. (1994). Deviating from the mean: The declining significance of significance. Journal of Research in Crime and Delinquency, 31, 434–463. doi: 10.1177/0022427894031004005
  • Matt, G. E., & Cook, T. D. (2009). Threats to the validity of generalized inferences. In H. Cooper, L. V. Hedges, & J. C. Valentine (Eds.), The handbook of research synthesis and meta-analysis (2nd ed., pp. 537–560). New York, NY: Russell Sage Foundation.
  • Mayo, D. G. (1996). Error and the growth of experimental knowledge. Chicago, IL: University of Chicago Press.
  • McClintock, C. C., Brannon, D., & Maynard-Moody, S. (1979). Applying the logic of sample surveys to qualitative case studies: The case cluster method. Administrative Science Quarterly, 24, 612–629. doi: 10.2307/2392367
  • McGrath, R. E., & Meyer, G. J. (2006). When effect sizes disagree: The case of r and d. Psychological Methods, 11, 386–401. doi: 10.1037/1082-989X.11.4.386
  • Mervis, J. (2014). Why null results rarely see the light of day. Science, 345, 992. doi: 10.1126/science.345.6200.992
  • Miller, G. A., & Chapman, J. P. (2001). Misunderstanding analysis of covariance. Journal of Abnormal Psychology, 110, 40–48. doi: 10.1037/0021-843X.110.1.40
  • Moher, D., Liberati, A., Tetzlaff, J., & Altman, D. G. (2009). The PRISMA group: Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. PLoS Medicine, 6(7), e1000097. doi:10.1371/journal.pmed.1000097
  • Mulaik, S. A., Raju, N. S., & Harshman, R. A. (1997). There is a time and a place for significance testing. In L. A. Harlow, S. A. Mulaik, & J. H. Steiger (Eds.), What if there were no significance tests? (pp. 65–115). Mahwah, NJ: Erlbaum.
  • Nickerson, R. S. (2000). Null hypothesis significance testing: A review of an old and continuing controversy. Psychological Methods, 5, 241–301. doi: 10.1037/1082-989X.5.2.241
  • Onwuegbuzie, A. J., & Levin, J. R. (2003). Without supporting statistical evidence, where would reported measures of substantive importance lead to? To no good effect. Journal of Modern Applied Statistical Methods, 2, 133–151. doi: 10.22237/jmasm/1051747920
  • Orlinsky, D. E., Rønnestad, M. H., & Willutzki, U. (2004). Fifty years of psychotherapy process-outcome research: Continuity and change. In M. J. Lambert (Ed.), Bergin and Garfield’s handbook of psychotherapy and behavior change (5th ed., pp. 307–389). New York, NY: Wiley.
  • Paul, G. L. (1967). Strategy of outcome research in psychotherapy. Journal of Consulting Psychology, 31, 109–118. doi: 10.1037/h0024436
  • Pfammatter, M., Junghan, U. M., & Brenner, H. D. (2006). Efficacy of psychological therapy in schizophrenia: Conclusions from meta-analyses. Schizophrenia Bulletin, 32, S64–S80. doi: 10.1093/schbul/sbl030
  • Phillips, E. L. (2014). Psychotherapy revised: New frontiers in research and practice. New York, NY: Routledge.
  • Psychotherapy Research. (2015). Building collaboration and communication between researchers and clinicians [Special issue]. Psychotherapy Research, 25(1).
  • American Psychological Association. (2010). Meta-analysis reporting standards. In Publication manual of the American Psychological Association (6th ed., pp. 251–252). Washington, DC: Author.
  • Ramseyer, F., Kupper, Z., Caspar, F., Znoj, H., & Tschacher, W. (2014). Time-series panel analysis (TSPA): Multivariate modeling of temporal associations in psychotherapy process. Journal of Consulting and Clinical Psychology, 82, 828–838. doi: 10.1037/a0037168
  • Raudenbush, S. W. (2009). Analyzing effect sizes: Random-effects models. In H. Cooper, L. V. Hedges, & J. C. Valentine (Eds.), The handbook of research synthesis and meta-analysis (2nd ed., pp. 295–315). New York, NY: Russell Sage Foundation.
  • Roberts, J. K., & Henson, R. K. (2003). Not all effects are created equal: A rejoinder to Sawilowsky. Journal of Modern Applied Statistical Methods, 2, 226–230. doi: 10.22237/jmasm/1051748520
  • Rodgers, J. L. (2010). The epistemology of mathematical and statistical modeling: A quiet methodological revolution. American Psychologist, 65, 1–12. doi: 10.1037/a0018326
  • Rosenthal, R. (1984). Meta-analytic procedures for social research. Beverly Hills, CA: Sage.
  • Rosnow, R. L., & Rosenthal, R. (2009). Effect sizes: Why, when, and how to use them. Zeitschrift für Psychologie/Journal of Psychology, 217, 6–14. doi: 10.1027/0044-3409.217.1.6
  • Salmon, W. C. (1971). Statistical explanation. In W. C. Salmon (Ed.), Statistical explanation & statistical relevance (pp. 29–87). Pittsburgh, PA: University of Pittsburgh Press.
  • Schmidt, F. L. (1996). Statistical significance testing and cumulative knowledge in psychology: Implications for training of researchers. Psychological Methods, 1, 115–129. doi: 10.1037/1082-989X.1.2.115
  • Schmidt, F. L., & Hunter, J. E. (2005). Meta-analysis. In N. Anderson, D. S. Ones, H. K. Sinangil, & C. Viswesvaran (Eds.), Handbook of industrial, work, & organizational psychology, Volume I: Personnel psychology (pp. 51–71). Thousand Oaks, CA: Sage.
  • Schmidt, S. (2009). Shall we really do it again? The powerful concept of replication is neglected in the social sciences. Review of General Psychology, 13, 90–100. doi: 10.1037/a0015108
  • Schneider, J. W. (2013). Caveats for using statistical significance tests in research assessments. Journal of Informetrics, 7, 50–62. doi: 10.1016/j.joi.2012.08.005
  • Seidel, J. A., Miller, S. D., & Chow, D. L. (2014). Effect size calculations for the clinician: Methods and comparability. Psychotherapy Research, 24, 470–484. doi: 10.1080/10503307.2013.840812
  • Seidenfeld, T. (1979). Philosophical problems of statistical inference: Learning from R. A. Fisher. Boston, MA: Reidel.
  • Shadish, W. R., & Haddock, C. K. (2009). Combining estimates of effect size. In H. Cooper, L. V. Hedges, & J. C. Valentine (Eds.), The handbook of research synthesis and meta-analysis (2nd ed., pp. 257–277). New York, NY: Russell Sage Foundation.
  • Shadish, W. R., & Sweeney, R. B. (1991). Mediators and moderators in meta-analysis: There’s a reason we don’t let dodo birds tell us which psychotherapies should have prizes. Journal of Consulting and Clinical Psychology, 59, 883–893. doi: 10.1037/0022-006X.59.6.883
  • Shrier, I. (2011). Structural approach to bias in meta-analyses. Research Synthesis Methods, 2, 223–237. doi: 10.1002/jrsm.52
  • Sidani, S. (2006). Random assignment: A systematic review. In R. R. Bootzin & P. E. McKnight (Eds.), Strengthening research methodology: Psychological measurement and evaluation (pp. 125–141). Washington, DC: APA.
  • Stiles, W. B., Honos-Webb, L., & Surko, M. (1998). Responsiveness in psychotherapy. Clinical Psychology: Science and Practice, 5(4), 439–458.
  • Stuart, A., Ord, J. K., & Arnold, S. (1999). Kendall’s advanced theory of statistics, Volume 2a: Classical inference and the linear model (6th ed.). London: Arnold.
  • Stuart, E. A. (2010). Matching methods for causal inference: A review and a look forward. Statistical Science, 25, 1–21. doi: 10.1214/09-STS313
  • Sutton, A. J. (2009). Publication bias. In H. Cooper, L. V. Hedges, & J. C. Valentine (Eds.), The handbook of research synthesis and meta-analysis (2nd ed., pp. 435–452). New York, NY: Russell Sage Foundation.
  • Sutton, A. J., & Higgins, J. P. T. (2008). Recent developments in meta-analysis. Statistics in Medicine, 27, 625–650. doi: 10.1002/sim.2934
  • Swift, J. K., & Greenberg, R. P. (2014). A treatment by disorder meta-analysis of dropout from psychotherapy. Journal of Psychotherapy Integration, 24, 193–207. doi: 10.1037/a0037512
  • Trafimow, D. (2003). Hypothesis testing and theory evaluation at the boundaries: Surprising insights from Bayes’ theorem. Psychological Review, 110, 526–535. doi: 10.1037/0033-295X.110.3.526
  • Tukey, J. W. (1977). Some thoughts on clinical trials, especially problems of multiplicity. Science, 198, 679–684. doi: 10.1126/science.333584
  • van der Veer, R., van Ijzendoorn, M., & Valsiner, J. (1994). Epilogue. In R. van der Veer, M. van Ijzendoorn, & J. Valsiner (Eds.), Reconstructing the mind: Replicability in research on human development (pp. 271–283). Norwood, NJ: Ablex.
  • Wachter, K. W., & Straf, M. L. (Eds.). (1990). The future of meta-analysis. New York, NY: Russell Sage Foundation.
  • Walker, H. M. (1929). Studies in the history of statistical method. Baltimore, MD: Williams & Wilkins.
  • West, S. G., & Thoemmes, F. (2010). Campbell’s and Rubin’s perspectives on causal inference. Psychological Methods, 15, 18–37. doi: 10.1037/a0015917
  • Ziliak, S. T., & McCloskey, D. N. (2008). The cult of statistical significance. Ann Arbor: University of Michigan Press.
