Original Articles

What Statistical Significance Testing Is, and What It Is Not

Pages 293-316 | Published online: 15 Apr 2014

