Abstract
Methodologists have long written about the importance of attending to basic details in quantitative research, yet there has been little research investigating actual methodological practice in the social sciences. This study assessed the extent to which methodological innovations and recommended practices are adopted by researchers voluntarily. In particular, I use the case of power analysis and effect size reporting as the primary example, but I also examine other reporting behaviours. Results show that while observed power and effect sizes in the educational psychology literature tend to be strong, researchers do not seem eager to adopt practices such as reporting effect sizes and power, nor do they tend to report their testing assumptions or the quality of their measurement. There is room for much improvement in how we attend to the basics of quantitative research, and it does not appear that persuasion and professional communication are effective in changing practice.
Notes
1. I would like to thank William R. Christensen II, Jason S. Gunter, and Cheryl Murdock for help with data extraction and earlier drafts of this paper. Some of this research was performed while the author was at the University of Oklahoma. Readers interested in incorporating desirable practices into their research may wish to refer to Best Practices in Quantitative Methods (Osborne, in press).
2. Many authors have acknowledged the significant issues with null hypothesis significance testing (NHST), and some (see particularly Killeen, in press) have proposed alternatives, such as the probability of replication, as a more interesting and useful replacement. Interested readers should examine Thompson (1993) and Fidler (2005) as an introduction to the issues.
3. It is impossible to demonstrate conclusively that these journals or years are truly representative of the field. To do so would require a much more comprehensive survey of the field, which is beyond the scope of this study. I believe it is reasonable to argue that these journals are at least reasonable representatives of the corpus of the literature, and I can see no rationale for arguing that the years selected should not be representative of the period of time immediately around them. If the writings published by Cohen and others around 1969 had had some effect, and if the imminent report by the APA Task Force influenced current practice, then in both cases the prevalence of these practices should have been inflated. Sadly, it is difficult to argue that these statistics, particularly those in Table , are inflated.
4. Due to space limitations, and because they are widely available, these algebraic conversions are not reproduced here but are available from the author upon request.