Educational Research and Evaluation
An International Journal on Theory and Practice
Volume 24, 2018 - Issue 1-2
Editorial

Being cautious about what “research shows”

Supervisors of theses and dissertations routinely castigate their students for using the phrase “research shows” or “research has shown”, and they often delight in showing (off) that, contrary to their students’ claims, research has not shown such and such, thereby scuppering or qualifying the questionable claims proclaimed so happily by the students in their charge. The same holds for other researchers: authors of articles are also advised to provide more than one possible explanation for their research findings and to indicate why their favoured explanation holds the most water. All too often, enthusiastic and perhaps ambitious or novice researchers make claims for what their research shows when, in reality, their research simply does not show it.

Reviewing what a piece of research really “shows” is an important and worthwhile pursuit, as such claims are subject to many questions not only about external validity – generalizability and wider applicability – but also about internal validity, that is, whether the claims made are supported by the research design together with the evidence and analysis provided by the author. For example, Simpson’s paper in this volume usefully dissects and takes to task a paper by Ainsworth et al. (2015), demonstrating that their interpretation of effect size “is fundamentally flawed”, that “the fact that two different tests, with different designs and disjoint questions . . . have very different effect sizes should not be a surprise to anyone who avoids the effect-size category error”, and that their paper “attempts to answer a question that never actually needed to be asked, addresses it with flawed logic, and reaches a conclusion which should already have been appreciated”. Such analytical, carefully explained criticism is important for researchers.
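To see why this should not be a surprise, consider a minimal sketch with purely hypothetical numbers (not drawn from Ainsworth et al. or Simpson): a standardized effect size divides a raw mean difference by the outcome measure’s own standard deviation, so the same average gain can yield very different effect sizes on two tests that simply spread scores differently.

```python
# Illustrative sketch with hypothetical numbers (not data from the papers discussed).
# Cohen's d divides a raw mean difference by the outcome measure's standard
# deviation, so an identical 5-point average advantage can look like quite
# different "effect sizes" on two tests with different score spreads.

def cohens_d(mean_treatment: float, mean_control: float, pooled_sd: float) -> float:
    """Standardized mean difference (Cohen's d)."""
    return (mean_treatment - mean_control) / pooled_sd

# Both hypothetical tests record the same 5-point average advantage.
d_broad = cohens_d(55.0, 50.0, pooled_sd=25.0)   # broad test, widely spread scores
d_narrow = cohens_d(55.0, 50.0, pooled_sd=10.0)  # narrow test, tightly clustered scores

print(f"Broad test:  d = {d_broad:.2f}")   # d = 0.20
print(f"Narrow test: d = {d_narrow:.2f}")  # d = 0.50
```

Nothing about the underlying gain differs between the two calculations; only the measuring instrument does.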

The extent to which a finding, however strong its internal validity, might apply to another context or might differ under different conditions and with different factors included, is unclear, even contentious (Cartwright & Hardie, 2012). Given this, contributing authors might simply avoid making claims as to what their research shows, leaving it perhaps irresponsibly to the readers alone to make of it what they will; being inconclusive is perhaps the last resort of the crouching contributor.

In the face of questions about what the research really “shows” and what can be taken from it, prudence often prevails: authors should indicate that what their research shows holds only under certain “iff” (if and only if) requirements, for example by stating that in such-and-such a context, under such-and-such a set of conditions, taking account of such-and-such a set of qualifications or constraints, and mindful of such-and-such a set of alternative explanations, the results suggest rather than prove. Indeed, in the articles in the present issue, the authors are careful to bound their conclusions. For example, Thurlings and Den Brok use the phrase “[o]n the basis of the findings”; Wijsman, Saab, Warrens, Van Driel, and Westenberg write that “the positive association between favouring a subject and performance on that subject suggests that it is important that a student continues to favour a given subject” (italics added) and that “we cannot make any statements about . . .”; and Jacobs and Wolbers write that “the findings of this article imply that . . .” and “we only examined . . .” (italics added). Such caution wisely recognizes that educational research must both acknowledge and disclose its boundaries, including the iff requirements mentioned above; without these, the boundaries might be arbitrary, post-hoc self-protection.

What can really be taken from a piece of research and applied elsewhere is often unavoidably unclear. Such uncertainty also calls into question, for example, the familiar appeal to, indeed the requirement of – or resort to – the ceteris paribus condition in randomised controlled trials (RCTs), even though Fisher’s The Design of Experiments (1966) makes a fundamental case for it. Indeed, theories of chaos and complexity challenge the appeal to the ceteris paribus condition. The ceteris paribus clause may be no more than a statistical article of faith rather than an empirical certainty, constitutionally unable to catch the irreducible richness and evolving dynamics of the everyday world of children and teachers; an impoverished, unworthy race to the bottom in considering the nature and purpose of education in the real lives of sentient humans. As Mitchell (2010) remarks, “[c]eteris paribus laws are problematic since ‘other things being equal’ may not in fact occur in situations for empirical test and explanation” (p. 141) and “[t]he cost of the ceteris paribus clause is high. First, although making a generalization universally true in this way can always be done, it is at the risk of vacuity” (pp. 141–142). Further, where is the wisdom of seeking to apply educational research derived from those RCTs which only deal in averages, to students whose uniqueness, diversity, humanity, make-up, and differences are to be celebrated rather than suppressed or overlooked, despite claims that might be made for the similarity between people? We are still searching for an “average” child.
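The worry about averages can be made concrete with a purely hypothetical, simulated sketch (an assumption of this illustration, not a result from any trial discussed here): a clearly positive average treatment effect can coexist with a sizeable minority of individuals for whom the estimated effect is negative, and the average alone cannot reveal this.

```python
# Illustrative sketch with simulated, hypothetical data: a positive average
# treatment effect can mask many individuals for whom the effect is negative.
import random

random.seed(1)

# Simulate individual treatment effects for 1,000 hypothetical students:
# a modest benefit on average, but wide variation from student to student.
effects = [random.gauss(mu=2.0, sigma=5.0) for _ in range(1000)]

average_effect = sum(effects) / len(effects)
harmed = sum(1 for e in effects if e < 0)

print(f"Average treatment effect: {average_effect:.2f}")                # positive overall
print(f"Students with a negative effect: {harmed} of {len(effects)}")  # a sizeable minority
```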

Totalising narratives, reductionist analyses, and simplistic claims from research are all too easy to make, even in the age of big data and totalitarian regimes. How do they overcome the ecological fallacy or even take seriously the here-and-now lived experiences of children, each unique, growing up in a world marked by diversity and the call to creativity? What “research shows”, as the papers in this issue demonstrate, must really be cautious, bounded, contextualized, and faithful to the panoply of conditions operating in the situation. Without this, it risks being little more than “sounding brass or a tinkling cymbal”, that is, empty.

References

  • Ainsworth, H., Hewitt, C. E., Higgins, S., Wiggins, A., Torgerson, D. J., & Torgerson, C. J. (2015). Sources of bias in outcome assessment in randomised controlled trials: A case study. Educational Research and Evaluation, 21(1), 3–14. doi: 10.1080/13803611.2014.985316
  • Cartwright, N., & Hardie, J. (2012). Evidence-based policy: A practical guide to doing it better. Oxford, UK: Oxford University Press.
  • Fisher, R. A. (1966). The design of experiments (8th ed.). New York, NY: Hafner.
  • Mitchell, S. (2010). Complexity and explanation in the social sciences. In C. Mantzavinos (Ed.), Philosophy of the social sciences: Philosophical theory and scientific practice (pp. 130–145). Cambridge, UK: Cambridge University Press. doi: 10.1017/CBO9780511812880.012
