Editorial

Improving research reporting to avoid waste in psychological research


Why do you publish your research findings? On reflection, most of us can identify many reasons that are not simply about improving the state of science or the health and well-being of those we study: degree requirements, career development and the requirements of funders, to name but a few. The demands of these additional drivers, combined with tight timeframes and the expectations of all parties involved, can unintentionally influence both the quality of reporting and the slant of a paper’s message.

There are some basic principles in research reporting that are widely adhered to. Hill (1965) identified four questions that a research report should answer: what questions were addressed and why, what was done, what was shown and what the findings mean. Most research papers follow this format. However, it has become increasingly apparent that much research falls short of adequate reporting (Glasziou et al., 2014).

Glasziou et al. (2014) identify three good reasons why we need to improve research reporting. The first is replicability. In the biggest project of its kind, the Open Science Collaboration repeated 100 studies reported in 98 original papers from three psychology journals, to see whether independent teams would come up with the same results (Open Science Collaboration, 2015). Only 39 of the 100 replication attempts were successful. The project has sparked much debate, and not just within psychology. While the issues it raises are complex and beyond the scope of this editorial, it does highlight the contribution that standardised, complete reporting could make to improving the replicability of our work.

The second reason is distortion of findings. Psychology has had adverse publicity in recent years around fraud (see, for example, http://www.sciencemag.org/news/2012/09/harvard-psychology-researcher-committed-fraud-us-investigation-concludes); however, there are no grounds for concluding that research fraud is any more common in psychology than in other disciplines. In a pooled estimate from 18 surveys, Fanelli (2009) found that 2% of scientists admit to having fabricated, falsified or modified data at least once.

A more common form of distortion is spin. In an analysis of 72 randomised controlled trials, Boutron, Dutton, Ravaud, and Altman (2010) found that results were distorted in reports of trials with non-significant differences in the primary outcome. The temptation in such studies is to focus on additional results, such as subgroup analyses or analyses of secondary outcomes, rather than on the primary outcome or hypothesis. It is important that all findings are reported, and set within the context of the wider body of research, so that the reader can make a more complete evaluation of their value. For this reason, authors are strongly encouraged to register their trial and review protocols prospectively to provide a statement of intent, reduce duplication and reduce the opportunity for reporting bias (http://www.crd.york.ac.uk/PROSPERO/about.php?about=about).

The Journal of Reproductive and Infant Psychology is committed to publishing both significant and non-significant findings. Rigorously conducted studies, irrespective of outcome, are worthy of publication and need to be comprehensively reported to support replication and to fully inform the reader. Ultimately, understanding why an effect has not been detected in an appropriately designed study is as crucial for future work as a positive finding.

Finally, the usability of research is possibly the biggest concern. Chalmers and Glasziou (2009) suggest that at least 50% of research reports are not usable because they are poorly written or incomplete, representing a waste of tens of billions of pounds. To counter this, reporting guidelines have evolved in recent years, and as a starting point to improving reporting standards we are requesting that standardised reporting guidelines are used and cited when preparing manuscripts for the Journal of Reproductive and Infant Psychology. This should make preparing and reviewing manuscripts easier for all concerned, and we hope it will contribute to the reduction of waste in science. The CONSORT checklist for RCTs is well established (http://www.consort-statement.org), as is the PRISMA checklist for systematic reviews (http://www.prisma-statement.org). Less well known, but equally helpful, is the range of reporting standards available for other designs: for example, the STROBE checklist for cohort, case-control and cross-sectional studies (http://strobe-statement.org/fileadmin/Strobe/uploads/checklists/STROBE_checklist_v4_combined.pdf).

Information on the checklists you can use is available through the EQUATOR (Enhancing the QUAlity and Transparency Of health Research) network (http://www.equator-network.org/reporting-guidelines/) and will also be posted on the Journal’s website. A workshop at this year’s annual Society for Reproductive and Infant Psychology conference in Leeds will highlight how these reporting standards can improve the readability of your paper. We value your research contribution, and we want to enhance its usability and its contribution to what we know about the psychological well-being of women and infants around the time of birth.

Fiona Alderdice
Chair in Perinatal Health and Well-being, School of Nursing and Midwifery, Queen’s University Belfast, Belfast, Northern Ireland
[email protected]
James Newham
Postdoctoral Research Associate, Institute of Health and Society, Newcastle University, Newcastle, UK
[email protected]
John Worobey
Professor of Nutritional Sciences, School of Environmental and Biological Sciences, Rutgers University, New Brunswick, NJ, USA
[email protected]

References

  • Boutron, I., Dutton, S., Ravaud, P., & Altman, D. G. (2010). Reporting and interpretation of randomized controlled trials with statistically nonsignificant results for primary outcomes. JAMA, 303, 2058–2064. doi:10.1001/jama.2010.651
  • Chalmers, I., & Glasziou, P. (2009). Avoidable waste in the production and reporting of research evidence. Lancet, 374, 86–89. doi:10.1016/S0140-6736(09)60329-9
  • Fanelli, D. (2009). How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. PLoS ONE, 4, e5738. doi:10.1371/journal.pone.0005738
  • Glasziou, P., Altman, D. G., Bossuyt, P., Boutron, I., Clarke, M., Julious, S., … Wager, E. (2014). Reducing waste from incomplete or unusable reports of biomedical research. Lancet, 383, 267–276. doi:10.1016/S0140-6736(13)62228-X
  • Hill, A. B. (1965). The reasons for writing. BMJ, 2, 870.
  • Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716. doi:10.1126/science.aac4716
