Editorial

Improving psychological science: further thoughts, reflections and ways forward


1. An introduction to Cogent Psychology

Cogent Psychology is a pioneering and dynamic Open Access journal for the psychology community, publishing original research, reviews, and replications that span the full spectrum of psychological inquiry. In 2021, it relaunched with a new Editor-in-Chief and Section Editors and an exciting vision to combine open access publishing with open research practices. As such, the journal welcomes traditional and new article formats, including Registered Reports, Brief Replication Reports, Review Articles, and Brief Reports. This broader range of formats is designed to reflect the evolving nature of psychological research and open science approaches. To the best of our knowledge, no other psychology journal offers such a distinctive combination of article publishing formats. Moreover, we welcome submissions in nine key areas of psychological science: Clinical Psychology, Cognitive & Experimental Psychology, Developmental Psychology, Educational Psychology, Health Psychology, Neuropsychology, Personality & Individual Differences, Social Psychology, and Work, Industrial & Organisational Psychology.

In the last 5–10 years, much has changed in how science is conducted, and in how psychological science is performed in particular. One of the key drivers of change was the publication of the Open Science Collaboration’s (2015) paper estimating the reproducibility of psychological science. This international collaborative effort set out to replicate 100 experimental and correlational studies from three leading psychology journals. The findings were alarming: fewer than 40% of the studies were successfully replicated. Numerous factors—known as questionable research practices or “QRPs”—have been suggested to explain these low levels of replication, including low statistical power, hypothesizing after the results are known (HARKing), p-hacking, the “garden of forking paths” and failure to control for biases (Gelman & Loken, 2013; Kerr, 1998; Munafò et al., 2017; Norris & O’Connor, 2019). Following this so-called Replication Crisis, it has been argued that psychological science has been undergoing a renaissance (O’Connor, 2021). Part of its “rebirth” has involved the development of numerous new tools and approaches to help improve replication and reproducibility and to reduce the use of questionable research practices. At Cogent Psychology, we are keen to support these efforts in order to help increase openness, integrity and reproducibility in scientific research and ultimately improve the robustness of our evidence base.

To this end, in addition to standard article formats, Cogent Psychology now offers two innovative publishing formats: Registered Reports and Brief Replication Reports. Registered Reports differ from conventional empirical articles in that part of the review process takes place before the researchers collect and analyse data. Unlike the more conventional scientific process, where a full report of empirical research is submitted for peer review, Registered Reports are evaluated as proposals for empirical research, on their merits, prior to the data being collected (see Chambers & Tzavella, 2022; https://osf.io/rr/). Once the Stage 1 Registered Report has received In Principle Acceptance, data collection can begin. Importantly, following successful data collection and analysis, the full Registered Report will be accepted for publication irrespective of the significance of the findings. Crucially, it is hoped this will help reduce the publication bias that favours statistically significant effects. Cogent Psychology also welcomes Brief Replication Reports. The main purpose of this format is to facilitate and simplify the publication of replication studies, in which researchers repeat previously published research, or present closely related results, with the aim of testing the validity of earlier findings, elaborating on them, developing academic knowledge and directing future research (see the Instructions for Authors for more details).

Cogent Psychology also encourages preregistration of all types of empirical studies (e.g., observational studies, randomised controlled trials and experimental studies), Brief Replication Studies, systematic reviews and meta-analyses. Preregistration of clinical trials, behaviour change interventions, and systematic reviews and meta-analyses is commonplace on repositories such as https://clinicaltrials.gov/, https://www.isrctn.com/ and https://www.crd.york.ac.uk/PROSPERO/. However, preregistration of other observational and experimental studies is less common. Preregistration of study plans before conducting a study has been identified as an important tool to help increase the transparency of science and to improve the robustness of psychological research findings (Bosnjak et al., 2022). To this end, in addition to other existing options, a new template for the preregistration of quantitative empirical studies in psychology—known as the Psychological Research Preregistration-Quantitative (PRP-QUANT) Template—has recently been introduced (https://doi.org/10.23668/psycharchives.4584; for other helpful open research primers see https://www.ukrn.org/primers/).

As outlined above, there have been many excellent developments in improving how psychological science is conducted, many of which are being adopted by Cogent Psychology. However, there are a number of other important issues that psychological researchers should also consider as we continue to improve psychological science and related disciplines. This editorial turns to some of these next.

2. Need for greater transparency and openness

Whilst the estimated replication rate for cognitive studies in the seminal study of the Open Science Collaboration (2015) was better than the average rate across all studies (50% vs. 36%), there is clearly much room for improvement. At Cogent Psychology, we are ideally situated to make a considerable impact on replication, reproducibility, and the uptake of open research practices more generally. For example, although Cogent Psychology offers Brief Replication Reports that communicate attempts to replicate an already published finding, submissions that report new experimental cognitive findings can be strengthened by including a direct and/or conceptual replication (Brandt et al., 2014) of the finding within the same submission. Such submissions have replication “built in” by design, and as such help provide the field with reassurance as to the robustness of newly reported effects. Although not a prerequisite for acceptance, submissions that provide such reassurance will certainly be viewed as a priority for publication.

More broadly, experimental psychologists also have an important opportunity to maximise the openness of their research, including by sharing their experimental materials, their data, and their analysis code. Although sharing of data and analysis code is becoming common, sharing of experimental materials is less so. As many methods in the field of experimental cognitive psychology are digital (e.g., digital stimuli and computerised experiment scripts), the barrier to openly sharing all of the code and stimuli associated with our experimental work is low. As such, we strongly recommend that all submissions make their experimental methods openly available where possible.

Beyond the openness and reproducibility of experimental findings, Cogent Psychology also places strong emphasis on the openness and reproducibility of theoretical work. For example, computational modelling significantly aids rigorous theory building in cognitive psychology (e.g., Guest & Martin, 2021; Oberauer & Lewandowsky, 2019), and it also allows for clearer theoretical communication between researchers (Farrell & Lewandowsky, 2010). In contrast to verbal theories, theories expressed computationally can be communicated to other researchers by sharing the computer code. It is therefore important that we make our code openly available and take steps to ensure our models are fully reproducible. Cogent Psychology also welcomes submissions reporting studies aimed at assessing the impact of the many researcher degrees of freedom (Simmons et al., 2011) inherent in the modelling process on the inferences made (see, for example, Dutilh et al., 2019). We also welcome tutorial papers that help lower the barrier to entry for colleagues wishing to implement computational modelling in their own research programmes.
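To illustrate what a shareable, reproducible model implementation can look like, the brief Python sketch below simulates a simple random-walk evidence-accumulation model with a fixed random seed. It is a minimal illustration only; the function name, parameter values and seed are our own assumptions rather than any specific published model.

```python
# Minimal, illustrative sketch of a shareable computational model:
# a discrete random-walk evidence-accumulation model of a two-choice task.
# Parameter names and values (drift, threshold, noise_sd) are hypothetical.
import numpy as np

def simulate_random_walk(n_trials=1000, drift=0.3, threshold=2.0,
                         noise_sd=1.0, dt=0.05, seed=2022):
    """Simulate choices and response times; the fixed seed makes runs reproducible."""
    rng = np.random.default_rng(seed)
    choices, rts = [], []
    for _ in range(n_trials):
        evidence, t = 0.0, 0.0
        while abs(evidence) < threshold:
            evidence += drift * dt + rng.normal(0.0, noise_sd * np.sqrt(dt))
            t += dt
        choices.append(1 if evidence > 0 else 0)   # 1 = correct boundary reached
        rts.append(t)
    return np.array(choices), np.array(rts)

choices, rts = simulate_random_walk()
print(f"Accuracy: {choices.mean():.2f}, mean RT: {rts.mean():.2f}s")
```

Sharing even a short, fully specified script of this kind alongside a paper allows other researchers to reproduce, probe and extend the theoretical claims being made.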

3. Single-case studies in the context of replication and reproducibility

Many of the sections at Cogent Psychology are devoted to theoretical, experimental, and applied contributions that advance the understanding of cognitive and behavioural impairments in neurological conditions, their recovery and rehabilitation. Taken at face value, it may be assumed that an area such as neuropsychology has escaped the Reproducibility Crisis as it often deals with large effect sizes. However, replication problems might differ depending on the clinical disorder investigated and sample sizes can vary widely in neuropsychology.

Single-case studies (N = 1) are sometimes the only way to study rare neurological conditions, but replication attempts are rare (e.g., Rossetti et al., 2017; Rossit et al., 2018). In addition, in single-case studies it is hard to determine whether findings can generalise to other cases. Therefore, replication is crucial to establish the reliability and generalisability of neuropsychological findings, and at Cogent we welcome the submission of such replication attempts.

Data sharing holds significant promise to address the challenges of small samples as it allows testing the reliability and generalisability of findings across neurological cases, research groups, countries, languages, and cultures. Larger group studies can reveal important patterns of more prominent neurological disorders, but there is also an important need to understand how group data can inform us about the individual, both in terms of symptom presentation over time and intervention efficacy. Directly measuring within-subject variability in large cohort studies is critical to determine which failures to replicate are driven by a lack of single-subject analysis. For example, in neuropsychological rehabilitation, reproducibility is at least partially linked to how well group-level data represent individual responses to treatment.
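As a concrete, purely illustrative example of relating group-level change to individual responses, the short Python sketch below computes a reliable change index for each simulated case; the variable names, reliability value and cut-off are hypothetical assumptions rather than recommendations for any particular clinical measure.

```python
# Illustrative sketch: reliable change index (RCI) for judging whether an
# individual's pre-to-post change exceeds measurement error.
# All data, numbers and variable names are simulated/hypothetical.
import numpy as np

def reliable_change_index(pre, post, sd_pre, reliability):
    """RCI = (post - pre) divided by the standard error of the difference."""
    se_diff = sd_pre * np.sqrt(2) * np.sqrt(1 - reliability)
    return (post - pre) / se_diff

rng = np.random.default_rng(1)
pre = rng.normal(50, 10, size=200)                 # hypothetical baseline scores
post = pre + rng.normal(5, 8, size=200)            # mean improvement of 5 points
rci = reliable_change_index(pre, post, sd_pre=10, reliability=0.85)

print(f"Group mean change: {np.mean(post - pre):.1f}")
print(f"Proportion showing reliable improvement (RCI > 1.96): {np.mean(rci > 1.96):.2f}")
```

A group-level improvement can coexist with a substantial proportion of individuals showing no reliable change, which is exactly the kind of pattern open, person-level data make visible.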

A paradigm shift towards open research is needed in neuropsychology to accelerate the field and bring researchers and clinicians closer to important advances in assessment, diagnosis, and interventions for people with neurological conditions and their families. This shift would help determine whether neuropsychological findings are robust and should be implemented in clinical practice. Moreover, as outlined earlier, open materials, code, and data can provide research teams with access to the methods and outcomes of all studies, which in turn will facilitate replication, the combination of multiple datasets, and meta-analyses, further strengthening findings and their translation into clinical practice. Open data can also ultimately facilitate the investigation of population-, sample- and single-person-level effect sizes and even provide the foundations for testing a range of hypotheses (including the null) within a Bayesian framework.
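To give a flavour of that last point, the sketch below uses one simple, approximate route to a Bayes factor, a BIC-based approximation for a one-sample test of "mean = 0" versus "mean is free", on simulated data. It is a minimal illustration under Gaussian assumptions; the data and parameter counts are hypothetical and many fuller Bayesian approaches exist.

```python
# Illustrative sketch (one of several possible approaches): BIC-based
# approximation to a Bayes factor, BF01 ~ exp((BIC_alt - BIC_null) / 2).
# Data are simulated; constants common to both models cancel in the difference.
import numpy as np

def bic_gaussian(residual_ss, n, k):
    """BIC (up to a shared constant) for a Gaussian model with k free parameters."""
    return n * np.log(residual_ss / n) + k * np.log(n)

rng = np.random.default_rng(42)
x = rng.normal(0.2, 1.0, size=60)                 # hypothetical patient scores

n = len(x)
bic_null = bic_gaussian(np.sum(x ** 2), n, k=1)              # mean fixed at 0 (sigma free)
bic_alt = bic_gaussian(np.sum((x - x.mean()) ** 2), n, k=2)  # mean and sigma free
bf01 = np.exp((bic_alt - bic_null) / 2)                      # evidence for the null
print(f"Approximate BF01 (null over alternative): {bf01:.2f}")
```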

At Cogent Psychology we are excited to fully support this open research paradigm shift in neuropsychology: we encourage authors to consider study preregistration and replication, and we welcome the submission of new studies as Registered Reports or Brief Replication Reports. In a field of psychology with such direct ramifications for the care of neurological patients, the adoption of open, reproducible, and robust research practices is too important to be delayed.

4. Large-scale datasets and secondary data analyses: opportunities and challenges for open research

The availability of survey data from national and international studies, and from researchers who have posted their data to trusted repositories, presents both opportunities and challenges for open research. Combined with open analysis code, open data allows the primary findings of major surveys to be reproduced and the findings of published studies to be verified. Ensuring results are computationally reproducible is crucial for the integrity of psychological research, and journal editors have begun to call for stronger practice in this area (Aczel et al., 2020; Bauer, 2022). This is because providing open data and code allows journal reviewers and independent researchers to retrace how scientific findings are reached and to better understand the potential role of researcher degrees of freedom (e.g., the choice of statistical tests, variable coding, and exclusion criteria). By exposing research findings to stronger collective scrutiny, open data and code should increase confidence in the findings of psychological science.

Another major benefit of open data is that independent researchers can test entirely new ideas using pre-existing data. Secondary analysis of large-scale national and international surveys is already commonplace and has helped increase the utilisation and impact of publicly funded research. Efforts to improve data transparency (e.g., the Transparency and Openness Promotion guidelines; Nosek et al., 2015) and open data mandates from funding bodies are set to further accelerate the availability of research data. While the potential benefits of open data are vast, the proliferation of easily accessible secondary data presents significant challenges. Most critically, if researchers access data and test a range of hypotheses in different ways without openly declaring this practice, the result will be a high rate of false-positive findings (Simmons et al., 2011).

An array of approaches to reduce the rate of false positives arising from analyses of secondary data has been proposed, and these practices are welcomed across our sections. First, multiverse analysis or specification-curve analysis has been proposed as a way to understand the impact of analytical flexibility on study estimates (Simonsohn, Simmons, & Nelson, 2020). Multiverse analysis involves testing and presenting all plausible statistical models and can be implemented using independently developed packages in R, Stata, and Python (e.g., http://urisohn.com/specification-curve/). Multiverse analysis can reduce bias by making explicit the impact of using different specifications or examining different outcomes within a given study. Similarly, “outcome-wide” designs—where all relevant available outcomes included in a dataset are examined—have been proposed as a way to reduce the practice of cherry-picking outcomes related to the exposure of interest when examining secondary data. This approach allows the overall link between a predictor and a range of outcomes to be estimated (VanderWeele, Mathur, & Chen, 2020). Taken together, these approaches have the potential to substantially reduce the number of false-positive findings arising from analyses of secondary data.
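By way of illustration only, the Python sketch below runs a toy multiverse: the same exposure-outcome association is estimated under several plausible specifications (different adjustment sets and an exclusion rule) and all estimates are reported together. The data, variable names and specification list are hypothetical assumptions; the dedicated packages referenced above offer much fuller implementations.

```python
# Minimal multiverse-style sketch: estimate one association under several
# plausible specifications and report the whole set, not a single chosen model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({"age": rng.integers(18, 70, n),
                   "exposure": rng.normal(0, 1, n)})
df["outcome"] = 0.2 * df["exposure"] + 0.01 * df["age"] + rng.normal(0, 1, n)

# Hypothetical specifications: adjustment choices and an exclusion rule.
specifications = {
    "unadjusted": ("outcome ~ exposure", df),
    "age-adjusted": ("outcome ~ exposure + age", df),
    "under-60 only, age-adjusted": ("outcome ~ exposure + age", df[df["age"] < 60]),
}

results = []
for label, (formula, data) in specifications.items():
    fit = smf.ols(formula, data=data).fit()
    results.append({"specification": label,
                    "estimate": fit.params["exposure"],
                    "p": fit.pvalues["exposure"]})

print(pd.DataFrame(results))   # the full set of estimates across specifications
```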

The practice of reverse-engineering hypotheses from observed relationships in the data (i.e., HARKing) is more difficult to address using multiverse or outcome-wide analyses. Instead, it is crucial that researchers acknowledge when analyses are exploratory or hypothesis generating and perform a confirmatory test of post-hoc hypotheses in replication samples or a preregistered study (cf. Bosnjak et al., 2022; O’Connor, 2021). A related approach involves the use of “hold-out” or “split-sample” strategies to control the false discovery rate. Exploratory analyses are performed on a publicly available fraction of the data, and the hypotheses that emerge can be registered and tested on a portion of the data that was initially withheld by the data controller (Anderson & Magruder, 2017). Preregistering studies prior to applying for access to secondary data is another, perhaps more straightforward, approach to ensuring hypotheses are tested as planned. Of course, preregistration is less feasible when an application is not needed or the data have already been accessed by the research team. In this situation, detailed analysis protocols can be prepared and posted to a trusted repository in advance of beginning a new study drawing on the data, once again to make exploratory and confirmatory tests explicit. Therefore, at Cogent Psychology, we would welcome papers based upon large-scale datasets and secondary data analyses that follow the principles outlined above.
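As a minimal sketch of the split-sample idea, the Python example below divides a simulated dataset into an exploratory half and a held-out confirmatory half using a fixed, openly reported seed; the variables and the specific hypothesis are hypothetical and stand in for whatever emerges from the exploratory stage.

```python
# Illustrative split-sample workflow: explore on one random half, preregister
# the emerging hypothesis, then run a single confirmatory test on the other half.
# In practice the data would come from a secondary dataset; here they are simulated.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(7)
stress = rng.normal(0, 1, 2000)
df = pd.DataFrame({"stress": stress,
                   "sleep_quality": -0.3 * stress + rng.normal(0, 1, 2000)})

exploration = df.sample(frac=0.5, random_state=2022)   # fixed, openly reported seed
confirmation = df.drop(exploration.index)               # untouched until preregistration

# Stage 1: exploratory, hypothesis-generating analysis on one half
r_explore, _ = stats.pearsonr(exploration["stress"], exploration["sleep_quality"])

# ... preregister the specific hypothesis and analysis plan before Stage 2 ...

# Stage 2: single confirmatory test on the held-out half
r_confirm, p_confirm = stats.pearsonr(confirmation["stress"], confirmation["sleep_quality"])
print(f"Exploratory r = {r_explore:.2f}; confirmatory r = {r_confirm:.2f} (p = {p_confirm:.3f})")
```

The key design choice is that the confirmatory half is not inspected in any way until the hypothesis and analysis plan have been registered.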

5. Importance of greater collaborative working

As noted earlier, an additional critique of psychological science has been its reliance on small sample sizes, bringing statistical power into question. This has been particularly pertinent in research with hard-to-reach populations (e.g., individuals with developmental disorders) or groups of participants that may require high levels of resources to engage in the research process (e.g., infants). Open research practices have aimed to address this issue through cross-laboratory collaboration via initiatives such as Many Babies (https://manybabies.github.io), which includes researchers from 47 countries (but also see the Psychological Science Accelerator, a globally distributed network of psychological science laboratories; https://psysciacc.org/). Many Babies aims to replicate key findings in developmental science by pooling resources to address fundamental research questions. Interestingly, this approach not only addresses critical issues around sample size, transparency, and the sharing of advanced research methods, but also increases the diversity of both study samples and the researchers involved. Given that most of the research in developmental science has been generated with participants from Western, Educated, Industrialised, Rich, Democratic (WEIRD) samples, these international collaborative efforts are critical to ensure that scientific findings and theory development incorporate human diversity.

Increasingly, data sharing forms an important cornerstone of open research practices. This is important not just for increasing the transparency of decision-making processes and analyses, but also for facilitating data pooling and collaboration. Of course, data sharing is not without its issues around upholding ethical standards and protecting the confidentiality of participants. However, initiatives such as https://nyu.databrary.org have successfully and safely generated a large resource for educational and developmental psychology research through the housing of video and transcript data for further exploration. Access to the database is restricted to recognised researchers whose identities are verified by their host institutions. Major funders have also supported the development and maintenance of large-scale databases to encourage data sharing and data combining in order to make substantial advances in our understanding of critical areas of research. A specific example is “LDbase” (https://ldbase.org), funded by the National Institutes of Health in the United States, which aims to support big data approaches to understanding reading difficulties. When encouraging the use of secondary data sources and the combining of datasets, it is also important to emphasise the preregistration of data analytic plans (including how variables will be selected, how missing data will be dealt with, etc.). This not only enables researchers to plan their analytical approach, but also increases transparency in the context of having a “garden of forking paths” of statistical analysis options (Gelman & Loken, 2013).

At Cogent Psychology we would particularly welcome research that embeds open research practices within its workflow, with a focus on multi-laboratory collaboration and the inclusion of diverse samples. Through these approaches, we will increase the chances of generating impactful research that aims to improve outcomes for children and young people.

6. Statistical power and sample size justification

As outlined earlier, it is now well accepted that most research fields in psychology have historically had low statistical power; that is, studies have generally had a low probability of detecting the effect or effects they were interested in (e.g., Button et al., 2013; Cohen, 1962). It is also now common to see calculations estimating statistical power in papers, grant applications or preregistrations, at least in part as a response to mandates from journals or funding bodies. This sometimes backfires: researchers produce a calculation that will satisfy reviewers and funders rather than one that is informative about the study they are planning. Common benchmarks and thresholds (e.g., 80% power to detect a medium or large effect) encourage misunderstandings and poor practice (Baguley, 2004; Giner-Sorolla et al., 2020).

A better and more transparent approach is to think about the range of factors that influence the statistical power of a study and what constraints exist on those factors. For example, if you collect data from a small school with 80 children, n is capped at 80. Any justification of sample size in terms of the effect size you are trying to detect is likely to be spurious. It would be better to justify the sample size on pragmatic grounds (e.g., see Lakens, 2022). However, this sort of constraint does not mean that you should not think about, and plan to maximise, statistical power.

To understand why, it is important to realise that statistical power is not a number but a function or curve. For any particular study, the shape of the power curve depends on a range of parameters (representing the factors that influence your ability to detect an effect). Our aim in planning a study is not to predict a point on that curve (which for practical purposes will always be wrong). Rather, we want to understand how the shape of that curve is influenced by those parameters and (for the parameters that we can manipulate) pick values that give high statistical power across a range of plausible parameter values (sometimes termed a sensitivity analysis).
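A simulation-based sketch can make this concrete. The Python example below estimates power for a simple two-group comparison across a small grid of plausible effect sizes and sample sizes, tracing out part of the power curve rather than reporting a single number; the effect sizes, sample sizes and number of simulations shown are illustrative assumptions only.

```python
# Sketch of treating power as a curve: simulate power for a two-group
# comparison across a grid of plausible effect sizes and sample sizes
# (a simple sensitivity analysis). All values are illustrative.
import numpy as np
from scipy import stats

def simulated_power(n_per_group, effect_size, n_sims=2000, alpha=0.05, seed=0):
    """Estimate power by simulation for an independent-samples t-test."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        control = rng.normal(0.0, 1.0, n_per_group)
        treatment = rng.normal(effect_size, 1.0, n_per_group)
        if stats.ttest_ind(treatment, control).pvalue < alpha:
            hits += 1
    return hits / n_sims

for n in (40, 80, 160):
    row = [f"d={d}: {simulated_power(n, d):.2f}" for d in (0.2, 0.35, 0.5)]
    print(f"n per group = {n:>3}  " + "  ".join(row))
```

Extending the grid to include, say, different alpha levels or anticipated drop-out rates turns the same sketch into a fuller sensitivity analysis.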

Furthermore, even in a simple study there are options to increase statistical power, despite there being only a few parameters to worry about: usually n, alpha and effect size. Often the focus is on n, and this leads to neglect of other factors that influence statistical power, such as the design of the study (Baguley, 2004). Power planning should also be informed by the minimum effect size we would like to detect: the smallest effect size of interest, or SESOI (e.g., see Giner-Sorolla et al., 2020). Researchers are reluctant to shift alpha from the conventional .05 level, but there are good arguments for considering this (Lakens et al., 2018) or for adopting one-sided tests for preregistered hypotheses.

In a more complex study it may well be more important to focus on other factors that decrease statistical power—notably missing data or drop-out. Investing in preventing attrition will often have a greater practical impact on statistical power than increasing sample size (as well as being desirable for other reasons). It is still possible that after all this effort, power remains stubbornly low. Under these circumstances it may still be worth running the study, but focus should shift to making research synthesis easier. Open data, standardized protocols and (ideally) collaboration between research groups can facilitate this.

In summary, papers submitted to Cogent Psychology should include a clear justification for the choice of sample size. This need not always involve consideration of statistical power (notably for single-case or qualitative studies). Where statistical power analysis is involved, it is better to focus on understanding the sensitivity of your sampling strategy or design, and to make plausible assumptions about the research context, rather than treating statistical power as a way to arrive at a single, fixed, correct answer.

7. Conclusion

In conclusion, it is clear that in the last decade or so the discipline of psychology has made huge advances in how psychological science is conducted (Chambers & Tzavella, 2022; Munafò et al., 2017; O’Connor, 2020, 2021). The development of new tools, approaches and publication formats has helped to reduce the use of questionable research practices and will ultimately improve the robustness of our evidence base. We hope that the relaunched Cogent Psychology can play a role in helping to further improve psychological science, and that you will consider contributing by submitting some of your work to us in the near future.

Additional information

Funding

The authors received no direct funding for this research.

References

  • Aczel, B., Szaszi, B., Sarafoglou, A., Kekecs, Z., Kucharský, Š., Benjamin, D., & Wagenmakers, E. J. (2020). A consensus-based transparency checklist. Nature Human Behaviour, 4(1), 4–8. https://doi.org/10.1038/s41562-019-0772-6
  • Anderson, M. L., & Magruder, J. (2017). Split-sample strategies for avoiding false discoveries (NBER Working Paper No. 23544). National Bureau of Economic Research.
  • Baguley, T. (2004). Understanding statistical power in the context of applied research. Applied Ergonomics, 35(2), 73–80. https://doi.org/10.1016/j.apergo.2004.01.002
  • Bauer, P. J. (2022). Psychological science stepping up a level. Psychological Science, 33(2), 179–183. https://doi.org/10.1177/09567976221078527
  • Bosnjak, M., Fiebach, C., Mellor, D., Müller, S., O’Connor, D. B., Oswald, F. L., & Sokol, R. I. (2022). A template for preregistration of quantitative research in psychology: Report of the joint psychological societies preregistration task force. American Psychologist, 77(4), 602–615. https://doi.org/10.1037/amp0000879
  • Brandt, M. J., IJzerman, H., Dijksterhuis, A., Farach, F. J., Geller, J., Giner-Sorolla, R., Grange, J. A., Perugini, M., Spies, J. R., & van ’t Veer, A. (2014). The replication recipe: What makes for a convincing replication? Journal of Experimental Social Psychology, 50(1), 217–224. https://doi.org/10.1016/j.jesp.2013.10.005
  • Button, K., Ioannidis, J., Mokrysz, C., Nosek, B. A., Flint, J., Robinson, E. S. J., & Munafò, M. R. (2013). Power failure: Why small sample sizes undermine the reliability of neuroscience. Nature Reviews Neuroscience, 14(5), 365–376. https://doi.org/10.1038/nrn3475
  • Chambers, C. D., & Tzavella, L. (2022). The past, present and future of registered reports. Nature Human Behaviour, 6(1), 29–42. https://doi.org/10.1038/s41562-021-01193-7
  • Cohen, J. (1962). The statistical power of abnormal-social psychological research: A review. The Journal of Abnormal and Social Psychology, 65(3), 145–153. https://doi.org/10.1037/h0045186
  • Dutilh, G., Annis, J., Brown, S. D., Cassey, P., Evans, N. J., Grasman, R. P. P. P., Hawkins, G. E., Heathcote, A., Holmes, W. R., Krypotos, A.-M., Kupitz, C. N., Leite, F. P., Lerche, V., Lin, Y.-S., Logan, G. D., Palmeri, T. J., Starns, J. J., Trueblood, J. S., van Maanen, L., … Donkin, C. (2019). The quality of response time data inference: A blinded, collaborative assessment of the validity of cognitive models. Psychonomic Bulletin & Review, 26(4), 1051–1069. https://doi.org/10.3758/s13423-017-1417-2
  • Farrell, S., & Lewandowsky, S. (2010). Computational models as aids to better reasoning in psychology. Current Directions in Psychological Science, 19(5), 329–335. https://doi.org/10.1177/0963721410386677
  • Gelman, A., & Loken, E. (2013). The garden of forking paths: Why multiple comparisons can be a problem, even when there is no “fishing expedition” or “p-hacking” and the research hypothesis was posited ahead of time. http://www.stat.columbia.edu/~gelman/research/unpublished/p_hacking.pdf
  • Giner-Sorolla, R., Carpenter, T., Lewis, N. A., Jr., Montoya, A. K., Aberson, C. L., Bostyn, D. H., Conrique, B. G., Ng, B. W., Reifman, A., Schoemann, A. M., & Soderberg, C. (2020). Power to detect what? Considerations for planning and evaluating sample size. Retrieved from https://osf.io/d3v8t
  • Guest, O., & Martin, A. E. (2021). How computational modeling can force theory building in psychological science. Perspectives on Psychological Science, 16(4), 789–802. https://doi.org/10.1177/1745691620970585
  • Kerr, N. L. (1998). HARKing: Hypothesizing after the results are known. Personality and Social Psychology Review, 2(3), 196–217. https://doi.org/10.1207/s15327957pspr0203_4
  • Lakens, D., Adolfi, F. G., Albers, C. J., Anvari, F., Apps, M. A. J., Argamon, S. E., Baguley, T., Becker, R. B., Benning, S. D., Bradford, D. E., Buchanan, E. M., Caldwell, A. R., Van Calster, B., Carlsson, R., Chen, S.-C., Chung, B., Colling, L. J., Collins, G. S., Crook, Z., … Zwaan, R. A. (2018). Justify your alpha. Nature Human Behaviour, 2(3), 168–171. https://doi.org/10.1038/s41562-018-0311-x
  • Lakens, D. (2022). Sample size justification. Collabra: Psychology, 8(1), 33267. https://doi.org/10.1525/collabra.33267
  • Munafò, M. R., Nosek, B. A., Bishop, D. V., Button, K. S., Chambers, C. D., Du Sert, N. P., Simonsohn, U., Wagenmakers, E.-J., Ware, J. J., & Ioannidis, J. P. (2017). A manifesto for reproducible science. Nature Human Behaviour, 1(1), 0021. https://doi.org/10.1038/s41562-016-0021
  • Norris, E., & O’Connor, D. B. (2019). Science as behaviour: Using a behaviour change approach to increase uptake of open science. Psychology & Health, 34(12), 1397–1406. https://doi.org/10.1080/08870446.2019.1679373
  • Nosek, B. A., Alter, G., Banks, G. C., Borsboom, D., Bowman, S. D., Breckler, S. J., Buck, S., Chambers, C. D., Chin, G., Christensen, G., Contestabile, M., Dafoe, A., Eich, E., Freese, J., Glennerster, R., Goroff, D., Green, D. P., Hesse, B., Humphreys, M., … Yarkoni, T. (2015). Promoting an open research culture. Science, 348(6242), 1422–1425. https://doi.org/10.1126/science.aab2374
  • O’Connor, D. B. (2020). The future of health behaviour change interventions: Opportunities for open science and personality research. Health Psychology Review, 14(1), 176–181. https://doi.org/10.1080/17437199.2019.1707107
  • O’Connor, D. B. (2021). Leonardo da Vinci, pre-registration and the architecture of science: Towards a more open and transparent research culture. Health Psychology Bulletin, 5(1), 39–45. https://doi.org/10.5334/hpb.20
  • Oberauer, K., & Lewandowsky, S. (2019). Addressing the theory crisis in psychology. Psychonomic Bulletin & Review, 26(5), 1596–1618. https://doi.org/10.3758/s13423-019-01645-2
  • Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716. https://doi.org/10.1126/science.aac4716
  • Rossetti, Y., Pisella, L., & McIntosh, R. D. (2017). Rise and fall of the two visual systems theory. Annals of Physical and Rehabilitation Medicine, 60(3), 130–140. https://doi.org/10.1016/j.rehab.2017.02.002
  • Rossit, S., Harvey, M., Butler, S. H., Szymanek, L., Morand, S., Monaco, S., & McIntosh, R. D. (2018). Impaired peripheral reaching and on-line corrections in patient DF: Optic ataxia with visual form agnosia. Cortex, 98(1), 84–101. https://doi.org/10.1016/j.cortex.2017.04.004
  • Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22(11), 1359–1366. https://doi.org/10.1177/0956797611417632
  • Simonsohn, U., Simmons, J. P., & Nelson, L. D. (2020). Specification curve analysis. Nature Human Behaviour, 4, 1208–1214. https://doi.org/10.1038/s41562-020-0912-z
  • VanderWeele, T. J., Mathur, M. B., & Chen, Y. (2020). Outcome-Wide longitudinal designs for causal inference: A new template for empirical studies. Statistical Science, 35(3), 437–466. https://doi.org/10.1214/19-STS728