Debate: Peer reviews at the crossroads—‘To AI or not to AI?’

Mohammed Salah, Fadi Abdelfattah & Hussam Al Halbusi

Introduction

Artificial intelligence (AI) is relentlessly infiltrating our lives. Academia is now debating the application of AI models, such as ChatGPT and Bard, to the peer review process in scholarly journals. This article explores the potential enhancements AI offers to the review process and the significant challenges and ethical dilemmas accompanying this novel integration (Bauer et al., 2023; Checco et al., 2021; Heaven, 2018; Price & Flach, 2017; Salah et al., 2023).

The potential of AI

The power of AI lies not just in its capacity to process vast amounts of data but also in its ability to automate routine tasks with an efficiency that far exceeds human capabilities. This could be particularly beneficial for peer review in academic journals. AI might be employed to automate initial checks on academic papers, handling tasks ranging from ensuring consistent formatting and accurate citations to detecting plagiarism and verifying the correctness of statistical analyses. We suggest that this systematic, meticulous scrutiny of every submission would substantially accelerate the review process, reducing errors and improving efficiency.
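
To make this concrete, below is a minimal sketch of what such automated initial checks might look like. The function, the citation-matching pattern and the eight-word duplication heuristic are purely illustrative assumptions on our part, not a description of any journal's actual screening system.

```python
import re

def check_submission(text: str, references: list[str]) -> list[str]:
    """Run simple screening checks on a manuscript and return warnings.

    Illustrative only: production systems rely on far more sophisticated
    tools (dedicated plagiarism services, statistics checkers, and so on).
    """
    warnings = []

    # Citation consistency: every in-text citation such as (Smith, 2020)
    # or (Smith et al., 2020) should match an entry in the reference list.
    for author, year in set(re.findall(r"\(([A-Z][A-Za-z]+)(?: et al\.)?, (\d{4})\)", text)):
        if not any(author in ref and year in ref for ref in references):
            warnings.append(f"Citation ({author}, {year}) has no reference entry.")

    # Crude duplication check: repeated eight-word passages within the text,
    # standing in for comparison against an external corpus.
    words = text.lower().split()
    shingles = [" ".join(words[i:i + 8]) for i in range(len(words) - 7)]
    if len(shingles) != len(set(shingles)):
        warnings.append("Repeated eight-word passages found; check for duplication.")

    return warnings
```

Even this toy version illustrates the appeal: every submission receives the same checks, applied in the same way, in seconds.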

Moreover, AI's inherent objectivity could be a powerful tool for improving the quality of reviews. AI could be instrumental in implementing a stringent, double-blind review process, circumventing the potential for bias—an issue that has been a persistent concern in traditional peer review. Furthermore, we believe that AI could significantly enhance the accuracy and efficiency of reviewer selection by leveraging the wealth of data in repositories of previous reviews and reviewer expertise. This would help ensure that each manuscript is assessed by the most qualified and suitable experts in the field, promoting a fair and comprehensive evaluation process (Checco et al., 2021; Salah et al., 2023).
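
As a hedged illustration of such data-driven reviewer selection, the sketch below ranks candidate reviewers by the textual similarity between a manuscript abstract and each reviewer's body of prior work. The reviewer profiles and the use of TF-IDF cosine similarity are our own simplifying assumptions; operational systems would draw on richer metadata and more capable models.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rank_reviewers(abstract: str, profiles: dict[str, str]) -> list[tuple[str, float]]:
    """Rank candidate reviewers by how closely their past work (titles,
    abstracts, prior reviews) matches the manuscript abstract."""
    names = list(profiles)
    matrix = TfidfVectorizer(stop_words="english").fit_transform(
        [abstract] + [profiles[name] for name in names]
    )
    scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()
    return sorted(zip(names, scores), key=lambda pair: pair[1], reverse=True)

# The profile strings stand in for a repository of past reviews and publications.
profiles = {
    "Reviewer A": "public administration governance policy evaluation",
    "Reviewer B": "natural language processing machine learning peer review",
}
print(rank_reviewers("AI-assisted peer review with large language models", profiles))
```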

Concerns about AI

Despite these promising opportunities, applying AI to academic peer review has invited skepticism and criticism. A notable apprehension stems from AI's inability to emulate the depth of human creativity, intuition and critical reasoning. These are vital elements often required for the nuanced understanding, interpretation and critique of complex academic work. For all its analytical and pattern-recognition strengths, AI falls short in areas requiring subjective interpretation, intuitive leaps, or a deep understanding of the broader context and implications of the research.

In addition, AI's ‘black box’ nature raises significant questions about accountability and transparency. The intricate decision-making algorithms of AI systems often remain opaque to end users, obscuring the rationale behind review decisions. This lack of transparency may compromise academic integrity, a cornerstone of scholarly communication, and erode trust in review outcomes.

Lastly, there is the disquieting potential for systemic biases to be perpetuated in the AI-powered review process. By learning from data steeped in historical and societal biases, AI models might reproduce those biases in their evaluations. This could lead to skewed evaluation outcomes—an issue that directly conflicts with the foundational principles of objectivity and fairness in scholarly peer review (Heaven, 2018; Price & Flach, 2017).

Symbiosis: striking a balance

Finding a balanced approach may reside in a synergistic model that effectively amalgamates AI's computational capabilities with the profound depth of human intellectual discernment. Such a partnership could allow AI to shoulder the responsibility of conducting initial assessments and overseeing quality control. These tasks involve large-scale, rapid data processing that AI can execute with remarkable precision and speed.

Simultaneously, human reviewers could concentrate on providing substantive, contextual evaluations of academic submissions. Such reviews demand a nuanced understanding, interpretive insights and critical appraisal that AI, at its current stage of development, cannot adequately replicate. Human expertise remains irreplaceable here, grounded in years of study, research and experiential knowledge.

Navigating the challenge of AI's black box—the opaqueness that often shrouds AI decision-making—requires strategic solutions. One possibility lies in integrating explainable AI models into the review process. These models, designed to make their operations more transparent, can help elucidate the mechanisms driving their decision-making. Coupling this with human oversight at critical decision-making junctures can add another layer of transparency and accountability, helping maintain the integrity of the peer review process.
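
To give a sense of what ‘explainable’ can mean in practice, the sketch below trains a deliberately interpretable model on hypothetical screening features, so that the weight each feature carries in a triage recommendation is directly inspectable. The features, data and model choice are illustrative assumptions, not a proposal for an actual reviewing model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical screening features for past submissions:
# [citation errors, % duplicated text, statistical inconsistencies]
X = np.array([[0, 1.0, 0], [5, 12.0, 3], [1, 2.5, 0], [7, 20.0, 4]])
y = np.array([1, 0, 1, 0])  # 1 = passed initial screening, 0 = flagged

model = LogisticRegression().fit(X, y)

# An interpretable model exposes *why* a submission was flagged: each
# coefficient gives the direction and weight of one feature's influence.
features = ["citation_errors", "pct_duplicated", "stat_inconsistencies"]
for name, coef in zip(features, model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```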

Furthermore, the concern of systemic bias infiltrating AI-powered reviews requires proactive and thoughtful responses. Ensuring the use of diverse, representative and bias-free training data sets for AI models is a crucial starting point. It is not just about who or what is included in the data but also how the AI interprets and uses it. Implementing robust audits of AI operations and utilizing fairness algorithms can further safeguard the review process, aiming to sustain the essential objectivity of evaluations.
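
As a sketch of what such an audit might involve, the snippet below performs a simple demographic-parity check, comparing the rate of positive AI recommendations across author groups. The group labels, tolerance and data are hypothetical; real audits would use established fairness toolkits and proper statistical testing.

```python
from collections import defaultdict

def audit_parity(decisions: list[tuple[str, bool]], tolerance: float = 0.05) -> dict[str, float]:
    """Compare positive-recommendation rates across groups and flag large gaps."""
    totals: dict[str, int] = defaultdict(int)
    positives: dict[str, int] = defaultdict(int)
    for group, recommended in decisions:
        totals[group] += 1
        positives[group] += recommended
    rates = {group: positives[group] / totals[group] for group in totals}
    gap = max(rates.values()) - min(rates.values())
    if gap > tolerance:
        print(f"Parity gap of {gap:.2f} exceeds tolerance; investigate the model.")
    return rates

# Hypothetical AI recommendations, grouped by (for example) institution type.
print(audit_parity([("G1", True), ("G1", True), ("G2", False), ("G2", True)]))
```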

Ultimately, this symbiotic model represents a path that harmoniously merges AI's transformative potential with human reviewers’ indispensable intellectual contributions. By striking this balance, we stand a better chance of enhancing the efficiency, fairness and quality of peer reviews in academic journals while preserving their fundamental ethos.

Conclusion

AI models such as ChatGPT and Bard indisputably have the potential to reshape the peer review process in academic journals, offering enhancements in efficiency and bias mitigation. However, integrating this pioneering technology requires thoughtful deliberation, acknowledging its limitations and potential risks and ensuring that transparency, accountability and scholarly intuition are not compromised.

The ongoing discourse indicates the dynamic evolution of AI and its potential implications for scholarly communication. As we navigate this unexplored terrain, the challenge lies in creating a harmonious blend of AI's innovative prowess with human intellect's irreplaceable depth and intuition. By striking this balance, we can uphold the integrity and rigor of academic scholarship, even as we welcome the transformative potential of AI in the academic realm.

The views in this debate article are those of the authors alone and are not necessarily shared by this journal’s editors or publisher.

Conflict of Interest

The authors declare no conflicts of interest.

Acknowledgements

Open Access funding provided by the Qatar National Library.

Correction Statement

This article has been corrected with minor changes. These changes do not impact the academic content of the article.

Additional information

Notes on contributors

Mohammed Salah

Mohammed Salah is an Assistant Professor at the Modern College of Business and Science in Oman. His research primarily encompasses public administration, social psychology, and artificial intelligence. Currently, he is delving into understanding how artificial intelligence, specifically generative AI, influences decision-making processes.

Fadi Abdelfattah

Fadi Abdelfattah is an Associate Professor at the Modern College of Business and Science in Oman. His research interests include consumer behavior, service quality, knowledge sharing management, and healthcare management.

Hussam Al Halbusi

Hussam Al Halbusi is currently working as a visiting Assistant Professor at the Management Department, Ahmed Bin Mohammed Military College, Qatar. His research interests lie in the areas of strategic management, leadership, innovation, and sustainability.

References

  • Bauer, E., Greisel, M., Kuznetsov, I., Berndt, M., Kollar, I., Dresel, M., Fischer, M. R., & Fischer, F. (2023). Using natural language processing to support peer-feedback in the age of artificial intelligence: A cross-disciplinary framework and a research agenda. British Journal of Educational Technology. https://doi.org/10.1111/bjet.13336
  • Checco, A., Bracciale, L., Loreti, P., Pinfield, S., & Bianchi, G. (2021). AI-assisted peer review. Humanities and Social Sciences Communications, 8(1), 1–11.
  • Heaven, D. (2018). The age of AI peer reviews. Nature, 563(7733), 609–610.
  • Price, S., & Flach, P. A. (2017). Computational support for academic peer review: A perspective from artificial intelligence. Communications of the ACM, 60(3), 70–79.
  • Salah, M., Al Halbusi, H., & Abdelfattah, F. (2023). May the force of text data analysis be with you: Unleashing the power of generative AI for social psychology research. Computers in Human Behavior: Artificial Humans. https://doi.org/10.1016/j.chbah.2023.100006