
Foreword: Peering forward, Part II

Last year, when I first wrote about peer review in this journal, I floated a few comments about AI and the need for systemic renovation of current practices. Since then, I’ve been giving these topics more thought.Footnote1 My research began with a simple ask. Can you peer review an article? The reply comes swiftly, uncharacteristic in its enthusiasm: ‘Of course! I’d be happy to help peer review an article.’ In the world of journal editing today, keen responses of this ilk are rare, almost unnervingly so. Yet AI as we presently know it has been primed to indulge human requests. Ask and ye shall receive. As far as I am aware, few if any journals in the humanities depend upon AI for peer review, including this one,Footnote2 although other academic arenas are more AI-curious, even AI-forward. So why query AI at all? I wanted to know the boundaries AI would draw for itself, and, even more, what those boundaries might suggest about the culture and project of peer review. I therefore pressed on. Can you review in Spanish, in Portuguese? Affirmative. Can you review humanities articles on colonial histories of Latin America? Another assent.

Unlike human colleagues, AI needs no time to consult its schedule, nor to read an essay abstract to see if it has promise. Such ready compliance led me to shift the mode of my queries: I asked AI to explain what kind of peer it would be. ChatGPT spilled out its skill set. It could read for grammar and style, clarity and coherence, content accuracy (based on training through 2023), argument structure, and evidence-argument alignment. Not nothing. The chatbot wrapped up its answer, adding ‘I am not a subject-matter expert […] for specialized or highly technical fields, it’s still advisable to seek a review from a human expert in the specific area of study.’ Now I’m interested. For I’d not thought about human expertise this way.Footnote3 Other-than-human experts matter in all kinds of arenas: seasonal bird and whale migration, horse and rabbit birthing. But in the world of peer review? This notion prompted me to go one more round with the machine: did it have any reservations about taking on peer review? Actually, yes it did. Among the reservations my AI chat-mate listed: an inability to bring nuanced judgment to bear, such as evaluating an article for the novelty of the research or potential impact on a field; assessments of a kind that rest upon ‘years of experience in the field’; and little understanding of field-specific conventions.Footnote4

It is tempting to read all this and sigh with relief. AI really cannot peer review essays in ways that humanities journals like this one depend upon. But we should not fool ourselves. The AI answers I received are very much of the early 2020s. The sin of false modesty will not be long-lived; chatbots will be saying something else to us in a not-very-distant future. As editor of an academic journal, I am taking note. For some humanities work and editing will surely be ceded to AI. Maybe to good effect. Certainly, to some effect.Footnote5

What AI can now do—almost uncannily, and largely in spite of itself—is call attention to presumptions so deeply embedded in our work that we hardly notice them. What philosophers have called, in other settings (and admittedly with other connotations), the myth of the given. For instance, I detect in the AI phrase ‘human expert’ an implicit challenge especially to us, we human humanists. Can we explain what constitutes expertise, how it is constructed, and by what criteria we recognize it? In the context of peer review as currently practiced in the humanities, the phrase human expert raises a seemingly simple yet prickly question: who are our peers?

Among the dilemmas that surface frequently in current conversations about peer review is the imbalance between supply and demand. So many articles in need of review, so few reviewers (hence the temptation of AI, ever ready, ever willing). This is not just a humanities problem. Last winter I attended a seminar on peer review run by Taylor & Francis, the publisher of CLAR, in which this imbalance was discussed at length. There were 60+ editors on the call, a majority anchored in the sciences. Their frustrations topped a scale I am not used to: one editor asked for advice after more than 30 scholars refused a review invitation; another wanted to know how to find reviewers for essays with more than two dozen authors. It is of course in T&F’s best interest to take note of their editors’ woes; I was struck nonetheless by how fluent in pipeline issues the publisher’s representatives were, and even more, by how T&F is number-crunching,Footnote6 seeking data-inspired solutions to bolster the supply side of peer review.

Compensation came up for discussion, although my impression is that realistic, scalable solutions are not to be found there. More interesting—at least to me—were some initial T&F analytics suggesting there are countries from which scholars submit a bounty of essays yet agree to review almost none. By contrast, there exist more review-balanced locales. This opened onto a conversation about mentoring. As in, what might happen to peer review if better mentoring existed? Good question.

Online advice and worksheets aside, scholars in the humanities do not have a rulebook for what constitutes successful mentoring, much less for what makes and sustains an intellectual community. There are ethical norms, to be sure. In the circles in which I work, however, it is remarkably unclear who teaches people how to do peer review, or when, if ever, training happens. If the T&F seminar is any measure, this aligns with other editors’ experiences. There was some discussion and debate about whose responsibility mentoring might be. Is this a role a publisher should take on? A journal editor? Should it be taught in graduate school? Logistics aside, I heard a certain amount of editorial agreement: yes, please. Better mentoring. In any and every form.

Nevertheless, we should be honest: mentoring can take us only so far. To emphasize that part of the peer-review arrangement misses at least one crucial point. Not all parts of the process hold equal value in academia. CLAR is just one journal, distinctive in purview and ambition, so my observations may be outliers. But no grant-bestowing or tenure-granting committee has ever asked me how the journal vets peer-reviewers. Nor have I been asked whether review work was well done. Rather, these committees often (and insistently) ask about the journal’s acceptance/rejection rate, their interest resting firmly in the ‘having been reviewed’ part of the peer-review arrangement. This is a systemic problem. And as AI becomes more present in our lives, it is a problem that will need redress. For no one is yet saying whether peer review by AI will ‘count’ in the same ways as peer review by humans, itself so presumed and little examined. Nor is anyone volunteering to make such a call. This is a site of impending contestation. One that needs leadership. Maybe sooner than we humanists currently presume.

The central question thus remains: just how do we—in academia, in the humanities—know who constitutes a peer? And relatedly, what does it mean for one’s field when journal editors are the ones to seek out peer-reviewers, to model (if not explicitly describe) a peer’s role and obligations? If one cruises through TikTok, X, Facebook, Substack, and elsewhere, one finds, very quickly indeed, much venom about the powers that publishers and journal editors have when it comes to choosing reviewers. No doubt there’s some truth in this, although in the humanities journals I know best, power is not really the operative term. Yes, editorial decisions have tangible effects. On its best days, peer review—as broken, imbalanced, unkind or onerous as it may be, and it is, at times, all these things—creates invaluable commentary on scholarship-in-progress. Rather more transactionally, it undergirds bids for funding, for tenure and promotion. It fuels public recognition and claims to cultural authority. And for some, it can lead to quite lucrative publishing arrangements. Without getting too grand, peer review also shapes the ways that scholars construct their fields of study, their relationships to the academy, and their commitments to the production of shared knowledge.

It may seem counter-intuitive, but I find that peer review can undercut some of the seemingly autocratic aspects of editorship. No editor I know publishes only essays that align with their own intellectual proclivities. Yet day-by-day, week-by-week editorial decisions about whom to ask for a peer review are often quite mundane. They can be made in consultation with colleagues on an Editorial Board. They can be made with conflict-of-interest agreements as guides, and with tacit understandings about what constitutes field-specific authority. They can also be made under duress, given the number of declined invitations and stony silences that meet review requests. What this means in lived reality: criteria for deciding who is a peer can be lofty, and also quite pedestrian. That said, authors—and indeed readers—deserve to know whom editors trust as peers.

This topic most certainly warrants more discussion, serious debate. Writing only for myself, and my current practice, here is a bit of an opening. At some journals, editors take the ‘peer’ part of peer review quite literally. Full professors are asked to review the work of full professors; independent scholars are paired, as are contingent faculty. I work and think otherwise. For me, like many an editor, expertise is fundamental. Yet it has no singular form. Sometimes the most relevant expertise for an article can be found by seeking out a senior scholar, someone who can bring to bear perspectives that are both deep and expansive. And I do invite reviews from such folks. Consistently turning to the most well-established scholars amongst us, though, runs some risks. Not only can it replicate field orthodoxies; relying heavily on the most senior people in a field can also reinscribe and revalidate—implicitly, if not explicitly—the views of those who rose to prominence when the academy was even less capacious (in all senses of that word) than it is now.

One corrective to the seniority bias, some have suggested, is graduate students, since people close to finishing their dissertations are immersed in the relevant literature. Several of my editor colleagues do seek out graduate students as reviewers—including those they work with—seeing mentoring possibilities in the review-writing process. I understand the logic but do not go this route. For one, graduate students are among the most precarious members of the academy: it can be hard for them to refuse an invitation, even if their labor is uncompensated (an arrangement they already know quite a lot about). Moreover, experience has taught me that most graduate students are best able to address the depths of their field of research, less so its breadth. At CLAR the broad view matters. More important than all of this, however, graduate students have not yet published many articles themselves. This makes it hard for me and others to gauge their mode of thinking, their methodological fluency, their willingness to engage in intellectual debate.

When it comes to expertise, then, I prefer peer-reviewers who have published journal articles and/or books, people whose intellectual habits and predilections are publicly known. This seems only fair: to authors, to readers, to reviewers. Yes, this can reproduce bias. Yet I see much value in relying upon the public record to assess the ways that scholars work, the reach of their thinking, the tolerance they have for intellectual contest. And further, I believe that authors should know their manuscripts are being read and commented upon by peers who have public profiles, who have been through the publication experience themselves. Readers should know the same. So, too, reviewers.

Beyond this, a few other things guide my work. I am careful not to return to the same reviewers time and again. Echo chambers serve the field poorly. In all honesty, I am sometimes tempted to tap a colleague who has recently written an excellent review (and met the deadline). However, I try my best not to narrow the perspectives brought to bear in a field as varied, and in some respects still in formation, as colonial Latin American Studies.

The answer to my main question, about who our peers are, ultimately turns on a range of conditions. Articles themselves are crucial. Their topics, methodologies, and writing styles form the initial, prime determining factor. Other decisions are journal-related, as I track who has recently reviewed work for CLAR and weigh who can bring an interdisciplinary perspective to both a specific essay and the field writ large. This artisanal way of working gets no points for efficiency; frankly, it is impossible to be fully transparent about its mechanisms, for it involves creative thinking. Every essay. Every time. I suspect that one day these modes of working will become obsolete. Some tell me they already have.

Here is where the systemic renovation comes in. What I’m looking for is neither journal-specific nor limited to scholarship aligned under the rubric of colonial Latin American Studies. Today AI makes a woeful humanities peer, even when well-prompted: its large language models over-emphasize cultural clichés; its ‘training data’ offers too little when it comes to assessing innovative research or meaningful interventions in our field. Whether this will always be the case I cannot say. More certain is that any sustained value that accrues to peer review requires greater clarity—with each other and with a broader public—about who can become a peer and why, about what being, if not also mentoring, a peer entails. To take such knowledge as given, as my recent AI conversations suggest, is no longer a luxury human experts can afford.

Notes

1 Thank you to my colleague and CLAR Editorial Board member, Ken Mills, for his astute comments on a rougher version of this Foreword, and for pointing me to Ben Wakeman’s writing (noted below). For my earlier piece on peer review, see Dana Leibsohn, ‘Foreword: Peering forward.’ Colonial Latin American Review 32 (3): 307–11.

2 Taylor & Francis, the publisher of CLAR, has issued AI guidelines for both authors and reviewers. On the review side, instructions are explicit ‘Reviewers must not use artificial intelligence tools to generate manuscript review reports, including LLM based tools like ChatGPT.’ See, ‘Guidelines for peer reviewers,’ at https://editorresources.taylorandfrancis.com/reviewer-guidelines/ . On content creation, see, for instance, ‘Taylor & Francis clarifies responsible use of AI tools in academic content creation,’ 17 February 2023. Accessible at: https://newsroom.taylorandfrancisgroup.com/taylor-francis-clarifies-the-responsible-use-of-ai-tools-in-academic-content-creation/. Both sites last accessed 16 May 2024.

3 Some readers will rightly point out that there exists a whole field of expert systems, which dates from the post-WWII period.

4 All quotes and paraphrases come from a series of conversations I had with versions of OpenAI’s ChatGPT early in May 2024.

5 For those interested in an advocate’s position, and particularly an argument for self-training AI, see, for instance, Ben Wakeman, ‘AI denial? It’s time to get over yourself.’ Catch and Release, 5 April 2024. Accessed 20 May 2024 at https://www.catchrelease.net/p/ai-denial-its-time-to-get-over-yourself.
