
A scholarly dialogue: writing scholarship, authorship, academic integrity and the challenges of AI

Pages 578-590 | Received 28 Jun 2023, Accepted 30 Oct 2023, Published online: 25 Mar 2024

ABSTRACT

Concerns about the role of technology and the quality of student writing in higher education are not new. Historically, writing scholars have been at the forefront of initiatives that scrutinise and integrate new technologies in higher education. This article contends that writing scholars are again uniquely equipped to assist students, teachers in all disciplines, and institutions of higher education in navigating the challenges created by the public availability of generative artificial intelligence (AI). Adopting a dialogical approach, the article brings together six scholars from diverse disciplinary backgrounds within the broad umbrella of writing studies. Through the lens of writing scholarship, this chorus of critical voices illuminates the important questions posed by, and possible responses to, AI in higher education. Although AI complicates key issues in higher education such as academic integrity, assessment, and authorship, writing scholarship provides an essential framework for educators to respond to these challenges. Collectively, these scholars map the history of writing scholars’ responses to technological change in higher education and suggest how writing scholars can contribute to the debates and discussions concerning the impact of AI on higher education.

Introduction

Concerns about the role of technology and the quality of student writing in higher education are not new, and writing scholars have contributed to developing effective responses to such concerns in terms of pedagogy, student support, and assessment. In the US, the 1975 Newsweek magazine story ‘Why Johnny Can't Write’ precipitated a crisis in notions of writing and assessment in higher education. The story would impact teaching and assessment in higher education across English-speaking countries. Writing scholars in the US led the way in generating solutions, including the writing-across-the-curriculum (WAC) movement (Palmquist et al., 2020), which focuses on integrating the teaching of writing across disciplines and promotes writing to learn as a key pedagogical approach. ‘Why Johnny Can’t Write’ effected a sector-wide shift in the English-speaking world, helping to move higher education away from testing to evaluate learning to a broader understanding of assessment as both a mechanism for evaluating learning and as a vehicle for learning itself. Writing instruction, while remaining controversial in some fields, became embedded in both what we teach and how we teach in most disciplines across the sector. In Australia and New Zealand, academic writing programmes developed in response to the need to support non-traditional or ‘outsider’ student cohorts whose disciplinary literacy skills were seen as lacking, as social changes impacted the demographics of the Australasian student body. In Australia, writing specialists responded primarily through the development of Academic Language and Learning (ALL) units, and in New Zealand, writing scholars were the drivers of new academic writing courses and writing support centres (Emerson & Clerehan, 2009).

With the advent of generative AI, we now face yet another crisis of writing and assessment. Arguably all Johnnys can write in polished prose using generative AI tools, whether they be second language learners, creative writers, or writers in any discipline. What does this mean for assessment and learning? What does it mean for writing scholars? Universities across the globe are struggling to respond (Ziebell & Skeat, 2023). In this paper, Australian and New Zealand writing scholars bring their expertise to the conversation of ‘what to do about student writing in the age of AI’, as old problems resurface in this new guise.

This paper uses a dialogical approach to spark an initial set of scholarly approaches to understanding the challenges of, and possible responses to, the place of generative AI in higher education. It brings together six writing scholars from a range of disciplinary backgrounds (including writing in the disciplines, rhetoric and composition, technical communication, creative writing, and second language learning) to explore key issues relating to generative AI in higher education from the perspective of writing scholarship. We co-developed key questions to consider, brought our perspectives together using an online conversation and written responses, and then reshaped these into a written narrative. The resulting dialogue outlines a series of challenges facing the university and provides a series of perspectives from a range of writing disciplines on how universities might respond to the current crisis in student writing.

What ideas, practices, changes and controversies relating to generative AI and writing have you observed in your local context? How do you respond as a scholar/teacher of writing?

Ariella:

I have observed a spectrum, from concern about AI’s threat to academic integrity to creative enthusiasm in response to generative AI. Many teaching academics quickly identified Turnitin’s AI detection software as functionally useless and, with little policy guidance, have turned to their disciplinary training and hidden assumptions about students’ motivations and capacities to identify misuse of AI. For example, a linguist preparing an academic integrity case based on suspected AI use observed that the AI had better grammar and punctuation than most of their students, and so dismissed cases containing grammar mistakes.

At the other end of the spectrum, many creative writers frame AI as a collaborator or writing partner (Gero et al., 2023). Collaboration is common in creative writing, although acts of collaboration with peers or editors (as distinct from co-authorship) are elided by the myth of the lone ‘genius’ author whose name appears on the cover of a book, a concept creative writing pedagogy critiques via its emphasis on process. This stance suggests AI belongs in a creative writing classroom. However, no act of writing is neutral, and these acts of collaboration can be coupled with acts of reflection deeply embedded in scholarship, as is suggested by practice-led research methodology.
Bronwen:

As a linguist, it’s interesting to see Ariella’s example about the linguist who dismissed the AI cases with errors since their students made more errors. This is something I’ve been thinking about in relation to machine translation in my own context, where there are many students with English as an additional language, some with quite limited English language proficiency. On the one hand, neural machine translation has improved AI accuracy so much, particularly for English, that translation errors are increasingly rare. So, you could conclude, as Ducar and Schocket (2018) suggest, that error-free text indicates MT use whereas errors indicate students’ own writing, although there’s little empirical evidence to support that assumption. On the other hand, MT in many languages is not yet highly accurate. For example, MT in Korean has been shown to make more errors with word choice than Korean learners of English (Chon et al., 2021). Even so, a linguistic perspective may provide an alternative detection strategy to Turnitin, which relies on matching sources rather than language.

Beck:

I also thought about grammatical errors, although from a different angle. As Lunsford and Lunsford (2008) say, ‘Mistakes are a fact of life’, and error rates have been consistent through many changes in technology, but the type and context of error matter. We expect increased error rates when writers stretch themselves, both when writing in new genres and when engaging with new ideas. This is because writing is connected to thinking and learning. Grammatically standard writing at sentence or paragraph level paired with error at the conceptual level seems like a better indicator of potentially AI-generated texts.

Lisa:

As Director of Teaching and Learning, I have been inundated with emails from academics unsure how to respond to Turnitin detection results, and challenged on the question of what constitutes writing in English when students with English as an additional language submit assignments in flawless English syntax by writing in their own language and then using translation tools and editing software. Is this plagiarism? Is it acceptable? But I’ve also seen creative responses: I’ve been part of a discussion, for example, about working with students to use AI effectively in first- and second-year courses and making those courses pass/fail, then using comprehensive assessment in third-year courses.

As I’ve engaged with the fallout, I’ve found Danny Liu’s (2023) paper helpful. He outlines three current approaches to managing the impact of AI: avoid, outrun, and adapt. And I would add one other: control. In my own institution, the current situation is being managed by an attempt at control: we rushed to establish policy on the use of generative AI, and cases are being policed through our academic integrity policies. Teaching staff and Academic Integrity officers are completely overwhelmed, and their strategies for managing the situation are changing by the week as they understand more about the technology. It’s unsustainable. Consequently, there are calls to avoid the situation through a return to invigilated exams, a move which is strongly opposed by student bodies and too expensive for universities to consider post-Covid. More creative academics are attempting to outrun AI by changing assessment genres – for example, by requiring annotated images as part of an assessment, or increasing the oral component. But as AI develops, it will soon overtake these efforts. Adapting, says Liu, through authentic assessment, is our only option. And this is where writing scholars can make a contribution: we’ve been using authentic, process-driven, discipline-appropriate assessment for decades.

What worries me most, though, is a narrow focus on assessment as a means of evaluating student learning. One of the key purposes of written assessment is to help students think and learn (Dann, 2014; Earl, 2012; Silliman et al., 2020). We must keep this at the forefront of any discussion about changes to assessment.

Susan:

Exactly! The discourse around AI isn’t new. We’ve heard similar arguments before about various types of writing assistance – technologically mediated or otherwise: Grammarly, online writing classes, earlier text generators, peer review apps, even writing centres. To those outside writing studies, these changes suggest threats to authorship, academic integrity, authenticity, and even employment! However, writing scholars actively resist technological determinism by designing better (rhetorical) writing assignments that treat any technology as merely a tool to assist the writing-to-learn process.

Why is technology change in general and generative AI more specifically of interest to writing scholars? Why are writing scholars’ perspectives important for other disciplines’ responses to these technologies?

Susan:

Unlike scholars in other disciplines, writing scholars’ disciplinary ‘content’ is writing, so we are best placed to understand technological and other external factors that complement/disrupt/interrogate the writing process as merely part of the invention stage. Writing scholars understand, perhaps better than most, that writing is rarely a solitary act, even when it is produced by singular authors (Ede & Lunsford, 2006).

Recently, I was discussing a history assignment on Egyptian pharaohs with my thirteen-year-old daughter, who was procrastinating. At my suggestion, she asked ChatGPT a couple of questions about pharaohs, with the answers sparking new questions and further thinking. Before long, she was happily writing away. Once she’d finished, I suggested she ask ChatGPT to generate an essay, based on the assigned prompt, to compare with her own. The result was a jumble of facts with no logical argument or organisation. What I wanted her to realise, of course, is that where ChatGPT fails as an author, it can succeed as an interlocutor. If we learn to think of generative AI as merely another way of interfacing with texts (Yancey, 2004), we can harness its power in positive ways.
Ariella:

Yes! Writing scholars are aware that technological disruption shapes genres, modes of expression and ways of thinking. Genres are at once categories for ordering and methods of constructing knowledge, and they shape ways of seeing and being in the world. Let me give you an example from another technological disruptor: print magazines. In the nineteenth century – a period marked by war and violent colonisation – the genre of the short story emerged alongside this new medium (Marler, 1974; Whitehead, 2011). Poe’s now-famous definition of a short story as a narrative that can be ‘read in one sitting’ reflects the reading practices that magazines, and the printing technologies that produce them, invite. That ‘single sitting’ shaped the form’s characteristic style, with its single point of view and ‘slice of life’ narratives.

Writing scholars can offer insight into the ongoing writing processes and new forms of expression influenced by technological disruption. More than that, though – we can actively shape the conversation via creative experimentation and intervention in the evolution of writing genres, through research and reflection on the processes and ethics of collaborating with AI to create new work.
Bronwen:

Interesting point, Ariella. My training in linguistics leads me to be interested in genre – such as the academic essay – as one level of language operating in AI, alongside words, semantics, phrases, sentences and discourse. Of course, linguists interested in the way society shapes language would agree that genres are formed by their social/technological context (Martin & Rose, 2008). But apart from this, linguistics contributes – and contributed to the creation of AI – its rich account of the levels at which language operates, in both human language and natural language processing (Kempen, 2012). This approach to writing may seem atomistic, but I think it’s informative for understanding how students use the version of AI I’m most familiar with – machine translation. To translate one language into another with machine translation, you enter a word, phrase, sentence, or stretch of discourse. Of course, as one writes, these linguistic categories interact with other elements of the writing process – authorial voice, conceptual content, intended audience – but the linguistic categories must be present. With machine learning, these linguistic categories are also apparently learned in similar ways by learners and machines. So, if a deep knowledge of writing is what writing scholars generally bring to disciplinary debate about AI, writing scholars with a linguistic/acquisition background bring the important role of language in AI and, potentially, an understanding of the similarities – and differences – between machine and human language.

Lisa:

Picking up a point Susan made, another thing we can highlight is the question of authorship. From the point of view of writing scholarship, the idea of the individual author has always been problematic (Brunner, 1991; Ede & Lunsford, 2001). And yet the university is based on the assumption that there is an individual author – we assess on the basis of individual authorship or contribution. The advent of generative AI brings the notion of what an author is into sharp relief. And I think that is a very rich disciplinary conversation that we're already part of and can bring to the discussion in higher education.

Collin:

Related to this, generative AI also raises important questions about audiences. Writers write for audiences. And university students often struggle with learning how to write for their audience and their disciplinary discourse community (Bartholomae, 1986). Part of the problem is that many university writing assignments don’t clearly define their audience (other than the instructor who will grade them). These assignments miss the opportunity to encourage students to mobilise knowledge and share it with specific readers. In testing generative AI technologies, I’ve found tools like ChatGPT often provide richer and more detailed responses when I specify an audience in my prompt. So an interesting question to pursue is: what assumptions do generative AI tools make about their audiences? Writers are constantly making assumptions – sometimes right, sometimes wrong – about their audiences. So too are AI writing tools. It’s important that writing studies scholars interrogate the assumptions that generative AI makes about certain audiences and draw attention to problematic stereotypes and prejudices. In fact, this critical exercise could be a great assignment for advanced students (see Graham, 2022).

What ethical, pedagogical and practical issues emerge from the use/misuse/abuse of AI writing tools in these settings?

Susan:

In some instances, AI is perpetuating discrimination. Kate Crawford (2021) uncovers how AI technologies are entrenching inequality, exposing the capitalistic data-gathering processes and vast data collections required for AI to function. Meanwhile, Futurism has deemed AI an ‘automated mansplaining machine’ (Harrison, 2023). In short, AI’s capabilities are bound by the worldview of its programmers: usually white, heterosexual Silicon Valley men.

Bronwen:

I agree that AI raises major ethical issues, including Crawford’s point about AI’s planetary costs, given its huge, underestimated carbon footprint. But I’d like to return to the question of academic integrity as an AI issue because a linguistic/acquisitional viewpoint offers what seems a neat, practical benchmark for distinguishing between academic honesty and dishonesty in the use of machine translation. Both students (Murtisari et al., 2019; Xu, 2022) and teachers/academics (Groves & Mundt, 2021) have made the point that the use of MT may become a case of academic dishonesty when students translate stretches of text, rather than words. What may appear an unimportant difference may lead to the submission of writing qualitatively different from the actual language proficiency level of the student. Could this be taken up by academics in the academic integrity advice they offer – keep to translating words if you use MT to write? Maybe.

Collin:

Generative AI also undermines the outdated and romanticised notion of single authorship. For instance, when generative AI creates a new text, it’s easy to mistake that text as the creation of a single ‘author’: the generative AI. But that text is actually built on the work of many authors. And AI deliberately obscures those authors. We don't know which earlier authors we're in conversation with when we use AI.

Beck:

We mostly can’t know: the data sets are proprietary, and the human labour of moderation is hidden.

Collin:

Yup. Instead, we have this anonymous algorithmic amalgamation of co-authors who are effaced by the AI. In this sense, generative AI reifies both the obfuscation of co-authorship and the idolisation of the single author.

Ariella:

That is such an interesting way of framing it, Collin: to consider the other authors we might collaborate with when we use AI, both present and past. We might even think of generative AI as an archive of texts. This parallels the dilemma of historical fiction writers drawing on archival material to write a narrative for contemporary readers (see, for example, Poon, 2008). We might ask the same questions we do when drawing on archival material in fiction writing: How is this source constructed, and in what context? Whose voices are silenced? What are the ethics of representing the past? Are we perpetuating harmful stereotypes? Or are we subverting them? And for what purposes do we write? Those questions at the heart of critical reflection in creative writing (Barrett, 2010) remain salient when you're working with generative AI’s textual archives and invisible collaborators from different historical periods.

Beck:

Recently I received an email that, in one paragraph, said ‘using translation tools is not acceptable’ and ‘using ChatGPT to make an outline for an assessment is helpful’. My immediate response was: if the problem with translation is that it’s not the student’s words, are we not concerned that the outline isn’t the student’s ideas? Choosing what to include in a document is part of the writing process, but also part of demonstrating subject expertise. In the Groves and Mundt (2021) study Bronwen mentioned, participants made that point too.

I’m seeing questions about AI detection software for policing academic integrity: does it work? Should we use it? What should we use it for? There are important ethical questions that precede these practical ones: what are these tools flagging? What assumptions are they built on? Early studies show that these tools are more likely to identify English text by developing multilingual writers as AI-generated (Liang et al., 2023), suggesting disproportionate impacts on marginalised students. Upstream, we should consider ethical and legal implications of student writing being submitted to third parties by both students and academics/institutions, and teach students to engage critically with both AI writing tools and AI detectors. The overarching question is, as Collin mentioned, when students write with AI, who’s the author? But under this is: who owns the data? Who owns the final product? How are data sovereignty and intellectual property protected? Who benefits? We must ask these questions to break out of ‘the circular logic of avoiding plagiarism/catching plagiarists/punishing plagiarism and prizing singular authorship above all other forms’ (Vie, 2013, p. 3), and understand and leverage relational networks of writing, learning and expertise.

What roles and responsibilities do writing, writing studies, and rhetoric scholars have in understanding and responding to such changes?

Bronwen:

We have a major role in understanding and responding to such changes. The main role, I think, is to conduct research on AI and writing – in my case, machine translation and second language learning/writing – and make a case for the relevance of this research for policy and practice on machine translation in universities. For example, there are now three decades of pertinent research on the role of machine translation in language learning and teaching (Jolley & Maimone, 2022). This research uncovers not only the increasing use of machine translation by students to write in an additional language, particularly for pre-writing and revision, but also two major positions in the debate about responses to students’ use of machine translation, which can provide a valuable compass for universities. One body of research views MT as cheating that should be countered by a Detect-React-Prevent Response (Jolley & Maimone, 2022). Since universities mainly operate in this detection mode, writing scholars’ research could contribute by steering policy away from the acknowledged limitations of Turnitin-based approaches and towards language-based strategies. Nonetheless, our role in contributing to detection as a part of assessment should be complemented by the increasingly important second position: machine translation as Resource and the Integrate-Educate-Model Approach (Jolley & Maimone, 2022). This approach calls for the integration of machine translation into the curriculum and the highlighting of best practice in language learning.

Lisa:

As Susan has suggested, another area where we can contribute is our understanding of how writing, critical thinking, and information literacy are linked. Generative AI can’t teach students to think or engage critically with sources. I did an experiment with ChatGPT, asking it to do a peer review of a document I’d written. The review it generated focused solely on generic issues (e.g., make sure sentence length and sentence structure vary, check your punctuation/grammar) without addressing the specific text – which made it functionally useless. As writing scholars, we understand authentic assessment (Liu, 2023): we know how to teach the critiquing of text, how to engage with information, and how to develop a writing process that will promote students’ critical thinking and critical engagement. And because we are familiar with working in disciplines, we are well-placed to support our colleagues as they strive to develop in their students the skills to think and construct meaning/knowledge in their discipline.

Ariella:

Writing scholars also have a toolkit of responses we can bring to understanding and collaborating with generative technologies in higher education settings. The first tools are the self-reflexive skills that are at the heart of creative writing’s practice-led research methodology (Candy & Edmonds, 2018). Second, writing can offer a method of teaching and researching writing practices that emphasises the writing process in a generative way that improves writing skills (Pilegaard & Philipsen, 2023). Third, we can contribute an understanding of the work writing does in the world and how composition choices shape those effects. These modes of being and doing can help students and researchers think about how we collaborate with machine and human writing partners. They also highlight questions of ethical representation (Cosgrove, 2009) at the intersection of power structures (Crenshaw, 2017) and the way writing shapes social discourses.

Beck:

In other words, writing scholars can prepare students to engage ethically, productively, hopefully and creatively with writing technologies. As Susan said, these questions have long histories in our field: consider Hawisher and Selfe’s 1991 collection, published 10 years after the first commercial word-processing software came out. As I read, I kept thinking: how does this still hold up so well? Collin reminded me that these questions of equity, access and labour remain relevant because the technologies have changed, but the politics of these technologies hasn’t. In eighteenth-century Europe, tech bros of yore celebrated clockwork writing machines at the same time that Black slaves were kept illiterate as a tool for control (Vee, 2022). Literacy practices are always a product of their context; they exist in and are shaped by privileged networks of people, institutions, texts and technologies. Practical literacy skills give writers the tools they need to perform in their disciplines and jobs. But writing scholars know that we also need critical literacies in order to write well. When writers examine and reflect on the political and ethical implications of their work as part of the writing process, they can perform in these contexts but also critique and redress those kinds of inequities. This includes responding effectively and inclusively to technological change.

Collin:

One responsibility writing scholars bring to the conversation about artificial intelligence and higher education is a focus on equity. Writing is about relationships between writers, audiences, communities, cultures. This means that writing always invokes power hierarchies and histories of relationships – often unequal relationships – between different communities. These unequal relationships are especially important when writing in countries shaped by colonisation. In Writing while colonised, Alice Te Punga Somerville (2022) reminds us of ‘the inextricability of writing from historical and ongoing violence’. This is where equity and justice come into play. As a writer, what is your relationship to your readers? What words do you use to speak to them? And how do those words reinforce or undermine healthy relationships? My early testing of ChatGPT indicates that it’s not equitable or just. ChatGPT appears to prioritise Standard American English over other Englishes like Black English (Bjork, 2023b). And the release of GPT-4 – which requires a monthly payment to access its high-end functionality – exacerbates inequity. But higher education professionals should also be wary of banning AI, because that too can amplify inequities. For instance, multilingual writers and writers with disabilities might benefit from composing with generative AI, which means that banning AI in universities, similarly to banning MT, unfairly penalises these writers (Bjork, 2023a).

Beck:

Data sovereignty is part of the issue. In a workshop for our Indigenous health research centre, someone asked how senior scholars manage data sovereignty when researching with Indigenous community partners, since institutional policies say that all our work is the university’s intellectual property. They were using special agreements, operating alongside university policy, that the data belongs to the community, because existing policy can’t accommodate Indigenous data sovereignty. There is a gap here in thinking about knowledge, writing and communication as relational and communal, rather than as property. Addressing it would require a huge rethink of the way that universities operate, which prioritises the individual doing their individual work to get their individual degree or their individual promotion. This is not some abstract ivory tower concern: graduates enter workplaces where a huge proportion of writing is collaboratively authored in some capacity, without ever learning collaborative writing. When it is taught, it’s often instrumental, with too little attention to interpersonal and cultural dimensions of writing (Feuer & Wolfe, 2023). Now there’s an opportunity to think about those questions of community, collaboration, context, credit, and authorship more broadly and productively.

Collin:

Even though generative AI challenges the notion of sole authorship, it’s also a slippery slope to consider generative AI as a co-author. For example, the editors of Nature recently explained that generative AI cannot be listed as a co-author on scientific manuscripts because ‘any attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility’ (Tools, 2023). That's another important aspect of writing with AI in a university setting: agency and responsibility.

Beck:

I am seeing the term ‘collaboration’ most often, as in Knowles’ (2022) study of copywriters’ use of AI to generate drafts, but we have other ways of understanding shared work. For example, Verhulsdonck et al. (2021, pp. 482–3) described writing with AI tools as delegation: ‘AI has the potential to change [technical and professional communication] and [user experience] work … in introducing us to shifting forms of autonomy by the principle of delegation … where technologies act as agents that can carry out tasks – in varying forms of autonomy – for users’. That does a better job of capturing agency and responsibility. They also note that delegation creates a ‘responsibility gap’ and identify this as somewhere technical writing scholars and practitioners can promote just outcomes, as in Graham and Hopkins’ (2022) work on supervised machine learning to advance justice-oriented health and communication research.

Susan:

Hesse (2005) poses these same questions, and his answer is that writing experts own writing, not as colonists or profiteers, but as stewards. So what does stewardship entail in an age of AI? For me, it means keeping embodied learning and interactive meaning making at the centre of the writing process by designing assessments that prioritise writing to learn and reflection based on personal experience. As Rosenblatt (2019) maintains, the human acts of writing and reading are central to meaning making. Both transactions involve not only relationships with people beyond the text, such as authors and readers, but also with the self (through memory and experience). AI cannot replicate these things.

Lisa:

A useful discussion point would be this: what would universities look like if we rejected the idea of the individual author? What would assessment mean? It would constitute a fundamental shift in how we think about learning. Maybe this juncture, with the public availability of generative AI, is where we start interrogating the idea of individual authorship in the university and in academia more generally. Western perceptions are not universal: Christine Donahue (Citation2008) outlines a different perspective from the French education system, and Indigenous perspectives on authorship/ownership rarely focus on the individual (in New Zealand, for example, there is precedent in the concept of whakapapaFootnote3).

I’m thinking of Bakhtin’s contention that in any text, ‘[e]ach word tastes of the context and contexts in which it has lived its socially charged life … Language is not a neutral medium that passes freely and easily into the private property of the speaker’s intentions; it is populated, overpopulated – with the intentions of others’ (Citation1986, p. 274). What if academia took this seriously?

Ariella:

Yes! I’m thinking of the discussion that happened around creativity, Creative Commons licensing, and the invitation to rethink who owns data in a digital environment (Lessig, Citation2002). That movement resulted in a series of Creative Commons licences – the broadest being CC BY, under which you essentially put your work out into the world while ensuring users attribute it appropriately. This response draws on theories of creative influence, yet still emphasises attribution while foregrounding knowledge that is available to everyone. The Creative Commons community suggests that what is needed in response to generative AI is an approach to copyright that ‘supports open sharing while supporting legitimate creative interests in control and compensation for creative expression’ (Stihler, Citation2023).

Conclusion

As this discussion has illustrated, for writing scholars, generative AI offers opportunities for fruitful discussion about the nature of writing, allowing us to move beyond simple reactivity or debates about academic integrity. The various branches of writing scholarship – writing in the disciplines, second language learning, creative writing, technical and professional writing, and writing across the curriculum, with its engagement with the intersections between writing assessment and social justice – all provide a broader, critical context for engagement with the new technology. We cannot return to an old regime based on testing, which raises questions of privilege and focuses simply on remembering facts. If we are to equip our students to think, to participate in the creation of disciplinary knowledge, and to engage critically and authentically with the world they live in, then writing is a necessity. And we all, as teachers, need to adapt creatively and critically.

Writing scholarship provides rich perspectives on how we might do that. We can bring fresh ideas to the latest crisis in student writing, including perspectives on the relationship between writing and learning, notions of audience and authorship, the relationship between technology and genre, the role of language in writing, and the relational (and therefore ethical) nature of writing and technology. We can offer collective knowledge of linguistic methodologies to compare machine and human language, and we can offer a perspective of writing as a cognitive process involving critical, creative, and disciplinary thinking – complex, time-honoured processes for which AI is no substitute. We can highlight how technological disruption reinscribes structural inequalities and how these injustices might be resisted. Writing scholars understand how authorship changes in collaboration with others, whether human or machine, text generator or audience. This paper introduces these concepts to the wider ongoing discussion of the impacts of AI on higher education and, above all, challenges notions of the lone author while foregrounding what is distinctly human in acts of writing.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Notes

1 By writing scholars, we mean those actively engaged in the scholarship and teaching of writing – across the subfields of creative, academic, professional, and technical writing, including second language writing and language acquisition.

2 Machine translation is a form of AI in which a computer translates text from one language into another without human involvement, nowadays often via generative technology. Examples of machine translation include Google Translate.

3 Whakapapa, a concept at the core of traditional mātauranga Māori (Māori knowledge), acknowledges that we never stand alone – that we (and all animate and inanimate objects) are a product of (and bound to) our ancestors, place and culture. In this frame, any knowledge is ‘collectively held and multi-dimensional’ (Te Punga Somerville, Citation2022), not the product of a single thinker or author.

References

  • Bakhtin, M. M. (1986). Speech genres and other late essays. Trans. V. W. McGee. University of Texas Press (Original work published 1978).
  • Barrett, E. (2010). Foucault’s “what is an author”: Towards a critical discourse of practice as research. In E. Barrett & B. Bolt (Eds.), Practice as research: Approaches to creative arts enquiry (pp. 135–146). I.B. Tauris.
  • Bartholomae, D. (1986). Inventing the university. Journal of Basic Writing, 5(1), 4–23. https://doi.org/10.37514/JBW-J.1986.5.1.02
  • Bjork, C. (2023a). Don’t fret about students using ChatGPT to cheat – AI is a bigger threat to educational equality. The Conversation. https://theconversation.com/dont-fret-about-students-using-chatgpt-to-cheat-ai-is-a-bigger-threat-to-educational-equality-202842
  • Bjork, C. (2023b). ChatGPT threatens language diversity. More needs to be done to protect our differences in the age of AI. The Conversation. https://theconversation.com/chatgpt-threatens-language-diversity-more-needs-to-be-done-to-protect-our-differences-in-the-age-of-ai-198878
  • Brunner, D. D. (1991). Who owns this work? The question of authorship in professional/academic writing. Journal of Business and Technical Communication, 5(4), 393–411. https://doi.org/10.1177/1050651991005004004
  • Candy, L., & Edmonds, E. (2018). Practice-based research in the creative arts: Foundations and futures from the front line. Leonardo, 51(1), 63–69. https://doi.org/10.1162/LEON_a_01471
  • Chon, Y. V., Shin, D., & Kim, G. E. (2021). Comparing L2 learners’ writing against parallel machine-translated texts: Raters’ assessment, linguistic complexity and errors. System, 96, 102408. https://doi.org/10.1016/j.system.2020.102408
  • Cosgrove, S. (2009). WRIT101: Ethics of representation for creative writers. Pedagogy, 9(1), 134–141. https://doi.org/10.1215/15314200-2008-021
  • Crawford, K. (2021). The atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.
  • Crenshaw, K. W. (2017). On intersectionality: Essential writings. The New Press.
  • Dann, R. (2014). Assessment as learning: Blurring the boundaries of assessment and learning for theory, policy and practice. Assessment in Education: Principles, Policy & Practice, 21(2), 149–166. https://doi.org/10.1080/0969594X.2014.898128
  • Donahue, C. (2008). When copying is not copying. In M. Vicinus & C. Eisner (Eds.), Originality, imitation, and plagiarism: Teaching writing in the digital age (pp. 90–103). University of Michigan Press.
  • Ducar, C., & Schocket, D. H. (2018). Machine translation and the L2 classroom: Pedagogical solutions for making peace with Google translate. Foreign Language Annals, 51(4), 779–795. https://doi.org/10.1111/flan.12366
  • Earl, L. M. (2012). Assessment as learning: Using classroom assessment to maximize student learning. Corwin Press.
  • Ede, L., & Lunsford, A. A. (2001). Collaboration and concepts of authorship. PMLA, 116(2), 354–369.
  • Ede, L. S., & Lunsford, A. A. (2006). Singular texts/plural authors: Perspectives on collaborative writing (Pbk. ed.). Southern Illinois University Press.
  • Emerson, L., & Clerehan, R. (2009). Writing program administration outside the North American context. In D. Strickland & J. Gunner (Eds.), The writing program interrupted (pp. 166–174). Boynton Cook.
  • Feuer, M., & Wolfe, J. (2023). Planning for difference: Preparing students to create flexible and elaborated team charters that can adapt to support diverse teams. IEEE Transactions on Professional Communication, 66(1), 78–93. https://doi.org/10.1109/TPC.2022.3228020
  • Gero, K., Long, T., & Chilton, L. B. (2023). Social dynamics of AI support in creative writing. In A. Schmidt, K. Väänänen, T. Goyal, P. O. Kristensson, A. Peters, S. Mueller, … M. L. Wilson (Eds.), Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (pp. 1–15).
  • Graham, S. S. (2022). AI generated essays are nothing to worry about. https://www.insidehighered.com/views/2022/10/24/ai-generated-essays-are-nothing-worry-about-opinion
  • Graham, S. S., & Hopkins, H. R. (2022). AI for social justice: New methodological horizons in technical communication. Technical Communication Quarterly, 31(1), 89–102. https://doi.org/10.1080/10572252.2021.1955151
  • Groves, M., & Mundt, K. (2021). A ghostwriter in the machine? Attitudes of academic staff towards machine translation use in Internationalised Higher Education. Journal of English for Academic Purposes, 50(1-11), 100957. https://doi.org/10.1016/j.jeap.2021.100957
  • Harrison, M. (2023, February 9). ChatGPT is just an automated mansplaining machine. Futurism. https://futurism.com/artificial-intelligence-automated-mansplaining-machine
  • Hawisher, G. E., & Selfe, C. L. (1991). Evolving perspectives on computers and composition studies: Questions for the 1990s. National Council of Teachers of English. https://eric.ed.gov/?id=ED331088
  • Hesse, D. D. (2005). 2005 CCCC chair's address: Who owns writing? College Composition and Communication, 57(2), 335–357.
  • Jolley, J. R., & Maimone, L. (2022). Thirty years of machine translation in language teaching and learning: A review of the literature. L2 Journal, 14(1), 26–44. https://doi.org/10.5070/L214151760
  • Kempen, G. A. (Ed.). (2012). Natural language generation: New results in artificial intelligence, psychology and linguistics (Vol. 135). Springer Science & Business Media.
  • Knowles, A. M. (2022). Human-AI collaborative writing: Sharing the rhetorical task load. In 2022 IEEE International Professional Communication Conference (ProComm) (pp. 257–261). https://doi.org/10.1109/ProComm53155.2022.00053
  • Lessig, L. (2002). The future of ideas: The fate of the commons in a connected world. Vintage.
  • Liang, W., Yuksekgonul, M., Mao, Y., Wu, E., & Zou, J. (2023). GPT detectors are biased against non-native English writers (arXiv:2304.02819). arXiv. https://doi.org/10.48550/arXiv.2304.02819
  • Liu, D. (2023). ChatGPT is old news: How do we assess in the age of AI writing co-pilots? https://www.linkedin.com/pulse/chatgpt-old-news-how-do-we-assess-age-ai-writing-co-pilots-danny-liu/
  • Lunsford, A. A., & Lunsford, K. J. (2008). “Mistakes are a fact of life”: A national comparative study. College Composition & Communication, 59(4), 781–806.
  • Marler, R. F. (1974). From tale to short story: The emergence of a new genre in the 1850s. American Literature, 46(2), 153–169. https://doi.org/10.2307/2924690
  • Martin, J. R., & Rose, D. (2008). Genre relations: Mapping culture. Equinox.
  • Murtisari, E. T., Widiningrum, R., Branata, J., & Susanto, R. D. (2019). Google translate in language learning: Indonesian EFL students’ attitudes. Journal of Asia TEFL, 16(3), 978.
  • Palmquist, M., Childers, P., Maimon, E., Mullin, J., Rice, R., Russell, A., & Russell, D. R. (2020). Fifty years of WAC: Where have we been? Where are we going? Across the Disciplines, 17(3/4), 5–45. https://doi.org/10.37514/ATD-J.2020.17.3.01
  • Pilegaard, N. H., & Philipsen, H. (2023). A method off the beaten track: Refining creative writing process through practice-led research. Qualitative Studies, 8(1), 162–193. https://doi.org/10.7146/qs.v8i1.136805
  • Poon, A. (2008). Mining the archive: Historical fiction, counter-modernities, and Suchen Christine Lim’s ‘A Bit of Earth’. The Journal of Commonwealth Literature, 43(3), 25–42. https://doi.org/10.1177/0021989408095236
  • Rosenblatt, L. M. (2019). The transactional theory of reading and writing. In Theoretical models and processes of literacy (7th ed., pp. 451–479). Routledge. https://doi.org/10.4324/9781315110592-28
  • Silliman, E. R., Bahr, R. H., & Wilkinson, L. C. (2020). Writing across the academic languages: Introduction. Reading and Writing, 33(1), 1–11. https://doi.org/10.1007/s11145-019-09993-0
  • Stihler, C. (2023). Better sharing for generative AI. Creative Commons. https://creativecommons.org/2023/02/06/better-sharing-for-generative-ai/
  • Te Punga Somerville, A. (2022, August 28). Writing while colonised. E-Tangata. https://e-tangata.co.nz/reflections/writing-while-colonised/
  • Tools such as ChatGPT threaten transparent science: Here are our ground rules for their use. (2023, January 24). Nature, 613. https://doi.org/10.1038/d41586-023-00191-1
  • Vee, A. (2022, May 28). Automating writing: How, why, and for whom? Panel: Trust the Machine: Inviting Algorithms into Our Textual Meaning-Making Process. Rhetoric Society of America.
  • Verhulsdonck, G., Howard, T., & Tham, J. (2021). Investigating the impact of design thinking, content strategy, and artificial intelligence: A “streams” approach for technical communication and user experience. Journal of Technical Writing and Communication, Article 00472816211041951. https://doi.org/10.1177/00472816211041951
  • Vie, S. (2013). A pedagogy of resistance toward plagiarism detection technologies. Computers and Composition, 30(1), 3–15. https://doi.org/10.1016/j.compcom.2013.01.002
  • Whitehead, S. (2011). Reader as consumer: The magazine short story. Short Fiction in Theory & Practice, 1(1), 71–84. https://doi.org/10.1386/fict.1.1.71_1
  • Xu, J. (2022). Proficiency and the use of machine translation: A case study of four Japanese learners. L2 Journal, 14(1), 77–104. https://doi.org/10.5070/l214151328
  • Yancey, K. B. (2004). Made not only in words: Composition in a new key. College Composition and Communication, 56(2), 297–328. https://doi.org/10.2307/4140651
  • Ziebell, N., & Skeat, J. (2023). How is generative AI being used by university students and academics? Semester 1, 2023. Melbourne Graduate School of Education, University of Melbourne.