ABSTRACT
OpenAI’s ChatGPT is a large language model (LLM) that excels at generating text and public controversy. Upon its release, many marveled at its ability to author intelligible and generically responsible texts (Herman). Writing about his students’ experiences using artificial intelligence (AI) writing assistants, S. Scott Graham remarks that the results were “consistently mediocre—and usually quite obvious in their fabrication.” Why might this be true? How can an LLM succeed in some respects and fail in others? We argue that the discrepant reactions to human and AI rhetoric are a question of genre, specifically that AI rhetoric is only generic; AI rhetoric represents a new enactment of “writing degree zero” (Barthes) that is disengaged from immediate rhetorical situations and knowledge bases. AI text generators (currently) have a more difficult time simulating the positioned perspectives that human writers bring to situations and communicate to audiences through their genre usage. Drawing on the work of Bakhtin, we treat this problem as a question of generic form and audience addressivity. We describe the interplay of form and addressivity as genre signaling and offer it as a construct for the analysis of AI rhetoric and genre as a cultural form (Miller). Genre signaling (Hart-Davidson and Omizo) describes a feature of communicative behavior as it occurs over time that can help both humans and machines evaluate written discourse as it exhibits certain stabilized formal features. When texts contain specific genre signals at expected frequencies and intensities, they may be recognized as being generally accurate, reliable, and trustworthy. Without these signals, a text with a similar topical focus might fail to be taken as credible or useful. In this essay we propose to quantify genre signaling based on three measures: (1) stability, (2) frequency, and (3) periodicity.
Disclosure Statement
No potential conflict of interest was reported by the author(s).
Notes
1 This work aligns with Swarts’s (41–45) corpus study of how technical writers “qualify” audiences by signaling intratopic or intertopic information in topic-based writing through restrictive “that” and “which” clauses. Swarts finds that “which” regularly introduces topic-supporting information found within the topic. The use of “that,” meanwhile, indicates associated information, not directly included in the passage, that audiences need in order to implement a task. The use of restrictive “that” clauses, then, gestures at the wider ecosystem of information that would “qualify” readers to perform tasks.
2 See also Janzing et al.; Molnar; Slack et al.; Sundararajan and Najmi.
3 Precision refers to the proportion of correct decisions to the total number of decisions per class. Recall refers to the proportion of correct decisions to the total number of available correct decisions per class. F1-scores refer to the harmonic mean of precision and recall scores and indicate a balance between the two measures.
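For readers less familiar with these metrics, the three definitions in this note can be illustrated computationally. The following is a minimal sketch, not drawn from the authors’ study; the function name and example labels are hypothetical:

```python
def precision_recall_f1(y_true, y_pred, target_class):
    """Per-class precision, recall, and F1 as defined in this note.

    precision = correct decisions for the class / all decisions for the class
    recall    = correct decisions for the class / all available correct decisions
    f1        = harmonic mean of precision and recall
    """
    # True positives: predicted the class, and the prediction was correct.
    tp = sum(1 for t, p in zip(y_true, y_pred)
             if p == target_class and t == target_class)
    # False positives: predicted the class, but the true label differed.
    fp = sum(1 for t, p in zip(y_true, y_pred)
             if p == target_class and t != target_class)
    # False negatives: the class was the true label, but was not predicted.
    fn = sum(1 for t, p in zip(y_true, y_pred)
             if p != target_class and t == target_class)

    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1
```

For example, if a classifier labels four texts as `["a", "b", "b", "a"]` against true labels `["a", "a", "b", "a"]`, then for class `"a"` precision is 1.0 (both “a” decisions were correct), recall is 2/3 (two of three actual “a” texts were found), and F1 is 0.8.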
4 ProjectAhead and PredictedFuture LATS are characterized by their future-oriented verb tenses or prepositional phrases, such as “in order to” (see Kaufer and Ishizaki, 1998).