
Rhetoric of/with AI: An Introduction

It’s hard to know what to think or how to feel about artificial intelligence (AI). Everywhere you look, someone is either touting its revolutionary potential or declaring it to be the end of all things good and just. Exciting new innovations like AlphaFold’s use of AI to solve the longstanding and intractable protein folding problem compete for our attention with harrowing tales of inequitable predictive policing. Contrasting soundbites and social media posts either herald an exciting new era or warn of our eventual subjugation to the machines. On the one hand, Abraham Verghese argues in the foreword to Eric Topol’s Deep Medicine that “we are living in the Fourth Industrial Age, a revolution so profound that it may not be enough to compare it to the invention of steam power, the railroads, electricity, mass production, or even the computer age in the magnitude of change it will bring” (Verghese, xx). On the other hand, one of the leading computer scientists and AI developers of our time, Geoffrey Hinton, asserts, without irony, that AI “might take over” (qtd. in Pelley). Starkly contrasting positions like these evidence the substantial and wide-reaching impacts that AI already has on our lives, serving as rhetorical bookends that frame a deliberative crossroads in which we all find ourselves. Between the utopian and dystopian rhetorics about AI lie new technologies that have catalyzed significant disruptions in social media, criminal justice, medicine, finance, public health, and elsewhere. Likewise, we are currently witnessing parallel transformations in a wide range of academic disciplines that take up AI as a new object of critique and/or use AI to develop novel research methodologies. Furthermore, recent developments in AI-generated text, like in Meta’s Galactica or OpenAI’s ChatGPT, pose significant challenges to traditional concepts of human communication, rhetorical agency, and authorship, not to mention practical challenges for education.
Developments like these have raised the deliberative stakes for assessing and determining if, how, when, or where AI technologies ought to impact our lives.

The collected essays presented here aim to help rhetoricians take stock of these dizzying and disparate developments. Organized around the sometimes complementary and sometimes competing twin themes of “Rhetoric of/with AI,” they help provide readers of Rhetoric Society Quarterly with a broad slice of rhetorical approaches to our new technological landscape. Specifically, the special issue aims to catalyze a conversation that brings together the different emerging tendrils of rhetoric that may be affected by AI. As the special issue theme indicates, we highlight the two primary areas of emerging inquiry: (1) rhetorical scholarship that takes up AI as object of criticism, and (2) rhetorical scholarship that uses AI methodologically. In bringing these two areas of inquiry together in a single issue, we aim to surface the myriad ways that rhetoricians might choose to attend to AI or with AI through their scholarship. By way of introduction, we provide a brief overview of historical scholarship in both rhetoric of AI and rhetoric with AI. Subsequently, we detail future horizons of inquiry in these areas and the individual contributions of this special issue. Finally, the introduction closes with some recommendations for how both rhetoric of AI and rhetoric with AI might be appropriately folded into the field more broadly and into methods curricula specifically.

Rhetoric of AI

Although never a central project of inquiry, rhetorical scholarship has engaged AI for decades. In the late 1970s and early 1980s, as cognitive science was coming into its own as a discipline, there were several efforts to explore what the emerging insights of this new field might contribute to rhetorical theory. In 1978, for example, Robert T. Craig reviewed several key works of cognitive science for the Quarterly Journal of Speech. At the center of the essay are Craig’s efforts to explore debates in cognitive science around whether AI can serve as a useful model of human cognition. In a similar vein, a 1984 Celeste Condit contribution to then-ongoing debates in rhetorical relativism grounded part of its argument in Umberto Eco’s rumination on AI, cognition, and networks of human language (Railsback 354). While the use of AI as a model of either human intelligence or rhetoric has largely fallen out of fashion, rhetorical engagement with the spaces that AI permeates persists. Among the most vibrant areas of attention is rhetorical research on social media and online communication (Carter and Alford; Cerja et al.; Gibbons; Hooker; Mercieca; Kong and Ding; Wang). A second popular area of inquiry centers around the AI-powered devices that make up personal health monitoring, digital assistants, and the broader Internet of Things (Lawrence; Pfister; Teston; Verhulsdonck and Tham). And, of course, rhetorical research related to AI engages all manner of specialty domains, including diagnostic medicine (Lindsley), counterintelligence and surveillance (Pollak and Bhardwaj), and AI-based text generation (Vee).

Here we describe much of this scholarship as engaged with “spaces of AI,” because AI has seldom been the focal object of inquiry in rhetoric. In studies of social media and other AI-driven technologies, “AI” is frequently treated as one constituent part among the many individual, social, and technological features that compose our digital landscapes. For example, Jennifer Mercieca’s analysis of online demagoguery outlines how “dangerous demagogues attempt to distort public sentiment through bots, manipulating algorithms, and computational propaganda” (Mercieca 271). Here it is the human actors that remain center stage, and the AIs of social media merely serve as tools of the demagogues. Similarly, Zhaozhe Wang invokes Facebook’s behind-the-screen workings through a discussion of how

affects are easily digitally manufactured through computer algorithms: share button, tag button, like button (or Facebook’s recent upgrade that replaced the “like button” with “reaction button” that allows users to react to posts with different emotions: “love,” “haha,” “wow,” “sad,” “angry”), emojis, stickers, gifs, short video clips, and enlarged pictures. (Wang 246)

Here the algorithm is a vector in efforts to understand social media rhetorics, but not the primary object of study. Until very recently, when AI technologies have become more central to rhetorical inquiry, they have often been folded into the theoretical resources of rhetorical inquiry. For example, Michelle Gibbons describes how search engine optimization prompts rhetors to construe the algorithm as audience:

The textual winks of search engine optimization are also generative, though differently so, giving rise to the algorithm as audience. That is to say, persona 4.0 is the machine as covert rhetorical audience, constituted as such by the textual wink. Via the wink, the machine algorithm is constituted as something other than process, formula, or set of rules; it takes shape as rhetorical audience, covertly addressed. (Gibbons 54)

Ultimately, this scholarship has done a lot to help rhetoricians better understand the role and function of AI in traditional domains of rhetorical inquiry. We now have a more sophisticated understanding of online rhetorical situations and the many roles that algorithms serve in those situations.

However, as AI has increasingly become a topic of popular conversation and a contested area of technological development, there have been broad interdisciplinary calls to center AI, itself, as the principal object of inquiry. In the wake of popular contentions and scholarly debates about AI, academia and investigative journalism have seen the emergence of a new interdisciplinary and often public-facing “critical algorithm studies” devoted to investigating the many dangers of AI. This vibrant area of research and advocacy has launched a number of widely read monographs, including Weapons of Math Destruction (O’Neil), Algorithms of Oppression (Noble), Race after Technology (Benjamin), and Artificial Unintelligence (Broussard). Broadly, the work of critical algorithm studies has been devoted to studying the significant social, economic, and environmental harms of these new technologies (Crawford; Mullaney et al.). Research in this area has documented how AI and its rhetorics accelerate inequitable policing (Benjamin; Brayne), further linguistic injustice in technology (Lawrence), lead to racist and misogynistic search engine results (Noble), damage the environment (Crawford), and exacerbate health disparities (Graham, The Doctor and the Algorithm).

As the citations above illustrate, rhetoricians, too, have become a part of this interdisciplinary initiative. Halcyon Lawrence’s “Siri Disciplines” is among the most direct contributions to critical algorithm studies, leveraging insights from rhetorically informed technical communication as part of the prominent interdisciplinary collection Your Computer Is on Fire (Lawrence). Similarly, Calvin Pollak and Savaini Bhardwaj’s analysis of social justice and military intelligence technologies centers and extends the work of Safiya Noble. As they write,

Noble’s framework can be extended to technological growth and innovation in any institutional setting, be that commercial, governmental, or academic. Notable examples include, “Cambridge Analytica, Google employee protests over military contracts, and biased algorithms in Amazon’s hiring processes” (Fiesler et al. 1; Pollak and Bhardwaj 3–4).

Damien Pfister and S. Scott Graham, likewise, draw significantly on research in critical algorithm studies to make their arguments about Google Glass and health AI, respectively.

While critical algorithm studies has become a substantial influence on how our discipline writes about rhetorics of AI, it is not the only vehicle by which rhetoricians argue for addressing AI as a primary object of inquiry. AI is increasingly identified as something that must be brought into rhetorical pedagogy, both as audience and as coauthor. As Kong and Ding argue, “To help students write for algorithmic audiences, then, technical and professional communication (TPC) instructors can help students ‘unbox or demystify’ algorithms by ‘making them objects of study’ (Gallagher)” (Kong and Ding 55). Likewise, as Annette Vee argues regarding ChatGPT, “We will need to learn to teach by integrating this technology into the way we teach writing” (180).

These lines of inquiry on rhetorics of AI developed from critical algorithm studies and rhetorical pedagogy can contribute productively to a number of existing initiatives in rhetoric. Because AI now touches on almost all areas of human activity, rhetorics of AI can help contribute to longstanding discussions in rhetoric of science, rhetoric of health and medicine, cultural rhetorics, public address, writing studies, ideological rhetoric, and many other areas. But studies on the rhetoric of AI can also offer many insights to the broader, interdisciplinary study of AI itself. In particular, rhetorical methodologies are ideally suited to exploring and documenting arguments about the putative benefits of the technology, discursive tensions over the proper place of AI in civic institutions, and how cultural forces shape popular understandings of AI. For example, the study of AI rhetorics has become a preoccupation of popular-facing content like Mystery AI Hype Theater 3000, a webshow that documents episodes of extravagant promotional language around AI. Rhetoricians, of course, have deep theoretical and methodological resources that can make robust contributions to conversations about promotional rhetorics and extravagant doomerism. Similarly, rhetoric might apply its theoretical and methodological repertoire to AI criticism itself. Substantial bodies of rhetorical scholarship devoted to troubling the idea that intentional subjects are required for communication (Campbell; Graham, “Agency and the Rhetoric of Medicine”; Greene; C. R. Miller) complicate debates in critical algorithm studies about the role of agency in communication. The influential “On the Dangers of Stochastic Parrots” article, for example, argues emphatically that generative AI is not communicating because the model cannot be understood as intentional subject (Bender et al.).
And certainly, the composition wing of the discipline also has much to contribute to discussions of generative AI and its use in writing education. There is much underinformed punditry about writing education (typically by those who write, but do not teach writing) and predatory sales pitches for technologies that improve writing and writing assessment (typically by those who neither write nor teach writing). Rhetoricians who study composition and writing studies have a wealth of prior knowledge about writing classrooms and developmental writing that must be centered in these conversations lest we see facile prohibitions and surveillance EdTech run roughshod over our carefully designed pedagogical spaces.

Rhetoric with AI

The same advances in AI technologies that raise concerns for critical algorithm studies are also driving promising new research in computational digital humanities and social sciences, including the burgeoning subarea of computational rhetoric. The parallel emergence of critical algorithm studies and AI-driven computational rhetoric presents the discipline with something of a conundrum. On the one hand, AI represents a clear and present danger to society; yet it may also catalyze exciting new ways of conducting research in existing areas of rhetorical scholarship. For decades, scholars of rhetoric have developed frameworks for computationally identifying and classifying rhetorically salient text fragments in natural language corpora (see, e.g., Dubremetz and Nivre; Gallagher et al.; Harris and Di Marco; Hart, “Redeveloping DICTION”; Hart, Verbal Style and the Presidency; Ishizaki and Kaufer; Majdik and Wynn; Omizo and Hart-Davidson) and have deployed computational methods to analyze extremist political language (Mehran et al.), patient marginalization in policy discourses (Graham and Hopkins), genre development and stabilization (Graham et al.; Hart, “Genre and Automated Text Analysis”; Larson et al.), and the framing and rhetorical treatment of science (Taylor), opioids (Graham, “The Opioid Epidemic”), climate change (Koteyko et al.; Majdik; Tillery and Bloomfield), and feminine stylistic patterns in political discourse (Wäckerle and Castanho Silva).

The motivation to research rhetoric with computational methods often is driven by research questions that require analyzing rhetorical practices at the scale of large communicative/textual corpora (where questions of interest to rhetoric can be answered by comparing synchronic textual patterns in or across text corpora or by mapping diachronic changes to rhetorically salient patterns) or that seek to map the emergence of rhetorical norms and practices in discourse systems (where rhetorical structures flow through circulation [Gries and Brooke; Stuckey], across the boundaries of discrete exigencies [Edbauer], or as self-generating [Derkatch; Keränen; Majdik] processes and practices). Some of the methods developed to answer these kinds of questions require deterministic modeling of language features: experts in the field create reference corpora, dictionaries, syntaxes, or tag sets that allow researchers to deduce new insights from large corpora of targeted texts, supporting the systematic analysis of large and/or longitudinal datasets. In the field of rhetoric, projects like DocuScope and DICTION provide a framework for conducting sophisticated analyses of rhetorical moves and tone across large corpora of texts, and the development of rhetorical figure detection systems (Harris, Di Marco, Ruan, et al.) offers a computationally encoded syntax for the automated classification of complex rhetorical figures (Harris, Di Marco, Mehlenbacher, et al.) and tropes (Shutova et al.).
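The deterministic, dictionary-based approach described above can be sketched in miniature. The lexicon and categories below are toy inventions for illustration only, not an actual DICTION or DocuScope tag set; they simply show how expert-crafted word lists support fully reproducible classification across a corpus.

```python
# A toy deterministic tagger: hand-crafted lexicons map words to
# rhetorical categories, and classification is a simple set lookup.
HEDGE_LEXICON = {"might", "may", "perhaps", "possibly", "suggests"}
CERTAINTY_LEXICON = {"certainly", "clearly", "undoubtedly", "must"}

def tag_sentence(sentence: str) -> dict:
    """Count lexicon hits per rhetorical category in one sentence."""
    words = {w.strip(".,;:!?").lower() for w in sentence.split()}
    return {
        "hedging": len(words & HEDGE_LEXICON),
        "certainty": len(words & CERTAINTY_LEXICON),
    }

corpus = [
    "The data suggests the policy may perhaps work.",
    "The policy will certainly and undoubtedly succeed.",
]
for s in corpus:
    print(tag_sentence(s))
```

Because the lexicon is fixed, the same input always yields the same output, which is precisely the property that makes such tools reliable for large-scale longitudinal analysis and, at the same time, blind to anything the experts did not anticipate.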

Advancements in AI technologies over the last few decades offer new ways of answering such research questions by shifting to nondeterministic ways of modeling language, using algorithmic processes that allow the same language model to generate a varied—sometimes even unpredictable—set of outputs. The development of word vector embeddings capitalized on the increasing availability of cheap storage and processing power to relate words to each other in all of their complex semantic entanglements, creating models of language that reflected deep semantic linkages between words. Topic modeling algorithms were developed to classify documents based on likely topics contained in them. These techniques required no human supervision or intervention: they replaced the work of hand-crafting models for specific natural language processing tasks with algorithms that could nondeterministically detect patterns, and from those patterns model relationships between words. Research on artificial neural networks introduced concepts like gradient descent and nonlinear activation functions to help models learn from and adapt to mistakes; recurrent neural networks and Long Short-Term Memory (LSTM) networks improved models’ ability to process words in sequence (rather than in isolation), taking a step toward processing language more like how it is naturally used; and finally, transformers added additional depth of context-awareness through an “attention mechanism” (Vaswani et al. 2) that would end up launching this latest phase of major developments in AI (for an authoritative and deep history of deep learning, see Schmidhuber).
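The core intuition behind word vector embeddings can be sketched with a toy example: words become points in a vector space, and semantic relatedness becomes measurable geometry. The three-dimensional vectors below are invented for illustration; real embeddings (e.g., word2vec or those inside a transformer) are learned from corpora and have hundreds of dimensions.

```python
import math

# Invented toy embeddings: semantically related words get nearby vectors.
vectors = {
    "rhetoric":   [0.90, 0.80, 0.10],
    "persuasion": [0.85, 0.75, 0.20],
    "protein":    [0.10, 0.20, 0.95],
}

def cosine(u, v):
    """Cosine similarity: 1.0 means same direction, 0.0 means unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

print(cosine(vectors["rhetoric"], vectors["persuasion"]))  # high (near 1)
print(cosine(vectors["rhetoric"], vectors["protein"]))     # low
```

In a learned embedding space, this same similarity measure is what lets a model treat “rhetoric” and “persuasion” as near neighbors without any hand-crafted dictionary saying so.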

For computational approaches to researching rhetoric, which milestone along the developmental arc of machine and deep learning frameworks marks the boundary between older natural language processing (NLP) tools and AI is not particularly important. What is important is that advances in nondeterministic language modeling have opened new ways to study rhetorical texts: using topic models to map rhetorical patterns of climate change denial (Tillery and Bloomfield) or track disciplinary changes made manifest in a corpus of dissertations (B. Miller); building language models with LSTM to identify rhetorical moves and styles (Şeref et al.), with semantic word embeddings to reliably classify a range of appeals to expertise (Majdik and Wynn), and with sentence embeddings to capture rhetorical functions in texts (Sugimoto and Aizawa); using bidirectional transformer models to research the relationship between (in)civility and democratic engagement (Ballard et al.) or to investigate how patients are marginalized in medical policy discourse (Graham and Hopkins); and developing methods on top of topic modeling algorithms for tracking specifically rhetorical topoi (Omizo). Because of the design of their training processes—primarily, the use of multiple layers with nonlinear activation functions in a neural network architecture—the language models that drive these methodologies usually have high accuracy for processing complex language features. In Antoniak et al., for example, a transformer-based language model outperforms “classical models like SVMs or logistic regression” (n. pag.) for narrative classification tasks (for other examples of accuracy gains from AI-based language models, see Graham and Hopkins; Majdik and Wynn; Seol et al.; Şeref et al.).

While our discussion of rhetoric with AI here centers technological advancements in the field of AI, it is important to note that many of the traditional methodological practices of rhetorical inquiry remain central to computational rhetoric. A supervised machine learning approach involves researchers using their rhetorical insights to annotate textual data so that it might serve as a training set. Unsupervised techniques like topic modeling provide insights about corpora in the form of common words that probabilistically represent the range of topics in the corpus, but subject-matter expertise in rhetoric is still required to interpret topic distributions and analyze them in the context of research questions salient to advancing specifically rhetorical understandings of textual corpora. A typical workflow for researching textual corpora with more advanced AI systems pairs the unsupervised algorithms (which entail the aforementioned benefit of emerging latent patterns) with supervised techniques (seeding topic models; fine-tuning language models with human-coded data), allowing fine-grained control over model outputs without the need to create a fully deterministic model logic. Pragmatically, this means that relatively few expert-annotated text samples (anywhere from a few hundred to a few thousand) can create language models that classify complex rhetorical text elements with a high degree of accuracy (Antoniak et al.; Majdik et al.). Finally, there is some evidence to suggest that language models developed with insights from rhetorical theory and methods can outperform those that make use of off-the-shelf NLP techniques (Graham and Hopkins) or generic transformer-based language models (Majdik and Wynn).
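The supervised portion of this workflow can be sketched in miniature. In the sketch below, a toy Naive Bayes classifier stands in for the fine-tuned transformer an actual study would use, and the annotated examples and the labels “urgency” and “evidence” are invented for illustration; the point is only that a handful of expert-labeled samples is what teaches the model a rhetorical distinction.

```python
import math
from collections import Counter, defaultdict

# Researcher-annotated training samples (invented for illustration).
annotated = [
    ("we must act now before it is too late", "urgency"),
    ("time is running out and we must respond", "urgency"),
    ("the evidence shows a steady measurable trend", "evidence"),
    ("the data demonstrate a clear measurable effect", "evidence"),
]

# Train: per-label word counts, used later with add-one smoothing.
counts = defaultdict(Counter)
label_totals = Counter()
vocab = set()
for text, label in annotated:
    words = text.split()
    counts[label].update(words)
    label_totals[label] += 1
    vocab.update(words)

def classify(text):
    """Pick the label with the highest (log) Naive Bayes score."""
    scores = {}
    for label in counts:
        score = math.log(label_totals[label] / sum(label_totals.values()))
        total = sum(counts[label].values())
        for w in text.split():
            score += math.log((counts[label][w] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

print(classify("we must act before time runs out"))  # expected: urgency
```

A real fine-tuning pipeline replaces the word-count model with a pretrained transformer, but the division of labor is the same: rhetorical expertise produces the annotations, and the algorithm generalizes from them.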

Ultimately, what emerges next from this rapidly changing spectrum of methods and models remains to be seen. Beyond critiquing and deploying them, can we also learn something about rhetoric from language models, which are trained on vast amounts of documents containing rhetorical moves? Will the presence of AI systems disrupt—or advance—the long-standing commitments of our field to questions of equity and social justice? Should scholars of rhetoric be involved in developing large language models, and if so, how? In this special issue, scholars from different traditions of rhetoric begin to answer questions like these, elucidating what the recent moves toward AI systems as resources for generating communications, tools for discovery, disruptors of institutional practices, and above all, as models of language, mean for our field of study.

Contributions

The sometimes conflicting and sometimes coordinated efforts around the intersections of rhetorical scholarship on AI and research on rhetoric with AI present a critical opportunity for productive engagement about AI broadly and its role in the future of rhetoric. To that end, this special issue provides an important opportunity for reciprocal conversation among scholars critiquing AI, using AI, and using AI to critique. The contributions to this issue showcase the latest research at these intersections, bringing together contributors from a broad range of rhetorical homes, including those in departments of English, departments of communication, and departments of rhetoric. The capacious and multidisciplinary conversation in this special issue is designed to catalyze robust and enduring conversation about rhetoric and AI, both within Rhetoric Society Quarterly and beyond. To support this wide-ranging conversation, the collected contributions embody scholarship in both the rhetoric of AI and rhetoric with AI. In what follows, we briefly summarize the subsequent articles, which are organized to begin with rhetoric of AI and move on to rhetoric with AI.

Our first article under the rubric of rhetoric of AI is Atilla Hallsby’s “A Copious Void: Rhetoric as Artificial Intelligence 1.0.” This essay prompts a necessary, deep (re)consideration of the etymological, ontological, and formal dimensions of rhetoric in the face of emerging AI. In so doing, Hallsby invites rich conversation between critical algorithm studies and rhetorical theory through a thoughtful rumination on the origin of “stochastic” as a rhetorical term having to do with everyday occurrences. This analysis showcases one of the many ways that the rhetorical is tacitly embedded in AI technologies and critique. Second, Kem-Laurin Lubin and Randy Allen Harris’s “Sex after Technology: The Rhetoric of Health Monitoring Apps and the Reversal of Roe v. Wade” explores some of the darkest potentials of AI-driven technology through investigating how AI technologies have been used to support state monitoring and criminalization of pregnant people. Lubin and Harris’s article leverages insights from critical algorithm studies to develop “algorithmic ethopoeia,” a rhetorical framework for conceptualizing the computational representations of individuals through algorithmically mining human data. Finally, Emma Bedor Hiland’s “The Rhetorical Possibilities of Communicative Time Travel” explores a fascinating rhetorical case study where an artist-developer trains an interactive AI from her childhood diary entries. Bedor Hiland explores how subsequent engagement with this system fosters a kind of communicative time travel that allows one to experience a feeling somewhat like interacting with a former self. Ultimately, Bedor Hiland’s analysis offers important new insights for rhetorical understandings of memory and authenticity.

Lubin and Harris, Bedor Hiland, and Hallsby all assert, in very different ways, that there are deep and often hidden affinities between the algorithmically circumscribed conceptualizations of language that drive AI technologies and the ways that the discipline of rhetoric has long understood language. The final two essays in this collection surface those affinities and demonstrate how AI-based methods can help advance theories of rhetoric and analyses of rhetorical systems. Ryan Omizo and William Hart-Davidson’s paper assesses AI-generated writing from a genre perspective. Contrasting GPT-2-generated examples of a specific rhetorical genre with human-generated instances, they trace genre differences between human and AI writing through the technical implementation of language in LLMs. Finally, Misti H. Yang and Zoltan P. Majdik deploy a fine-tuned transformer-based language model to map a complex rhetorical structure in and across two distinct discourse sites. Their use of a custom-tuned AI-based classifier helps them find evidence of rhetorical contagion, allowing them to demonstrate how rhetorical patterns or structures can move over time from one site of discursive engagement to another. Their paper ends with a reflection on how researching rhetoric with AI always also requires being attuned to the kinds of rhetorics of AI developed in the first three essays.

Finally, in “This Is Not a Response,” Casey Boyle prompts readers to reconsider and reimagine the very nature of “prompting” both technologically and throughout broader discourses on AI, language, and rhetoric. In so doing, he reminds us that in the complex mixtures of social action, the line between prompt and response is never clear, and that we—humans—are prompted by ChatBot responses to our prompts in an infinite ouroboric recursion. Ultimately, the essay thoughtfully demonstrates these insights through an iterative human-human prompt cycle where the insights of this special issue’s collected authors prompt Boyle to prompt us to reconsider and reimagine AI, artifice, intelligence, and communication.

The Future of Rhetoric of/with AI

Certainly, not all rhetoricians will embrace the new communication practices and methodologies made available by AI technologies, either as objects of critical analysis or as ways of conducting our research. But as editors of this special issue, we want to draw attention to the growing footprint of computational rhetoric as a methodology and AI technologies as objects of cultural, political, social, and ethical analysis in academia to strongly suggest that there is a need for our field to develop AI literacies and to start building those literacies into graduate methods curricula. For better and worse, AI has gained a foothold in how we engage with each other. Thus, it will be increasingly important for scholars in the field to be able to understand how AI technologies model language, thereby affect how we use language, and thereby raise new sites of research and methods for researching rhetoric. These tightly interconnected spheres of human language-use and computational modeling of language introduce new opportunities for research in rhetoric, but also limitations that can lead research to go awry. As our interactions with and through large language models emerge as increasingly frequent objects of our research, and as studies driven by machine/deep learning find their way into traditional rhetoric journals, readers of those journals—and peer reviewers—will need to understand how the many facets of rhetoric—as a field of study and as a communicative practice; its patterns, moves, figures, semantic turns, ideological impacts, and circulation—are implemented in, reflected by, and refracted through AI technologies.

In our experience, AI literacy for conducting research on or with AI about rhetoric should include some understanding of the following key concepts, and an appreciation of their affordances and limitations for research in rhetoric:

  • The raw ingredients for AI development: training data, evaluation data, learning algorithms, models, and model outputs.

  • Different approaches to AI development: supervised machine learning, unsupervised machine learning, deep learning, reinforcement learning.

  • Common ways of measuring model performance: precision, recall, F1, area under the curve (AUC). These measures are not terribly different from common measures of interrater reliability (e.g., Cohen’s κ) used in some wings of the discipline and can therefore be scaffolded into methods pedagogy relatively easily.

  • The many ways AI can go wrong: bias, inequity, overfitting, underfitting, use of proxy variables.
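
To make the connection to familiar interrater statistics concrete, the sketch below computes precision, recall, F1, and Cohen’s κ from the same paired labels. The label sequences are invented for illustration; the point is that all of these measures derive from the same confusion counts already used when checking agreement between human coders.

```python
def confusion(true, pred, positive):
    """Confusion counts for one positive class."""
    tp = sum(t == positive and p == positive for t, p in zip(true, pred))
    fp = sum(t != positive and p == positive for t, p in zip(true, pred))
    fn = sum(t == positive and p != positive for t, p in zip(true, pred))
    tn = sum(t != positive and p != positive for t, p in zip(true, pred))
    return tp, fp, fn, tn

def prf1(true, pred, positive="hedge"):
    """Precision, recall, and F1 for the positive class."""
    tp, fp, fn, _ = confusion(true, pred, positive)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

def cohens_kappa(a, b):
    """Agreement between two label sequences, corrected for chance."""
    n = len(a)
    labels = set(a) | set(b)
    observed = sum(x == y for x, y in zip(a, b)) / n
    expected = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (observed - expected) / (1 - expected)

true = ["hedge", "hedge", "other", "other", "hedge", "other"]  # human codes
pred = ["hedge", "other", "other", "other", "hedge", "hedge"]  # model output
print(prf1(true, pred))         # precision, recall, F1 for "hedge"
print(cohens_kappa(true, pred))
```

Treating the model as a second coder in this way is one practical path for scaffolding these measures into existing methods pedagogy.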

Developing these components of critical AI literacy and incorporating them into methods curricula in the discipline will no doubt be a challenging shift.¹ Nevertheless, we think the benefits for both rhetoric of AI and rhetoric with AI are substantial. As these technologies increasingly permeate our daily lives, a stronger appreciation for their foundational principles can better prepare us and our students to be savvy users, researchers, and critics of AI technologies.

While we don’t have room to explain or demonstrate all these concepts in this introduction, we wish to illustrate how a stronger foundation in AI principles can animate critical engagement with these emerging technologies. ChatGPT recently received a lot of attention for scoring highly on the US Medical Licensing Exam (USMLE), the test medical students must pass to become licensed physicians (Kung et al.), furthering the technology’s reputation as a game-changer in AI. An understanding of key concepts in AI development—specifically, about model training—and an appreciation for the nuanced affordances of different training approaches prompts immediate skepticism about these results and can support—from a position of technical understanding—a critical argument that this event is little more than empty hype. AI models “learn” (or, more precisely, infer) text patterns from their training data: the set of documents from which deep learning algorithms generate probabilistic outputs like classifications of text spans or next-word predictions. An AI language model should be considered as performing well when it can execute these tasks on new data that were not part of its training process. ChatGPT was developed by training the underlying model on a massive swath of data scraped from the internet. This includes public domain, copyright-protected, and illicit material. USMLE questions and practice questions are almost certainly included in the training data, which suggests—and could buttress an argument about AI hype—that this high score may be more a result of memorization than true learning.
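The memorization worry can be illustrated with a deliberately trivial sketch: a “model” that merely looks up its training items scores perfectly on contaminated test questions and at chance on genuinely held-out ones. All question-answer pairs below are invented; the sketch shows only why evaluation on unseen data is the relevant standard.

```python
# Invented training set: the "model" is nothing but a lookup table.
training_data = {
    "q1": "answer_a",
    "q2": "answer_b",
    "q3": "answer_c",
}

def memorizing_model(question):
    """Return the memorized answer if seen in training, else a fixed guess."""
    return training_data.get(question, "answer_a")

seen_exam = ["q1", "q2", "q3"]                        # leaked into training
unseen_exam = {"q4": "answer_d", "q5": "answer_a"}    # true held-out items

seen_score = sum(
    memorizing_model(q) == training_data[q] for q in seen_exam
) / len(seen_exam)
unseen_score = sum(
    memorizing_model(q) == a for q, a in unseen_exam.items()
) / len(unseen_exam)

print(seen_score)    # 1.0: perfect on contaminated items
print(unseen_score)  # 0.5: chance-level on held-out items
```

A high benchmark score is only meaningful to the extent that the benchmark items were verifiably absent from the training data, which is exactly what cannot be verified for models trained on undisclosed internet-scale corpora.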

The list above showcases a degree of detail in critical AI literacy that we believe is essential not only for those who wish to use AI technologies in computational rhetoric, but also—maybe even more so—for scholars who critique and critically engage with AI systems. At the core of our special issue is our belief that academic disciplines advance through robust dissoi logoi. We argued at the beginning of our introduction that we find ourselves at a deliberative crossroads, not only as a society having to determine how to integrate emerging AI technologies into our lives and politics, but also as a field faced with technologies that model our central object of study—rhetoric—in ways that are simultaneously recognizable yet unfamiliar, potentially productive yet also deeply problematic. Through our selection of articles, we hope to show our field’s existing strength in productively managing such tensions, the need for those who embrace new technologies as objects of study or ways of conducting research to maintain a critical stance toward them, and how critique of new technologies is more incisive when it is offered on the technology’s own terms.

Disclosure Statement

No potential conflict of interest was reported by the author(s).

Notes

1 For accessible entrées into these subjects, we would recommend Meredith Broussard’s Artificial Unintelligence, Cathy O’Neil’s Weapons of Math Destruction, and Janelle Shane’s You Look Like a Thing and I Love You.

Works Cited

  • Antoniak, Maria, et al. “Where Do People Tell Stories Online? Story Detection across Online Communities.” arXiv:2311.09675, arXiv. 16 Nov. 2023. http://arxiv.org/abs/2311.09675.
  • Ballard, Andrew O., et al. “Incivility in Congressional Tweets.” American Politics Research, vol. 50, no. 6, 2022, pp. 769–80. SAGE Journals. doi:10.1177/1532673X221109516.
  • Bender, Emily M., et al. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, Association for Computing Machinery, ACM Digital Library, 2021, pp. 610–23. doi:10.1145/3442188.3445922.
  • Benjamin, Ruha. Race after Technology: Abolitionist Tools for the New Jim Code. John Wiley & Sons, 2019.
  • Brayne, Sarah. Predict and Surveil: Data, Discretion, and the Future of Policing, Oxford UP, 2020.
  • Broussard, Meredith. Artificial Unintelligence: How Computers Misunderstand the World, MIT P, 2018.
  • Campbell, Karlyn Kohrs. “Agency: Promiscuous and Protean.” Communication and Critical/Cultural Studies, vol. 2, no. 1, 2005, pp. 1–19. doi:10.1080/1479142042000332134.
  • Carter, Jonathan S., and Caddie Alford. “Adoxastic Publics: Facebook and the Loss of Civic Strangeness.” Quarterly Journal of Speech, vol. 109, no. 2, 2023, pp. 176–98. doi:10.1080/00335630.2022.2139856.
  • Cerja, Cecilia, et al. “Misogynoir and the Public Woman: Analog and Digital Sexualization of Women in Public from the Civil War to the Era of Kamala Harris.” Quarterly Journal of Speech, 2023, pp. 1–27.
  • Craig, Robert T. “Cognitive Science: A New Approach to Cognition, Language, and Communication.” Quarterly Journal of Speech, vol. 64, no. 4, Dec. 1978, pp. 439–50. doi:10.1080/00335637809383449.
  • Crawford, Kate. Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, Yale UP, 2021.
  • Derkatch, Colleen. “The Self-Generating Language of Wellness and Natural Health.” Rhetoric of Health & Medicine, vol. 1, no. 1–2, 2018, pp. 132–60. doi:10.5744/rhm.2018.1009.
  • Dubremetz, Marie, and Joakim Nivre. “Rhetorical Figure Detection: Chiasmus, Epanaphora, Epiphora.” Frontiers in Digital Humanities, vol. 5, 2018, pp. 1–16. doi:10.3389/fdigh.2018.00010.
  • Edbauer, Jenny. “Unframing Models of Public Distribution: From Rhetorical Situation to Rhetorical Ecologies.” Rhetoric Society Quarterly, vol. 35, no. 4, 2005, pp. 5–24. doi:10.1080/02773940509391320.
  • Gallagher, John R., et al. “Peering into the Internet Abyss: Using Big Data Audience Analysis to Understand Online Comments.” Technical Communication Quarterly, vol. 29, no. 2, 2020, pp. 155–73. doi:10.1080/10572252.2019.1634766.
  • Gibbons, Michelle G. “Persona 4.0.” Quarterly Journal of Speech, vol. 107, no. 1, 2021, pp. 49–72. doi:10.1080/00335630.2020.1863454.
  • Graham, S. Scott. “Agency and the Rhetoric of Medicine: Biomedical Brain Scans and the Ontology of Fibromyalgia.” Technical Communication Quarterly, vol. 18, no. 4, 2009, pp. 376–404. doi:10.1080/10572250903149555.
  • Graham, S. Scott. The Doctor and the Algorithm: Promise, Peril, and the Future of Health AI, Oxford UP, 2022.
  • Graham, S. Scott. “The Opioid Epidemic and the Pursuit of Moral Medicine: A Computational-Rhetorical Analysis.” Written Communication, vol. 38, no. 1, 2021, pp. 3–30. doi:10.1177/0741088320944918.
  • Graham, S. Scott. “Statistical Genre Analysis: Toward Big Data Methodologies in Technical Communication.” Technical Communication Quarterly, vol. 24, no. 1, 2015, pp. 70–104. doi:10.1080/10572252.2015.975955.
  • Graham, S. Scott, and Hannah R. Hopkins. “AI for Social Justice: New Methodological Horizons in Technical Communication.” Technical Communication Quarterly, vol. 31, no. 1, 2022, pp. 89–102.
  • Greene, Ronald Walter. “Rhetoric and Capitalism: Rhetorical Agency as Communicative Labor.” Philosophy and Rhetoric, vol. 37, no. 3, 2004, pp. 188–206. doi:10.1353/par.2004.0020.
  • Gries, Laurie, and Collin Gifford Brooke. Circulation, Writing, and Rhetoric, UP of Colorado, 2018.
  • Harris, Randy Allen, and Chrysanne Di Marco. “Rhetorical Figures, Arguments, Computation.” Argument & Computation, vol. 8, no. 3, 2017, pp. 211–31. doi:10.3233/AAC-170030.
  • Harris, Randy Allen, Chrysanne Di Marco, Ashley Rose Mehlenbacher, et al. “A Cognitive Ontology of Rhetorical Figures.” Cognition and Ontologies, 2017, pp. 18–21.
  • Harris, Randy Allen, Chrysanne Di Marco, Sebastian Ruan, et al. “An Annotation Scheme for Rhetorical Figures.” Argument & Computation, vol. 9, no. 2, Jan. 2018, pp. 155–75. doi:10.3233/AAC-180037.
  • Hart, Roderick P. “Genre and Automated Text Analysis: A Demonstration.” Rhetoric and the Digital Humanities, edited by Jim Ridolfo and William Hart-Davidson, U of Chicago P, 2015, pp. 152–68.
  • Hart, Roderick P. “Redeveloping DICTION: Theoretical Considerations.” Theory, Method, and Practice in Computer Content Analysis, edited by Mark D. West, Greenwood Publishing Group, 2001, pp. 43–60.
  • Hart, Roderick P. Verbal Style and the Presidency: A Computer-Based Analysis, Academic P, 1984.
  • Hooker, Tristin Brynn. “Tweeting Zebras: Social Networking and Relation in Rare Disease Advocacy.” Rhetoric of Health & Medicine, vol. 5, no. 1, 2022, pp. 93–121. doi:10.5744/rhm.2022.5005.
  • Ishizaki, Suguru, and David Kaufer. “Computer-Aided Rhetorical Analysis.” Applied Natural Language Processing: Identification, Investigation and Resolution, IGI Global, 2012, pp. 276–96. doi:10.4018/978-1-60960-741-8.ch016.
  • Keränen, Lisa B. “How Does a Pathogen Become a Terrorist? The Collective Transformation of Risk into Bio(in)Security.” Rhetorical Questions of Health and Medicine, edited by Joan Leach and Deborah Dysart-Gale, Lexington Books, 2010, pp. 77–96.
  • Kong, Yeqing, and Huiling Ding. “Tools, Potential, and Pitfalls of Social Media Screening: Social Profiling in the Era of AI-Assisted Recruiting.” Journal of Business and Technical Communication, vol. 38, no. 1, 2023, pp. 33–65. doi:10.1177/10506519231199478.
  • Koteyko, Nelya, et al. “Climate Change and ‘Climategate’ in Online Reader Comments: A Mixed Methods Study.” The Geographical Journal, vol. 179, no. 1, 2013, pp. 74–86. doi:10.1111/j.1475-4959.2012.00479.x.
  • Kung, Tiffany H., et al. “Performance of ChatGPT on USMLE: Potential for AI-Assisted Medical Education Using Large Language Models.” PLOS Digital Health, vol. 2, no. 2, 2 Feb. 2023, e0000198. doi:10.1371/journal.pdig.0000198.
  • Larson, Brian, et al. “Use What You Choose: Applying Computational Methods to Genre Studies in Technical Communication.” Proceedings of the 34th ACM International Conference on the Design of Communication, ACM, 2016, pp. 1–8. doi:10.1145/2987592.2987603.
  • Lawrence, Halcyon M. “Siri Disciplines.” Your Computer Is on Fire, edited by Thomas S. Mullaney, Benjamin Peters, Mar Hicks, and Kavita Philip, MIT P, 2021, pp. 179–97. doi:10.7551/mitpress/10993.003.0013.
  • Lindsley, Tom. “Legitimizing the Wound: Mapping the Military’s Diagnostic Discourse of Traumatic Brain Injury.” Technical Communication Quarterly, vol. 24, no. 3, 2015, pp. 235–57. doi:10.1080/10572252.2015.1044120.
  • Majdik, Zoltan P. “A Computational Approach to Assessing Rhetorical Effectiveness: Agentic Framing of Climate Change in the Congressional Record, 1994–2016.” Technical Communication Quarterly, vol. 28, no. 3, 2019, pp. 207–22. doi:10.1080/10572252.2019.1601774.
  • Majdik, Zoltan P., and James Wynn. “Building Better Machine Learning Models for Rhetorical Analyses: The Use of Rhetorical Feature Sets for Training Artificial Neural Network Models.” Technical Communication Quarterly, vol. 32, no. 1, 2023, pp. 63–78. doi:10.1080/10572252.2022.2077452.
  • Majdik, Zoltan P., et al. “Sample Size Considerations for Fine-Tuning Large Language Models for Named Entity Recognition Tasks: Methodological Study.” JMIR AI, forthcoming. doi:10.2196/52095.
  • Mehran, Weeda, et al. “Two Sides of the Same Coin? A Largescale Comparative Analysis of Extreme Right and Jihadi Online Text(s).” Studies in Conflict & Terrorism, 2022, pp. 1–24. doi:10.1080/1057610X.2022.2071712.
  • Mercieca, Jennifer R. “Dangerous Demagogues and Weaponized Communication.” Rhetoric Society Quarterly, vol. 49, no. 3, 2019, pp. 264–79. doi:10.1080/02773945.2019.1610640.
  • Miller, Benjamin. Distant Readings of Disciplinarity: Knowing and Doing in Composition/Rhetoric Dissertations, Utah State UP, 2022.
  • Miller, Carolyn R. “What Can Automation Tell Us about Agency?” Fifty Years of Rhetoric Society Quarterly, Routledge, 2018, pp. 183–200.
  • Mullaney, Thomas S., et al. Your Computer Is on Fire. MIT P, 2021.
  • Noble, Safiya Umoja. Algorithms of Oppression, New York UP, 2018.
  • O’Neil, Cathy. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, Crown, 2017.
  • Omizo, Ryan, and William Hart-Davidson. “Finding Genre Signals in Academic Writing.” Journal of Writing Research, vol. 7, no. 3, 2016, pp. 485–509. doi:10.17239/jowr-2016.07.03.08.
  • Omizo, Ryan M. “Machining Topoi: Tracking Premising in Online Discussion Forums with Automated Rhetorical Move Analysis.” Computers and Composition, vol. 57, 2020, 102578. doi:10.1016/j.compcom.2020.102578.
  • Pelley, Scott. “Geoffrey Hinton on the Promise, Risks of Artificial Intelligence.” CBS News, 8 Oct. 2023. https://www.cbsnews.com/news/geoffrey-hinton-ai-dangers-60-minutes-transcript/.
  • Pfister, Damien Smith. “Technoliberal Rhetoric, Civic Attention, and Common Sensation in Sergey Brin’s ‘Why Google Glass?’” Quarterly Journal of Speech, vol. 105, no. 2, 2019, pp. 182–203. doi:10.1080/00335630.2019.1595103.
  • Pollak, Calvin, and Sanvi Bhardwaj. “Social Justice and ‘Harmful Tech’: Dis-Orienting Militarized Research.” Technical Communication Quarterly, 2023, pp. 1–17. doi:10.1080/10572252.2023.2240854.
  • Railsback, Celeste Condit. “Beyond Rhetorical Relativism: A Structural‐material Model of Truth and Objective Reality.” Quarterly Journal of Speech, vol. 69, no. 4, Nov. 1983, pp. 351–63. doi:10.1080/00335638309383662.
  • Schmidhuber, Jürgen. “Deep Learning in Neural Networks: An Overview.” Neural Networks, vol. 61, Jan. 2015, pp. 85–117. ScienceDirect. doi:10.1016/j.neunet.2014.09.003.
  • Seol, Jae-Wook, et al. “Causality Patterns and Machine Learning for the Extraction of Problem-Action Relations in Discharge Summaries.” International Journal of Medical Informatics, vol. 98, 2017, pp. 1–12. doi:10.1016/j.ijmedinf.2016.10.021.
  • Şeref, Michelle M. H., et al. “Rhetoric Mining: A New Text-Analytics Approach for Quantifying Persuasion.” INFORMS Journal on Data Science, vol. 2, no. 1, Apr. 2023, pp. 24–44. doi:10.1287/ijds.2022.0024.
  • Shutova, Ekaterina, et al. “Statistical Metaphor Processing.” Computational Linguistics, vol. 39, no. 2, 2013, pp. 301–53. doi:10.1162/COLI_a_00124.
  • Stuckey, Mary E. “On Rhetorical Circulation.” Rhetoric and Public Affairs, vol. 15, no. 4, 2012, pp. 609–12. doi:10.2307/41940623.
  • Sugimoto, Kaito, and Akiko Aizawa. “Incorporating the Rhetoric of Scientific Language into Sentence Embeddings Using Phrase-Guided Distant Supervision and Metric Learning.” Proceedings of the Third Workshop on Scholarly Document Processing, edited by Arman Cohan, et al., Association for Computational Linguistics, 2022, pp. 54–68. https://aclanthology.org/2022.sdp-1.7.
  • Taylor, Charlotte. “Science in the News: A Diachronic Perspective.” Corpora, vol. 5, no. 2, Nov. 2010, pp. 221–50. doi:10.3366/cor.2010.0106.
  • Teston, Christa. “Rhetoric, Precarity, and mHealth Technologies.” Rhetoric Society Quarterly, vol. 46, no. 3, 2016, pp. 251–68. doi:10.1080/02773945.2016.1171694.
  • Tillery, Denise, and Emma Frances Bloomfield. “Hyperrationality and Rhetorical Constellations in Digital Climate Change Denial: A Multi-Methodological Analysis of the Discourse of Watts up with That.” Technical Communication Quarterly, Dec. 2021, pp. 1–18. doi:10.1080/10572252.2021.2019317.
  • Vaswani, Ashish, et al. “Attention Is All You Need.” arXiv:1706.03762, 2017. doi:10.48550/arXiv.1706.03762.
  • Vee, Annette. “Large Language Models Write Answers.” Composition Studies, vol. 51, no. 1, 2023, pp. 176–221.
  • Verhulsdonck, Gustav, and Jason Tham. “Tactical (Dis)Connection in Smart Cities: Postconnectivist Technical Communication for a Datafied World.” Technical Communication Quarterly, vol. 31, no. 4, 2022, pp. 416–32. doi:10.1080/10572252.2021.2024606.
  • Wäckerle, Jens, and Bruno Castanho Silva. “Distinctive Voices: Political Speech, Rhetoric, and the Substantive Representation of Women in European Parliaments.” Legislative Studies Quarterly, vol. 48, no. 4, 2023, pp. 797–831. doi:10.1111/lsq.12410.
  • Wang, Zhaozhe. “Activist Rhetoric in Transnational Cyber-Public Spaces: Toward a Comparative Materialist Approach.” Rhetoric Society Quarterly, vol. 50, no. 4, 2020, pp. 240–53. doi:10.1080/02773945.2020.1748218.
