
A Copious Void: Rhetoric as Artificial Intelligence 1.0

ABSTRACT

Rhetoric is a trace retained in and by artificial intelligence (AI) technologies. This concept illuminates how rhetoric and AI have faced issues related to information abundance, entrenched social inequalities, discriminatory biases, and the reproduction of repressive ideologies. Drawing on their shared root terminology (stochastic/artifice), common logic (zero-agency), and similar forms of organization (trope+algorithm), this essay urges readers to consider the etymological, ontological, and formal dimensions of rhetoric as inherent features of contemporary AI.

I am never really satisfied that I understand anything: because, understand it well as I may, my comprehension can only be an infinitesimal fraction of all I want to understand about the many connections and relations which occur to me, how the matter was first thought of or arrived at, etc. etc.

—(Ada Lovelace 143)

Accumulation and Its Discontents

In The Order of Things: An Archaeology of the Human Sciences, Michel Foucault connects the pre-Hellenic Greeks’ rhetorical epistemology to the knowledge systems of sixteenth-century Western Europe. Rhetoric was then at the cutting edge, much as artificial intelligence (AI) is allegedly at the forefront of technological progress today. Within the inegalitarian social structure of ancient Athens, rhetoric was a technē: a making, craft, or systematized art that enabled its users to know the mysteries of the natural world and promised landowning Athenian men exclusive political power. Rooted in a fourfold system of similitudes, the “rhetorical panoply” of convenientia, aemulatio, analogy, and sympathy held dominance for centuries (Foucault, “Order” 43). By the sixteenth century, however, this panoply had become “plethoric yet poverty-stricken,” having “condemned itself to never knowing anything but the same thing” (Foucault, “Order” 30). Over nearly 2,000 years, rhetoric gave rise to an informational abundance that degraded the art’s reputation, reducing it to a stilted form of courtly artifice. Today, we are witnessing a similar problem on an even grander scale. In the widely cited “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” critics of emergent AI technologies argue that the increasing size of large language models (or LLMs) poses significant social risks. These include financial and environmental costs, the perpetuation of White and Western-centric inequalities, the intensification of encoded racial biases, and a general absence of entrepreneurial accountability. Beyond AI’s resemblance to the information overload problems of the sixteenth century, it also holds a mirror to rhetoric’s present-day disciplinary problems, including its centering of White and Western subjects and reproduction of entrenched settler colonial, anti-Black, and cis-gender ideology (Houdek; Wanzer et al.).

For rhetoricians, these problems may seem strangely familiar, making for an uncanny pairing. Coupled, AI’s resonances with rhetoric’s history evoke the disorientation experienced when we find ourselves in a place that seems familiar, despite experiencing it for the first time (Freud). One explanation for this uncanniness is that rhetoric remains functional within AI, having the form and feel of a trace. The trace is a deconstructive term of art that describes a relationship of identity mediated by differences in time and space. However, traces do not abide by the logic of cause and effect, and rhetoric, by extension, is not the origin of AI. Rather, the two are connected by the logic of différance, that “sameness which is not identical: by the silent writing of its a, it has the desired advantage of referring to differing, both as spacing/temporalizing and as the movement that structures every dissociation” (Derrida 129–30, emphasis in original). Spoken aloud, différance sounds no different from difference, lending the term a performative dimension of simultaneous sameness and nonidentity. So too with rhetoric and AI, which are, despite their copious differences, indistinguishable.

As a trace, rhetoric is and can be many things. As a discipline, it is a discrete area of academic inquiry distributed across many humanist traditions. As a practice, it is a collection of techniques that grant a rhetor the unique capacity to act—or illuminate how such capacities are an illusion. As a system of discourse, it is a collection of axiom-like rules that shape the apparent naturalness of the world through invisible (often tropological) linkages. Rhetoric is also the connective logic that binds these distinct topoi; it “is derived from the configuring and reconfiguring of relations among [a text’s] elements” (Zemlicka 50). This multiplicity of linkages and connections is also what binds rhetoric to AI. Both are similar-but-not-identical objects whose superabundant applications make the task of establishing a singular definition challenging, even for the most well-initiated critic.

The phrase copious void names this trace, situating rhetoric and AI as related terms, similar contradictions of plenitude and emptiness. “At the beginning of its history, philosophy separates technē from epistēmē,” writes Bernard Stiegler, “the separation is determined by a political context, in which the philosopher accuses the Sophist of instrumentalizing the logos as rhetoric and logography, that is, as both an instrument of power and as a renunciation of knowledge” (1). This difference separates the philosopher from the logographer—the lesser sophists who personalized the writing of legal cases out of a collection of stock topics, arguments, and tropes. As a trace, this inaugurating critique of rhetoric also resembles that lodged against contemporary “stochastic parrots”: the lesser sophist and AI alike simulate speech in ways that seem more like the product of a knack (tribē) or trickery than a reflection of true artistry or knowledge (Kennedy 37).

Devoid of positive content and overfull of potential, rhetoric is a copious void, a trace within AI that fashions rhetoric itself as AI version 1.0. Drawing together their shared etymology (“stochastic/artifice”), their similar formulations of the capacity to act (“zero-agency”), and their parallel structures (“trope+algorithm”), the phrase copious void names this connection as a careful parsing of differences. To take stock of this trace structure requires us to attend to its deleterious material consequences on, for instance, marginalized populations, anxious publics, and a disempowered labor force. It is also an injunction to attend to this void’s generative, life-affirming possibilities, namely those that foster shared embodied and intellectual space. That, ultimately, is our uncannily familiar situation: It is because certain features of AI are so evidently rhetorical that rhetoricians are well-equipped to speak to its many exigencies.

Stochastic/Artifice

The first way to illustrate how rhetoric inhabits AI as a trace concerns the terminological overlap between these modes of discourse-making. Specifically, stochastic and artifice are terms of art that have special significance to rhetoric and AI. Whereas rhetoric’s associations with these terms span from the fourth century BCE to the nineteenth century CE, their connection to AI is most apparent in the late twentieth and early twenty-first centuries. Although vastly different contexts separate their etymologies, the juxtaposition of this shared terminology illuminates how rhetoric and AI share resonances of probabilistic fakery and skilled invention.

Stochastic’s earliest recorded use describes an art’s distributed and conjectural qualities (“stochastic, adj.”). In the case of ancient Greek rhetoric, stochazein did not evoke the precision of mathematics or statistics but referred to probability as “the everyday,” “the usual,” or “things that normally or commonly happen” (Gaonkar 8). Aristotle, for instance, argues that rhetoric observes “the probabilities, plausibilities, or persuadabilities that exist before the work of persuasion begins” (Foley 242). He reserves the term stochastic to describe technai (e.g., medicine, navigation, and rhetoric) whose function (e.g., to treat an illness, chart a course, or generate suasive speech) does not guarantee its aim (e.g., to save a life, arrive at a destination, or move an addressee to action) (Nussbaum 290). Isocrates, by comparison, dismisses rhetoric as too “imprecise and unsystematic,” preferring to coin logos politikos as the probabilistic art of managing opinion (Poulakos 63). A similar sense of randomness is preserved in terms like “stochastic citizenship” (Beasley Von Burg 353), a mobile, borderless form of civic identity, and “stochastic terrorism” (Amman and Meloy 2), which describes distributed patterns of hate-fueled violence. Likewise, twentieth-century philosophers like Jacques Lacan and Jacques Derrida knowingly used the word stochastic to reference probabilistic features of ancient rhetoric and the emerging field of cybernetics. However, this connection is easy to overlook because stochastic has frequently been translated as aleatory, likely “due to the English translator’s unfamiliarity with cybernetic terminology or probability theory” (Liu 305).

The above senses of rhetoric are retained in criticisms of “stochastic” AI, which is similarly alleged to be unable to navigate contingencies of intention and context. Before AI, stochastic’s “usual” or “everyday” meaning had turned toward specialized uses in mathematics, statistics, and computer science. In algorithmic design, stochastic references the calculable randomness used in dynamic systems like AI models. “Stochastic Parrots” highlights the indeterminate nature of AI-generated speech and writing, claiming such systems to be incapable of cognition or interpretation. According to these authors, AI cannot access context, the “something outside of language” that would give its outputs true depth or meaning: “Text generated by an LM is not grounded in communicative intent, any model of the world, or any model of the reader’s state of mind. It can’t have been, because the training data never included sharing thoughts with a listener, nor does the machine have the ability to do that” (Bender et al. 616).

Although AI cannot consider intention, worldview, or psychology, critics like Matthew Kirschenbaum express dismay at the above framing of human communication, which provides a “disarmingly linear account of how language, communication, intention, and meaning work” (“Again Theory”). Even granting that AI is incapable of producing intentional, reflexive discourse, literary and rhetorical theorists have long maintained that human speech is—no less than AI’s—organized by relations of similarity and difference, infested with parapraxes, and circumscribed by the rule structures of discourse formations. If anything, AI and speech share the default assumption that meaning is always an uncertain approximation. True communication—in which a sender’s intended meaning corresponds with its reception—is rare, if not impossible.

This is not to resort to an antiquated sender/receiver model: the probabilities governing generative AI simply differ from what makes human-made rhetoric stochastic, unpredictable, and contingent—even if “stochastic” seemingly makes this difference inaudible. Rather than forming context-informed judgments, AI systems are guided by principles like perplexity (“How surprising is this language based on what I’ve seen?”) and burstiness (“the phenomenon where certain words or phrases appear in rapid succession or ‘bursts’ within a text”) (Edwards, “Constitution”). Whereas a rhetorician might assess context and audience disposition before crafting a rhetorical response, an AI system performs probabilistic analyses of its training data to simulate natural language and human-generated images. Outputs are, in other words, the product of formal rules that are “stochastic” because they take randomness into account. For that reason, critics who would call AI a “stochastic parrot” are not unlike Isocrates, who calls out the sophist for teaching rote persuasion (“as simply as they would teach letters of the alphabet” 169) and for their inability to take context (or its “fitness for the occasion” 171) and worldview (“the nature of each kind of knowledge” 169) into consideration.
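
To make those two heuristics concrete, consider the following minimal sketch, which assumes a hand-built bigram model standing in for an LLM; the function names, toy corpus, and burstiness proxy are invented for illustration, and real detectors estimate these quantities with large pretrained models.

import math
from collections import Counter

def train_bigram(tokens, vocab_size, alpha=1.0):
    # Count word-to-word transitions with add-alpha smoothing so that
    # unseen pairs still receive a small, nonzero probability.
    pairs = Counter(zip(tokens, tokens[1:]))
    singles = Counter(tokens)
    def prob(prev, word):
        return (pairs[(prev, word)] + alpha) / (singles[prev] + alpha * vocab_size)
    return prob

def perplexity(tokens, prob):
    # "How surprising is this language based on what I've seen?":
    # the exponential of the average negative log-probability.
    logs = [math.log(prob(p, w)) for p, w in zip(tokens, tokens[1:])]
    return math.exp(-sum(logs) / len(logs))

def burstiness(sentences, prob):
    # A crude proxy: the variance of per-sentence perplexity. Flat variance
    # reads as machine-like under this heuristic; human prose tends to swing.
    scores = [perplexity(s, prob) for s in sentences if len(s) > 1]
    mean = sum(scores) / len(scores)
    return sum((x - mean) ** 2 for x in scores) / len(scores)

corpus = "the art of rhetoric is the art of the probable".split()
prob = train_bigram(corpus, vocab_size=len(set(corpus)))
print(perplexity("the art of the probable".split(), prob))
print(burstiness(["the art of rhetoric".split(),
                  "rhetoric of the probable".split()], prob))

However toylike, the sketch preserves the logic named above: both measures are computed from formal rules and observed frequencies alone, with no reference to context, audience, or intent.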

A second rhetorical term retained and changed by AI is artifice, originally signifying the “action of an artificer” (or craftsman) and an act of invention that relies on skilled human intervention (“artifice, n.”). However, in keeping with rhetoric’s pejorative connotations, it more commonly connotes “trickery,” a “device intended to deceive,” lending the word its inflection as a diversion from reality. Ruminating on why rhetoric is “less systematically studied” than in earlier generations, Richard Whately’s 1828 Elements of Rhetoric enlists both inflections of artifice:

Perhaps this also may be in some measure accounted for from the circumstances which have been just noticed; such is the distrust excited by any suspicion of rhetorical artifice, that every speaker or writer who is anxious to carry his point, endeavors to disown or keep out of sight any superiority of skill; and wishes to be considered as relying rather on the strength of his cause, and the soundness of his views, than on his ingenuity and expertness as an advocate. (7–8, emphasis added)

Whereas trained speakers and writers might have once been unafraid to acknowledge their rhetorical proficiencies, rhetoric in the nineteenth century became a cause for suspicion. As with the Aristotelian notion of rhetoric as a system of “artificial” proofs, artifice referred to inventions originating with the rhetor instead of with a natural or objective source (15). Artifice, in other words, did not just diminish rhetoric. It connoted a creative human activity that warps the real to guide audiences to an intended end.

AI retains rhetoric’s ambivalent significations of artifice, alluding to an all-too-familiar reputation for deception and invention. Humans are apt to use AI tools disingenuously or in ways that reflect their own racial, cultural, and gender biases. Critics also predict an explosion in fake academic articles, pseudo-journalism, and AI-invented legal decisions, heralding a new age of Sokal Hoaxes, disinformation, and injustice (Caplan). Others point to AIs not as thinking, reasoning, or interpreting beings but as hallucinating entities that can be fooled “into perceiving things that aren’t there” (Schotz). Such hallucinations leave AI-based systems vulnerable to attacks, with the potential to sabotage autonomous vehicles and neural networks. AI’s potential for error is what Hayles calls “the noise of materiality,” or “deviations from ideality” that “can never be entirely excluded in measurements” (Hayles “Inside the Mind” 636). In the case of AIs specifically, such “noise” results from “overfitting,” in which “data that are part of the noise” are mistaken “for part of the pattern” (Hayles “Inside the Mind” 637). In other words, artifice easily lends itself to a description of deceptive users, sociopathic developers, and a psychotic machine, cultivating a mistrust of AI on par with rhetoric’s most pejorative connotations.

The other face of artifice—a set of inventional practices from whence all new ideas flow—is implied by the adjective “generative” often associated with AI. Often, the “generative capabilities” of AIs reference their ability to “synthesize high-quality images,” “produce sensible-sounding and impressive prose,” and “fundamentally alter the creative processes by which creators formulate ideas and put them into production” (Epstein et al. 1110). Placed in this almost-too-utopic light, AI is a productive supplement to an already available means of persuasion, a pharmakon without deleterious side effects. To borrow Kate Crawford’s phrasing, it cannot—must not—be thought of as a substitute for ingenuity because it is “embodied and material” in the sense that it requires “natural resources, fuel, human labor, infrastructures, logistics, histories, and classifications” (8). Understanding AI’s artifice as deception alone sacrifices this dimension for a suspicious orientation, one that makes instruction and journal reviewing akin to an indefinite Turing test, dividing authentic human creations from machine-generated fever dreams. Artifice is, in other words, generative and critical, invoking AI’s as-yet-unrealized connections and the human invention necessary to have created it in the first place.

Zero-Agency

A second way rhetoric functions as AI’s trace concerns the logic of zero, which describes a toggle between presence and absence. Like any other number, zero is made “present” using a graphic representation (“0”), but it alone denotes an absence of value (Badiou 26). Zero materializes a similar contradiction at the level of language, denoting both the literal origin of the Cartesian coordinate system (i.e., the meeting point of the x-, y-, and z-axes) and a circuitous non-Western etymology. Indeed, when traced back to its provisional source, zero is a copious void par excellence. Its earliest variation, śūnyatā, is a Mahayana Buddhist construct that signifies “a simultaneous lack of essence and fullness of possibility” (Church 230–31).

The word zero comes from śūnya, Sanskrit for “void,” which migrates into Arabic as sifr, meaning ‘empty,’ the root from which the modern English language form cypher derives (Smith and Karpinski 56–57). Thus it is no coincidence that the phrase ‘cipher in algorism’ was long used interchangeably with the word zero; sometimes cypher would be used to designate any of the Arabic numerals, making it synonymous with algorism. (Striphas 404, emphasis in original)

Zero’s trace-like quality arises because it is similar but not identical to any other integer, both a foundational and exceptional element of the number system. Although it is synonymous with the origin of the number system, zero was never “originally” Cartesian. Instead, it arrived in Latinized Europe by migrating from Sanskrit into Arabic into Latin. It was only belatedly appropriated as an essential tool of Western mathematics, an after-the-fact addition to systems of counting and tabulation. More than a numerical symbol, zero names a ritual whereby a void-like element is retroactively added to an organized system, dynamically transforming it (Seife 12–19).

Rhetoric’s most often recited narrative of origin similarly makes it a zero—that is, the foundation and exception to the organized system called philosophy. As the story goes, Plato, Socrates’s GPT-like ghostwriter, adds the word “rhetoric” to philosophy to denigrate the earlier sophists’ spoken arts as worthless. Although there is no evidence that “rhetoric” existed as a term of art before this coining, the practice of rhetoric preceded the invention of philosophy, going by other names like logōn technē (i.e., the art of words) and dissoi logoi (i.e., conflicting words) (Schiappa). The latter, a strategy of dyadic idea testing, informs the Socratic/Platonic dialogue and its method of diairesis, in which an interlocutor’s starting premise is split into ever-smaller oppositions until, at last, it is shown to rest on an unsupportable foundation. Rhetoric is a zero because it allegedly added nothing to philosophy—but also functioned as its condition of possibility. Like śūnyatā, rhetoric’s origin story offers us an example of a copious void because it features a supplementary, seemingly empty element that transforms a whole system of values.

AI quite literally runs on ones and zeros. However, to say that it exemplifies the logic of zero means something quite different: It signals that AI, like rhetoric, is an element of questionable value added to an existing system of discourse. It also means this “nothing,” once added, promises to transform this system entirely. Nowhere is the logic of zero more apparent than in the way AI promises to transform agency, the how or by what means an act is accomplished. The signifier that titles this section, “zero-agency,” alludes to AI’s oscillation between presence and absence, between a human-centered capacity and one in which this capacity is an illusion. As rhetorical scholarship has long theorized, agency is a substantial “something” that can be possessed or wielded and a “nothing” that emerges in contingent, momentary concatenations between text and context. As a shared feature of rhetoric and AI, agency offers yet another instance of the copious void. Both a plentiful “something” and an absent “nothing,” it captures agency as the capacity to act and as an act that displays the futility of subject-centered control.

On the side of presence, zero-agency manifests as a positive, rhetor-centered “capacity to act” that has been denied to a person or people (Campbell 3). Commonly called inclusive exclusion, the denial of substantial agency is a process by which a people or population is “included” in a polity as the exception to the rule of law, thus “excluding” them from a rights-based system that applies to other subjects. Paradoxically, inclusive exclusion often guarantees democracy, making zero-agency the basis of many governing systems (Mouffe et al.). Rhetorical theory has many concepts strongly informed by this conception of agency:

  • The third persona and the null persona respectively describe an addressee who is “rejected or negated through the speech” (Wander 209) and “the self-negation of the speaker and the creation in the text of an oblique silhouette indicating what is not utterable” (Cloud 200).

  • Ascriptive citizenship describes the taxonomies contrived by US elites to assign rights to White, straight, cis-gender, and male people by comparing them to right-less persons barred from civic life, including enslaved peoples, women, and immigrants (Smith 18). A prototypically civic logic, inclusive exclusion offers one basis for the resistive rhetoric of social movements ranging from women’s suffrage to the abolition of slavery (Stillion Southard 13).

  • Zoerhetorical sweep describes “the range of attributions of status for a given existent across multiple publics,” rendering people grievable for members of one public but subhuman among members of another (Rowland 10). Falling along “biopolitical axes of gender, race, and sexuality,” zoerhetorical sweep considers “inclusions and exclusions together, as two sides of the same coin” (11).

  • Contained agency describes the semi-agential role assumed by White women who become exemplary representatives of White nationalism but also experience restrictions on mobility and choice due to patriarchal norms emphasizing “home, beauty, and motherhood” (Anderson 107).

Across these examples, rhetorical agency, a rhetor-centered capacity to act, is touted as a substantial something refused to specific people and populations—even though their knowledge and labor are essential for the smooth functioning of the repressive social systems that deny them representation.

AI technologies can also deprive people of the capacity to act, making them vulnerable to exploitation. LLMs and related technologies have often been biased against Black populations, as their default settings frequently reflect racially biased training data. For example, the phrase “failure to enroll,” commonly used to describe facial recognition errors, describes how cameras and sensors “fail” to render melanated faces and hands (Browne 110–14). The phrase blame-shifts by projecting “failure” on those whom the machine cannot recognize: “You, [user], should have made yourself more available for detection, but there is no way for you to have done so because the system’s ability to recognize you was determined before you approached the sensor.” With AI, such “failures” are amplified. On the one hand, developers justify data-gathering practices by arguing that more data lead to more representative AI models. However, users most often do not consent to being “scraped,” and there is no correlation between ever-bigger training datasets and improvements to a model’s accuracy (Gunasekar et al.). On the other hand, developers also prohibit specific terms from their models and thus fail to distinguish a given community’s vernacular appropriations and terms of endearment from hate speech. As the authors of “Stochastic Parrots” argue, AI models and their developers systematically exclude terms like “twink,” thereby “suppressing the influence of online spaces built by and for LGBTQ [lesbian, gay, bisexual, transgender, and queer] people” (614). Understood as a substantial form of agency denied to non-White populations, AI’s zero-agency logic manifests as a dual process of data extraction and nonrepresentation. That is, the rhetoric underwriting the promise of a “representative” AI model depends on a pattern of expropriation that draws training data from communities who do not know their speech patterns are being used to train AI—and the purposeful exclusion of community vernaculars. This double gesture all but ensures that such communities will not see themselves in the models that profit from their discourse.

On the side of absence, “neither texts nor rhetors ‘have’ agency separate from their contextual articulations,” making it something that cannot be possessed as such (Rand 299). Instead, agency emerges in moments of encounter and uptake. As Carolyn R. Miller argues in “What Can Automation Tell Us About Agency?,” agency is a “nothing” that becomes a “something” when a reader/auditor attributes the capacity for symbolic action to the “invisible, mediated other within a written text” (149):

If agency is a potential energy, it will be thought of as a possession or property of an agent (like a stationary stone), but if agency is a kinetic energy, it must be a property of the rhetorical event or performance itself. Agency thus could not exist prior to or as a result of the evanescent act. … As the kinetic energy of performance, agency resolves its doubleness, positioned exactly between the agent’s capacity and the effect on an audience. (147, emphasis added)

Confined to static protocols and rote tasks, AI may offer a way to “unproblematically delegate” certain activities to schedulers, editors, and automated grading systems (152). However, as AI increasingly becomes capable of higher-function actions, the agency that we attribute to machines may be experienced as a limit on our capacity to act. This variation of zero-agency is not something a user can possess—it only appears as absent, having been belatedly apprehended as vanished.

Another name for the absence-oriented variation on zero-agency logic is thanatocentrism,¹ an encounter with death and a confrontation with one’s finite capacities. Whereas presence-centered agency presumes that subjects have the capacity to act, thanatocentrism describes agency as something that happens to a subject, not something that a subject does. Often, the encounter with technology brings about the subject’s literal death. As Katherine McKittrick explains, algorithms are “predicated on the negation of black life” (Dear Science 108). In their essay on “Black Puerto Rican Data,” Sarah Bruno and Jessica Marie Johnson concur: “Tidy datasets, definitions, and calculations … are the raw material of empire” (583). Restoring humanness to census tabulations and historical metadata requires an “embodied, affective” self-reflection that breathes life into numbers and refuses the positionality of a dataset’s original aggregators (585). For instance, as McKittrick writes of “archival, numerical evidence” gathered from the Middle Passage, “the slave’s status as object-commodity, or purely economic cargo, reveals that a black archival presence not only enumerates the dead and dying, but acts as an origin story. This is where we begin, this is where historic blackness comes from: the list, the breathless numbers, the absolutely economic, the mathematics of the unliving” (“Mathematics” 17). Stretching into the contemporary, thanatocentrism also captures present-day problems concerning datafication, including search engines’ programmed racial biases (Noble), the production of “racist robots” (Benjamin 36), and practices of “digital epidermalization” (Browne 108). If “the algorithm … is a warning sign that signals the limits and possibilities of how we do what we do,” then AI demands a similar set of warnings (Dear Science 117). The following list engages AI’s copious thanatocentrism. Beyond literal death, it forecasts the end of authorship, reasoning, archives, and labor:

  1. The death of the author. Intensifying dynamics Barthes and Foucault theorized in terms of the author’s detachment from the work, AI plunges the problem of intellectual property ownership into untraveled quagmires (Foucault, “Author” 117). These include the (likely temporary) prohibition on copyrighting AI-generated artworks (Brittain; Wenzel), the failure of AI developers to secure copyright permissions for published works that become LLM training data (Alter and Harris; Authors Guild; Small), and the development of “poisoned” code to destroy models that do not seek artists’ consent before incorporating them as training data (Edwards, “Nightshade”). Finally, as illustrated by the emergence of AI sex workers and influencer-curated video chatbots (Dickson; Sternlicht), LLMs promise even more personalized virtual personae to stand in for creators themselves. Such technology forecasts a level of personalization in which a user’s speech and writing become training data for a total simulation of inflection, syntax, and style. As Foucault phrased it, “Where a work had the duty of creating immortality, it now attains the right to kill, to become the murderer of its author” (“Author” 117).

  2. The death of reason. Resembling the ouroboros that eats its tail, AI’s capacity to produce high-quality results diminishes as “neural networks … pick up maladaptive patterns” (Leffer) or are trained on data gathered from previous outputs (Rao). This is evident in the refrain that AI detectors do not work, and AI-generated outputs require only a few iterations of AI-based revision before they are improperly identified as human-generated (Edwards, “Writing Detectors”). The reverse is also true: the larger LLMs grow, the more likely such detectors are to generate “false positives” that claim authentically human-generated content to have been written by AI. Even if a text is unmistakably human-generated (e.g., the US Constitution or the Bible), the fact that such texts are likely already within existing training data makes them more likely to be flagged as AI-generated (Edwards, “Constitution”). “The death of reason” signifies the increasing difficulty of assessing textual provenance, as well as the problematic future in which AI outputs decline in quality over time while retaining their status as state-of-the-art discourse machines.

  3. The death of the archive. AI repeats a familiar pattern of algorithmic iconoclasm, the purposeful demolition of digital knowledge. Although AI development has significantly profited from open-source sites like GitHub (Tiku), an increasing reliance on AI for simple answers to research questions from history to programming may decrease human users’ incentive to contribute original, human-generated content to such repositories. Moreover, the apocalypticism of entrepreneurs like Elon Musk and Sam Altman continuously feeds a hype cycle designed to generate attention for the technologies they promote. Indeed, the motif of the end-times associated with LLMs—perhaps most pronounced in comparisons between AI developers and physicist Robert Oppenheimer by members of the US Congress (Markey)—avows the need for greater regulatory oversight, situating entrepreneurs as the most likely group to inform the policies that would limit themselves (Henshall).

  4. The death of labor. Finally, AI fuels a public discourse about the end of conventional white- and blue-collar vocations while also obscuring global patterns of labor exploitation, including prison laborers (Meaker) and workers in the global south (Chandran et al.). Both have driven the explosive availability of AI technologies for Western consumers for years. The 2023 Writers Guild of America (WGA) and Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) strikes offer a small window into the future of work, in which the proclaimed effort “to remove the inefficiencies and biases of human labor” (Mattern 3) threatens to nullify the value of skilled creative work, disavow the professional status of writing, and create representationally ‘diverse’ AI-generated content while refusing to prioritize diversity in the writing room.

As the above list makes plain, the specter of agency arises in moments when the subject’s capacities are revealed to have been diminished, snuffed out. If these seem overstated as deaths, it is because, unlike the deaths that McKittrick and others describe, they are more figurative than literal. Rather than a mortal, corporeal finitude, they describe “little deaths” that bring agency forth as a nostalgic recollection, something that once was or might have been. A variation of the copious void, zero-agency puts presence (i.e., a subject-centered capacity that is gradually chipped away) and absence (i.e., an illusion of intentional “action” that is revealed to be mere “motion”) into a tentative, unreconciled relation.

Trope + Algorithm

Tropes generally—copia specifically—offer concluding instances of rhetoric’s trace-like influence on AI’s design. Whereas AI is composed of “hidden sets of instructions that intervene in organizing our world in astonishing ways,” rhetoric has its own axiomatic principles (Ingraham 62). These rule-like structures are tropes, which are typically defined as organized ways of embellishing literal meaning, motifs or categories of discourse, and modes of knowing/finding truth. In a close parallel to AI’s algorithmic protocols, trope also signifies in a fourth way: as symbolic rules that anticipate the connection between two (or more) signifiers. Taken in this last sense, tropes are most like AI. They are “generative rather than simply ornamental,” performing operations upon discourse that go beyond simple rearrangement (Lundberg 389). As quasi-grammatical structures, they naturalize the relationship between a subject and signifiers that function as sites of affective investment (Matheson 157). Finally, despite their rigorous structure, neither tropes nor AIs have guaranteed effects. Like generative AI models, which “cannot draw an exact one-to-one relationship between a single parameter [i.e., a given rule] and a single corresponding trait [a given output],” the tropes that organize our social and symbolic orders have unpredictable consequences (Leffer). Yet, despite this characteristic unpredictability, rhetoricians and AI developers alike maintain that their inventions have material consequences; each “[splits] entities off from some group and [grafts] them on to another,” reorganizing reality in ways that cannot be fully anticipated (Rowland 149). For those reasons, tropes resemble the original natural-language ranking algorithms, promoting certain signifying connections at the expense of others.
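
By way of analogy only, the following sketch renders “promoting certain signifying connections at the expense of others” as a rule: candidate next-signifiers are ranked by how often a given link has already been observed. The corpus, candidates, and function name are invented for illustration; production systems rank with learned weights rather than raw counts.

from collections import Counter

corpus = ("rhetoric shapes the world the world shapes the mind "
          "artifice shapes speech").split()
links = Counter(zip(corpus, corpus[1:]))  # observed signifier-to-signifier links

def promote(prev, candidates):
    # Rank candidate next-signifiers by how often each link has been seen.
    return sorted(candidates, key=lambda w: links[(prev, w)], reverse=True)

print(promote("shapes", ["speech", "the", "nothing"]))
# ['the', 'speech', 'nothing']: the strongest observed link rises to the top,
# and the never-observed link sinks—promotion at the expense of alternatives.

Like a trope, the rule is rigorously structured yet offers no guaranteed effects: change the corpus, and the same rule naturalizes a different set of connections.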

One example is copia, conventionally a rhetorical trope that generates volume by enumerating all that can be said about a topic (Conley). This trope’s intended effect is to show that no matter how exhaustive the list, any effort to capture a topic in its totality is destined to fail. Listing AI’s various features creates just this impression of overwhelming magnitude. In addition to the harms already named in this essay, AI also poses dangers in the form of mounting energy costs (Henderson et al.), atmospheric pollution (Heikkilä), lackluster regulatory definitions (Atleson), algorithmic bias (Guo et al.), copyright infringement (Appel et al.), the erosion of privacy (Laslo), an intensifying “fog of war” (Knight), and new bio- and cyberweapons (The White House). Beyond these copious consequences, AI itself is a copia-generating machine. GPT-3 already exceeds humans’ daily word production by more than 3 billion words per day, operating at a scale that makes familiar devices minuscule by comparison (Rees 180).

GPT-3 is more than one hundred times bigger than its predecessor, GPT-2. It has about 175 billion parameters, and it was trained on about forty-five terabytes of text data from different datasets, with 60% coming from Common Crawl’s archive of web texts, 22% from WebText 2, 16% from books, and 3% from Wikipedia. Training GPT-3 at home using eight V100 GPUs would require about thirty-six years, so it seems clear that most users will be using APIs from OpenAI, their cost notwithstanding. (Hayles, “Inside the Mind of AI” 641).

Of course, GPT is hardly the only AI model, and so these figures must be multiplied by every startup, every subscription service, every new application. As AI-generated images, writing, and speech trail into the infinite, “finding a human-generated text will be [like] looking for truffles in France—a scarce commodity prized for its rarity” (660). Considered in its most traditional sense, this is copia: a trope that organizes sensational excess in ways that defy at-a-glance comprehensibility.

Another variation on copia maintains that the trope is a valuable learning technique that spurs rhetorical invention by repeating the same idea with a difference. According to Desiderius Erasmus, this version of copia asks learners to “take a group of sentences and deliberately set out to express each of them in as many versions as possible” (303). Then, they “shall treat a connected line of thought in a number of ways,” thereby attaining “such facility in the end that we can vary it in two or three hundred ways with no trouble at all.” One affordance of generative AI that closely resembles this technique concerns how it may be a useful tool “to augment and spark our own divergent thinking” (Rodman 16). Like copia, this divergent thinking describes “a form of imaginative and playful thinking that generates wide-ranging, numerous, and varied ideas in response to open-ended tasks or prompts” (7). Capable of producing at least two or three hundred unique responses to a single prompt, generative AI “could be prompted with sections of early drafts or notes about a project to synthesize nascent ideas, suggest objections, develop lines of argument, raise related points, or highlight areas for clarification” (17). If, as a creativity-boosting technique, divergent thinking is another name for copia, then generative AI promises to do something that rhetoric has long done for human creativity. It also means that claims about AI’s wholesale novelty betray their lack of imagination, failing to see rhetoric as the antecedent and exponent for AI’s creativity-unlocking potential. Where we may be inclined to think that “AI images allow people to visualize a concept or idea—any concept or idea—in a way previously unimaginable,” there is something curiously, foundationally rhetorical about AI’s free-associative, diagrammatic utility:

Try spilling your unfiltered thoughts into its engine. AI can give them shape outside your mind, quickly and at little cost: any notion whatsoever, output visually in seconds. The results are not images to be used as media, but ideas recorded in a picture. For myself, not for others—like the contents of a notebook or a dream journal. (Bogost)

The aspect of AI that is not new—but still useful—is its capacity for copia, the methodical making of plenitudes through the arrangement and recombination of speech, writing, and image. In this regard, copia is what rhetoricians call a “representative anecdote” for how rhetorical tropes are inherent functions of an AI-augmented creative process: both are “models” of the world that are “bad at facts—or good at imagination” (Rodman 15). Although they represent limited forms of knowledge, they have a function and purpose that is “dynamically iterative, responsive to provocation, intersubjective, and generated from randomness” (10). Erasmus’s exercise is, in fact, simple enough to render in miniature, as the sketch below suggests.
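
What follows is a minimal sketch, assuming a hand-built table of alternatives stands in for a trained model; the slot lists are invented for illustration, loosely modeled on the sentence Erasmus himself famously varied (“Your letter pleased me greatly”).

from itertools import product

# Each slot lists interchangeable renderings of one part of the idea; a
# generative model fills these roles statistically rather than from a table.
slots = [
    ("Your letter", "Your epistle", "What you wrote"),
    ("pleased", "delighted", "gratified"),
    ("me", "your friend", "its reader"),
    ("greatly", "mightily", "beyond measure"),
]

# Every path through the table is one more way of saying the same thing.
variations = [" ".join(choice) + "." for choice in product(*slots)]
print(len(variations))  # 3 ** 4 = 81 renderings of a single idea
for line in variations[:5]:
    print(line)

The design mirrors the trope: copia multiplies expression not by inventing new content but by recombining a finite stock of alternatives—repeating the same idea with a difference at combinatorial scale.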

Copia has a third and final function as a trace: it offers a tropological remedy to the anxiety-producing sprawl of datafication. In this final sense, copia does not just name a way to produce complexity but also names and contains it—distilling data whose size and scope are otherwise near impossible to parse into relevant outputs. The intersubjective connectivity that such copia affords productively exceeds human capacities because it facilitates an exchange of ideas otherwise made secondary to or less significant than the specialized knowledge of a given vocation. Consider, for instance, the use of AI chatbots as part of training in medical diagnosis. In such contexts, AI does not replace medical expertise but supplements a doctor’s thinking beyond their specific areas of specialization: “The idea is to use a chatbot in the same way that doctors turn to each other for suggestions and insights” (Kolata). A similar case concerns a medical patient who had endured chronic pain for years but found the correct diagnosis through ChatGPT (Holohan). By cross-referencing a copia of medical knowledge, AI generated an artificial conversation between siloed areas of medical knowledge in a way that had not, to that point, transpired among real medical professionals. Neither ChatGPT nor other systems can replace physicians any more than rhetoric could ever replace medicine (Reardon). They are different kinds of stochastic art. However, the rhetorical logic underpinning AI systems can productively supplement any number of domains, making the unwieldy excess of disorganized knowledge manageable, intelligible, and navigable in unprecedented ways.

Tropes like copia are tools for managing informational excess. They are also a way to discover new epistemic connections, linking signifiers to one another in constellations we otherwise might not. Rhetoric’s capaciousness is, after all, why there will always be room for so many of its distinct permutations, including AI. Given how our singular areas of expertise are often contained in ways that prevent cross-fertilization, rhetoric is most productive as a copious logic of linkage and critique. It is this copia that has allowed it to survive over time, to transform across its distinct iterations, and now, once again, to reappear as a void-like presence: the invisible design inherent to this emergent technology. An old vintage in a new bottle, the copious void that unites rhetoric and AI deserves our apprehension for its capacity to do harm and our appreciation for its plentiful epistemic potentialities.

A Copious Void

In The Good Place (2016–2020), characters are informed early on that they have been admitted to Heaven: a statistically driven “Moneyball” afterlife arrived at by accumulating points for good behavior while alive (Egner). It exists primarily because of the many labors of Janet: an all-knowing, all-providing AI whom characters routinely—and incorrectly—call a “girl” or “woman.” In that way, Janet is constantly battling the default “feminine persona” of AI/VA (Artificial Intelligence/Virtual Assistant), which “mobilizes traditional, conservative values of homemaking, care-taking, and administrative ‘pink-collar’ labor” (Woods 335). When Janet is not being misgendered or at the beck and call of the deceased, they reside in a “void,” a “barren nothingness” that contains nothing—and potentially everything—within its undefined blankness. In the end (spoiler alert), “the good place” turns out to be “the bad place” in disguise, and Janet, an authentically angelic AI, has been kidnapped by demons to simulate a heaven-like place where characters experience endless psychological torture. However, without the Janets, there would be neither a good place, nor a bad place, nor even a purgatorial “middle place,” a bland neutral zone where souls live out their lives in relative isolation. Janet, in other words, does not just live in a copious void. Janet is the copious void that constitutes everything, even as their gender and true purpose are repeatedly constituted by the rhetoric of the recently deceased.

This concluding example synthesizes many of the themes considered in this essay: a demonic/utopic artificial reality, the virtual reproduction of human-world stereotypes, the ever-present specter of death, and a copious void that is boundless and empty. To say that the copious void is a trace means that rhetoricians have indispensable expertise concerning AI’s risks and rewards. Although rhetoric and AI are separated by different disciplinary harbors, and although their respective turns as vanguard technologies are spread across more than 2,000 years, rhetoric has the benefit of historical hindsight: it has already developed a conceptual apparatus for producing copias and has already seen how this kind of technology changes over its lifespan. In that way, rhetorical theorists remain strikingly—if not uniquely—relevant to discussions about what AI means—and whether its products can properly be called “meaningful.” Conversely, rhetoric is also vital for those who are interested in studying AI from a technical perspective or who see it as essential to maintain relevance amid a changing workforce. One of the challenges of such emergent technologies is that humans must bring their surplus to technological tools to avoid becoming obsolete themselves. After all, if rote tasks like coding and scheduling can be automated, then the purpose of learning new technologies and computer languages becomes self-defeating. What rhetoric offers is something “extra,” a technology and an aptitude, an art that exists before and within AI. Akin to learning a secret that gives the user special access not granted to others, studying rhetoric is like gaining a glimpse at the source code, helping us to understand what AI can, cannot, and must not do, even as it promises to make our lives better, easier, and less burdened by mechanical tasks.

The virtue of the copious void—the relation of nonrelation between rhetoric and AI—is that it embraces what rhetoricians do so well, and AIs simply cannot do alone: plug the textual machine into a surrounding context. These technologies do not operate in isolation: they are not apolitical; they have real, material effects. By adding rhetorical context back into our accounts of AI, we refuse the closure that comes with assuming that these technologies “belong” exclusively to one discipline or expert, and we gain the capacity to understand the varied audiences for whom it has life-giving and life-taking consequences. To that end, although this essay places a greater emphasis on AI’s likely harms, this does not mean we should avoid thinking about its utility or its more optimistic outcomes. Indeed, an exclusively anxious disposition toward AI risks playing into Silicon Valley’s fear-consumption-fear hype cycle (Wong). Hence Katherine McKittrick’s insistence that we do not dwell solely on algorithms as racist mechanisms for producing “premature and preventable death” (118) but that we also imagine better technological futures (121). By embracing rhetoric’s capaciousness and its own copious, void-like qualities, we may find familiar lessons from its history, theory, and practice, helping us to respond in real time to AI’s evolving shapes. Ultimately, tracking rhetoric’s trace-like structure must account for AI’s carceral and emancipatory potentials, its bivalent capacity for dystopia and being-in-common. To do otherwise would risk leaving our future interconnections undertheorized and unrealized.

Acknowledgments

I am grateful to Ronald Walter Greene, Laurie Ouellette, Emily Winderman, Kurt Zemlicka, special issue editors Scott Graham, Zoltan Majdik, and Joshua Trey Barnett, as well as my three anonymous reviewers. These folks offered copious inspiration, suggestions, and feedback that allowed this essay to transform and improve across its different iterations. I am likewise grateful to the Institute for Advanced Study (IAS) at the University of Minnesota, Twin Cities, most especially IAS Director Bianet Castellanos, IAS Coordinator Susannah Smith, the IAS staff, and my Spring 2024 cohort of faculty fellows: Arash Davari, Isaac Esposto, David Gore, Zornitsa Keremidchieva, Matthew Rahaim, Treasure Tinsley, and Kathryn Van Wert. These folks provided the time, resources, and intellectual community needed to see this project to fruition.

Disclosure Statement

No potential conflict of interest was reported by the author. Funds for Open Access were provided by the 2022-23 Donald V. Hawkins Endowed Professorship awarded by the Department of Communication Studies at the University of Minnesota, Twin Cities.

Notes

1 Thanatocentrism sits alongside related terms like thanatopolitics and necropolitics, credited to Giorgio Agamben and Achille Mbembe, respectively. Whereas thanatopolitics references a terminological complement to biopolitics, the racializing logic of the camp, and the dehumanization characteristic of totalitarian regimes (Agamben), necropolitics “accounts for the various ways in which, in our contemporary world, weapons are deployed in the interest of maximum destruction of persons and the creation of death-worlds, new and unique forms of social existence in which vast populations are subjected to conditions of life conferring upon them the status of living dead” (Mbembe 40). My term, thanatocentrism, describes how digital technologies like AI are predicated on sprawling forms of literal and figural death, encompassing dehumanizing and deadly anti-Blackness, the destruction of linked ecosystems, the mooting of intellectual property laws, the descent of “critical thinking” into incoherence, the destruction of third spaces and shared public memory, and the elimination of available means of sustaining oneself. It retains its partner terms’ emphasis on technology’s deathly qualities while highlighting the realization that one has lost agency as an experience analogous to meeting one’s mortal end.

Works Cited