SPECIAL SECTION: REIMAGINING ADVERTISING RESEARCH: 50 YEARS AND BEYOND

Asking Questions of AI Advertising: A Maieutic Approach

Pages 608-623 | Received 10 Dec 2021, Accepted 07 Aug 2022, Published online: 29 Aug 2022

Abstract

Artificial intelligence (AI) is transforming advertising theory and practice. However, while applications of AI abound, it appears that not enough questions are being asked about the ontological, technical, and ethical consequences of artificially intelligent advertising ecosystems. Given the pace and unpredictability of technological change, this article adopts a question-driven approach, emphasizing the importance of adopting a maieutic attitude for academics, practitioners, and other advertising stakeholders, including the AI-ad-consuming public.

This article is part of the following collections:
Most Influential Articles in 2022—American Academy of Advertising Journals

Artificial intelligence (AI) is “one of the most disrupting technological revolutions in recent human history” (Mangiò, Pedeliento, and Andreini Citation2022, p. 188), so much so that IBM (Citation2021) recently opined that “if advertisers are not currently implementing AI tools into their strategy, it’s probably time they started considering it.” More generally, advertising practitioners are keen to extol the importance of AI applications (e.g., Schmelzer Citation2020; Purcell et al. Citation2021) and report that AI is already widely applied across organizations in almost every sector (Ammanath, Hupfer, and Jarvis Citation2020; Balakrishnan et al. Citation2020). It is perhaps unsurprising, therefore, that advertising theorists have taken a considerable interest in AI (Li Citation2019; Rodgers Citation2021), pointing out that existing and forthcoming applications have received relatively little critical scrutiny or oversight thus far (Ryan Citation2020; Smith and Smith Citation2021).

Looking across the literature, three main lines of questioning can be identified. First, advertising academics have begun to question the ontology of AI. Particular focus has been paid to how AI applications may redefine the ontological status of advertising, changing how it is theorized and practiced (e.g., Huh Citation2016; Huh and Malthouse Citation2020; Rodgers Citation2021). Second, academics have investigated the technical capacities of commercial AI tools, evaluating their effectiveness and proposing improvements, as well as designing novel techniques of their own (e.g., Hayes et al. Citation2021; Watts and Adriano Citation2021). Third, the ethical problems arising from AI applications have been considered by advertising academics concerned by issues like inclusivity, privacy, and agency (Hermann Citation2022; Samuel et al. Citation2021).

These extant lines of questioning have made for stimulating reading and contributed significantly to the ongoing conversation about AI advertising. However, many more questions could be asked. Given the pace of change and the diverse range of actors involved, it is difficult to predict how the field will look in 50 years. Yet most commentators agree that AI will almost certainly play a role across all areas of society, including advertising (Harari Citation2017; Russell Citation2021). As such, advertising researchers will need to continue asking ontological, technical, and ethical questions to ensure that the relationship between AI and advertising is as beneficial as possible.

The purpose of this article is to call for a more maieutic and questioning attitude among advertising theorists and practitioners. It begins by defining the maieutic method, justifying its use in general but also specifically in relation to fast-paced fields of development, such as AI advertising. It then mobilizes the maieutic method along ontological, technical, and ethical lines of inquiry. To illustrate this method, questions are raised in relation to topic areas from the Journal of Advertising’s first 50 years, such as computational advertising, creativity in advertising, and consumer choice. This article cautions against making specific predictions about the next half-century, instead proposing that maieutics provides a future-facing mode of inquiry for advertising research, whether AI related or otherwise.

Theoretical Foundations: Toward a Maieutics of AI Advertising

The spirit of this special issue is to look back over the past 50 years of research in the Journal of Advertising, but also to look forward to the next half-century. Within such a remit one may be inclined to adopt the oracular stance of the futurologist. A futurologist makes specific predictions (e.g., about the field of advertising) that allow relevant stakeholders to prepare for the changes to come. Yet futurological visions are often frustrated by the facts. This is especially true in an area like AI, where numerous individuals and organizations are competing to design and promote their own applications. As shown by historical cases—like the “war” between Betamax and VHS—the technologies that will eventually be adopted are not necessarily those that are technically superior but rather those that are promoted successfully by complex coalitions of stakeholders (Cusumano, Mylonadis, and Rosenbloom Citation1992).

With so much in play, oracles who drive firm stakes into the ground often find their specific predictions disproven, often thanks to unexpected discoveries, behavioral tipping points, and other drivers of rapid, radical change (Taleb Citation2007). However, the general spirit of their speculation may still ring true. Consider the following example from the Journal of Advertising: Rust and Oliver (Citation1994a) once predicted the emergence of the “video dial tone,” describing “the instantaneous transmission of a full range of interactive voice, data, and full-motion video services . . . a global electronic shopping mall” (p. 6). Later, Dahlen and Rosengren (Citation2016) pointed out that this prediction had not come to pass. Yet Rust (Citation2016) countered that “what we were really describing, before Marc Andreessen invented the first Web browser, was something that looks a whole lot like what the Internet would become” (p. 346). From this, one may conclude that the details of a prediction are not always as important as the general zeitgeist or mood that the academic oracle is trying to capture and convey.

Pham, Lee, and Stephen (Citation2012) demonstrate empirically how those who trust their feelings make better predictions. They argue that this is because emotions provide indirect access to vast amounts of unconscious information accumulated from exposure to a given environment. This emotional oracle effect provides empirical evidence that “gut instinct” may help experts make specific predictions about the future. However, it also suggests that the unarticulated wisdom of the unconscious may have access to insights that are lost when these experts are asked to articulate their knowledge in more “dispassionate” and “rational” terms. Read in this way, the work of Pham, Lee, and Stephen (Citation2012) provides empirical support for reading between the lines of oracular works like Rust and Oliver (Citation1994a), as Rust (Citation2016) later attempted.

If specific predictions are problematic, what is a suitable alternative? This article proposes maieutics as a mode of research that places more emphasis on asking better questions about the future, rather than trying to provide predictions. The term maieutic comes from the Greek word for midwifery, describing an epistemic tradition where one neither “conceives nor gives birth to truths, but assists in their delivery” (Westfall Citation2009, p. 628). In more prosaic terms, the maieutic approach assumes that knowledge can be advanced by providing new constructs, theories, or procedures (MacInnis Citation2011) but also by posing questions. In particular, academics should ask more abstract or indefinite questions, which speak to multiple audiences and remain pertinent over time.

Philosophers have long understood the power of questioning, with Socrates being famous in ancient Athens for his method of deconstructing assumed worldviews through persistent, ever-deeper lines of questioning (Grayling Citation2019). However, the art of questioning is often truncated in other areas of academia. In advertising scholarship, as in most areas of management theory, articles typically emphasize the importance of solving or resolving research questions. Providing answers, however partial and provisional, certainly helps to advance knowledge. However, the maieutic approach asserts that unresolved questions can serve as powerful pedagogic and epistemological tools. Asking a question can take something (e.g., a concept, theory, procedure, or phenomenon) and “trouble” it; leaving a question unresolved invites readers to “stay with the trouble” (Haraway Citation2016). Returning again and again to an unsettled (and potentially unsettling) topic may be more productive than trying to solve it and move on.

This article focuses on three main styles of questioning: ontological, technical, and ethical. As discussed presently, they are powerful in isolation but more so when interrelated.

Ontological questions interrogate the underlying status of reality, affecting how phenomena are perceived and concepts conceived (Grayling Citation2019). These perceptions and conceptions then shape working definitions, theoretical frameworks, managerial recommendations, and other intellectual resources, affecting what can be achieved (Blumer Citation1931; MacInnis Citation2011). Everyone has an ontological outlook on a given issue, but often it is tacit and taken for granted. As shown by the Socratic method of successive questioning (Grayling Citation2019), it is the role of ontological inquiry to unearth assumed worldviews and improve them.

Technical questions are more pragmatic, seeking to understand what is possible and how best to pursue these possibilities. Ontological questions (e.g., “What is X?”) lead to technical questions (e.g., “What can X do?”), but technical questions may be posed without interrogating underlying ontological assumptions. Indeed, this is more often the case: technical questions are typically pursued without such ontological reflection.

Ethical questions were originally general questions about the nature of a good life or how to be a good person (Grayling Citation2019). However, in contemporary scholarship, questions of ethics tend to focus on specific issues of moral contention, evaluating which possibilities should be followed and why. Again, it makes sense for ethical questions (e.g., “What should we do with X?”) to follow technical questions (e.g., “What can we do with X?”) and ontological questions (e.g., “What is X?”). In practice, however, this sequence is not always followed.

Based on this logical sequence, it is proposed that the maieutic method should follow the formula of O-T-E, rather than ask ontological, technical, or ethical questions in isolation. This logic could be applied to any number of topic areas within advertising (or beyond). The following sections illustrate the O-T-E process with examples drawn from the Journal of Advertising. Over the past 50 years, the topics of computational advertising, creativity in advertising, and consumer choice have emerged as recurring themes. Each has evolved in response to technological changes and so serves as a useful bellwether for questioning AI advertising.

While each topic entwines multiple lines of questioning, each has been chosen as an exemplar of one maieutic mode. That is to say, the topics of computational advertising, creativity in advertising, and consumer choice exemplify ontological, technical, and ethical questions, respectively.

Ontological Questions: Is AI Advertising Computational, Programmatic, or Something Else Entirely?

It is widely known that AI stands for artificial intelligence. However, the precise nature of that intelligence is a matter of much debate, fueled by the speculations of science fiction writers, professional futurologists, and academics alike (Puntoni et al. Citation2021). Over the past 50 years, scholars publishing in the Journal of Advertising have reflected on the relationship between technological developments and the field of advertising. This is because each major invention or innovation has the potential to redefine the fundamentals of advertising theory and practice (Rust Citation2016). As Huh and Malthouse (Citation2020) summarize, “[T]echnology has always played a critical role in the emergence and evolution of advertising, propelling advertising scholars and practitioners to adopt and adapt to each new technological advance by modifying the definition of advertising, altering its practice, pushing the theoretical boundaries of the field, and by sometimes forming new subfields” (p. 367).

As AI transitions from science fiction into science fact, questions about its ontological character, technical capabilities, and ethical consequences become pressing concerns. From one ontological position, AI is simply another technology in a long historical trajectory. Advertising has evolved from printed posters through television commercials to Internet banners, mobile application notifications, and social media influencer collaborations (Araujo et al. Citation2020). Yet, AI is also a technology that is increasingly “smart,” self-learning, and able to sense its environment (Hoffman and Novak Citation2018). It is plausible that AI will be able to achieve superhuman capacities, even sentience or self-awareness (Rodgers Citation2021). From this ontological position, AI is quite unlike technologies that have come before. Its impact will be more disruptive, better understood in discontinuous terms that diverge from historical precedent. Such discontinuities threaten to undermine existing theorizations and practices, but they may also be understood as opportunities for a revolution in the fundamentals of advertising.

Between these two extremes lies the paradoxical notion of the “changing same” (Harris Citation1991, p. 186), denoting phenomena that undergo great change yet retain a resemblance to earlier iterations. Contemporary advertising is vastly different from the field described in the earliest annals of the Journal of Advertising (e.g., Sandage Citation1972; Kirkpatrick Citation1986). Back then, “European views of advertising” could be considered a somewhat radical contribution to an American-dominated discipline (Christian Citation1974). A more recent example is Kilbourne’s (Citation1995) consideration of “green advertising.” The arguments were novel then but would now be considered unremarkable within an advertising discourse dominated by sustainability. The general point is that advertising has changed greatly over the past 50 years yet remains remarkably recognizable.

This paradox of changing sameness helps explain why the definition of advertising is the subject of ongoing debate (e.g., Dahlen and Rosengren Citation2016; Eisend Citation2016; Huh Citation2016; Kumar and Gupta Citation2016; Richards and Curran Citation2002; Schultz Citation2016; Stern, Zinkhan, and Holbrook Citation2002; Stewart Citation1992, Citation2016; Rust and Oliver Citation1994b; Rust Citation2016). Advertising is difficult to define. However, it is worth questioning whether a single and static definition would be desirable, given that advertising is an ever-evolving field of theory and practice. From a maieutic perspective, it is more desirable to provide a partial and provisional (“working”) definition to serve as a point of departure for debate and discussion.

Rodgers (Citation2021) describes “advertising” as “brand communication . . . to persuade” (p. 2). For present purposes, this serves as a suitable working definition for advertising (at least until the concluding section).

The question remains: Will AI reform advertising, or will it revolutionize it? To put the question in the terms of the changing same, will AI advertising be “more of the same” or will it be “much more of a change”? These are ontological questions, inquiring about the character of AI and its relationship to advertising, but the answers given will soon stimulate technical and ethical questions. What can AI advertising achieve? What should AI advertising achieve?

Similar questions have been posed before in the broader context of digital technologies. The discussion around computational advertising is especially pertinent. Huh and Malthouse (Citation2020) define computational advertising “as a broad, data-driven advertising approach relying on or facilitated by enhanced computing capabilities, mathematical models/algorithms, and the technology infrastructure to create and deliver messages and monitor/surveil an individual’s behaviors” (p. 367). Initially, it may be argued that AI advertising must be far more advanced than the approach they describe. This is because existing conceptualizations already treat AI as a spectrum of sophistication running in parallel to human intelligence (Hoffman and Novak Citation2018; Ryan Citation2020). As Rodgers (Citation2021) explains:

Narrow AI (NAI) or “weak AI” involves a basic level of intelligence that is likened to that of an infant, as it can perform only a specific task or one task at a time (much in the way that a baby will crawl before it can walk). General AI (GAI) refers to machine intelligence at the level of an adult, meaning the machine can be trained to handle more cognitively demanding and complex tasks (e.g., perception, learning, problem solving). GAI can also have human sense abilities, such as seeing, hearing, and feeling, making it a “strong” type of AI. Super AI (SAI) refers to technologies that supersede human intelligence and act independently from humans. SAI, the strongest of all intelligence types, is not yet possible but is of the futuristic type that Stephen Hawking believed could pose a threat to humankind. (p. 6)

Computational advertising could be aligned with the narrowest or weakest form of AI, describing models and algorithms that are designed by humans but also must be continually directed by them to remain effective. In short, AI is treated as a passive tool for advertisers. Some propose a more active conceptualization of AI called programmatic advertising, where the industry is populated by self-programming learning machines tracking spend and reach, experimenting with alternative designs or copy, and predicting and prompting consumer behaviors in real time across multiple platforms (Araujo et al. Citation2020; Busch Citation2016; Chen et al. Citation2019). Such applications are more consonant with the conception of general AI (Rodgers Citation2021).

General AI has already been applied to create personalized advertisements that can be adapted in real time, going beyond A/B testing by targeting, testing, and tweaking variations automatically with actual consumers (IBM Citation2021). Increasingly such AI programs are also embodied in physical hardware (Schmitt Citation2019), with virtual assistants on smartphones and domestic sideboards serving as ubiquitous examples (Hoffman and Novak Citation2018). Combining “smart” processing with sensory capabilities suggests these devices possess a “strong” form of AI (Rodgers Citation2021), with advertising being a central function. As illustrated by Dawar (Citation2018), Alexa and other smart assistants are always listening and always learning, ever responsive to consumer requests but also providing sponsored suggestions without making plain the alternative possibilities. This suggests that programmatic advertising describes the here and now of advertising, with computational conceptualizations already out of date.

Then again, some may suggest that programmatic advertising is itself becoming outmoded. This is because AI is already developing what Rodgers (Citation2021, p. 6) describes as superhuman capabilities. For now, the scarily smart AIs of science fiction only influence consumer behaviors by shaping their perceptions of what the future might be like, and thus how they might respond to technologies-in-development (Puntoni et al. Citation2021). However, De Bellis and Johar (Citation2020) are keen to point out that “technologies are becoming increasingly autonomous, able to make decisions and complete tasks on behalf of consumers” (p. 74). At the time of writing, a recent case in point is a Google engineer who claimed that an AI chatbot had developed self-awareness and feelings, even reporting a fear of being switched off (Luscombe Citation2022). The fact that so many commentators have taken this claim seriously, writes Bogost (Citation2022), reflects the widespread fascination with AI sentience and the intoxicating mix of utopian and dystopian fantasy that it seems to usher in.

This case also resonates with recent scholarship and commentary. For instance, Bogost (Citation2022) concluded that the Google AI controversy says more about the human tendency to anthropomorphize than it does about the actual emergence of a sentient machine. Similarly, researchers have shown how anthropomorphized machines or programs are often perceived as making decisions intentionally, in the same way that a human might (Waytz et al. Citation2010). This is not simply a question of ontics, as consumers’ responses to AI systems are affected by their perceptions of sentience. In a series of studies, Crolic et al. (Citation2022) show that angry consumers respond differently to anthropomorphized chatbots: these interactions produced negative consequences for the consumers (e.g., lower satisfaction) but also for the firm (e.g., reduced purchase intention).

Such research provides answers to technical questions that will benefit theorists and practitioners. However, it also supports the O-T-E logic that it may be best to pose ethical questions after ontological and technical questions have also been raised. In this case, understanding the tendency to anthropomorphize may be a powerful tool for understanding the nature of AI (Hoffman and Novak Citation2018; Schmitt Citation2019) and its technical effects on consumers (Puntoni et al. Citation2021; Crolic et al. Citation2022), but this understanding may also be used to question the anthropocentric biases hidden in ethical dilemmas (Bogost Citation2022). Another example is the intersection of AI sentience and the economic imperatives of advertisers. Commentators have pointed out that superhuman artificial intellect may not be humanlike at all, given that its underlying hardware is fundamentally different from the organic brain (Bogost Citation2012; Russell Citation2021). Bogost (Citation2022) reminds readers that the program in question was designed as a tool for market research by a for-profit technology giant. Thus, while ontological questions about the nature of consciousness may interest a diverse audience, the advertising-centric model of Google AI should be of particular interest to advertising scholars. For instance, does the commercial imperative integrated into its code affect the quality or character of the sentience that may emerge?

Indeed, questioning the extent to which AI runs parallel to human intelligence, as suggested by Rodgers (Citation2021), returns attention to the question of whether AI represents an extension of the historical narrative of advertising technologies or a disjuncture from it. Those stressing the distinctive, even alien (Bogost Citation2012), character of AI may conclude that AI advertising will be unpredictable. In contrast, others may return to the seemingly outmoded concept of computational advertising (Huh and Malthouse Citation2020), arguing that general and superhuman AI are actually more complex forms of computation. For instance, Puntoni et al. (Citation2021) conceptualize AI “as an ecosystem comprising three fundamental elements—data collection and storage, statistical and computational techniques, and output systems—that enable products and services to perform tasks typically understood as requiring intelligence and autonomous decision making on behalf of humans” (p. 132). This could mean that more sophisticated forms of AI may simply elaborate on the computational approach.

As mentioned earlier, the maieutic method adopted in this article means that a definitive ontological outlook is not the objective. Left open are questions about the relationships between AI and advertising, as well as whether these are like historical relationships between technological developments and the field of advertising. One ontological outlook suggests that computational advertising may be displaced by programmatic advertising as AI becomes more sophisticated, with both being replaced by an as-yet-unlabeled approach to advertising as AI becomes superhuman. Another holds that the computational approach may simply be revised and updated in line with technological progress.

Before such questions are dismissed as mere semantics, it is worth reiterating that conceptions inform perceptions that then feed into research and practice (Blumer Citation1931; MacInnis Citation2011). For instance, the term programmatic denotes the use of programs but also connotes a wider program of activities (that is, a strategy or plan). This may suggest a greater degree of cohesion, cooperation, and consistency across the field of advertising than the empirical evidence might suggest. The term computational has no such connotations, affording an understanding of advertising that is more complex, competitive, and even capricious, with multiple conflicting interests and initiatives. It also alludes to the possibility that superhuman AI may emerge from a nonhuman model of intelligence, rather than being modeled in the image of an anthropocentric program.

Technical Questions: What Is the Role of Creativity in AI Advertising?

When inspired by technical questions, academics become “problem solvers” (Moore and Leckenby Citation1975, p. 21). For instance, when Paschen, Pitt, and Kietzmann (Citation2020) sought to “develop a typology . . . as an analytic tool for managers grappling with AI’s influence” (p. 147), their work fell under the rubric of technical inquiry. Alternatively, scholars may seek to apply academic rigor to already-existing AI tools provided by commercial organizations, perhaps finding them “woefully unreliable” (Hayes et al. Citation2021, p. 81). This is important because knowledge in the domain of AI advertising is driven primarily by practitioners rather than academic research (Samuel et al. Citation2021). The logic of O-T-E suggests that prefacing technical questions with ontic inquiries may be beneficial. Rodgers’ (Citation2021) spectrum of narrow AI to super AI, for example, may be used to differentiate narrower tasks, such as mining consumer-generated data as a source of information (Liu-Thompkins et al. Citation2020), from more general applications, such as automating brand content through the use of algorithms (Van Noort et al. Citation2020).

As AI becomes increasingly sophisticated (Rodgers Citation2021), it gains greater autonomy from its human designers (De Bellis and Johar Citation2020; Schmitt Citation2019). As AI continues to evolve, IBM’s (Citation2021) claim that machines may be able to initiate their own consumer research and targeted advertising campaigns no longer seems far-fetched. Consider IBM’s Watson AI system. Watson was used by Lexus to analyze more than 15 years of award-winning ads, generating a script that human creatives could then direct and produce (Spangler Citation2018). Kaput (Citation2022) opines that this may have been an “AI-inspired PR stunt,” but technically minded academics may seek to interrogate the effectiveness of such systems (à la Hayes et al. Citation2021). Others may go further. Malthouse et al. (Citation2019) designed their own algorithm to make advertising auctions more efficient, while Ha et al. (Citation2020) sought to demonstrate how machine learning could be used by researchers (e.g., to analyze 452,616 Instagram posts for later interpretation). In both of these cases, academics and their innovations are entering into the AI “ecosystem” (Puntoni et al. Citation2021, p. 132) rather than simply commenting on it.

These technical developments also raise questions about the status of human actors in the advertising process. Puntoni et al. (Citation2021) imply as much when they define AI as able to “perform tasks typically understood as requiring intelligence and autonomous decision making” (p. 132). The role of the typical (i.e., human) intelligent decision maker is called into question by the emergence of atypical intelligence or decision making. In the field of advertising, humans are both consumers and producers of ads. Accordingly, redrawing the ontological boundaries of intelligence raises technical questions about who or what undertakes advertising activities. If AI influencers, for instance, “can produce positive brand benefits similar to those produced by human celebrity endorsers” (Thomas and Fowler Citation2021, p. 11), then are humans still required to create content, respond to customers, or evaluate the success of sponsored ads?

Based on their findings, Thomas and Fowler (Citation2021, p. 11) caution that when AI influencers act in a way that damages the brand, the use of a human celebrity tends to be more effective at mitigating reputational damage. This echoes the work of Crolic et al. (Citation2022), who found that anthropomorphized AI may not always benefit brands; both studies also point to the enduring creative role of the human advertiser. However, this creative role should remain an open question. As AI influencers become deceptively humanlike and more sophisticated, scholars will need to return to this topic. Indeed, Vakratsas and Wang (Citation2021) have already challenged the assumption that AIs cannot compete with humans in creative tasks, developing a rule-based framework that can be taught to the computational minds of machines.

Studies like Vakratsas and Wang’s (Citation2021) reinforce the work of Goldenberg and Mazursky (Citation2008), who sought to dispel some of the mystique surrounding the concept of creativity in the advertising field. In the Journal of Advertising, creativity is often treated as a defining feature of advertising, with advertisers distinguished from other practitioners by their creative skills (Bernardin et al. Citation2008; Dillon Citation1975; Lehnert, Till, and Ospina Citation2014; Politz Citation1975; Reid, King, and Delorme Citation1998; West, Koslow, and Kilgour Citation2019; White Citation1972; Winter and Russell Citation1973). This is most evident in traditional representations of advertising firms populated by artists and copywriters, such as Mad Men (Ghaffari, Hackley, and Lee Citation2019), but even cutting-edge representations of algorithmic coders and Silicon Valley entrepreneurs communicate an aura of creativity (see Bogost Citation2022). Goldenberg and Mazursky (Citation2008) sought to demonstrate the deeper patterns or structures that characterize creativity, thus allowing this concept to be studied more rigorously. In doing so, they facilitated the computation of creativity (Vakratsas and Wang Citation2021), contributing to the potential erosion of humanity’s unique selling point in the advertising process.

Creative AI may have many technical benefits for firms, not least the cost savings from making expensive human employees redundant. This, in turn, raises ethical questions about whether human-displacing AI should be employed. While simpler forms of AI automate rudimentary tasks and empower human advertisers to focus on more interesting and rewarding work (Araujo et al. Citation2020; Busch Citation2016; Chen et al. Citation2019), the emergence of super AI may also displace creative and strategic skills (De Bellis and Johar Citation2020), making humans increasingly obsolete (Ammanath, Hupfer, and Jarvis Citation2020). This is not necessarily a negative, as a reduced need for human labor frees up more time for individual leisure and collective flourishing (Harari Citation2017). However, it only reinforces the case for questioning the ethics of dehumanization in the advertising industry, treating such ethical questions as entangled with technical and ontological inquiries as well.

If technical inquiries lead to a diminished role for human creativity in the field of advertising, they also raise new ontological questions about the definition and future of advertising itself. If advertising becomes largely devoid of humans, what might this mean for a discipline where “meaning matters” (Puntoni, Schroeder, and Ritson 2010)? Presently, studies suggest that AI creates through divergence from established precedents, while human creativity is driven by a meaningful purpose (Lehnert, Till, and Ospina 2014), but how long may this ontological distinction remain? Might the role of humans in the advertising process be redefined around a new concept, such as empathy? Then again, might technically minded inquiries soon explicate the deeper structures of empathy and allow this to be coded into machinic minds as well? If so, will advertisers need to shift their lines of thinking yet again? The maieutic method suggests that scholars should not seek final answers but instead choose to “stay with the trouble” (Haraway 2016), leaving questions open-ended in anticipation of future developments.

Ethical Questions: How Should AI Advertising Respond to Consumer Choice?

Ethical questions interrogate values and morals, asking what actors should do in a given situation. Studies may be descriptive (by describing the morals of others) or normative (seeking to make the case for a particular ethical position). AI advertising is facing a crisis of legitimacy as many consumers focus on issues of privacy and manipulation, driven in no small part by high-profile scandals like that of Cambridge Analytica (Solon and Laughland 2018; Kietzmann, Paschen, and Treen 2018; Wertenbroch et al. 2020). In response, many experts in the area have already called for AI applications to be questioned in terms of ethics and even existential threats, not simply in terms of effectiveness (Ascarza, Ross, and Hardie 2021; Brown 2021; Chapman-Banks 2020; Henneborn and Eitel-Porter 2021; McGuire 2021).

The accelerating rate of technological change means that new possibilities often arrive while ethical responsibilities are still in an embryonic stage of development. For instance, advertising billboards have been designed with built-in cameras that can track the age, gender, and mood of customers to provide tailored ads as they move around shopping malls (Gillespie 2019). These are already being implemented, long before any significant public or political debate around the benefits and drawbacks of such devices. This demonstrates why, for Russell (2021), ethical questions must be prioritized if humanity is to learn to live peacefully and productively with AI.

The oracular work of futurists like Harari (2017) may be a helpful source of early inspiration, helping to pose ethical questions before technical capabilities arrive. Another might be the concerns voiced by advertising practitioners, whose cutting-edge work places them at the stimulating but also intimidating forefront of technological change (Bogost 2022; Van Esch, Cui, and Jain 2021). Both sources rely on the Delphi method of “asking the oracles” (Richards and Curran 2002), which can now be considered an evidence-based approach thanks to Pham, Lee, and Stephen’s (2012) work on the emotional oracle effect. This was discussed in the section on maieutics, as was the maieutic method of leaving questions unresolved. The latter is especially helpful in the context of ethics, as many of the questions being raised by AI developments are evolutions of long-running ethical debates like the right to privacy, the exclusionary effects of certain technologies, and the agency of the individual.

In a consumer context, powerful institutions like the European Commission (2020) call for AI developers to ensure “excellence and trust” for consumers (p. 1). However, ethical lines of questioning extend far beyond consumers’ perceptions of quality and trustworthiness. As noted in the previous section, AI advertising may pose challenges to deeply held cultural views like the value of work and the uniqueness of human creativity or empathy. This section focuses on another important area altered by the advent of AI advertising: consumer autonomy or choice.

Consumer autonomy can be defined as “consumers’ ability to make and enact decisions on their own, free from external influences imposed by other agents” (Wertenbroch et al. 2020, p. 430). Ever since the “Drink Coca-Cola” hoax of 1957, there have been recurring debates about the effectiveness of subliminal messaging and other forms of subconscious manipulation (Albanese 2015). These debates have been held among academics (Block and Vanden Bergh 1985; Cuperfain and Clarke 1985; Elgendi et al. 2018; Gable et al. 1987; Kelly 1979; Strahan, Spencer, and Zanna 2002; Theus 1994) but also between practitioners and consumers (Nelson 2008; Sutherland 2013; Zanot, Pincus, and Lamp 1983). Psychologists, neuroscientists, and behavioral economists now broadly agree that behavior can be “nudged” through unconscious processes (Garvey 2021; Kahneman 2011; Lindstrom 2012; Thaler and Sunstein 2009), with AI advertising now being applied to create “hyper-nudges” by devising, testing, and implementing manipulations automatically in real time (Darmody and Zwick 2020; Dholakia et al. 2021).
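The “devising, testing, and implementing” loop behind such hyper-nudges can be caricatured with a minimal epsilon-greedy sketch. Everything below is a hypothetical illustration (the variant names, click-through rates, and exploration parameter are assumptions for exposition, not any vendor’s actual system): the loop continuously tests ad variants on simulated consumers and routes traffic toward whichever variant nudges behavior most effectively.

```python
import random

def epsilon_greedy_ad_test(variants, get_click, rounds=10_000, epsilon=0.1, seed=42):
    """Toy hyper-nudge loop: automatically test ad variants and route
    traffic toward whichever variant nudges behavior most effectively."""
    rng = random.Random(seed)
    shows = {v: 0 for v in variants}
    clicks = {v: 0 for v in variants}

    def rate(v):
        # Observed click-through rate so far (0.0 before any impressions).
        return clicks[v] / shows[v] if shows[v] else 0.0

    for _ in range(rounds):
        if rng.random() < epsilon:
            v = rng.choice(variants)      # explore: keep testing every variant
        else:
            v = max(variants, key=rate)   # exploit: serve the best nudge so far
        shows[v] += 1
        clicks[v] += get_click(v, rng)    # simulated consumer response (0 or 1)
    return {v: rate(v) for v in variants}

# Simulated consumers: hypothetical variant "B" nudges slightly more effectively.
true_rates = {"A": 0.02, "B": 0.05}
observed = epsilon_greedy_ad_test(
    list(true_rates), lambda v, rng: int(rng.random() < true_rates[v])
)
```

On these toy assumptions, traffic tends to drift toward the more effective variant without any human ever deciding that it is the better manipulation, which is the sense in which such choice architectures are devised, tested, and implemented automatically and in real time.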

Consumers now encounter automated choice architectures on a daily basis, with “hyperrelevant” advertising using big data and machine learning to shape what products and services they can see (Cluley and Brown 2015; Dholakia et al. 2021). For critical scholars this is an especially deceptive form of manipulation because it draws on the rhetoric of consumer empowerment and choice (Darmody and Zwick 2020). Yet if the products and services offered are highly relevant, then many consumers may view digital (dis)empowerment as a more desirable situation than the pre-AI model of advertising, or at least not undesirable enough to stop purchasing (Hoang, Cronin, and Skandalis 2022). Previously, human-segmentation-based techniques resulted in an advertising landscape with many irrelevant ads; now, AI targeting means that every advertised offering is tailor-made to the individual or situation. It remains an open question whether consumers are willing to trade choice for convenience. As Wertenbroch et al. (2020, p. 436) note, “[W]ith autonomy defined as freedom from external influence, can consumer choice ever be considered fully autonomous, given that all choice is subject to a choice architecture (Thaler and Sunstein 2009)? Is it more appropriate to conceptualize autonomy by degree?”

Such developments raise ontological questions about the concepts of choice and autonomy; for instance, if an algorithm predicts a consumer’s preference and promotes it in real time, is this undermining consumer autonomy or enhancing it? It also raises technical questions. On the firm side, technical questioners may ask whether AI should be designed to understand whole individuals or simply divide them into “dividual” patterns of predictable behavior (Deleuze 1992), learning if-then dyads instead of holistic consumer profiles (Cluley and Brown 2015; Hietanen, Ahlberg, and Botez 2022). On the consumer side, technical questions may be posed to find the social, legal, or technological means to empower consumers to reclaim autonomy over these processes. Thanks to the work of investigators like Zuboff (2019) and Garvey (2021), consumers are increasingly aware of these manipulative communications, and more negative sentiments are beginning to seep into their everyday experience of AI (Puntoni et al. 2021; Hoang, Cronin, and Skandalis 2022). This also inspires attempts to resist such technologies through anti-tracking activities, although their effectiveness appears to be limited (Boerman, Kruikemeier, and Borgesius 2017).

As “the digital revolution is entering a new phase … incorporating digital information into physical, solid products” (Schmitt 2019, p. 825), these machine manipulations become ever more pervasive. As shown by Van Esch, Cui, and Jain (2021), AI is increasingly incorporated into store environments to provide dynamic, hyperpersonalized shopping experiences that may help consumers make better choices. Yet in the process of predicting and preempting these choices, the notion of consumer autonomy is called into question. For instance, consider the introduction of retail billboards and product labels that follow consumers’ movements and adapt ads to in-store behaviors (Gillespie 2019; Van Esch, Cui, and Jain 2021; Shankar 2018; Shankar et al. 2021), which challenges advertising scholars to ask about the nature of choice in an increasingly automated environment.

These empirical studies suggest that Wertenbroch et al. (2020) are justified when they “propose that perceived autonomy and the ways it is threatened, maintained, or enhanced are fundamental to consumer behavior and merits researchers’ and practitioners’ attention in contemporary marketplace contexts” (p. 431). Far more research is needed on such AI applications and their implications for consumers—research that moves through ontological and technical questions to ask what should be done in response to such AI applications. This investigation into ethical questions completes the O-T-E logic.

Consumer autonomy is not the only ethical issue for advertising researchers to consider. The most general but also central ethical question can be articulated as follows: What should AI advertising set out to achieve? Nearly three decades ago, Zinkhan (1994, p. 1) proposed that advertising should be a field underpinned by “a set of moral principles directed at enhancing societal well-being,” such as “nondeception” and “nondiscrimination.” If studies of surveillance capitalism are to be believed, algorithmic AI advertising thrives on the deception of manipulating consumers’ choices while maintaining the illusion of choice (Darmody and Zwick 2020; Dholakia et al. 2021; Hoang, Cronin, and Skandalis 2022). Although such practices are difficult to challenge (Boerman, Kruikemeier, and Borgesius 2017), advertising theorists may seek to play the role of the ethical educator (Frazer 1979) rather than simply a problem solver (cf. Moore and Leckenby 1975). In an era of AI-powered deepfakes (Sample 2020; Wakefield 2022), ethical lines of questioning are no longer the esoteric reserve of philosophy professors but an important skill for everyone to develop. Indeed, fakery abounds in the AI advertising ecosystem. While prior research has focused on the consumer experience of chatbots (Crolic et al. 2022), there have also been reports of chatbots communicating with other chatbots (Chapman-Banks 2020).

Zinkhan’s (1994, p. 1) other principle of “nondiscrimination” may also inspire trajectories of future research. At its simplest, discrimination reduces an individual’s choices by imposing restrictions based on race, sexuality, gender, age, or some other real or perceived features; these restrictions may emerge from interpersonal interactions, institutionalized structures, or implicit and even internalized ideologies, but most often a combination of all three (Arsel, Crockett, and Scott 2022). Over the decades there have been calls in the Journal of Advertising to consider how consumers are affected by race (Harrison, Thomas, and Cross 2017), gender (Brunel and Nelson 2000; Kates and Shaw-Garlock 1999; Eisend 2019; Zayer and Coleman 2015), sexuality (Gould 1994; Kates 1999; Eisend and Hermann 2019), disability (Burnett and Paul 1996; Houston 2022), and age (Eisend 2022). Unfortunately, discrimination persists and, if recent reporting and research are to be taken seriously, it seems that AI applications often intensify inequities rather than reduce them.

In response to the growing inequities wrought by AI advertising, ethical lines of questioning may be motivated by Zinkhan’s (1994) nondiscriminatory principle, working for what Arsel, Crockett, and Scott (2022) describe as “procedural and distributive justice” (p. 920). To elaborate on the example of race, Barry and Sheikh (1977) highlighted how black children have been overlooked by studies of television advertising. This means that the empirical basis of advertising theory in this area is inflected by the cultural biases of white populations. As such, they made the case that studying nonwhite consumers may have the added benefit of challenging ethnocentric theories with new insights. This can, in turn, only help to extend and enrich advertising theory and practice. Evidently, studies in the Journal of Advertising have been well ahead of the “Diversity, Equity, and Inclusion (DEI)” curve, which has only relatively recently “become ubiquitous in public and academic discourse” (Arsel, Crockett, and Scott 2022). However, DEI issues take on renewed relevance in the era of AI advertising. For instance, facial recognition software designed primarily by white engineers results in more arrests of black citizens (Breland 2017; Perkins 2019). Future research will need to respond to emergent discriminations.

It should be immediately added that AI itself is not discriminatory and may be used in ways that diminish racial discrimination and other forms of inequity. Recently, Yang, Chuenterawong, and Pugdeethosapol (2021) combined “human efforts and machine-learning algorithms” to analyze “32,702 comments on 110 Instagram posts” relating to Black Lives Matter (BLM) (p. 565). They found that black influencers received more praise and positive sentiments from consumers when posting about BLM than brands posting similar content. Here, seeking out more contexts with more racially diverse producers and consumers provided novel insights for all advertising theorists and practitioners. However, this study also shows how AI can be used to learn more about racial dynamics within advertising, providing knowledge that may be used to develop recommendations for practitioners, policymakers, and other stakeholders (in this case, influencers) using advertising to promote social change.

Looking beyond race, evidence is emerging that interview algorithms filter out candidates based on their appearance and voice (Russell 2021), while AI-powered automotive safety tests are based on data from male dummies, which means women are 17% more likely to die in an accident (Perez 2020). It is recognized that AI technologies are value-agnostic in themselves, taking on the values and biases of their human designers. For example, they can assist those with disabilities (Henneborn and Eitel-Porter 2021) but also lead to unexpected frustrations when disabled people are not included throughout the design process (Smith and Smith 2021). Thus, studying alternative perspectives is not only an act of ontological questioning (e.g., What is AI’s relationship to human discrimination?) but also an ethical line of inquiry that can inspire better outcomes. One example is Lambrecht and Tucker’s (2019) finding that AI advertising algorithms trained to focus on cost may inadvertently target more men in contexts where they are more economical to reach. Those calling for data feminism (D’Ignazio and Klein 2020) and AI gender equity (Smith and Rustagi 2021) challenge the unethical outcomes of such algorithms by questioning the ontological assumption that AI is gender neutral.

Again, the O-T-E logic is useful as ethical and ontological lines of inquiry remain incomplete without a technical solution. Here, data feminists and those seeking gender equality may turn to studies like Watts and Adriano’s (2021), who observed that machine learning can often struggle with contextual nuances, perhaps accidentally placing “a beer advertisement next to an article about drunk driving” (p. 26). This is a problem also observed by practitioners (e.g., Chapman-Banks 2020), but the technical reasons for it often remain unexplored. Watts and Adriano (2021, p. 26) explain how these errors emerge by making a conceptual distinction between “context-free databases” and databases that are “context aware.” Put simply, machines can learn to be more sensitive to context if more contextual data are included in their training schema. Accordingly, studies like Watts and Adriano’s (2021) connect ontological questions (e.g., “Is AI sexist?”) and ethical questions (e.g., “Should AI help address sexism?”) with intermediary technical questions (e.g., “How can AI be taught to avoid sexist assumptions?”).
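The context-free/context-aware distinction can be caricatured in a few lines of code. The sketch below is a hypothetical illustration only: the hand-written keyword and unsafe-context lists stand in for what a real system would learn from context-aware training data, and the product and article text are invented for exposition.

```python
# Hypothetical ad and keyword set (a stand-in for a learned relevance model).
AD = {"product": "beer", "keywords": {"beer", "lager", "brewery"}}

# Hypothetical product-to-unsafe-context mapping; in practice this sensitivity
# would come from contextual training data, not a hand-written list.
UNSAFE_CONTEXTS = {"beer": {"drunk", "driving", "crash", "addiction"}}

def context_free_match(ad, article_text):
    """Context-free placement: match on keywords, ignore the surroundings."""
    words = set(article_text.lower().split())
    return bool(ad["keywords"] & words)

def context_aware_match(ad, article_text):
    """Same keyword match, but veto placements in unsafe surrounding contexts."""
    words = set(article_text.lower().split())
    if UNSAFE_CONTEXTS.get(ad["product"], set()) & words:
        return False
    return bool(ad["keywords"] & words)

article = "local man charged with drunk driving after leaving beer festival"
```

On these assumptions, the context-free matcher places the beer ad next to the drunk-driving story because it sees only the keyword overlap, while the context-aware matcher also inspects the surrounding words and declines the placement, a toy analogue of making more contextual data available to the machine.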

Similar maieutic combinations may be used to respond to calls for anti-racist technologies (Waikar 2021), inclusive design (Henneborn and Eitel-Porter 2021), and other DEI-related movements. In doing so, advertising scholarship may be able to connect Zinkhan’s (1994, p. 1) two principles of “nondiscrimination” and “nondeception” through the medium of consumer empowerment. As noted, AI advertising raises difficult questions about consumer choice and human agency (Wertenbroch et al. 2020). However, these issues are more complex and consequential for those also facing discrimination. The disabled consumer who struggles to engage with a largely visual advertising culture or the consumer without the digital literacy or economic capital to access services advertised online may be, quite rightly, dismissive of those concerned about whether algorithms are “too good” at predicting their needs. AI manipulation of choice is certainly a problem, but it is only a problem for those with the privilege of choice in the first instance.

For Entis (2015), Minority Report provides the best model for the future of the advertising industry. In the film representation, digital billboards use cameras to target the individual. As noted above, this is a technology whose implementation is already well underway (Gillespie 2019). However, the film and the original short story also describe a world where everything is predicted so effectively that people can be arrested before they even commit a crime. In such a scenario, it is questionable whether consumers would have any autonomy at all, with products and services purchased automatically before the consumer is even aware that they have a need or desire. However, if the superhuman AIs behind these automated economies were to also contain discriminatory biases or blind spots, then these philosophical and political questions of agentic ambiguity would take on a darker, dystopian aspect. Imagine an AI-enabled market where consumers were automatically excluded from markets on the basis of disability, targeted with racially stereotypical products without their consent, or outed as nonheterosexual by being hypertargeted in public by an LGBTQ+ brand on the basis of prior behaviors. Under such conditions a questioning attitude would not be an esoteric luxury, but rather an ethical imperative.

Concluding Remarks: 50 Years More of Ontological, Technical, and Ethical Questions?

At this relatively early stage in the evolution of AI advertising, and as part of the half-centenary celebrations at the Journal of Advertising, it seems a suitable time to call for a more reflective, maieutic approach. This is especially true in relation to AI advertising, an area of knowledge that is currently dominated by a “preponderance of practitioner literature” (Samuel et al. 2021, p. 6). The key contribution of this article is to discern and develop three lines of questioning and identify a logic that makes them most effective. It is impossible, even laughable, to try to predict exactly what advertising will look like 50 years hence. Accordingly, this piece has focused more on posing questions than providing answers—the hope being that a maieutic approach will remain relevant far longer than any specific predictions.

In the maieutic spirit, questions should be asked of this article. For instance, this discussion focused primarily on machine learning and the tasks that learned machines can perform. However, practitioner reports often analyze AI alongside blockchain, cloud storage, and other technological developments (e.g., Purcell et al. 2021). This article cited studies of smart assistants (Hoffman and Novak 2018) and smart retail stores (Van Esch, Cui, and Jain 2021; Shankar 2018; Shankar et al. 2021), hinting at AI advertising’s materialization in the Internet of Things (Schmitt 2019). Yet many more questions could be asked of AI advertising and its embeddedness within a wider digital ecosystem (Adams 2004; Araujo et al. 2020; Puntoni et al. 2021). For instance, how can and should AI advertising relate to developments in digital marketing, social media marketing, and mobile marketing (Lamberton and Stephen 2016)?

It may also be worth questioning the latent anthropocentrism of this article. While the preceding pages have acknowledged the possibility that sentient AI may emerge in the future, it has largely focused on the ways in which humans engage with AI as advertising practitioners, consumers of ads, or other advertising-relevant stakeholders. Other scholars have been more willing to speculate about the subjective experiences of AI (Coffin 2021a; Hoffman and Novak 2018), working toward a conceptualization of technocapitalism (e.g., Hietanen, Ahlberg, and Botez 2022). This more-than-human model will necessitate new concepts like “smart brands,” which Isisag et al. (2021) recently introduced to describe an “agentic amalgam of brand, technology, and artificial intelligence” (p. 633). Here AI is considered an agent able to act in concert with others, rather than a tool awaiting human instruction. While distributed conceptions of agency are now well established in some areas of management theory (Bajde 2013; Canniford and Bajde 2016), elsewhere they remain at the avant-garde of questioning.

A third limitation worth questioning is the meaning-centric, mass-communication model implied by the working definition of advertising adopted in this article. This article defined advertising simply as brand-centric communications that persuade (Rodgers 2021). Notably, Politz (1975) once attested that even if advertising were “void of any persuasive characteristic it would still make the product better known,” such that a consumer “confronted with two brands, offered at equal prices,” would avoid “the one she has never heard of” (p. 11). As many now accord with Rodgers (2021) in defining advertising as persuasive brand-centric communications (e.g., Dahlen and Rosengren 2016, p. 334), this was adopted as a definition that worked well for the purposes of this article. However, now that this article draws to a close, it is worth opening up this working definition to questioning once more.

For one, this model relies on the assumption that consumers are persuaded by the meanings conveyed by communications. The meaning-centric approach to advertising is a well-established tradition (Belk 2017; Brown, Stevens, and Maclaran 1999; Friedmann and Zimmer 1988; Kates and Shaw-Garlock 1999; Puntoni, Schroeder, and Ritson 2010; Stern 1988), but a number of other scholarly traditions critique such an emphasis on meaningfulness (e.g., Coffin 2021b). Another point to question is the implication that advertising involves firms communicating to consumers. It has been noted that contemporary advertising is better understood as a multidirectional matrix of communication (e.g., Araujo et al. 2020) rather than a firm–consumer dyad. It has also been argued that those subject to advertising do not always adopt the role of consumer; pro-social projects may seek to persuade citizens, activists, or donors (Choi et al. 2016; Dean 2003; La Ferle, Muralidharan, and Kim 2019).

If questions are asked of this article, even probing and problematic ones, then this only reinforces the benefits of a maieutic, question-based approach. Huh and Malthouse (2020) propose that advertising scholarship is best thought of as a “thought leadership forum” that facilitates “collaboration among scholars from varying academic disciplines, methodological and disciplinary perspectives, and different expertise to examine important and timely issues . . . and to set a forward-looking research agenda for the next decade and beyond” (p. 371). A questioning attitude will provide an engine for such forums. Advertising theorists must be willing to debate—with one another but also with practitioners, policymakers, and consuming publics—to determine what the future of AI advertising can and should be by asking difficult questions. Advertising theorists and practitioners should not be judged by their predictions and certainly should resist judging themselves too harshly (cf. Rust 2016; Stewart 2016). Rather, the main evaluative criteria for advertising theory and practice over the next 50 years, in relation to AI advertising or any other area of consideration, should be the quality of its questioning.

Additional information

Notes on contributors

Jack Coffin

Jack Coffin (PhD, University of Manchester) is a senior lecturer in fashion marketing, Department of Materials, University of Manchester.

References

  • Adams, Richard. 2004. “Intelligent Advertising.” AI & Society 18 (1):68–81. doi:10.1007/s00146-003-0259-9
  • Albanese, Paul J. 2015. “The Unconscious Processing of Information.” Marketing Theory 15 (1):59–78. doi:10.1177/1470593114558532
  • Ammanath, Beena, Susanne Hupfer, and David Jarvis. 2020. “Deloitte’s State of AI in the Enterprise: Thriving in the Era of Pervasive AI.” 3rd Edition, Deloitte. https://www2.deloitte.com/cn/en/pages/about-deloitte/articles/state-of-ai-in-the-enterprise-3rd-edition.html.
  • Araujo, Theo, Jonathan R. Copulsky, Jameson L. Hayes, Su Jung Kim, and Jaideep Srivastava. 2020. “From Purchasing Exposure to Fostering Engagement: Brand-Consumer Experiences in the Computational Advertising Landscape.” Journal of Advertising 49 (4):428–45. doi:10.1080/00913367.2020.1795756
  • Arsel, Zeynep, David Crockett, and Maura L. Scott. 2022. “Diversity, Equity, and Inclusion (DEI) in the Journal of Consumer Research: A Curation and Research Agenda.” Journal of Consumer Research 48 (5):920–33. doi:10.1093/jcr/ucab057
  • Ascarza, Eva, Michael Ross, and Bruce G. S. Hardie. 2021. “Why You Aren’t Getting More from Your Marketing AI.” Harvard Business Review, July-August. https://hbr.org/2021/07/why-you-arent-getting-more-from-your-marketing-ai.
  • Bajde, Domen. 2013. “Consumer Culture Theory (Re)visits Actor-Network Theory: Flattening Consumption Studies.” Marketing Theory 13 (2):227–42.
  • Balakrishnan, Tara, Michael Chui, Bryce Hall, and Nicolaus Henke. 2020. “The State of AI in 2020.” McKinsey & Company, 17 November 2020. https://www.mckinsey.com/business-functions/mckinsey-analytics/our-insights/global-survey-the-state-of-ai-in-2020.
  • Barry, Thomas E., and Anees A. Sheikh. 1977. “Race as a Dimension in Children’s TV Advertising: The Need for More Research.” Journal of Advertising 6 (3):5–10. doi:10.1080/00913367.1977.10672701
  • Belk, Russell W. 2017. “Qualitative Research in Advertising.” Journal of Advertising 46 (1):36–47. doi:10.1080/00913367.2016.1201025
  • Bernardin, Thomas, Paul Kemp-Robertson, David W. Stewart, Yan Cheng, Heather Wan, John R. Rossiter, Sunil Erevelles, Robert Roundtree, George M. Zinkhan, and Nobuyuki Fukawa. 2008. “Envisioning the Future of Advertising Creativity Research: Alternative Perspectives.” Journal of Advertising 37 (4):131–50. doi:10.2753/JOA0091-3367370411
  • Block, Martin P., and Bruce O. Vanden Bergh. 1985. “Can You Sell Subliminal Messages to Consumers?” Journal of Advertising 14 (3):59–62. doi:10.1080/00913367.1985.10672960
  • Blumer, Herbert. 1931. “Science without Concepts.” American Journal of Sociology 36 (4):515–33. doi:10.1086/215473
  • Boerman, Sophie C., Sanne Kruikemeier, and Frederik J. Zuiderveen Borgesius. 2017. “Online Behavioral Advertising: A Literature Review and Research Agenda.” Journal of Advertising 46 (3):363–76. doi:10.1080/00913367.2017.1339368
  • Bogost, Ian. 2012. Alien Phenomenology: Or, What It’s Like to Be a Thing. Minneapolis: University of Minnesota Press.
  • Bogost, Ian. 2022. “Google’s ‘Sentient’ Chatbot Is Our Self-Deceiving Future.” The Atlantic, June 14.
  • Breland, Ali. 2017. “How White Engineers Built Racist Code—And Why It’s Dangerous for Black People.” The Guardian, 4 December. Accessed May 19, 2020. www.theguardian.com/technology/2017/dec/04/racist-facial-recognition-white-coders-black-people-police.
  • Brown, Annie. 2021. “A Smarter Type of Ad Is Already Here, But How Can We Make the Most of Ethical, Artificially Intelligent Marketing?” Forbes, 31 October. https://www.forbes.com/sites/anniebrown/2021/10/31/a-smarter-type-of-ad-is-already-here-but-how-can-we-make-the-most-of-ethical-artificially-intelligent-marketing/?sh=2c34ec811fb1.
  • Brown, Stephen, Lorna Stevens, and Pauline Maclaran. 1999. “I Can’t Believe It’s Not Bakhtin!: Literary Theory, Postmodern Advertising, and the Gender Agenda.” Journal of Advertising 28 (1):11–24. doi:10.1080/00913367.1999.10673573
  • Brunel, Frédéric F., and Michelle R. Nelson. 2000. “Explaining Gendered Responses to ‘Help-Self’ and ‘Help-Others’ Charity Ad Appeals: The Mediating Role of World-Views.” Journal of Advertising 29 (3):15–28. doi:10.1080/00913367.2000.10673614
  • Burnett, John J., and Pallab Paul. 1996. “Assessing the Media Habits and Needs of the Mobility-Disabled Consumer.” Journal of Advertising 25 (3):47–59. doi:10.1080/00913367.1996.10673506
  • Busch, Oliver. 2016. “The Programmatic Advertising Principle.” In Programmatic Advertising: Management for Professionals, edited by Oliver Busch, 3–15. New York, NY: Springer.
  • Canniford, Robin, and Domen Bajde. 2016. “Assembling Consumption.” In Assembling Consumption, edited by Robin Canniford and Domen Bajde, 1–18. Oxford, UK: Routledge.
  • Chapman-Banks, Ian. 2020. “How AI Can Deliver Targeted Ads While Ensuring Brand Safety.” The Drum, 7 October. https://www.thedrum.com/opinion/2020/10/07/how-ai-can-deliver-targeted-ads-while-ensuring-brand-safety.
  • Chen, Gang, Peihong Xie, Jing Dong, and Tianfu Wang. 2019. “Understanding Programmatic Creative: The Role of AI.” Journal of Advertising 48 (4):347–55. doi:10.1080/00913367.2019.1654421
  • Choi, Jungsil (David), Priyamvadha Rangan, and Surendra N. Singh. 2016. “Do Cold Images Cause Cold-Heartedness? The Impact of Visual Stimuli on the Effectiveness of Negative Emotional Charity Appeals.” Journal of Advertising 45 (4):417–26. doi:10.1080/00913367.2016.1185982
  • Christian, Dick. 1974. “European Views of Advertising.” Journal of Advertising 3 (4):23–5. doi:10.1080/00913367.1974.10672551
  • Cluley, Robert, and Stephen Brown. 2015. “The Dividualised Consumer: Sketching the New Mask of the Consumer.” Journal of Marketing Management 31 (1-2):107–22. doi:10.1080/0267257X.2014.958518
  • Coffin, Jack. 2021a. “Posthuman Phenomenology: What Are Places Like for Nonhumans?” In A Research Agenda for Place Branding, edited by D. Medway, G. Warnaby, and J. Byrom, 183–99. Cheltenham, UK: Edward Elgar.
  • Coffin, Jack. 2021b. “Machines Driving Machines: Deleuze and Guattari’s Asignifying Unconscious.” Marketing Theory 21 (4):501–16. doi:10.1177/14705931211035160
  • Crolic, Cammy, Felipe Thomaz, Rhonda Hadi, and Andrew T. Stephen. 2022. “Blame the Bot: Anthropomorphism and Anger in Customer-Chatbot Interactions.” Journal of Marketing 86 (1):132–48. doi:10.1177/00222429211045687
  • Cuperfain, Ronnie, and T. K. Clarke. 1985. “A New Perspective of Subliminal Perception.” Journal of Advertising 14 (1):36–41. doi:10.1080/00913367.1985.10672928
  • Cusumano, Michael A., Yiorgos Mylonadis, and Richard S. Rosenbloom. 1992. “Strategic Maneuvering and Mass-Market Dynamics: The Triumph of VHS over Beta.” Business History Review 66 (1):51–94. doi:10.2307/3117053
  • Dahlen, Micael, and Sara Rosengren. 2016. “If Advertising Won’t Die, What Will It Be? Toward a Working Definition of Advertising.” Journal of Advertising 45 (3):334–45. doi:10.1080/00913367.2016.1172387
  • Darmody, Aron, and Detlev Zwick. 2020. “Manipulate to Empower: Hyper-Relevance and the Contradictions of Marketing in the Age of Surveillance Capitalism.” Big Data & Society 7 (1):2053951720904112. doi:10.1177/2053951720904112
  • Dawar, Niraj. 2018. “Marketing in the Age of Alexa.” Harvard Business Review, May-June. https://hbr.org/2018/05/marketing-in-the-age-of-alexa.
  • De Bellis, Emanuel, and Gita Venkataramani Johar. 2020. “Autonomous Shopping Systems: Identifying and Overcoming Barriers to Consumer Adoption.” Journal of Retailing 96 (1):74–87. doi:10.1016/j.jretai.2019.12.004
  • Dean, Dwane H. 2003. “Consumer Perception of Corporate Donations Effects of Company Reputation for Social Responsibility and Type of Donation.” Journal of Advertising 32 (4):91–102. doi:10.1080/00913367.2003.10639149
  • Deleuze, Gilles. 1992. “Postscript on the Societies of Control.” October 59: 3–7.
  • Dholakia, Nikhilesh, Aron Darmody, Detlev Zwick, Ruby Roy Dholakia, and A. Fuat Firat. 2021. “Consumer Choicemaking and Choicelessness in Hyperdigital Marketspaces.” Journal of Macromarketing 41 (1):65–74. doi:10.1177/0276146720978257
  • D’Ignazio, Catherine, and Lauren F. Klein. 2020. Data Feminism. Cambridge, MA: MIT Press.
  • Dillon, Tom. 1975. “The Triumph of Creativity over Communication.” Journal of Advertising 4 (3):15–8. doi:10.1080/00913367.1975.10672589
  • Eisend, Martin. 2016. “Comment: Advertising, Communication, and Brands.” Journal of Advertising 45 (3):353–5. doi:10.1080/00913367.2016.1187579
  • Eisend, Martin. 2019. “Gender Roles.” Journal of Advertising 48 (1):72–80. doi:10.1080/00913367.2019.1566103
  • Eisend, Martin. 2022. “Older People in Advertising.” Journal of Advertising 51 (3):308–22.
  • Eisend, Martin, and Erik Hermann. 2019. “Consumer Responses to Homosexual Imagery in Advertising: A Meta-Analysis.” Journal of Advertising 48 (4):380–400. doi:10.1080/00913367.2019.1628676
  • Elgendi, Mohamed, Parmod Kumar, Skye Barbic, Newton Howard, Derek Abbott, and Andrzej Cichocki. 2018. “Subliminal Priming—State of the Art and Future Perspectives.” Behavioral Sciences 8 (6):54. doi:10.3390/bs8060054
  • Entis, Laura. 2015. “The Future of Advertising Will Probably Look a Lot Like Minority Report.” Entrepreneur, October 12. https://www.entrepreneur.com/article/251627.
  • European Commission. 2020. “White Paper: On Artificial Intelligence—A European Approach to Excellence and Trust.” European Union, Brussels, 19 February. https://ec.europa.eu/info/sites/default/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf.
  • Frazer, Charles F. 1979. “Advertising Ethics: The Role of the Educator.” Journal of Advertising 8 (1):43–6. doi:10.1080/00913367.1979.10673271
  • Friedmann, Roberto, and Mary R. Zimmer. 1988. “The Role of Psychological Meaning in Advertising.” Journal of Advertising 17 (1):31–40. doi:10.1080/00913367.1988.10673101
  • Gable, Myron, Henry T. Wilkens, Lynn Harris, and Richard Feinberg. 1987. “An Evaluation of Subliminally Embedded Sexual Stimuli in Graphics.” Journal of Advertising 16 (1):26–31. doi:10.1080/00913367.1987.10673057
  • Garvey, James. 2021. “Under the Influence.” British Broadcasting Corporation Radio 4, First Aired 7 December. https://www.bbc.co.uk/programmes/m00127zc.
  • Ghaffari, Mahsa, Chris Hackley, and Zoe Lee. 2019. “Control, Knowledge, and Persuasive Power in Advertising Creativity: An Ethnographic Practice Theory Approach.” Journal of Advertising 48 (2):242–9. doi:10.1080/00913367.2019.1598310
  • Gillespie, Eden. 2019. “Are You Being Scanned? How Facial Recognition Technology Follows You, Even as You Shop.” The Guardian, 24 February. https://www.theguardian.com/technology/2019/feb/24/are-you-being-scanned-how-facial-recognition-technology-follows-you-even-as-you-shop.
  • Goldenberg, Jacob, and David Mazursky. 2008. “When Deep Structures Surface: Design Structures that can Repeatedly Surprise.” Journal of Advertising 37 (4):21–34.
  • Gould, Stephen J. 1994. “Sexuality and Ethics in Advertising: A Research Agenda and Policy Guideline Perspective.” Journal of Advertising 23 (3):73–80. doi:10.1080/00913367.1994.10673452
  • Grayling, A. C. 2019. The History of Philosophy. London, UK: Penguin.
  • Ha, Yui, Kunwoo Park, Su J. Kim, Jungseock Joo, and Meeyoung Cha. 2020. “Automatically Detecting Image-Text Mismatch on Instagram with Deep Learning.” Journal of Advertising 50 (1):52–62.
  • Harari, Yuval Noah. 2017. “Reboot for the AI Revolution.” Nature 550 (7676):324–7. doi:10.1038/550324a
  • Haraway, Donna. 2016. Staying with the Trouble. Durham, NC: Duke University Press.
  • Harris, William J. 1991. The LeRoi Jones/Amiri Baraka Reader. New York, NY: Thunder’s Mouth Press.
  • Harrison, Robert L., Kevin D. Thomas, and Samantha N. Cross. 2017. “Restricted Visions of Multiracial Identity in Advertising.” Journal of Advertising 46 (4):503–20. doi:10.1080/00913367.2017.1360227
  • Hayes, Jameson L., Brian C. Britt, William Evans, Stephen W. Rush, Nathan A. Towery, and Alyssa C. Adamson. 2021. “Can Social Media Listening Platforms’ Artificial Intelligence Be Trusted? Examining the Accuracy of Crimson Hexagon’s (Now Brandwatch Consumer Research’s) AI-Driven Analyses.” Journal of Advertising 50 (1):81–91. doi:10.1080/00913367.2020.1809576
  • Henneborn, Laurie A., and R. Eitel-Porter. 2021. “AI for Disability Inclusion: Enabling Change with Advanced Technology.” Accenture. https://www.accenture.com/_acnmedia/PDF-155/Accenture-AI-For-Disablility-Inclusion.pdf.
  • Hermann, Erik. 2022. “Leveraging Artificial Intelligence in Marketing for Social Good—An Ethical Perspective.” Journal of Business Ethics 179 (1):43–61. doi:10.1007/s10551-021-04843-y
  • Hietanen, Joel, Oscar Ahlberg, and Andrei Botez. 2022. “The ‘Dividual’ Is Semiocapitalist Consumer Culture.” Journal of Marketing Management 38 (1-2):165–81. doi:10.1080/0267257X.2022.2036519
  • Hoang, Quynh, James Cronin, and Alex Skandalis. 2022. “High-Fidelity Consumption and the Claustropolitan Structure of Feeling.” Marketing Theory 22 (1):85–104. doi:10.1177/14705931211062637
  • Hoffman, Donna L., and Thomas P. Novak. 2018. “Consumer and Object Experience in the Internet of Things: An Assemblage Theory Approach.” Journal of Consumer Research 44 (6):1178–204. doi:10.1093/jcr/ucx105
  • Houston, Ella. 2022. “Polysemic Interpretations: Examining How Women with Visual Impairments Incorporate, Resist, and Subvert Advertising Content.” Journal of Advertising 51 (2):240–55. doi:10.1080/00913367.2021.1895008
  • Huh, Jisu. 2016. “Comment: Advertising Won’t Die, but Defining It Will Continue to Be Challenging.” Journal of Advertising 45 (3):356–8. doi:10.1080/00913367.2016.1191391
  • Huh, Jisu, and Edward C. Malthouse. 2020. “Advancing Computational Advertising: Conceptualization of the Field and Future Directions.” Journal of Advertising 49 (4):367–76. doi:10.1080/00913367.2020.1795759
  • IBM. 2021. “How AI Is Changing Advertising.” IBM Watson Advertising, 23 March 2021. https://www.ibm.com/watson-advertising/thought-leadership/how-ai-is-changing-advertising.
  • Isisag, Anil, Craig Thompson, Delphine Dion, Markus Giesler, Ashlee Humphreys, Gregory Carpenter, Nicolas Pendarvis, and Marius Luedicke. 2021. “Contemporary Investigations into the Relational Understanding of Branding.” In NA—Advances in Consumer Research, edited by Tonya Williams Bradford, Anat Keinan, and Matthew Thomson, Vol. 49, 631–5. Duluth, MN: Association for Consumer Research.
  • Kahneman, Daniel. 2011. Thinking, Fast and Slow, London, UK: Penguin.
  • Kaput, Mike. 2022. “Artificial Intelligence in Advertising: Everything You Need to Know.” Marketing Artificial Intelligence Institute, 10 March. https://www.marketingaiinstitute.com/blog/ai-in-advertising.
  • Kates, Steven M. 1999. “Making the Ad Perfectly Queer: Marketing “Normality” to the Gay Men’s Community?” Journal of Advertising 28 (1):25–37. doi:10.1080/00913367.1999.10673574
  • Kates, Steven M., and Glenda Shaw-Garlock. 1999. “The Ever-Entangling Web: A Study of Ideologies and Discourses in Advertising to Women.” Journal of Advertising 28 (2):33–49. doi:10.1080/00913367.1999.10673582
  • Kelly, J. Steven. 1979. “Subliminal Embeds in Print Advertising: A Challenge to Advertising Ethics.” Journal of Advertising 8 (3):20–4. doi:10.1080/00913367.1979.10673284
  • Kietzmann, Jan, Jeannette Paschen, and Emily Treen. 2018. “Artificial Intelligence in Advertising: How Marketers Can Leverage Artificial Intelligence along the Consumer Journey.” Journal of Advertising Research 58 (3):263–7. doi:10.2501/JAR-2018-035
  • Kilbourne, William E. 1995. “Green Advertising: Salvation or Oxymoron?” Journal of Advertising 24 (2):7–20. doi:10.1080/00913367.1995.10673472
  • Kirkpatrick, Jerry. 1986. “A Philosophic Defense of Advertising.” Journal of Advertising 15 (2):42–64. doi:10.1080/00913367.1986.10673004
  • Kumar, V., and Shaphali Gupta. 2016. “Conceptualizing the Evolution and Future of Advertising.” Journal of Advertising 45 (3):302–17. doi:10.1080/00913367.2016.1199335
  • La Ferle, Carrie, Sidharth Muralidharan, and Eunjin (Anna) Kim. 2019. “Using Guilt and Shame Appeals from an Eastern Perspective to Promote Bystander Intervention: A Study of Mitigating Domestic Violence in India.” Journal of Advertising 48 (5):555–68. doi:10.1080/00913367.2019.1668893
  • Lamberton, Cait, and Andrew T. Stephen. 2016. “A Thematic Exploration of Digital, Social Media, and Mobile Marketing: Research Evolution from 2000 to 2015 and an Agenda for Future Inquiry.” Journal of Marketing 80 (6):146–72. doi:10.1509/jm.15.0415
  • Lehnert, Kevin, Brian D. Till, and José M. Ospina. 2014. “Advertising Creativity: The Role of Divergence versus Meaningfulness.” Journal of Advertising 43 (3):274–85. doi:10.1080/00913367.2013.851630
  • Li, Hairong. 2019. “Special Section Introduction: Artificial Intelligence and Advertising.” Journal of Advertising 48 (4):333–7. doi:10.1080/00913367.2019.1654947
  • Lindstrom, Martin. 2012. Buyology: How Everything We Believe about Why We Buy Is Wrong, London, UK: Random House.
  • Liu-Thompkins, Yuping, Ewa Maslowska, Yuqing Ren, and Hyejin Kim. 2020. “Creating, Metavoicing, and Propagating: A Road Map for Understanding User Roles in Computational Advertising.” Journal of Advertising 49 (4):394–410. doi:10.1080/00913367.2020.1795758
  • Luscombe, Richard. 2022. “Google Engineer Put on Leave after Saying AI Chatbot Has Become Sentient.” The Guardian, 12 June.
  • MacInnis, Deborah J. 2011. “A Framework for Conceptual Contributions in Marketing.” Journal of Marketing 75 (4):136–54. doi:10.1509/jmkg.75.4.136
  • Malthouse, Edward C., Yasaman Kamyab Hessary, Khadija Ali Vakeel, Robin Burke, and Morana Fudurić. 2019. “An Algorithm for Allocating Sponsored Recommendations and Content: Unifying Programmatic Advertising and Recommender Systems.” Journal of Advertising 48 (4):366–79. doi:10.1080/00913367.2019.1652123
  • Mangiò, Federico, Giuseppe Pedeliento, and Daniela Andreini 2022. “Brand Experience Co-Creation at the Time of Artificial Intelligence.” In The Routledge Companion to Corporate Branding, edited by Oriol Iglesias, Nicholas Ind, and Majken Schultz, 192–208. London: Routledge.
  • McGuire, Gez. 2021. “The Future of AI-Enabled Advertising: How Marketers Can Win in a First-Party Data World.” The Drum, 16 November 2021. https://www.thedrum.com/opinion/2021/11/16/the-future-ai-enabled-advertising-how-marketers-can-win-first-party-data-world.
  • Moore, Frazier, and John Leckenby. 1975. “The Role of Advertising Educators as Problem Solvers in the Field of Advertising.” Journal of Advertising 4 (2):21–6. doi:10.1080/00913367.1975.10672581
  • Nelson, Michelle R. 2008. “The Hidden Persuaders: Then and Now.” Journal of Advertising 37 (1):113–26. doi:10.2753/JOA0091-3367370109
  • Paschen, Ulrich, Christine Pitt, and Jan Kietzmann. 2020. “Artificial Intelligence: Building Blocks and an Innovation Typology.” Business Horizons 63 (2):147–55. doi:10.1016/j.bushor.2019.10.004
  • Perez, Caroline C. 2020. Invisible Women: Exposing Data Bias in a World Designed for Men, London, UK: Penguin.
  • Perkins, T. 2019. ““It’s Techno-Racism”: Detroit Is Quietly Using Facial Recognition to Make Arrests.” The Guardian, 17 August. Accessed May 19, 2020. www.theguardian.com/us-news/2019/aug/16/its-techno-racism-detroit-is-quietly-using-facial-recognition-to-make-arrests.
  • Pham, Michel Tuan, Leonard Lee, and Andrew T. Stephen. 2012. “Feeling the Future: The Emotional Oracle Effect.” Journal of Consumer Research 39 (3):461–77. doi:10.1086/663823
  • Politz, Alfred. 1975. “Creativeness and Imagination.” Journal of Advertising 4 (3):11–4. doi:10.1080/00913367.1975.10672588
  • Puntoni, Stefano, Jonathan E. Schroeder, and Mark Ritson. 2010. “Meaning Matters.” Journal of Advertising 39 (2):51–64. doi:10.2753/JOA0091-3367390204
  • Puntoni, Stefano, Rebecca Walker Reczek, Markus Giesler, and Simona Botti. 2021. “Consumers and Artificial Intelligence: An Experiential Perspective.” Journal of Marketing 85 (1):131–51. doi:10.1177/0022242920953847
  • Purcell, Brandon, Michele Pelino, Marta Bennet, and Lauren Nelson. 2021. “Tech Trends to Watch in 2021.” Forrester, 7 January 2021. https://www.forrester.com/what-it-means/ep199-tech-trends-2021/.
  • Reid, Leonard, Karen Whitehill King, and Denise E. Delorme. 1998. “Top-Level Agency Creatives Look at Advertising Creativity Then and Now.” Journal of Advertising 27 (2):1–16. doi:10.1080/00913367.1998.10673549
  • Richards, Jef I., and Catharine M. Curran. 2002. “Oracles on “Advertising”: Searching for a Definition.” Journal of Advertising 31 (2):63–77. doi:10.1080/00913367.2002.10673667
  • Rodgers, Shelly. 2021. “Themed Issue Introduction: Promises and Perils of Artificial Intelligence and Advertising.” Journal of Advertising 50 (1):1–10. doi:10.1080/00913367.2020.1868233
  • Russell, Stuart. 2021. “Reith Lectures 2021—Living with Artificial Intelligence: Lecture 3, Edinburgh—AI in the Economy.” British Broadcasting Corporation. https://www.bbc.co.uk/programmes/articles/1N0w5NcK27Tt041LPVLZ51k/reith-lectures-2021-living-with-artificial-intelligence.
  • Rust, Roland T. 2016. “Comment: Is Advertising a Zombie?” Journal of Advertising 45 (3):346–7. doi:10.1080/00913367.2016.1180657
  • Rust, Roland T., and Richard W. Oliver. 1994a. “Video Dial Tone: The New World of Services Marketing.” Journal of Services Marketing 8 (3):5–16. doi:10.1108/08876049410065561
  • Rust, Roland T., and Richard W. Oliver. 1994b. “The Death of Advertising.” Journal of Advertising 23 (4):71–8. doi:10.1080/00913367.1994.10673460
  • Ryan, Mark. 2020. “In AI We Trust: Ethics, Artificial Intelligence, and Reliability.” Science and Engineering Ethics 26 (5):2749–67. doi:10.1007/s11948-020-00228-y
  • Sample, Ian. 2020. “What Are Deepfakes—and How Can You Spot Them?” The Guardian, 13 January. https://www.theguardian.com/technology/2020/jan/13/what-are-deepfakes-and-how-can-you-spot-them.
  • Samuel, Anthony, Gareth R. T. White, Robert Thomas, and Paul Jones. 2021. “Programmatic Advertising: An Exegesis of Consumer Concerns.” Computers in Human Behavior 116 (March):106657. doi:10.1016/j.chb.2020.106657
  • Sandage, Charles H. 1972. “Some Institutional Aspects of Advertising.” Journal of Advertising 1 (1):6–9. doi:10.1080/00913367.1972.10672465
  • Schmelzer, Ron. 2020. “AI Makes a Splash in Advertising.” Forbes, June 18. https://www.forbes.com/sites/cognitiveworld/2020/06/18/ai-makes-a-splash-in-advertising/?sh=7b04e1427682.
  • Schmitt, Bernd. 2019. “From Atoms to Bits and Back: A Research Curation on Digital Technology and Agenda for Future Research.” Journal of Consumer Research 46 (4):825–32. doi:10.1093/jcr/ucz038
  • Schultz, Don. 2016. “The Future of Advertising or Whatever We’re Going to Call It.” Journal of Advertising 45 (3):276–85. doi:10.1080/00913367.2016.1185061
  • Shankar, Venkatesh. 2018. “How Artificial Intelligence (AI) is Reshaping Retailing.” Journal of Retailing 94 (4):vi–xi. doi:10.1016/S0022-4359(18)30076-9
  • Shankar, Venkatesh, Kirthi Kalyanam, Pankaj Setia, Alireza Golmohammadi, Seshardri Tirunillai, Tom Douglass, John Hennessey, J. S. Bull, and Rand Waddoups. 2021. “How Technology Is Changing Retail.” Journal of Retailing 97 (1):13–27. doi:10.1016/j.jretai.2020.10.006
  • Smith, Peter, and Laura Smith. 2021. “Artificial Intelligence and Disability: Too Much Promise, yet Too Little Substance?” AI and Ethics 1 (1):81–6. doi:10.1007/s43681-020-00004-5
  • Smith, Genevieve, and Ishita Rustagi. 2021. “When Good Algorithms Go Sexist: Why and How to Advance AI Gender Equity.” Stanford Social Innovation Review, 31 March. https://ssir.org/articles/entry/when_good_algorithms_go_sexist_why_and_how_to_advance_ai_gender_equity.
  • Solon, Olivia, and Oliver Laughland. 2018. “Cambridge Analytica Closing after Facebook Data Harvesting Scandal.” The Guardian, 2 May. https://www.theguardian.com/uk-news/2018/may/02/cambridge-analytica-closing-down-after-facebook-row-reports-say.
  • Spangler, Todd. 2018. “First AI-Scripted Commercial Debuts, Directed by Kevin Macdonald for Lexus.” Variety, 19 November. https://variety.com/2018/digital/news/lexus-ai-scripted-ad-ibm-watson-kevin-macdonald-1203030693/.
  • Stern, Barbara B. 1988. “How Does an Ad Mean? Language in Services Advertising.” Journal of Advertising 17 (2):3–14. doi:10.1080/00913367.1988.10673108
  • Stern, Barbara B., George M. Zinkhan, and Morris B. Holbrook. 2002. “The Netvertising Image: Netvertising Image Communication Model (NICM) and Construct Definition.” Journal of Advertising 31 (3):15–27. doi:10.1080/00913367.2002.10673673
  • Stewart, David W. 1992. “Speculations on the Future of Advertising Research.” Journal of Advertising 21 (3):1–18. doi:10.1080/00913367.1992.10673372
  • Stewart, David W. 2016. “Comment: Speculations on the Future of Advertising Redux.” Journal of Advertising 45 (3):348–50. doi:10.1080/00913367.2016.1185984
  • Strahan, Erin J., Steven J. Spencer, and Mark P. Zanna. 2002. “Subliminal Priming and Persuasion: Striking While the Iron Is Hot.” Journal of Experimental Social Psychology 38 (6):556–68. doi:10.1016/S0022-1031(02)00502-4
  • Sutherland, Rory. 2013. “Tips from the Marlboro Man.” Campaign, October 4, 15.
  • Taleb, Nassim N. 2007. The Black Swan: The Impact of the Highly Improbable. London, UK: Penguin.
  • Thaler, Richard H., and Cass R. Sunstein. 2009. Nudge: Improving Decisions about Health, Wealth and Happiness. London, UK: Penguin.
  • Theus, Kathryn T. 1994. “Subliminal Advertising and the Psychology of Processing Unconscious Stimuli: A Review of Research.” Psychology and Marketing 11 (3):271–90. doi:10.1002/mar.4220110306
  • Thomas, Veronica L., and Kendra Fowler. 2021. “Close Encounters of the AI Kind: Use of AI Influencers as Brand Endorsers.” Journal of Advertising 50 (1):11–25. doi:10.1080/00913367.2020.1810595
  • Vakratsas, Demetrios, and Xin (Shane) Wang. 2021. “Artificial Intelligence in Advertising Creativity.” Journal of Advertising 50 (1):39–51. doi:10.1080/00913367.2020.1843090
  • Van Esch, Patrick, Yuanyuan (Gina) Cui, and Shailendra Pratap Jain. 2021. “Stimulating or Intimidating: The Effect of AI-Enabled in-Store Communication on Consumer Patronage Likelihood.” Journal of Advertising 50 (1):63–80. doi:10.1080/00913367.2020.1832939
  • Van Noort, Guda, Itai Himelboim, Jolie Martin, and Tom Collinger. 2020. “Introducing a Model of Automated Brand-Generated Content in an Era of Computational Advertising.” Journal of Advertising 49 (4):411–27. doi:10.1080/00913367.2020.1795954
  • Waikar, Sachin. 2021. “Designing Anti-Racist Technologies for a Just Future.” Stanford University Human-Centered Artificial Intelligence, 28 June. https://hai.stanford.edu/news/designing-anti-racist-technologies-just-future.
  • Wakefield, Jane. 2022. “Deepfake Presidents Used in Russia-Ukraine War.” British Broadcasting Corporation, 18 March. https://www.bbc.co.uk/news/technology-60780142.
  • Watts, Jameson, and Anastasia Adriano. 2021. “Uncovering the Sources of Machine-Learning Mistakes in Advertising: Contextual Bias in the Evaluation of Semantic Relatedness.” Journal of Advertising 50 (1):26–38. doi:10.1080/00913367.2020.1821411
  • Waytz, Adam, Kurt Gray, Nicolas Epley, and Daniel M. Wegner. 2010. “Causes and Consequences of Mind Perception.” Trends in Cognitive Sciences 14 (8):383–8. doi:10.1016/j.tics.2010.05.006
  • Wertenbroch, Klaus, Rom Y. Schrift, Joseph W. Alba, Alixandra Barasch, Amit Bhattacharjee, Markus Giesler, Joshua Knobe, Donald R. Lehmann, Sandra Matz, Gideon Nave, et al. 2020. “Autonomy in Consumer Choice.” Marketing Letters 31 (4):429–39. doi:10.1007/s11002-020-09521-z
  • West, Douglas, Scott Koslow, and Mark Kilgour. 2019. “Future Directions for Advertising Creativity Research.” Journal of Advertising 48 (1):102–14. doi:10.1080/00913367.2019.1585307
  • Westfall, Joseph. 2009. “Ironic Midwives: Socratic Maieutics in Nietzsche and Kierkegaard.” Philosophy & Social Criticism 35 (6):627–48. doi:10.1177/0191453709104450
  • White, Gordon E. 1972. “Creativity: The x Factor in Advertising Theory.” Journal of Advertising 1 (1):28–32. doi:10.1080/00913367.1972.10672470
  • Winter, Edward, and John T. Russell. 1973. “Psychographics and Creativity.” Journal of Advertising 2 (1):32–5. doi:10.1080/00913367.1973.10672482
  • Yang, Jeongwon, Ploypin Chuenterawong, and Krittaphat Pugdeethosapol. 2021. “Speaking up on Black Lives Matter: A Comparative Study of Consumer Reactions toward Brand and Influencer-Generated Corporate Social Responsibility Messages.” Journal of Advertising 50 (5):565–83. doi:10.1080/00913367.2021.1984345
  • Zanot, Eric J., Davis Pincus, and Joseph E. Lamp. 1983. “Public Perceptions of Subliminal Advertising.” Journal of Advertising 12 (1):39–45. doi:10.1080/00913367.1983.10672829
  • Zayer, Linda T., and Catherine A. Coleman. 2015. “Advertising Professionals’ Perceptions of the Impact of Gender Portrayals on Men and Women: A Question of Ethics?” Journal of Advertising 44 (3):1–12. doi:10.1080/00913367.2014.975878
  • Zinkhan, George M. 1994. “Advertising Ethics: Emerging Methods and Trends.” Journal of Advertising 23 (3):1–4. doi:10.1080/00913367.1994.10673445
  • Zuboff, Shoshana. 2019. The Age of Surveillance Capitalism. London, UK: Profile.