Research Article

Intermediaries, mediators and digital advertising’s tensions

Received 21 Aug 2023, Accepted 28 Apr 2024, Published online: 15 Jul 2024

ABSTRACT

We argue that Latour’s distinction between ‘intermediaries’ and ‘mediators’ captures important facets of tensions in market encounters and in the co-creation of ‘supply’ and ‘demand’ within digital advertising. Drawing upon 110 interviews with 87 practitioners, participation in sector meetings and training courses, and McGowan’s ‘autoethnographic’ experiences, we explore the different forms tensions take in the two main configurations of digital advertising: (1) the ‘open marketplace,’ which is the site of much effort to turn mediators into intermediaries; and (2) ‘walled gardens,’ which increasingly are unequivocally mediators, and which are ‘de-agencing’ advertising’s human practitioners in specific ways (e.g. by making human-guided targeting less attractive and more difficult). De-agencing interacts with current initiatives, especially Apple’s, that partially ‘de-individualize’ advertising’s audiences and reinforce walled gardens’ mediator roles.

Digital advertising’s complexity and scale are astonishing. A single action by a single user – entering text into a search engine, opening an app, visiting a website or social-media platform – can trigger at least one, and sometimes more than a hundred, near-instantaneous auctions of the advertising opportunity. The process happens billions of times daily: the average internet-connected user globally is shown over 50 ads per day.Footnote1 Those ads fund much of the everyday digital world and are responsible for roughly a tenth of information and communication technology’s carbon-dioxide emissions (Pärssinen et al. Citation2018).Footnote2

Advertising’s human practitioners, working for advertisers or advertising agencies, have to navigate this complex world. Our focus is the tensions practitioners encounter as they attempt this navigation. Some of these tensions are explicit ‘matters of concern’ in the sense of, for example, Callon (Citation2021).Footnote3 Other tensions, though, are more private. They include ambivalences, for example about systems that the practitioner feels simultaneously attracted to and diffusely suspicious of, and also difficulties: things that the practitioner accepts that s/he needs to do but finds hard, such as demonstrating to clients that advertising ‘works.’ Sometimes, tensions are subjectively felt, sharp anxieties. But tensions are also opportunities, for example for ‘moral entrepreneurship’ (in the sense of Becker Citation1973): highlighting an unease and making it an explicit matter of concern, perhaps selling a system or service to assuage it.

We view these issues through the lens of Latour’s distinction between ‘intermediaries’ and ‘mediators’:

An intermediary … transports meaning or force without transformation … Mediators … transform, translate, distort, and modify the meaning or the elements they are supposed to carry. (Latour Citation2005, 39, emphases in original)

In the literature on advertising, its practitioners have long been thought of as ‘cultural intermediaries’ (e.g. Cronin Citation2004), indeed as ‘the emblematic intermediary occupation’ (McFall Citation2014, 45). Latour’s intermediary/mediator distinction has not been adopted at all widely in that literature.Footnote4 Does that matter? After all, empirical work in the literature on these ‘intermediaries’ implicitly makes clear that much of what they do is mediation.

Adoption of Latour’s distinction could, however, improve analytical clarity, and has four other benefits. First, despite the empirical rarity of intermediaries, ‘intermediation’ has a central place in high-modern imaginaries. In digital advertising, for example, the idea that mediators should be made into intermediaries is, in some contexts, influential. Second, existing work on ‘cultural intermediaries’ usually views them simply as human beings or, at most, organizations or occupations made up of humans. Latour’s conceptualization pushes us to broaden the discussion to non-human/hybrid intermediaries and mediators.

Third, the intermediary/mediator distinction also throws light on knowledge, and whether unmediated access to ‘raw truth’ and ‘objective’ data is possible, via technical devices that can be considered intermediaries (or, indeed, via trustworthy human senses).Footnote5 While that issue sounds abstract, even ‘philosophical,’ it is an important everyday matter in digital advertising: the status of ‘data,’ of the ‘pixels’ and other devices that generate it, and of the knowledge that they make possible underpins several of the tensions we have found.

Fourth, the intermediary/mediator distinction is particularly pertinent to the co-creation of supply and demand, because here high-modern ‘intermediary’ imaginaries of markets are deeply rooted. ‘Arm’s-length’ views of supply and demand, common in at least elementary economics, implicitly conceive of supply and demand as pre-formed, and thus as simply ‘brought together’ to facilitate market exchange. In these imaginaries (influential far beyond academic economics), the people, organizations or technical systems that do the bringing together are – or, at least, should be – intermediaries: they do not, or should not, alter supply and demand. All that intermediaries should do is ‘transparently’ facilitate the ‘discovery’ of a market-clearing price, and ‘transparency’ (Ananny and Crawford Citation2018) and ‘discovery’ are metaphors that belong firmly to a world of intermediaries rather than mediators.

That view of supply and demand is, however, sharply criticized by Callon as an over-abstract, fundamentally misleading, ‘interface’ model:

Supply is not a given, preconstituted block that faces an equally set block, that of a demand. Supply and demand emerge and express themselves over the course of a continuous process. They are constantly in motion. (Callon Citation2021, 22)

Empirically, for example, Caliskan shows that market price is not the singular, mathematical ‘coming together of the two lines of supply and demand,’ but takes multiple, concrete forms generated by diverse tools and framings (Caliskan Citation2009, 241). Instead of the impoverished interface model, Callon puts forward an ‘agencement’ model, in which sellers, buyers and goods are fluid and co-created in ever-changing, complex ways in concrete, material ‘market encounters.’Footnote6 Although Callon has touched explicitly on the point only briefly (in joint work with Caliskan), his implicit view is that these encounters are shaped by mediators, not intermediaries.Footnote7

Our empirical focus is the two main forms that market encounters take in digital advertising.Footnote8 The first is in what is referred to by participants as the ‘open marketplace.’ In it, algorithms acting on behalf of advertisers bid competitively for advertising opportunities offered by multiple ‘publishers,’ a term that in digital advertising refers to the providers of online content of all kinds, including, e.g. games and other apps. The second is in what practitioners often call ‘walled gardens,’ such as those operated by Alphabet (in particular Google Search and YouTube), Meta (Facebook and Instagram), TikTok and Snap.Footnote9 The metaphor signals that the platform’s owners strongly influence what goes on within it, that there is a boundary between it and the wider web/Internet, and that while flows of data into the garden are copious, what flows out is carefully curated. Economic arrangements within a walled garden are frequently wildly at odds with arm’s-length views of supply/demand interaction. For example, advertisers (i.e. ‘demand’) often do not decide how much to bid for each advertising opportunity: the garden (i.e. ‘supply’) decides that for them.

This paper’s analysis speaks both to the literature on platforms and platform power and that on digital advertising. Walled gardens (especially Google’s and Meta’s) are platforms of precisely the kind that the literature on platforms (e.g. Bucher Citation2018; Citation2021; Gillespie Citation2017; Srnicek Citation2016) discusses: large, complex, influential and often deeply embedded in everyday life. The literature on platforms is too extensive to discuss here in detail – it is reviewed in, e.g. Nieborg, Poell, and van Dijck (Citation2023), and we ourselves review it in Caliskan, Callon, and MacKenzie (Citationforthcoming) – but let us briefly summarize how this paper contributes to it.

Our view of platforms is that they need to be analyzed as forms of what Caliskan (Citation2020) calls ‘stack economization’: the layering together of different processes of economization, in other words processes of ‘rendering things economic,’ an analysis we expand on in Caliskan, Callon, and MacKenzie (Citationforthcoming). In return for a walled garden ‘gifting’ free services to its users (and gifts, with all their associated complexities, reciprocities and obligations, are a form of economization), it ‘economizes’ users’ attention by selling opportunities to advertise to them. It is, of course, well known that selling advertising is fundamental to the economic viability of, for example, Google/Alphabet and Facebook/Instagram/Meta. Yet in-depth empirical research on the buying, selling and pricing of advertising opportunities is surprisingly unusual in the literature on platforms, and even a recent, well-informed analysis such as Bucher’s (Citation2021) account of Facebook advertising has been rendered dated by the developments we discuss below.

Among the results of insufficient attention to advertising in the literature on platforms is that an important dimension of platform power is largely missed. Platforms have power not only in relation to users, creators/influencers, gig workers and, for example, news publishers (Nielsen and Ganter Citation2022; Poell, Nieborg, and Duffy Citation2023), but also in relation to advertisers. That power and its contestation stand in need of in-depth investigation, and we hope that the final three sections of this paper prompt others to push this investigation forward. Furthermore, our intermediary/mediator analysis suggests that AdTech (advertising technology) systems should not be viewed as simply transmitting power that has its origins elsewhere, but as contingency-ridden, sometimes far from predictable mediators, which researchers need to take seriously in their own right.

That, we argue, is a necessary corrective to what Poell, Nieborg, and Duffy (Citation2023, 1391) call ‘monolithic perspectives on platform dominance.’ The paradigmatic such perspective, influential in both the literature on platforms (see, e.g. Bucher Citation2021, 139 and 147) and that on advertising, is Zuboff (Citation2019). This famously portrays nearly all-powerful surveillance capitalists, of which digital advertisers are the prototype, who ‘know everything about us’ and ‘not only know our behavior but also shape our behavior at scale’ (Zuboff Citation2019, 11 and 8, emphases in original). That this assessment of platform power is overstated is already clear (see, e.g. Hwang Citation2020, 141), but what is less commonly noted is that, like the conventional ‘sociology of the social’ against which Latour mobilizes his distinction, Zuboff implicitly treats technical systems as if they are intermediaries:

[W]e hunt the puppet master, not the puppet … surveillance capitalism is a logic in action and not a technology … Surveillance capitalism’s unique economic imperatives are the puppet masters that hide behind the curtain orienting the machines and summoning them to action. (Zuboff Citation2019, 14–16, emphasis in original)

Latour, however, explicitly queers the ‘puppet master’ metaphor:

[P]uppeteers will rarely behave as having total control over their puppets. They will say queer things like ‘their marionettes suggest them to do things they will never have thought possible by themselves’ (Latour Citation2005, 59–60).

Technical systems and devices – including those that mediate supply and demand – are, we suggest, much more commonly Latourian marionettes than Zuboffian puppets. Sometimes, they thwart human intentions; sometimes, they helpfully ‘suggest them to do things.’ For example, the single most important material change in Facebook advertising was its 2012 decision to move ads away from their previously separate, easily ignorable, on-screen location, and integrate them into the user’s News Feed. That change, interviewees tell us (and McGowan directly experienced), greatly increased the effectiveness of Facebook ads and thus the company’s advertising revenue. It is far from clear, however, that this was actually anticipated. Mark Zuckerberg told journalist Steven Levy that the change was simply materially necessitated by mobile phones’ small screens, saying ‘There’s no room for a right-hand column of ads on mobile’ (Levy Citation2020, 296).

We also build on four specific contributions to the literature on digital advertising. First is Ruckenstein and Granroth (Citation2020), who analyze a pervasive tension, between users’ dislike/fear of ‘creepy’ tracking and the ‘pleasurable moments of being “seen” by the market’ (Citation2020, 12). That tension is not simply ‘external’ (i.e. felt by advertising’s audiences), but has been transposed into digital advertising itself, in the form of increasing restrictions, imposed, e.g. by Apple in 2021 (see below), which substantially de-individualize advertising’s audiences. As documented by the second contribution on which we build, Beauvisage (Citation2023), the overall trajectory so far of digital advertising has been to ‘singularize’ – i.e. individualize – audiences. Current privacy initiatives (Beauvisage Citation2023, 174–177) disrupt that trajectory, and are the source of considerable tension.

Pursuit of de-individualization has side-effects: it can drain away the residual ‘intermediary’ aspects of walled gardens, and reinforce their role as mediators. That in turn has consequences for the agency of advertising’s human practitioners. The third contribution to the literature on digital advertising on which we build is thus Ryan et al.’s (Citation2023) investigation of the ‘agencing’ of these practitioners. They distinguish between AdTech practices that enhance the human practitioner’s agency, and those that ‘equip but limit the agency of the marketer’ (Ryan et al. Citation2023, 480). In our terms, that distinction maps onto the intermediary/mediator divide. Access to a system that is an intermediary enhances the practitioner’s agency. A mediator, though, does indeed limit as well as equip that agency.

However, a formulation stronger than that offered by Ryan et al. (Citation2023) seems to us to be justified: in important senses, the intensified mediator role of the biggest walled gardens is de-agencing human practitioners. Ryan et al. (Citation2023) draw the notion of ‘agencing’ from Cochoy, Trompette, and Araujo (Citation2016), who use it to refer to, first, ‘arranging market entities’ and, second, ‘giving agency,’ in the sense of ‘converting people, non-human entities or “hybrid collectives”’ into ‘active agents’ (6). That second aspect of ‘agencing’ implicitly suggests the possibility of its opposite, ‘de-agencing.’ We are not, of course, arguing that advertising practitioners have been stripped of all capacity to be active agents, but previously important capacities may be in the process of being lost.

A crucial such capacity, often central to practitioners’ professional self-identity, is their understanding of advertising’s audiences and skilled steering of the targeting of advertising. The fourth contribution on which we build is therefore Beauvisage et al.’s insightful (Citation2023) examination of ‘[t]he tension between stable and common knowledge demographic categories and big data originated automated targeting’ (Beauvisage et al. Citation2023, 15), which is ‘post-demographic’ in the sense of Rogers (Citation2009). We have found human-controlled sociodemographic targeting less resilient than Beauvisage and colleagues report. The key factor affecting its resilience seems to be the divide, identified by Beuscart and Mellet (Citation2013), between (a) ‘direct-response’ advertising, which seeks to prompt immediate actions, especially purchases, and in which delegation of targeting to walled gardens’ post-demographic mediator systems is now widespread (as noted by Beauvisage et al. Citation2023, 13); and (b) ‘brand-building’ advertising, where targeting that can be communicated and explained to clients seems to remain important.

The paper is structured as follows. The next section is on our data sources. The third section examines tensions in the open marketplace, while the fourth discusses walled gardens. Section five turns to current de-agencing and de-individualization. The sixth section is the conclusion.

Data sources

We draw upon four sources of data. First is the trade press, in particular Ad Exchanger and Mobile Dev Memo, which cover digital advertising in considerable depth. Second, we have participated in six face-to-face advertising-sector conferences (three in the US, three in the UK), twelve online conferences, 35 other online events such as webinars, and two online training courses. Formal presentations at these conferences and webinars frequently resemble sales pitches, but conference panels and smaller events are less scripted, as, e.g. was a short-lived flurry of AdTech discussions on Clubhouse. Even the sales pitches indirectly helped us identify tensions, because they often implicitly position the product/service in question as a response to them.

Third, we have conducted 110 interviews with 87 practitioners of digital advertising, publishers and related technical specialists. Interviews are semi-structured, typically 45–60 minutes long, and all but five were audio-recorded and transcribed. We anonymize interviewees with two-letter codes: see Table 1. Our canvas was initially deliberately broad: we sought interviewees from across digital advertising, and at first simply pursued any opportunity to speak to people with first-hand experience of its practices and systems; it is only a slight exaggeration to say that initially we spoke to anyone who would speak to us.

Table 1. Interviewees.

Gradually, our data collection became more analytically led. MacKenzie in particular immersed himself thoroughly in the field, reading the trade press every day, taking part in as many sector meetings as possible, following the sector’s debates, and gradually learning about the field’s systems, practices, organizations and specialized terminology. Immersion enabled us to identify specific systems, practices and episodes (e.g. Apple’s 2021 de-individualizing measures) that are of particular analytical interest, and where possible to focus interviews on those, also seeking out further interviewees with involvement in them. Convenience/snowballing/analytically-led selection of interviewees and evolving, interviewee-specific questions rule out even rudimentary quantitative analysis (‘N percent said X; M percent said Y’) of interviews, but bring benefits in the concrete depth of what we learned. Proceeding iteratively in this way is inherently slow (our research began in 2019 and still continues), but that has its virtues in providing insights into change through time.

Precisely because supply and demand are co-created, locating interviewees or their organizations in either ‘supply’ or ‘demand’ can be difficult, but in Table 1 we adopt digital advertising’s conventional distinction between the ‘demand side’ (advertisers and advertising agencies working for them, along with AdTech firms that position themselves as catering primarily to their needs) and the ‘supply side’: the big platforms, other publishers, and the AdTech firms that cater to publishers’ needs. Because this paper interrogates digital advertising primarily from the advertiser’s side, we draw here mainly from the 43 interviewees that belong, at least loosely, in the three ‘demand-side’ categories of Table 1. We make no claim that our interviewees are in any sense ‘representative.’ Perhaps the most significant shortcoming in this respect is that Big Tech platforms often sharply constrain what employees can say about internal matters, making them very wary of being interviewed. In particular, we have not been able to interview employees of either Meta or Apple, so lack first-hand fieldwork data on internal processes in those corporations.

A different kind of limitation of our data is that the pandemic, and the generally slow return to the office in digital advertising, mean that our visits to workplaces have been sparse. Only 20 of our 110 interviews were face-to-face, and of those only three involved a visit to the interviewee’s workplace. That denies us the observations of workplaces that can happen, almost incidentally, when an interview takes place there. Our fourth data source is deliberately designed to remedy that, at least partially. We draw upon auto-ethnographic reflections by the first author, McGowan, who has extensive experience working for advertising agencies and advertisers from 2014 to the present, since 2019 in parallel with PhD and now postdoctoral research on a topic other than advertising. Auto-ethnography is, of course, methodologically perilous and contested, so we draw upon what she has experienced and observed as a practitioner only if it is confirmed, at least generically, by our interview data.

Our initial identification of the tensions in the field that we report here was strongly shaped by reading the trade press and especially by what we heard at sector meetings, but gradually we identified themes in the interviews and in McGowan’s autoethnographic experiences that were absent, at least in any explicit form, from those more formal settings. The analyses below reflect these themes as well. An example is the pervasive anxiety that often surrounds demonstrating to clients and senior managers that advertising is ‘working.’ ‘[P]eople are really scared of proving their value,’ says interviewee AP, ‘that’s the main worry I think for all agencies.’ McGowan is all too familiar with this anxiety, with the upset, even tears, often involved, and with what can be at stake: loss of a client, loss of one's job, even redundancy of a whole team.

Tensions in the open marketplace

The open marketplace for digital advertising looks like – indeed, is – a market. Publishers bring to it a ‘supply’ of advertising opportunities, which encounters ‘demand’ from advertisers, often in ‘ad exchanges’ that have a degree of resemblance to finance’s stock exchanges. That encounter, however, is an entangled, often opaque form of mediated co-creation, rather than arm’s-length interaction via transparent intermediaries – but there are influential efforts to turn the encounter’s mediators into intermediaries.

Tensions in the open marketplace are of two main kinds. The first concerns what is being bought and sold. For example, will the ad actually be viewable by a human being? Or will it be transmitted to the user’s device, but not in view on-screen, perhaps because it is ‘below the fold’: in a part of a page in view only if the user scrolls down? The efforts, insightfully described by Cluley (Citation2018), to develop an industry-standard definition of ‘viewability’ met resistance: one of our interviewees, who occupied a mediatory role, admits that ‘we didn’t want people [clients] to measure viewability, because it was an expense to us.’ Ad ‘exchanges were selling things that were not viewable,’ he says, and buying only viewable ads was more expensive.

The eventually agreed standard definition of viewability involves ‘a minimum of 50 percent of pixels’ of a standard display ad being ‘in view for a minimum of 1 second’ for it to be judged viewable (IAB Citation2014), or 2 seconds in the case of a video ad. Tension around the issue remains, however. ‘It’s not a solved problem,’ says AE. The ‘standards are to be honest … a joke to me. We, as advertisers, I feel like we’ve been paying for ads that have not been served, or have not been viewed, for years.’
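
The threshold is simple enough to state in code. The sketch below is ours and purely illustrative (the field names and measurement simplifications are our own, not any vendor’s implementation), but it conveys the form such a viewability test takes:

```python
from dataclasses import dataclass

@dataclass
class ImpressionMeasurement:
    """Hypothetical measurement record for one served ad (field names are our own)."""
    fraction_of_pixels_in_view: float   # 0.0-1.0: largest share of the ad's pixels on-screen
    continuous_seconds_in_view: float   # longest continuous period that share was maintained
    is_video: bool

def is_viewable(m: ImpressionMeasurement) -> bool:
    """Apply the IAB-style thresholds quoted above: at least 50 percent of pixels in view,
    for at least 1 second (display) or 2 seconds (video)."""
    required_seconds = 2.0 if m.is_video else 1.0
    return (m.fraction_of_pixels_in_view >= 0.5
            and m.continuous_seconds_in_view >= required_seconds)

# A display ad half on-screen for 0.8 seconds does not count as viewable:
print(is_viewable(ImpressionMeasurement(0.55, 0.8, is_video=False)))  # False
```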

Indeed, will the viewer be a human being, or an automated ‘bot’ simulating humans? ‘[T]here’s been such a problem with fraud in this industry,’ says BJ: ‘there’s a history of [mediator platforms] knowingly looking the other way and letting it [fraud] persist on their platform because it creates lots of revenue.’ Again, there is an influential industry initiative, Ads.txt, in which publishers create electronic files listing the mediators permitted to sell advertising opportunities on their behalf. These files enable advertisers to reduce the risk of paying for a ‘bot’ viewership or for ads on a low-quality, fraudulent website impersonating a reputable publisher.
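
An ads.txt file itself is a plain, comma-separated text file published on the publisher’s own domain, each line naming an advertising system authorized to sell that publisher’s inventory (directly or as a reseller) alongside the publisher’s account identifier there. The sketch below, with invented domains and account IDs, indicates the kind of check a buyer’s system can run against such a file (an illustration, not anyone’s production code):

```python
# Illustrative ads.txt content for an invented publisher; real files follow the same
# comma-separated pattern: advertising-system domain, publisher account ID,
# DIRECT or RESELLER relationship, optional certification-authority ID.
ADS_TXT = """
# examplenews.com ads.txt (all values invented)
exampleexchange.com, 12345, DIRECT
other-ssp.example, abc-678, RESELLER
"""

def authorized_sellers(ads_txt: str) -> set[tuple[str, str]]:
    """Return (selling-system domain, relationship) pairs, skipping comments and blank lines."""
    sellers = set()
    for line in ads_txt.splitlines():
        line = line.split("#", 1)[0].strip()          # drop comments
        if not line:
            continue
        fields = [f.strip() for f in line.split(",")]
        if len(fields) >= 3:
            sellers.add((fields[0].lower(), fields[2].upper()))
    return sellers

def seller_is_authorized(ads_txt: str, seller_domain: str) -> bool:
    """The check a buyer's system can make before bidding via a given seller."""
    return any(domain == seller_domain.lower() for domain, _ in authorized_sellers(ads_txt))

print(seller_is_authorized(ADS_TXT, "exampleexchange.com"))    # True
print(seller_is_authorized(ADS_TXT, "fraudulent-ssp.example")) # False
```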

Another source of tension of the first kind (over what is bought and sold) is ‘brand safety’ or ‘brand suitability.’ For a brand’s ad to appear alongside salacious, hateful or otherwise inappropriate content ‘can be massively damaging,’ says BB, especially if ‘someone, somewhere … take[s] a screengrab’ and shares it on social media. ‘Worst-case scenario,’ notes AV, ‘you lose a client.’ Advertisers sometimes suspect, he says, that agencies find it ‘financially interesting to run [ads] on not-brand-safe spots because they’re a lot cheaper and you [an agency] can earn way more money’ by buying ads for considerably less than the advertiser is paying the agency to show them. ‘This is,’ AV adds, ‘one of the reasons why we see the in-house trends,’ in which advertisers themselves directly initiate their purchases of ads, so that they can ‘have actually an overview themselves on where … their ads … are being shown.’

Tensions around viewability, fraud and brand safety/suitability have fuelled the growth, especially since around 2014, of companies that specialize in ad ‘verification.’ Their role is to monitor the process of the buying and display of ads so that, again to quote Latour, it ‘transports meaning or force without transformation’ (Citation2005, 39), and the advertiser or advertising agency can therefore be sure that it is buying ads of the kind it intends to buy. Typically, the verification company provides the advertiser or agency with a software ‘wrapper’ to enfold the electronic files of each of their ads, or at least a sample of them. Code in the wrapper ‘check[s] the URL before they download the ad,’ blocking the ad if the verification provider’s system ‘[doesn’t] like the content of the page’ (interviewee AO). The wrapper also determines whether the viewer seems to be a human being, and whether enough of an ad was in view for long enough for it to count as ‘viewable.’

Transforming mediators into intermediaries is not easy: it is ‘a rare exception, that has to be accounted for by some extra work – usually by the mobilization of even more mediators!’ (Latour Citation2005, 40, emphasis in original). The extra work of ‘verification’ costs money: around 5–7 cents per 1,000 ad impressions, says AQ, but ‘if you multiply it with the billions of impressions, it’s … a lot of money.’ And our news-publisher interviewees do indeed report experiencing verification as mediation, not intermediation, especially in respect to brand safety/suitability concerns. That was, for example, particularly the case in the early months of the coronavirus pandemic, when brand safety/suitability systems often stopped ads appearing alongside news coverage of the pandemic, even in the homepage bannerhead ad slots of outlets such as the New York Times or Wall Street Journal, hitting publishers’ revenues considerably (MacKenzie Citation2022). Almost certainly, advertisers and agencies did not intend to ad-block these prestigious homepages, but the blocklists they provided to verification companies seem to have contained pandemic-related keywords that triggered this blocking.
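
Schematically, and much simplified, the wrapper checks described above amount to logic of the following kind (our own sketch; the crude keyword and lookup tests stand in for verification vendors’ far more elaborate methods):

```python
from urllib.parse import urlparse

def should_block_ad(page_url: str, page_text: str, user_agent: str,
                    blocked_domains: set[str], keyword_blocklist: set[str],
                    known_bot_agents: set[str]) -> bool:
    """Pre-render check of the kind described above: block the ad if the page or its content
    looks unsuitable for this advertiser, or if the 'viewer' looks automated."""
    if urlparse(page_url).netloc.lower() in blocked_domains:
        return True                      # URL-level block, e.g. a known fraudulent site
    text = page_text.lower()
    if any(keyword in text for keyword in keyword_blocklist):
        return True                      # brand-safety block triggered by page content
    if user_agent in known_bot_agents:
        return True                      # suspected non-human traffic
    return False                         # otherwise let the ad run (and then measure viewability)

print(should_block_ad(
    page_url="https://examplenews.com/coronavirus-latest",
    page_text="Coronavirus: the latest pandemic coverage ...",
    user_agent="Mozilla/5.0 ...",
    blocked_domains={"fraudulent-site.example"},
    keyword_blocklist={"coronavirus", "pandemic"},
    known_bot_agents={"HeadlessBrowser/1.0"},
))  # True: the keyword blocklist fires, as in the pandemic-era over-blocking just described
```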

One side-effect of advertisers’ and agencies’ concerns about viewability and brand safety/suitability has been a particularly direct form of supply-demand co-creation: the rise of ‘made-for-advertising’ sites. These carefully avoid controversial topics likely to trigger ad-blocking, and perform well on automated measures of viewability. As DH tells us, the sites attract users by buying cheap social-media ads, which they fill with what CH calls ‘clickbait; “you won’t believe how much weight these five celebrities have gained.”’ When a user clicks on one of these ads, they get taken to the made-for-advertising site, which will have little content but is stuffed with multiple ads (often including, e.g. video ads that the user cannot skip). The site’s goal is arbitrage: to earn more money by showing users these ads than it has paid for the ads that attract them. That is not fraud – the ads are viewable by human beings – but ‘ads that run on MFA [made-for-advertising] sites are very ineffective at driving sales for marketers’ (DH), with ‘no returns for brands in terms of business results’ (CH). Advertisers can, again almost certainly inadvertently, spend large amounts of money on ads on these sites: a study for the US Association of National Advertisers estimates this as around 15 percent of open-marketplace expenditure (ANA Citation2023).
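
The arbitrage is easy to convey with invented figures (ours, purely for illustration): provided a visit generates more ad revenue than the clickbait click that produced it cost, the site profits, whatever the ads achieve for the advertisers paying for them.

```python
# Invented, purely illustrative figures for a made-for-advertising site's arbitrage.
cost_per_click = 0.02     # dollars paid for each cheap social-media 'clickbait' click
ads_per_visit = 20        # ads crammed into the pages that click generates
revenue_per_ad = 0.002    # dollars earned per ad impression shown (i.e. a $2 CPM)

revenue_per_visit = ads_per_visit * revenue_per_ad
margin_per_visit = revenue_per_visit - cost_per_click
print(f"{revenue_per_visit:.3f} {cost_per_click:.3f} {margin_per_visit:.3f}")
# 0.040 0.020 0.020: two cents of margin per visit, profitable at scale,
# whatever the ads achieve for the advertisers paying for them.
```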

A second type of tension in the open marketplace concerns economic relations and whether these are characterized by intermediary-like transparency. MacKenzie encountered this tension in the first sector meeting he attended, in December 2019, at which participants more than once quoted a description of open-marketplace economic relations as ‘murky at best and fraudulent at worst,’ which turned out to be a quote from a speech by Marc Pritchard, Chief Brand Officer of Procter & Gamble, then the world’s largest advertiser (Pritchard Citation2017).

In 2018, the Incorporated Society of British Advertisers commissioned auditors PwC to trace the spending of fifteen big UK advertisers on ads on websites of members of the UK’s Association of Online Publishers. The PwC team found that on average only 51 percent of what the advertisers spent reached the publishers. Two-thirds of the remaining 49 percent was absorbed by fees charged by agencies, platforms and AdTech firms (ISBA Citation2020). Most strikingly, however, the PwC team found that on average around 15 percent of advertisers’ spending was untraceable (ISBA Citation2020).

Money seemed to be vanishing at exactly the point at which, in standard economic imaginaries, ‘supply’ and ‘demand’ should be being transparently brought together. ‘[M]oney was disappearing between DSP and SSP,’ says interviewee BM: DSPs are demand-side platforms, which bid, on behalf of advertisers, for the supply of ad slots brought together by SSPs or supply-side platforms. Even experienced practitioners were taken aback: ‘that was genuinely new for everyone,’ says BM. That the heart of the open marketplace had been found to be economically opaque was a frequent talking point in the industry meetings we attended in 2021.

Subsequent efforts to turn economically ‘murky’ mediation into ‘transparent’ intermediation have been led in the UK by an industry task force, again convened by the Incorporated Society of British Advertisers, which has developed a toolkit for financially auditing buyer-seller interactions in digital advertising. A second study conducted by PwC found that 65 percent of advertisers’ spending was now reaching publishers, and only 3 percent of spending could not be traced (PwC Citation2023). Again, though, the work needed to create intermediary-like transparency needs to be emphasized. Tracing the money took nine months, even in a ‘curated’ sector of the open marketplace that had consciously adopted tools to facilitate that tracing.Footnote10

What is, however, perhaps most striking about open-marketplace tensions of this second kind, concerning transparency of economic relations, is that crucial aspects of those relations are long-established, predating digital advertising. Advertising’s traditional mediator organizations, agencies, made much, often most, of their money not from fees charged to their clients (the advertisers that formed the demand for advertising), but from payments known as ‘rebates’ from suppliers of advertising opportunities such as TV channels and newspapers. There were even conventions about how much of what an agency’s clients had spent should be rebated to the agency: in the US and UK, for example, the traditional rebate was 15 percent (see, e.g. Tunstall Citation1964, 29–30).

Crucially, the agency kept the rebate rather than returning it to its advertiser clients. ‘[E]verybody was aware’ of such arrangements, says interviewee CD, ‘but it was something like an unspoken thing’ – perhaps the source of some embarrassment, but a de facto accepted mediators’ practice. Even today, the open marketplace often involves complex fee and rebate arrangements that are hard to document and disentangle. Money seeming to ‘disappear’ (as in the 2020 ISBA/PwC study) may not be ‘nefarious … stealing money from advertisers,’ says DH, but what he calls ‘esoteric’ fees, which compensate those involved for loss of revenue in intricacies of mediation opaque to outsiders, such as the frequent discrepancies between the publisher’s and the advertiser’s systems’ counts of how many ads have actually been shown.

If, though, intermediaries are preferred to mediators, interviewee CD’s ‘unspoken’ rebate arrangements and DH’s ‘esoteric’ mediation-related fees can nevertheless lose legitimacy dramatically. The Association of National Advertisers (the US equivalent of the Incorporated Society of British Advertisers) commissioned a study that reported in 2016 that rebates were still ‘pervasive,’ condemning them as ‘non-transparent business practices’ (K2 Intelligence Citation2016, 1). But just as striking as this conclusion was who the Association engaged to conduct the study: the celebrated corporate fraud investigators Kroll and Kroll. Rebates, once accepted practice, have become seen by some as an illegitimate ‘kickback’ (interviewee CE), even ‘systemic fraudulent behaviour’ (AC).

Walled gardens

There are tensions, too, around walled gardens, but they differ from those in the open marketplace. Platforms, as already noted, ‘stack’ economization processes of different kinds, and what characterizes a walled garden is that these processes take place largely within its electronic walls and under its control. It is not normally possible to ‘follow the money’ through a walled garden’s internal processes, as the auditors PwC did in the open marketplace, but because that impossibility is taken for granted, it does not appear to be a source of tension. ‘You pay the platform for a combination of service, technology, media placement, but you don’t know how it breaks down’ (interviewee CH). Some of the other open-marketplace anxieties outlined in the previous section have become evident sporadically in the case of walled gardens, but are usually assuaged by the greater degree of control that gardens can exercise. ‘[I]n the open web you run into a lot more fraudulent robot traffic’ than in a walled garden, says AK. ‘Google does a pretty good job of suppressing bot charges’ (AE). ‘Facebook, for all its faults … because of the way the ad [is] served [in the user’s] News Feed [is] very definitive’ with regard to viewability (CO).

Crucially, too, walled gardens’ automation of the advertising process and straightforward, self-service interfaces mean that they can accommodate not just large advertisers but huge numbers of small advertisers. As interviewee DH says, ‘on the order of 10 million advertisers … are actively running ads with Meta and Google,’ while it is much rarer for small advertisers to participate, at least directly, in the open marketplace: no more than 10,000 firms globally, he estimates, use the services of the Trade Desk, a leading entry-point to the open marketplace.

Walled gardens have also been seen as generally excelling at ‘attribution,’ in other words measuring whether ads are ‘working’ cost-effectively. This service, which is fully automated, is typically provided free of charge, and the resultant easily digested, quantitative evidence of cost-effectiveness has greatly helped practitioners in the crucial, anxiety-provoking task of convincing clients and managers of the effectiveness of their advertising. Google and Meta have indeed often been treated as, in the words of one practitioner quoted in Hercher (Citation2022), ‘sources of truth’: in our terms, as intermediaries in this respect.

But for a system to be an intermediary, not simply a mediator, is, as we have emphasized, not straightforward, and perceived ‘truth’ needs to be materially produced, often elaborately. The crucial issue is that measuring success in advertising on Google, Meta, or another walled garden typically involves recording events outside of the platform, such as purchases or app installs. On the web, that involves the platform getting its ‘pixels’ (snippets of code) installed in very large numbers of webpages, especially those via which purchases are made or other ‘conversions’ such as signups take place.Footnote11 To measure purchases, etc., ‘we install the Facebook pixel’ on the ‘landing page’ to which the ad leads the user, interviewee AK told us. The pixel then ‘shoots … back to Facebook’ the information that the user ‘converted,’ in other words took the action the advertiser desired. In apps, an embedded Meta or Google SDK or ‘software development kit’ performs a role equivalent to that of a pixel on the web. AdTech firms in the open marketplace also attempt similar measurement, but they lack the sheer scale of deployment of Meta’s and Google’s pixels and SDKs.

The walled gardens ‘first ask you,’ reports advertiser AH, ‘hey, put this pixel on your site. Send me all the data.’ The latter sentence, signalling the copious data that flows from advertisers’ sites to the walled gardens, does indicate a tension. It is easy ‘literally [to give] every piece of information to Google,’ says AH, rather than trying to ‘take control of your business and your destiny and how you operate.’ His firm, however, is one of the world’s leading advertisers, which makes ‘tak[ing] control’ seem conceivable. And even he would acknowledge that there is a trade-off here: installing Meta’s and Google’s pixels and SDKs in an advertiser’s websites or apps generates data on purchases and other conversion events that is well-regarded and highly prized. One participant in an industry meeting we attended even talked of ‘the raw truth that a pixel provides.’

That role as intermediary is, however, the exception. In other respects, walled gardens are predominantly mediators, and are generally seen – and accepted – as such. Take, for example, the automated auctions via which opportunities for Google Search, Facebook and Instagram advertising are sold. These auctions are not the arm’s-length bringing together of pre-formed supply and demand. Typically, algorithms written and operated by the garden (i.e. by the supplier of advertising opportunities) decide, hundreds of millions or billions of times daily, whether an advertiser should bid to show an ad to a particular user in a particular context and, if so, how much – an arrangement, as already noted, entirely at odds with standard ‘separate bloc’ conceptualizations of supply and demand. For example, Google Search advertising campaigns, as often run by McGowan and described by multiple interviewees, typically begin with the advertiser giving Google’s system the URL that will be the ‘destination’ of the campaign: the site to which users who click on the campaign’s ads will be taken. Google’s system then generates a list of suggested keywords on which to bid, and the recommended level of bid that would likely be successful often enough to meet the advertiser’s goals.

Advertisers or advertising agencies can then amend the list of keywords and suggested bids, but our interviews suggest that extensive delegation to Google’s system is common. For example, interviewee AI, whose agency specializes in search ads, had long believed that its experienced staff could make more cost-effective bids than Google’s system. Now, however, ‘[t]hey [Google] have finally reached a point where they are better than the [human] bid managers,’ says AI. ‘It’s all being done by machines now,’ he reports: ‘you say [i.e. enter into Google’s system] “I am prepared to pay this amount for a sale.”’ Google’s algorithms then determine which opportunities the advertiser bids for and how much.
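
The logic behind ‘I am prepared to pay this amount for a sale’ can be sketched, in general terms, as value-based or target-cost-per-acquisition bidding (the sketch is our schematic reconstruction, not Google’s proprietary system): the platform’s models predict, for each opportunity, the probability that showing the ad will lead to a sale, and scale the bid accordingly, so that expected spend per sale stays close to the target.

```python
def bid_for_opportunity(target_cost_per_sale: float, predicted_sale_probability: float) -> float:
    """Schematic target-CPA bidding: bid the expected value of the opportunity to the advertiser.
    In a (roughly) truthful auction, expected spend per sale then tracks the target.
    predicted_sale_probability would come from the platform's machine-learning models."""
    return target_cost_per_sale * predicted_sale_probability

# An advertiser prepared to pay 40 (pounds, say) per sale, facing two hypothetical opportunities:
print(f"{bid_for_opportunity(40.0, 0.01):.2f}")   # 0.40 - a likely converter, so bid relatively high
print(f"{bid_for_opportunity(40.0, 0.001):.2f}")  # 0.04 - an unlikely converter, so bid little
```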

Advertisers retain the capacity to shape Google Search bids in specific ways, e.g. temporarily to increase them to make it too expensive for a competitor to win auctions, but use of that capacity seems increasingly to be the exception. Similarly, on Meta’s platforms, Facebook and Instagram, the advertiser or advertising agency enters the goal it is trying to achieve (e.g. sales, app installs or brand awareness) and the budget it is giving Meta, and Meta’s systems, using machine learning, then ‘identify how you are going to best achieve [that] end goal’ (interviewee AF). ‘[Y]ou can’t influence Facebook’s algo,’ he says: ‘[y]ou’re not in control of that.’

The decision as to which item will next get inserted into a Meta user’s feed is taken by an automated auction. That auction, however, is far from an arm’s-length interaction between pre-formed supply and demand. Not only is the size of each advertiser’s bid normally decided by Meta’s algorithms, but the user’s friends’ posts also bid, as do system messages that originate ultimately from, e.g. Facebook’s internally powerful Growth Team (messages such as ‘people you may know’), in both cases with monetary values assigned by Meta’s systems. There is therefore no predetermined ‘supply’ of Facebook/Instagram advertising opportunities: the auction determines, e.g. whether the next item in a user’s feed will be an ad or a friend’s post. Meta’s auctions thus quite literally co-create supply of and demand for advertising.

Those auctions employ the ‘Vickrey-Clarke-Groves’ mechanism from the theoretical economics of auction design. That mechanism, however, remains in practice opaque, to practitioners as well as to the social-science literature on Facebook.Footnote12 For example, before beginning this research, McGowan, who has run many successful Facebook advertising campaigns, did not know that this was the auction mechanism into which Facebook’s algorithms bid on behalf of her campaigns.

That knowledge would not have been useful to her: advertisers on Meta’s platforms are in practice separated from Meta’s auctions by an algorithmic wall. Advertisers can set a maximum bid, but Meta generally advises against that, and the UK Competition and Markets Authority reports that ‘[o]ver 90% of UK advertisers on Facebook use the default automated bidding feature, which does not allow advertisers to specify a maximum bid’ (CMA Citation2020, 17). Interviewee AF, working for a leading global advertiser, expressed some dissatisfaction with lack of direct control over its Facebook bids: ‘Even the biggest advertisers are hamstrung by the fact that your bidding algo is pretty much dictated as opposed to invented.’ More common, however, seems to be AP’s acceptance: ‘Essentially, you can’t do anything, you don't have control over your bids at all. That’s just a thing.’
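
For readers unfamiliar with it, the Vickrey-Clarke-Groves mechanism mentioned above can be conveyed, for the simple case of a single feed slot, by a sketch of the following kind (ours, and highly simplified: the values Meta’s systems attach to ads, friends’ posts and system messages are generated in far more elaborate ways than a single hand-entered number):

```python
def single_slot_vcg(candidates: dict[str, float]) -> tuple[str, float]:
    """Single-slot Vickrey-Clarke-Groves auction (for one slot, equivalent to a second-price
    auction): the highest-valued candidate wins and 'pays' the value of the best candidate it
    displaced. Candidates can include ads, friends' posts and system messages, each carrying a
    monetary value assigned by the platform; a 'payment' by organic content is notional."""
    ranked = sorted(candidates.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    payment = ranked[1][1] if len(ranked) > 1 else 0.0   # the externality imposed on the runner-up
    return winner, payment

# Invented values (arbitrary monetary units) for the next item in one user's feed:
candidates = {
    "ad A (bid set by the platform's algorithm)": 0.9,
    "friend's post (value assigned by the platform)": 1.2,
    "ad B": 0.7,
    "'people you may know' system message": 0.5,
}
print(single_slot_vcg(candidates))
# ("friend's post (value assigned by the platform)", 0.9): this time the organic post wins and
# no ad is shown - the auction itself determines whether the slot becomes advertising 'supply'.
```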

‘Machines need room to work’: de-agencing and de-individualization

In this section, we turn to how walled gardens’ role as mediators is currently changing the relationship of human practitioners to those gardens’ systems, and the connection between that issue and moves to de-individualize advertising’s audiences. Let us begin with the question of practitioners’ agency. When we began this research in 2019–2021, even interviewees such as AP, who de facto accept lack of control over the size of their bids, seemed to regard the targeting of ads as something that human practitioners should control. There were, however, already difficulties in that. As noted, normal practice is for the practitioner to input into the walled garden’s system parameters such as her/his goal (e.g. ‘conversions’ such as purchases) and total budget, and the platform’s bidding algorithm then seeks to spend that budget as cost-effectively as possible. This can have the effect that ads are preferentially delivered to those to whom it is cheaper to deliver them, rather than to the audience the practitioner intended. For example, Lambrecht and Tucker (Citation2018) found that a gender-neutral Facebook ad for scientific careers was differentially shown to men, who are typically lower-cost advertising targets than women.
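
The mechanism Lambrecht and Tucker identify requires no discriminatory intent; it falls out of cost-minimization, as a toy calculation with invented prices (ours, and a deliberate caricature: real delivery systems optimize at the margin rather than spending everything on the cheapest group) makes clear:

```python
# Toy illustration with invented prices: a gender-neutral campaign, a fixed budget, and
# delivery chosen purely to maximize impressions per dollar.
budget = 1000.0
cpm = {"women": 6.0, "men": 4.0}   # hypothetical cost per 1,000 impressions

# A cost-minimizing delivery system, left entirely to itself, buys where impressions are cheapest:
cheapest = min(cpm, key=cpm.get)
unconstrained = {group: 0.0 for group in cpm}
unconstrained[cheapest] = budget / cpm[cheapest] * 1000

# Versus the even budget split a practitioner might have intended:
even_split = {group: (budget / 2) / price * 1000 for group, price in cpm.items()}

print(unconstrained)   # {'women': 0.0, 'men': 250000.0}
print(even_split)      # {'women': 83333.3..., 'men': 125000.0}
```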

It is common to assume that advertising on platforms such as Facebook is ‘micro-targeted’ by human practitioners (including political advertisers) to very specific audiences. That practice, however, is coming under increasing pressure. Meta, for example, explicitly warns against it, arguing that its machine-learning systems outperform human practitioners. ‘To reach your goals,’ it says to advertisers, ‘machines need room to work’:

[Advertisers] accustomed to running campaigns [in the open marketplace] may be wary of trusting machines with their work. But automated systems enhanced by machine learning work best when parameters are broad. Teams working within these systems are advised to embrace a certain agnosticism towards placement, platforms and yes, even audience.Footnote13

Meta’s argument/injunction – that human practitioners should not attempt to control in detail the targeting of advertising, but should delegate it to walled gardens’ machine-learning systems – is shared by Google and other major platforms. Google, for example, also often advises practitioners: ‘take your hands off the wheel’ (interviewee AC). Practitioners’ attitudes to that advice vary. For DE, for example, human-guided targeting ‘was a very early way of doing things,’ which should be abandoned. Walled gardens such as Meta’s have rich information on users’ behaviour, and these ‘actual behavioural histories,’ as he calls them, make possible much more effective advertising than advertisers’ possibly quite mistaken intuitions about particular sociodemographic groups.

Other interviewees have misgivings. AL is uneasy about a relationship to walled gardens in which ‘we get surprises all the time’ in what the platform does:

I don’t think we go deep enough to understand if the difference is coming from the [platform’s] system or it’s coming from the audience [that the platform is targeting]. (AL)

AM acknowledges the difficulty of keeping human control over targeting: ‘the more niche you want to get, the harder it is.’ But, she says, ‘I don’t trust [the platform] because Facebook is kind of like [an opaque] box.’ So instead of running a single integrated campaign with ‘one big audience group,’ and leaving targeting to the platform’s machine-learning systems, she will sometimes try to keep control by splitting a campaign into separate campaigns, for example for specific genders, age groups or user devices (e.g. Apple versus Android phones). That reduces the walled garden’s ad-delivery optimization, but gives her, she believes, ‘a better understanding of what really works … it’s better if you have a little bit more control’ (AM).

AP reported to us in 2021 a case in which individual resistance of this kind to perceived de-agencing of the human practitioner became more widespread. It concerned Facebook’s Campaign Budget Optimization (CBO), introduced in November 2017. As AP says, ‘[y]ou turn [CBO] on and Facebook spends the budget for you on [the] ad sets or the audiences that it deems is best within that campaign.’ In 2019, Facebook proposed making its use mandatory, but ‘[t]here was so much backlash over it that they decided not to make it compulsory’ (AP).

Since 2021, however, the issue of the human practitioner’s agency has become interwoven with the second issue discussed in this section: moves to de-individualize advertising’s audiences. The crucial such move so far is Apple’s decision that, from iOS 14.5 (April 2021) onwards, every app in its App Store needs a user’s explicit permission to track them beyond the app’s electronic boundaries, e.g. via their phone’s IDFA or Identifier for Advertisers. Interviewees report that typically only a minority of users (perhaps around a fifth) grant that permission.

We have, as noted, no first-hand data on the processes within Apple that led to the decision and whether any issues beyond the protection of user privacy were involved. What is, however, clear from our data is that Apple’s decision seriously disrupted the infrastructure of digital advertising on iPhones. It has been particularly problematic for Meta, which fiercely denounced Apple’s move, but decided it had to comply. ‘[W]e have no choice but to show Apple’s [accept/reject tracking] prompt,’ said a Facebook vice-president. ‘If we don’t, they will block Facebook from the App Store’ (Levy Citation2021). For Meta, heavily dependent on its presence on Apple phones, that would be a catastrophic outcome.

Having to show Apple’s prompt, with most users rejecting tracking, renders largely impossible the traditional way in which social media platforms such as Meta’s constructed the crucial ‘behavioural histories’ referred to by interviewee DE. That way of working pivoted on using an iPhone’s IDFA to tie together the ads a user was shown on a platform such as Facebook with the user’s purchases, app installs, etc., outside of Facebook.Footnote14 All the other major social media platforms, such as TikTok and Snapchat, face the same difficulty, because they too are apps; Google, which is not simply an app, is less affected. Meta’s pixels and SDKs, for example, are still present on the web and in apps, but Apple’s iOS changes have rendered the ‘raw truth’ they generate partially inaccessible, because it is much harder to connect it to the ads users have been shown within Meta’s platforms. ‘[T]he data is there,’ says AM, ‘it’s just you can’t tie it back.’ The result was substantial disruption to Meta’s capacity to provide practitioners with demonstrations that advertising ‘worked,’ an issue that may have cost Meta $8.3 billion in the six months from April 2021 (McGee Citation2021) and caused major disruption to, e.g. McGowan’s working life as an advertising practitioner.

The loss of intermediary-like, SDK- or pixel-generated, deterministic ‘raw truth’ about an individual’s actions has led to further reinforcement of walled gardens’ mediator roles, because walled gardens have had to intensify their use of humanly opaque machine-learning systems to draw aggregate, ‘probabilistic’ rather than individual/‘deterministic,’ connections between the showing of ads and users’ subsequent actions. Social media platforms now encourage advertisers to have their own servers (and not the user’s browser or a system controlled by Apple) send data on purchases, etc., to the platform’s systems, which seek to correlate these data with the ads the platform has shown users, using machine learning. In the words of interviewee AP, this ‘basically models people’s behaviours and uses that … to track conversions [e.g. purchases] rather than tracking conversions directly … via the pixel.’
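
Mechanically, this means that instead of a pixel in the user’s browser ‘shooting back’ the conversion, the advertiser’s own server reports it. The sketch below is purely illustrative (the endpoint, field names and payload are invented stand-ins, not the actual interface of Meta or any other platform), though hashing identifiers such as email addresses before sending them is typical of such arrangements:

```python
import hashlib
import json
from urllib.request import Request, urlopen

PLATFORM_ENDPOINT = "https://ads.platform.example/v1/conversions"   # hypothetical endpoint

def report_conversion(email: str, value: float, currency: str, event: str = "purchase") -> None:
    """Send a server-side conversion event to a (hypothetical) platform endpoint. The platform's
    systems then attempt, probabilistically, to associate the event with ads it has shown,
    rather than tying it deterministically to one tracked individual."""
    payload = {
        "event_name": event,
        "value": value,
        "currency": currency,
        # The identifier is hashed before it leaves the advertiser's server.
        "hashed_email": hashlib.sha256(email.strip().lower().encode()).hexdigest(),
    }
    request = Request(PLATFORM_ENDPOINT, data=json.dumps(payload).encode(),
                      headers={"Content-Type": "application/json"})
    urlopen(request)   # fire-and-forget for the purposes of this sketch

# e.g. called from the advertiser's own order-processing code once a purchase completes:
# report_conversion("customer@example.com", 59.99, "GBP")
```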

This ‘de-individualized’ modelling of the behaviour of aggregates of users has both direct and indirect links to the de-agencing of advertising’s human practitioners. The direct link is that because the modelling is inherently probabilistic, it must pass tests of statistical significance to achieve confidence that its results are not spurious. That requires larger samples than typically needed when deterministic data on individuals’ actions are available (Hercher Citation2021), which is most likely one reason why walled gardens now strongly advise practitioners against, or even prevent (see below), human-guided micro-targeting.
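
The statistical point can be made concrete with a textbook power calculation (ours, not any platform’s internal method): detecting a modest lift in a low baseline conversion rate from aggregate data requires tens of thousands of users per group, which sits uneasily with narrowly micro-targeted audiences.

```python
from statistics import NormalDist

def required_sample_per_group(p_control: float, p_treated: float,
                              alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate users needed per group to distinguish two conversion rates
    (two-sided test, normal approximation to the binomial)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)
    variance = p_control * (1 - p_control) + p_treated * (1 - p_treated)
    return round((z_alpha + z_power) ** 2 * variance / (p_control - p_treated) ** 2)

# Detecting a lift from a 1.0% to a 1.2% conversion rate (invented but plausible figures):
print(required_sample_per_group(0.010, 0.012))   # roughly 43,000 users per group
```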

The indirect link to de-agencing is that the loss of individual-level data has given further impetus to walled gardens’ construction of ambitious machine-learning systems that increasingly automate all aspects of the advertising process, not just bidding and targeting. The two paradigmatic such systems are Google’s Performance Max, launched at full scale in November 2021, and Meta’s March 2022 Advantage+ (Seufert Citation2024). In them, as well as the walled garden’s system bidding on behalf of the advertiser and selecting the audience for the advertiser’s ads, it also chooses the platform on which the ads are shown, for example in the case of Performance Max choosing among, e.g. Google Search, YouTube, Google Maps and the Google Display Network. Both Performance Max and Advantage+ are capable of automatically reformatting the advertiser’s ads for the demands of these different channels, and they are increasingly employing generative AI capabilities automatically to turn text and images provided by the advertiser into ads.

Thorough-going use of machine learning to automate large swathes of advertising work is a source of ambivalence among practitioners. The greatest degree of enthusiasm was manifested by AdTech specialist AC: ‘I’ve seen companies where they had sixty people in their marketing function. I would replace all of them with one algorithm.’ Other practitioners employ walled-garden affordances of the kind just described for pragmatic reasons, but without wanting to see them displace humans. Using these affordances ‘sav[es] time’ (interviewee AI), and therefore money, so making an agency’s services more attractive to clients, and it minimizes tediously repetitive work, such as creating multiple versions of an ad, with e.g. differently placed and differently coloured components, to test which is most effective. But saving time, money and work intrinsically involves passing responsibility for decisions, e.g. over which ad to show to whom, to the walled garden’s system. Practitioners accept that, at the very least, they must ‘give time for the machine to purr’ (AH) before trying to intervene: CC’s ‘at least one good week’ seems to be a typical non-intervention period. Similarly, there seems to be acceptance that asking Facebook’s advertising system to optimize for ‘conversions’ such as purchases requires the practitioner not to intervene until at least fifty conversions have occurred.

‘[W]e’re going to do what works,’ says CJ, ‘which by the way, generally speaking, it [Google’s automated affordances] works great.’ Eventually, though, the human practitioner should intervene, he says, and ‘do more of what works and less of what doesn’t work.’ An online practitioner panel on walled gardens’ automated affordances that we attended exemplified such ambivalences. There was much talk of the ‘exciting stuff that you can now do,’ but occasional bleaker formulations: as one speaker put it, these affordances can feel like ‘the last bricks of the wall around us,’ and learning how best to use them might simply be learning ‘how you can feed the monster [the walled garden’s system] best.’ DG, interviewed in February 2024, ‘doesn’t trust it [machine learning] yet … I don’t like the lack of control, but it’s done well for some [advertising campaigns] … maybe I don’t have a job in two years.’ That last phrase was a joke, but, like many jokes, it seemed to touch on an issue that she took seriously.

Conclusion

This article has, we hope, contributed directly to the literature on digital advertising, and indirectly to the literature on platforms, in a number of ways. We have shown the relevance of Latour’s distinction between intermediaries and mediators. We have argued that AdTech systems are far more commonly mediators than intermediaries, that ‘supply’ and ‘demand’ are indeed co-created by those mediators (rather than having taken form prior to these market encounters and simply being brought together in them), and that constructing an intermediary is a very demanding task, just as Latour postulates. We have highlighted the distinction between the two main forms of digital advertising’s market encounters, the open marketplace and walled gardens – a distinction that, curiously, is seldom made in the literature, at least in any explicit form.Footnote15 And we have identified the de-agencing of advertising’s human practitioners and the de-individualizing of advertising’s audiences as current or incipient sources of tension, especially in respect to walled gardens.

Tensions characteristic of the open marketplace are created above all by the uneasy juxtaposition of AdTech systems that are mediators and an influential imaginary of the intermediary. Some tensions concern ads themselves. Are they viewable? Are the viewers human beings or fraudulent bots? Is the advertiser’s system being fooled by the superficially attractive metrics of made-for-advertising websites? Will the digital content surrounding an ad damage the image of the advertiser or its products? Other tensions concern the economic arrangements of advertising. Are they still murky, even fraudulent? Are rebates being covertly paid or hidden fees charged?

Such tensions, we have suggested, help create an appetite for the services of the increasingly influential ‘verification’ companies, which seek to make the mechanisms via which ads are bought, sold and displayed in the open marketplace more intermediary-like: more faithful in how they transmit what advertisers intend. There are also substantial efforts to reshape open-marketplace economic arrangements so that they better conform to the ideal of the transparent intermediary.

In contrast, we have noted, there seems to be de facto acceptance that walled gardens are inescapably mediators. We cannot find any general expectation of ‘transparency’ in their internal economic accounting, and little tension therefore seems to surround it. And current efforts to make market encounters within walled gardens better resemble standard intermediary-based imaginaries of supply and demand seem at best marginal. That the size of the advertiser’s (i.e. ‘demand’s’) bids will be determined by the walled garden (i.e. by the supplier) now seems largely taken for granted, not a source of tension.

That decisions about the targeting of ads will be taken by the platform’s system, not the human advertising practitioner, may also be en route to de facto acceptance: see below. For that system to decide which ads are shown on behalf of an advertiser – and perhaps even to construct those ads itself – is, however, far from clearly accepted, and there are overt tensions concerning the walled garden’s control of where (i.e. via which channel) the advertiser’s ads are shown: again, see below. Such decisions, though, are certainly no longer unequivocally in the hands of the advertiser.

It is for those reasons that we have talked about the de-agencing of advertising’s human practitioners. We expect this interpretation to be controversial. It would, for example, be rejected by the platforms themselves. At the centre, quite literally, of a sector meeting we attended in 2023 was a large, elaborate, amply staffed Google installation: an extended material riff on the notion of the advertising practitioner as racing driver. At the installation’s core was an actual McLaren Formula One racing car (see Figure 1). A giant billboard urged attendees: ‘Go from spectator to driver with Google AI.’ The implicit message was that the more powerful tools Google was placing in the hands of the human practitioner were intermediaries, not mediators; that Google was agencing the practitioner, not de-agencing her.

Figure 1. Inside Google’s installation at an advertising conference in 2023. Authors’ fieldwork photograph.

Yet spending considerable amounts of money and effort to evoke ‘agencing’ may itself signal a tension: the fear that the opposite interpretation may be taking hold among advertising practitioners, and needs combatting. Our academic colleagues in market studies and related fields might also, we have found, prefer a more hedged interpretation such as ‘distributing agency differently.’ In our view, though, it would be a mistake for market studies to conceptualize only how ‘devices … contribute to and augment the agency of humans as a result of processes of agencing’ (Fuentes and Sörum Citation2019, 135). The opposite effect is perfectly possible, and needs to be identified explicitly. That is what the notion of ‘de-agencing’ does.

Systems such as Google’s Performance Max and Meta’s Advantage+ are offered to practitioners as optional, not compulsory. Here, though, is where Beuscart and Mellet’s (Citation2013) distinction between direct-response advertising (which, as noted, seeks to prompt an immediate action, such as purchase of a product or install of a game) and longer-term brand-building is significant. As Beauvisage and colleagues argue, advertising-agency staff who work on brand-building need targeting to be humanly ‘intelligible’ to clients, and see it as a valuable skill firmly ‘under their professional jurisdiction.’ ‘[A]utomated targeting,’ therefore, ‘is still frowned upon’ (Beauvisage et al. Citation2023, 14).

In direct response, though, the goal is not the enhancement of a brand but immediately measurable outcomes, achieved at minimum cost. If the practitioner’s ‘boss is yelling at them, I need more installs, get me more installs at this price … they’ll do anything to get those installs at that price’ (interviewee CR), and if automated rather than human-guided targeting helps, there is a considerable incentive to adopt it. Furthermore, brand-building can easily morph into direct response, at least in the case of small and mid-sized advertisers with limited budgets. McGowan, for example, always tries initially to engage such clients in discussions of ‘strategy,’ such as the most appropriate audiences for their advertising. She is, however, also familiar with how ‘strategy’ can become subordinate to measures, automatically produced by the walled gardens, of the immediate or near-term monetary return on advertising spending. If what it is for advertising to ‘work’ is measured in this way, use of machine-learning systems designed to optimize for it, such as Performance Max and Advantage+, can cease to be genuinely optional.

And once an ‘option’ is embraced, constraints that are no longer optional follow. If, for example, an advertising practitioner at a games studio or app developer adopts the powerful affordances of Meta’s ‘Advantage+ app campaigns,’ s/he retains almost no capacity for traditional human-guided targeting. So that ‘the system has more flexibility,’ Meta tells her, ‘[y]ou can’t add additional targeting options including demographics, interests and behaviours. … You can’t target your audience based on gender. You can’t target your audience based on their relationship status.’Footnote16 Even ‘age’ is available to her only in the form of a minimum age threshold to meet legal requirements.
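The narrowing of options can be illustrated schematically. The sketch below is ours alone and is not the Meta Marketing API; the field names are hypothetical. It simply mirrors the restrictions quoted above: once an automated campaign type of this kind is adopted, demographics, interests, behaviours, gender and relationship status disappear from what the practitioner can set.

# Schematic contrast (ours; hypothetical field names, not the Meta Marketing API)
# between manually specified targeting and what survives under an automated,
# 'Advantage+'-style app campaign.
MANUAL_TARGETING = {
    'countries': ['GB'],
    'min_age': 18,
    'gender': 'female',
    'interests': ['puzzle games'],
    'behaviours': ['recent app installers'],
    'relationship_status': ['single'],
}

# Under the automated campaign type, only a few coarse settings remain available.
ALLOWED_AUTOMATED_KEYS = {'countries', 'min_age'}

def automated_targeting(manual: dict) -> dict:
    """Strip out every targeting option the automated campaign type no longer accepts."""
    return {k: v for k, v in manual.items() if k in ALLOWED_AUTOMATED_KEYS}

print(automated_targeting(MANUAL_TARGETING))
# {'countries': ['GB'], 'min_age': 18} -- demographics, interests and behaviours are gone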

It might seem as if our analysis is drifting towards the monolithic, Zuboffian view of platform power that we have disavowed, albeit with the advertising practitioner, rather than the everyday user of the platform, as its subject. It is therefore important to emphasize the continuing possibility, and sometimes actuality, of contestation. Human beings are resourceful, and can often find ways of circumventing constraints. Controversy can erupt unexpectedly. It did so, for example, in November 2023, when a small AdTech firm, Adalytics, publicly attacked automatic ad placement by Google’s systems, in particular the placing of search ads not within Google Search itself but on ‘Google Search Partner’ sites that included, Adalytics alleged, pornographic and extreme right-wing sites, and sites subject to US government sanctions.Footnote17 Google responded that only a tiny proportion of sites were problematic, but it has given advertisers, at first temporarily and then, from March 2024, more permanently, the capacity to opt out of automatic placement of their ads on specific Search Partner sites.

Advertisers, furthermore, still have an alternative to walled gardens: the open marketplace. Advertising there can be cheaper, and therefore more cost-effective, at least for a large, brand-oriented advertiser sophisticated enough to navigate its tensions: ‘if you’re at the scale of Procter & Gamble, you can staff a full-time team to be good at this’ (DH). Since news journalism is often dependent on open-marketplace ad revenue, that has wider benefits, and the existence of an alternative may give large advertisers some implicit leverage over walled gardens. Big advertisers, for example, often want the reassurance that they get from verification companies. The latter, originally restricted to the open marketplace, now encompass walled-garden advertising too, although so far often simply by re-analyzing data generated by the garden itself rather than deploying a measurement apparatus of their own, as they do in the open marketplace (Fou Citation2023).

Finally, let us emphasize that a characteristic of the actor-network tradition on which we draw is to refuse uncritical invocations of scale, of ‘big’ and ‘small,’ of ‘macro’ and ‘micro’ (Callon and Latour Citation1981). ‘Small’ material things, which often pass without notice, can help create the power of a ‘big’ system, but can also destabilize it. The big platforms that are present in everyday life primarily in the form of mobile-phone apps were shaken by Apple’s 2021 de-individualizing initiative. As we have noted, they have recovered by intensifying the use of machine learning to measure and optimize advertising’s effectiveness, creating the powerful systems we have discussed.

Our most recent interviews, though, suggest that the impressive optimizing power of several of today’s digital advertising systems may indeed depend, at least in part, on something apparently small and seldom discussed: the continuing availability to those systems of users’ IP addresses. If that is correct – we cannot be certain – it is an important material vulnerability for any platform that takes everyday form as a mobile-phone app. It is perfectly conceivable that Apple could rule unequivocally that this use of IP addresses violates the terms and conditions of the App Store, or could intensify its current material measures to block it. Either move would rekindle tensions that have subsided in the face of those platforms’ machine-learning-led recovery from the disruption of 2021. There are still implicit tensions even at the heart of platform capitalism, and power is indeed still contestable, not monolithic.

Acknowledgements

We are deeply grateful to our interviewees, without whom this paper could not have been written, and to the audiences to whom we presented it at the Seventh Interdisciplinary Market Studies Workshop, Edinburgh, June 2023, and the Science, Knowledge, and Technology Workshop, Columbia University, September 2023. This research was assessed, classed as ‘Level 1’ (‘negligible or low foreseeable risks’), and ethically approved under the research ethics procedure of Edinburgh University’s School of Social and Political Science: https://www.sps.ed.ac.uk/research/ethics. Informed consent was obtained for all interviews.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was supported by the UK Economic and Social Research Council [grant numbers ES/V015362/1 and ES/R003173/1].

Notes

1 We base this figure on Kotila’s (Citation2021) estimate that 146 trillion digital ads are displayed globally each year.
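The arithmetic, assuming on the order of five billion internet-connected users (an assumption of ours, not a figure taken from Kotila), is roughly:

\[
\frac{146 \times 10^{12}\ \text{ads per year}}{365\ \text{days} \times 5 \times 10^{9}\ \text{users}} \approx 80\ \text{ads per user per day}.
\]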

2 We know of no more recent aggregate global estimate that can be directly compared with Pärssinen et al. (Citation2018). Drawing upon the latter, Kotila (Citation2021) estimates the average CO2 emissions associated with showing a single ad once, to just one user, as 0.08–1.09 grams, which is consistent with more recent work, notably Scope3 (Citation2023).

3 On matters of concern, see Latour (Citation2004) and, e.g. Geiger et al. (Citation2014).

4 Since this paper was first submitted for publication, we have found an article in Housing Studies that usefully applies Latour’s distinction to property platforms such as Rightmove (Goodchild and Ferrari Citation2024).

5 On the range of meanings of ‘objectivity’, see, e.g. Daston (Citation1992).

6 For Callon, a ‘market encounter’ is not always, and perhaps not usually, an interaction between human beings. Interactions of customers with supermarket shelves, their contents and shopping carts, for example, are market encounters.

7 Caliskan and Callon (Citation2010, 14) note that ‘organizing and framing [market encounters] is the product of the activity of mediators (we prefer this word to the less dynamic term ‘intermediary’, since the idea of mediation stresses active participation in producing an outcome)’. Two decades previously, Callon and Latour’s colleagues Antoine Hennion and Cécile Méadel (Citation1989) investigated advertising’s role as ‘mediator between supply and demand’ (191). They did not distinguish between mediators and intermediaries, but their notion of ‘mediation’ – elaborated by Hennion in other contributions to the sociology of culture – may have influenced Latour’s and Callon’s.

8 Two market arrangements that space constraints prevent us discussing are direct deals between advertisers and publishers, and ‘private marketplaces’, which employ open-marketplace mechanisms but allow only pre-selected participants to use them. Our research focuses on the US, UK and European Union. China’s advertising market, for example, is differently structured.

9 Big retailers (Amazon, and also, e.g. Tesco or Walmart) also sell advertising on their websites and apps via their own walled gardens, which word-length constraints again prevent us discussing.

10 A similar study by the US Association of National Advertisers (ANA Citation2023) found that all costs could be traced in cases in which full log-level data on ad impressions were available. However, only 21 of the 67 corporations that wanted to take part in the study were able to do so; the remainder were not able to secure access to log-level data on the ad impressions they had bought.

11 These code snippets were originally implemented via tiny, transparent, single-pixel images on webpages. Although that would now be regarded as ‘old tech’ (AC), the name has stuck.

12 The one exception to this opacity in the literature of which we are aware is Viljoen et al. (Citation2021, 3), who note that Facebook uses a Vickrey-Clarke-Groves auction, while Google Search employs ‘generalized second-price’ auctions. The latter are simpler to understand, but not theoretically optimal.
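For readers unfamiliar with the two mechanisms, the following worked sketch (ours, purely illustrative; it is not either platform’s implementation) shows how generalized second-price and Vickrey-Clarke-Groves payments can differ in a simple two-slot position auction.

# A self-contained sketch (ours, illustrative only -- not either platform's
# implementation) contrasting generalized second-price (GSP) and
# Vickrey-Clarke-Groves (VCG) payments in a two-slot position auction.
# Bidders bid per click; slots differ only in click-through rate (CTR).

def allocate(bids):
    """Rank bidders by bid (highest first); return their indices in slot order."""
    return sorted(range(len(bids)), key=lambda i: -bids[i])

def gsp_payments(bids, ctrs):
    """GSP: the bidder in slot k pays, per click, the bid of the bidder ranked below it."""
    order = allocate(bids)
    payments = {}
    for slot, bidder in enumerate(order[:len(ctrs)]):
        next_bid = bids[order[slot + 1]] if slot + 1 < len(order) else 0.0
        payments[bidder] = round(next_bid, 2)
    return payments

def welfare(bids, ctrs, exclude=None):
    """Total value (bid x CTR) of the bid-ranked allocation, optionally excluding one bidder."""
    eligible = sorted((b for i, b in enumerate(bids) if i != exclude), reverse=True)
    return sum(b * c for b, c in zip(eligible, ctrs))

def vcg_payments(bids, ctrs):
    """VCG: each winner pays (per click) the externality it imposes on the other bidders."""
    order = allocate(bids)
    payments = {}
    for slot, bidder in enumerate(order[:len(ctrs)]):
        others_without_me = welfare(bids, ctrs, exclude=bidder)
        others_with_me = welfare(bids, ctrs) - bids[bidder] * ctrs[slot]
        payments[bidder] = round((others_without_me - others_with_me) / ctrs[slot], 2)
    return payments

bids = [4.00, 3.00, 1.00]   # per-click bids of bidders 0, 1 and 2
ctrs = [0.10, 0.05]         # click-through rates of the two ad slots

print(gsp_payments(bids, ctrs))  # {0: 3.0, 1: 1.0}: the top bidder pays the next-highest bid
print(vcg_payments(bids, ctrs))  # {0: 2.0, 1: 1.0}: the top bidder pays only its externality

In this toy example the highest bidder pays 3.00 per click under GSP but only 2.00 under VCG; the gap is one reason why GSP, though simpler, is not incentive-compatible in the way VCG is.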

14 The equivalent of IDFA for Android phones, Google’s Advertising ID, is still available. Google is scheduled to introduce similar restrictions on its use in late 2024, but they seem likely to be postponed at least to 2025.

15 Hwang (Citation2020), for example, presents a great deal of evidence that AdTech systems are, in our terms, mediators, and indeed highly unreliable, but because his overall argument is that this dangerously undermines digital advertising as a whole, creating the potential for cataclysmic crisis, he pays insufficient attention to crucial differences between the open marketplace and Big Tech’s walled gardens.

References