Research Article

Regulating social media through self-regulation: a process-tracing case study of the European Commission and Facebook

Article: 2182696 | Received 19 May 2022, Accepted 15 Feb 2023, Published online: 16 Mar 2023

ABSTRACT

Digital campaigns are an increasingly important facet of political campaigning. As a consequence, calls to regulate online political advertising have become louder. Yet, how digital campaigning and social media platforms should be regulated has been the subject of intense debate. In this paper, we combine a principal-agent perspective with a process-tracing methodology to examine the process of self-regulation of social media platforms. This developed mechanism is applied to the case of political advertisements on Facebook in the context of the 2019 European elections. We analyse the motivation of the European Commission to opt for self-regulation and explain the mismatch between the goals of the European Commission regarding online political campaigning on the one hand, and the deviating implementation by Facebook on the other hand. We find that both media-centric and politics-centric factors led the Commission to opt for self-regulation, and we identify how a combination of an incomplete contract and insufficient monitoring instruments prevented the Commission from tackling Facebook’s deviating policy with regard to online political advertisements.

Introduction

Over the past decade, political campaigning has increasingly shifted to the digital realm. Social media platforms are a pivotal aspect of this trend. Many governments around the world struggle with the questions of whether and how online political advertising should be regulated, especially in the context of elections. On the one hand, political campaigning on social media can be an important tool to level the playing field among political parties and candidates, and it provides unique opportunities to engage with citizens (Jacobs and Spierings 2016; Gibson and McAllister 2015). On the other hand, online advertising entails the risk of foreign interference, of disinformation, and of attempts to steer voter behaviour through micro-targeting (see, for example, Guess and Lyons 2020; Ohlin and Hollis 2021; Zuiderveen Borgesius et al. 2018).

Governments have dealt with this challenge in various ways. In many cases, no regulatory action has (yet) been taken, and it has been left up to the social media platforms themselves to come up with solutions. Facebook has attempted to provide more transparency, for example by introducing an Ad Library, although the quality of this tool has been criticized (see, for example, Leerssen et al. 2019; Righetti et al. 2022). Other countries have focused their regulatory efforts on the political parties themselves. For example, in the run-up to the 2021 national elections, the Netherlands pushed for a Code of Conduct on online political advertising among the political parties in an effort to improve transparency and counter disinformation (IDEA International 2021). Currently, more and more countries are considering more stringent regulatory measures to tackle the negative aspects of social media campaigning. However, little is known about the factors that impede or incentivize rule-makers to take regulatory action, or about the consequences of specific regulatory choices.

In this article, we examine these questions by focusing on the regulation of digital political advertising at the level of the European Union. More specifically, we analyse the first regulatory effort by the European Commission in 2018, when it opted for a self-regulatory instrument: the introduction of a Code of Practice on Disinformation. While the Commission formulated specific objectives to strengthen, among other things, the transparency of online political advertising, the implementation was left to the online platforms themselves. Despite a clear framework developed by the Commission, the implemented policy of one of the platforms – Facebook – was directly contrary to the Commission’s intentions regarding digital political advertising: the social media company made it impossible for European political parties and the EU institutions themselves to conduct transnational digital political campaigns.

The objective of this paper is to examine the motivation of the European Commission to opt for self-regulation and to explain the mismatch between the goals of the European Commission regarding online political campaigning on the one hand, and the deviating implementation by Facebook on the other hand. Consequently, the central research question is two-fold: why did the European Commission choose self-regulation as an instrument to regulate online political advertising, and how did the process of self-regulation lead to unintended policy outcomes?

With this study, we seek to contribute to the debate on the challenges and difficulties of regulating digital political advertising, and online platforms more broadly. We argue that the motivation of the European Commission to opt for self-regulation was based on a combination of media-centric and politics-centric factors, and we identify a number of factors not previously highlighted in scholarly work. In order to assess the implications of the choice for a self-regulatory instrument, we develop a causal mechanism of how self-regulation leads to policy drift by combining a principal-agent framework with a process-tracing methodology. We apply this mechanism to the case of Facebook and demonstrate how the Commission’s choice of self-regulation as a policy instrument allowed Facebook to implement an advertising policy that directly contradicted the Commission’s objectives and preferences.

The paper is structured as follows: the second part focuses on the literature concerning the regulation of social media platforms more generally, and digital political advertising in particular. The third part provides an introduction and motivation for the analysed case and ends with a discussion of the applied methodology. In the fourth part, we motivate our choice for the combined use of a principal-agent framework and a process-tracing method. In the fifth part, insights from the principal-agent literature are used to construct a causal mechanism on the use of self-regulation of online platforms and potential adverse policy actions taken by these social media companies. The sixth part contains the empirical assessment of our case.

Regulating social media platforms

Online platforms play an increasingly important role in politics, in particular in the context of elections. Dommett and Power (2019, 257) argued, for example, that the use of Facebook by political parties and candidates has become an ‘accepted part of election campaigns in many countries’. This is certainly the case for the United States and the United Kingdom (Bimber 2014; Gibson 2015), and studies on other European countries point in a similar direction (see, for example, Lilleker et al. 2019). Social media allow political parties and candidates to reach a large audience, while at the same time personalizing their messages through microtargeting (Kirk and Teeling 2022). Research has indeed shown that the use of Facebook has positive effects on the online visibility and reach of parties (Koc-Michalska et al. 2021). However, scholars have also highlighted the detrimental effects of the increased (manipulative) use of digital political advertising on the quality of democratic debate and the conduct of electoral campaigns (Zuiderveen Borgesius et al. 2018; Marwick and Lewis 2017; Risso 2018). Similarly, the way online platforms like Facebook and Google actively vet paid political ads, make decisions on content, and enforce those decisions is often opaque (Kreiss and McGregor 2019).

The increasing importance of digital campaigning in politics has amplified calls for the adaptation or introduction of (new) rules on political campaigning. Yet, our knowledge about the drivers and barriers of regulation, and about the relation between regulatory frameworks and the organizational and policy changes within these online platforms, remains modest (Kreiss and McGregor 2018, 175).

In general, regulatory action seems to be guided by a set of incentives and constraints. On the one hand, the mediatization of ‘scandals’ highlighting possible disruptive electoral effects of online advertising has created incentives to regulate digital political campaigning. Events like the Cambridge Analytica controversy have acted as ‘public shocks’ in this respect: ‘public moments that interrupt the functioning and governance of these ostensibly private platforms, by suddenly highlighting a platform’s infrastructural qualities and call it to account for its public implications’ (Ananny and Gillespie 2017, 3). The Cambridge Analytica controversy indeed prompted US Congressional and UK parliamentary investigations and public calls for regulation (see also Rogers and Niederer 2020, 20).

On the other hand, scholars have also identified several constraints against regulatory action regarding online political campaigning. These constraints are related to the particular attributes of digital platforms themselves (‘media-centric factors’) as well as to characteristics of political decision-making (‘politics-centric factors’) (see also Dommett and Zhu 2022). Regarding the media-centric factors, the design and operation of digital platforms hamper rulemaking activities in several ways (see also OECD 2019). The pace of technological development is much higher than the speed of regulatory change: it is difficult for lawmakers to catch up with the constantly changing platform technology (see also Kirk and Teeling 2022). In addition, the process of digitalization blurs the traditional delineation of sectors and country markets, and the distinction between consumers and producers. As opposed to traditional media, the ability of the nation-state to regulate and control social media platforms is reduced because of their cross-border and global nature (Picard and Pickard 2017). The decentralized structure of the online ecosystem – where content is created bottom-up and distributed worldwide – creates challenges for regulation across countries or multiple political levels (Flew, Martin, and Suzor 2019). Consequently, due to the rapid changes in the industry, the heterogeneity of platform activities and the multiplicity of jurisdictions, it is hard to impose concrete regulations that target the industry as a whole.

In addition, Dommett and Zhu (2022) identified three main political factors that created barriers to the regulation of digital political advertisements in the United Kingdom. First, they pointed to political reticence: policy-makers were unable or unwilling to invest their political capital and energy in the regulation of online advertising. A second constraint consisted of the logistical challenges related to the nature of online platforms, such as the perceived ambiguity as to what constitutes online political advertising, and how it should accordingly be regulated. Third, they observed a lack of consensus among policy-makers about what and how the regulation needed to change: there was no agreement about what exactly should be done. This also echoes warnings from some academics about the dangers of overregulating social media platforms (see e.g. Brown and Peters 2018).

Consequently, while scholars have started to explore the incentive structure that leads to (or hampers) regulatory action, little is known about the motivation of policy-makers to opt for specific types of regulation, ranging from self-regulation through co-regulation to legislative initiatives. In addition, the consequences of regulatory decisions for the policies of online platforms remain unclear. This article aims to contribute to this debate by examining the decision of the European Commission in 2018 to choose a self-regulatory instrument for managing online political advertising.

The European Commission and the regulation of online political advertising

As in many countries, the regulation of online political advertising has been an intensely debated topic at the level of the European Union. In 2018, the European Commission decided to regulate online political advertising – as part of a broader effort to tackle disinformation – through a self-regulatory instrument. The Commission set out a number of objectives to improve the transparency of online political advertising, which were subsequently codified – alongside other initiatives to tackle disinformation – in a Code of Practice, and finally implemented by the online platforms. However, in the case of Facebook, the implementation of the Code resulted in a policy that was contrary to the intentions of the European Commission.

For the Commission, the use of online political advertisements was considered an important tool to make the 2019 elections for the European Parliament more ‘European’. Traditionally, these elections have lacked a genuine EU-wide dimension and have been labelled ‘second-order national elections’ (Hix and Marsh 2011; Pasquino and Valbruzzi 2019): they are contested in national electoral districts by national political parties on national political issues (see e.g. Cremonesi et al. 2019). A proposed solution to ‘europeanize’ these elections has been to strengthen the role of political parties at the European level, also called ‘Europarties’. In addition to the Spitzenkandidaten system – in which European political parties were encouraged to put forward their candidate for the presidency of the European Commission – and the allocation of substantial European subsidies to conduct EU-wide campaigns (Wolfs 2022; Wolfs, Put, and Van Hecke 2021), online political advertising could provide another instrument for the Europarties to strengthen their visibility and their link with voters. This was particularly important, given that the asymmetries and differences in electoral and campaign (finance) rules at the EU level and in the various member states have made it incredibly difficult for Europarties to conduct pan-European physical electoral campaigns (Wolfs 2022). Social media advertising would allow them to efficiently spread their political message across physical and language barriers.

This potential of social media as campaign tools was also acknowledged, stimulated – and even extensively exploited – by the European Commission and the European Parliament. In its report on the 2014 European elections, the European Commission acknowledged how ‘the use of interactive social media as campaigning tools made it easier than before for voters to engage directly in the campaign’ and thus facilitated pan-European election campaigns (European Commission 2015, 16). In the run-up to the 2019 elections, the European Commission also stressed the importance of the European dimension of the election campaigns and highlighted the essential role played by European political parties in this respect (European Commission 2018a, 3; 2018b, 2–3).

The possibilities for an EU-wide digital campaign were, however, significantly thwarted in the run-up to the 2019 European elections, when the European political parties and EU institutions learned that Facebook had imposed important limitations on cross-border campaigning. All advertisers had to register in the country where they wished to purchase political advertising (Kayali and De La Baume 2019). Most European political parties had their registered headquarters in Brussels, and thus could only campaign digitally in Belgium. The action of Facebook was deplored by the EU institutions (see e.g. European Parliament, Council of the European Union and European Commission 2019).[1] Yet, this was a direct consequence of the self-regulatory approach of the European Commission with regard to online political advertising.

An in-depth analysis of the considerations and implications of the Commission’s choice for self-regulation and the subsequent deviating implementation by Facebook can serve as an instrumental case study for several reasons. First, this was the first supranational attempt to regulate these platforms: a cross-border rulemaking effort intended to overcome the constrained abilities of nation states to deal with this challenge. Second, while the European Commission had several policy options, it chose self-regulation. This stands in contrast to other cases, where policy-makers opted for a ban on online political advertising during elections (such as in France), self-regulation of political parties (such as in the Netherlands) or no regulatory action (such as in the UK; see Dommett and Zhu 2022). Third, the policy choice made by the European Commission resulted in a deviating implementation by Facebook that went against the Commission’s preferences.

In terms of methodology, we rely on an examination of the preparatory and policy documents in combination with semi-structured interviews to identify the factors that can explain the choice of the European Commission for self-regulation. In order to examine the policy implications of this choice – more specifically, the mismatch between the Commission’s preferences regarding online political advertising and Facebook’s diverging implementation – we use a combination of a principal-agent framework and a process-tracing analysis. First, we develop a theoretical causal mechanism of how self-regulation leads to a diverging implementation. Second, we empirically test this mechanism on our specific case.

This empirical assessment is based on a combination of policy documents, semi-structured interviews and media articles. The documents consist of preparatory, internal and policy documents of the EU institutions, studies and assessments of the conducted policy, letters, and implementation reports of the online platforms. This analysis was complemented with interviews with policy advisors in the European Commission who were involved in the preparation of the policy initiative, a representative of the online platforms who participated in the process, a representative of the European political parties, and a transcript of a media interview with Facebook’s Vice-President for Global Affairs and Communications, Nick Clegg. The documents and interviews were manually coded in an inductive coding process to identify the factors that can explain the Commission’s choice for self-regulation and to identify possible evidence for the subsequent steps in the developed causal mechanism (cf. infra). In line with the process-tracing methodology, this evidence was subsequently assessed in terms of certainty, uniqueness and accuracy (details can be found in Annex 2).
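The logic behind this weighting of evidence can be made explicit in Bayesian terms, as is common in the process-tracing literature (cf. Beach and Pedersen 2016); the notation below is an illustrative sketch of that logic rather than a reproduction of our coding scheme. Writing $H$ for a hypothesized causal step and $E$ for a piece of evidence, certainty corresponds to a high $P(E \mid H)$ (the evidence must be present if the step occurred), while uniqueness corresponds to a low $P(E \mid \neg H)$ (the evidence is unlikely under rival explanations). The confidence gained from actually finding $E$ then follows from Bayes’ theorem:

\[
P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}
\]

Evidence that is both certain and unique shifts $P(H \mid E)$ most strongly; accuracy, finally, concerns the trustworthiness of the source reporting $E$.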

Social media regulation: a problem of delegation

We examine the relation between the European Commission and the online platforms through the analytical lens of the principal-agent framework. This framework has been frequently used by political scientists to examine questions on the delegation of authority (see e.g. Thatcher and Sweet 2002, 2). It is a useful heuristic tool ‘to understand why, how and with which consequences certain actors delegate the authority to execute a particular task to other actors’ (Delreux and Adriaensen 2017, 3; Tallberg 2002). In our case, the Commission, as principal, has delegated certain powers – in this case, the management of online political advertising – to agents, the social media platforms. This has resulted in a policy outcome that differed substantially from the Commission’s interests as principal, constituting – in the terminology of the principal-agent framework – ‘agency costs’.

Using a principal-agent perspective entails advantages and disadvantages. It allows for the simplification of complex realities related to the reasons and consequences of the delegation of authority. The extensive use and refinement of the framework in many studies have resulted in the identification of a large number of possible reasons for delegation, and explanations for agency costs (see among others Pollack 1999; Franchino 2000a, 2000b; Tallberg 2002; Dijkstra 2014; Elsig 2007; Furness 2013). The main disadvantage of the principal-agent approach is that the delegation process is ‘black boxed’: it is often unclear how the delegation of authority has led to a diverging policy outcome, or in other words, how delegation produces agency costs (Reykers and Beach 2017, 255, 258).

Therefore, in order to shed light on the actual causal process between the delegation by the European Commission and the deviating implementation by Facebook, we follow Reykers and Beach (2017) and apply a process-tracing analysis. This method of data analysis focuses on tracing the causal mechanism linking cause(s) with a specific outcome. The use of process-tracing provides added value for the principal-agent framework on at least four points. First, on a theoretical level, it forces us to clarify how the insights from the principal-agent literature manifest themselves in the causal mechanism of our case. Second, instead of treating the causal link between cause and outcome as a black box, process-tracing examines what the actors actually do and links this to the theoretical expectations. This strengthens the causal claims that we can make about our analysed case. Third, process-tracing is open to contextual factors that can be important in explaining the case. Such vital contextual information is lost when applying a parsimonious principal-agent framework. Although the inclusion of contextual information makes our theoretical insights more complex, it also strengthens our understanding of the causal dynamics behind a principal-agent relation (Beach and Pedersen 2016; Reykers and Beach 2017, 256–257). Fourth, the application of process-tracing allows for a more sophisticated analysis of how the various elements of the principal-agent framework – such as information asymmetries, selection issues and (lack of) proper oversight instruments – work together to produce a certain outcome.

In this paper, we identify a causal mechanism linking the delegation of the management of political advertising through self-regulation to agency costs: the policy outcome that made it impossible for European political parties and EU institutions to conduct transnational online campaigns (Figure 1).

Figure 1. Basic causal mechanism.

The analysis subsequently takes place in two steps: first, the causal mechanism is broken down into several linear and sequential steps that link the causal condition with the outcome on a theoretical level, using the insights from the principal-agent literature; second, we assess whether we can find empirical support for the causal steps that we have theorized.

Causal mechanism of delegation through self-regulation

We follow Delreux and Adriaensen (2017, 14–22) and argue that the analysis of the principal-agent framework occurs in two stages. As a first step, it must be verified whether the framework can be used to study the relation by identifying ‘principal-agent proof’: which actors can be considered as the respective principal and agent, and can a clear act of delegation be recognized? When the principal-agent relation is clearly established, we can examine the conditions that have led to the pattern of delegation (the ‘politics of delegation’) and the consequences for the distribution of power between the principal and the agent (the ‘politics of discretion’). The first stage is a necessary step before the politics of delegation and discretion can be studied (Delreux and Adriaensen 2017, 14).

Mapping the principal-agent relation

In order to apply the principal-agent framework, two conditions must be fulfilled: we must be able to identify the principal and the agent in the relation (who delegates to whom?) and the object of delegation (what is delegated?). In order to be considered a principal, an actor must delegate authority to an agent (Delreux and Adriaensen 2017, 15). In the case of self-regulation, the act of delegation of authority is not immediately evident, although other authors have already framed self-regulation as a principal-agent relationship (see e.g. Héritier and Eckert 2008). In our case under analysis, it is indeed possible that the European Commission prompted the online platforms to take action through self-regulation because it did not have the capability to act itself. Consequently, we need to establish that the Commission had the authority to regulate political advertising, but decided to delegate this task. In the political system of the EU, the European Commission has the monopoly over legislative initiative but can only act if there is a legal basis in the EU treaties. Specifically, with regard to the regulation of political advertising, the Commission can rely on its obligation to ensure the proper functioning of the EU internal market to take legislative action. This legal basis was invoked in 2021, when the Commission came forward with a legislative proposal to ensure greater transparency in paid political advertising (European Commission 2020b, 5).[2] In other words, the European Commission could already have taken a legislative initiative regarding political advertising in 2018 but decided to delegate this authority to the online platforms themselves.

These platforms thus constitute the agent in this relationship. The question remains whether they acted as a collective agent or as multiple agents. A collective agent is (1) linked with a principal by a single contract, (2) composed of multiple actors that are collectively responsible for executing the delegated task, and (3) characterized by a diffusion of authority between these different actors. Multiple agents are also composed of several actors with a diffusion of authority but are not linked with the principal by a single contract (Laloux 2017, 85). In other words, the difference between a collective agent and multiple agents lies in the contract that determines the act of delegation.

In our case under analysis, the contract takes a rather hybrid form, mainly because it is guided by the principle of self-regulation. The task of managing political advertising was part of the EU Code of Practice on Disinformation (European Union 2018a), which could be regarded as a single contract that links the multiple actors to the principal. However, while the Code of Practice contains common principles and objectives, each online platform that signed the Code could choose its own specific commitments in the field of managing political advertising (European Union 2018b). Therefore, rather than a single contract univocally applicable to all agents, it can be considered as a collection of multiple contracts. The main consequence is that the online platforms should not be analysed as a collective agent, but as multiple agents performing a single task. This theoretically justifies our choice to analyse the relation between the European Commission as principal and one specific agent: Facebook.

Finally, the object of delegation must be specified. The management of political advertising is part of the broader EU strategy on tackling disinformation. The objectives set out by the European Commission on this point are improving the scrutiny of the placement of political advertisements and ensuring transparency about sponsored political advertising (European Commission 2018a, 7). These objectives have been translated by the online platforms into specific commitments. For our case under analysis, Facebook affirmed that it would provide users with more information on the displayed advertisements and more control over which advertisements would be shown, and declared that it would implement policies to ensure that political advertisers complied with all applicable laws and processes (European Union 2018b, 2). In sum, it can be concluded that the necessary elements are present in our case to apply a principal-agent framework.

The politics of delegation and discretion

A process of delegation almost always leads to (some degree of) agency costs when the agent implements the tasks in a way that goes against the interests of the principal (e.g. da Conceição 2010; Hawkins et al. 2006). In general, agency costs can originate from both shirking and slippage. Shirking describes the intentional deviating behaviour of an agent caused by a situation in which the principal and the agent hold (a priori) conflicting interests, and the agent’s wish to maximize its own interests. A situation of slippage occurs when deviating agent behaviour is the unintended consequence of the design of the delegation structure (da Conceição 2010, 1108–1109; Delreux and Adriaensen 2017, 6). This structure stimulates the agent to minimize its efforts, which results in the adoption of a position different from the principal’s intentions (Kiewiet and McCubbins 1991; Pollack 1997, 108). Slippage can be caused by conflicting and inadequate communication, since ‘[i]n principal-agent relationships […], information may not be fully or accurately communicated between the primary principal and the primary agent’ (Lane and Kivisto 2008, 154).

Within the principal-agent framework, two main components can be distinguished: how a principal delegates, and the consequences of the form of delegation for the desired outcome. Delreux and Adriaensen (2017, 19) describe these respectively as the politics of delegation and discretion:

Whereas the politics of delegation lay down the rules of the game to be played between principal and agent, the eventual unfolding of the game is subject of the politics of discretion. Just like the outcome of a game depends on the rules by which one plays, the politics of discretion will be deeply affected by the politics of delegation.

The politics of delegation thus focuses on the attributes of the act of delegation and the consequences for the relation between the principal and the agent(s). Two elements have an important potential effect on this relation: (1) which delegation mandate regulates the delegation of authority, and (2) which control mechanisms are put in place by the principal to monitor the agent’s actions.

The type of delegation mandate chosen by a principal determines the level of authority and discretion that its agents might enjoy. In general, two main types of delegation mandates can be recognized: (1) complete contracts, based on specific rules that instruct agents on how they must act, and (2) incomplete contracts, a discretion-based type of delegation that gives agents more autonomy and possibly states the goals that the principal envisions, but no specific actions that the agents must take to reach these goals (Hawkins et al. 2006). While such a dichotomous distinction is largely theoretical, in general it can be argued that the more precise a delegation contract is, the less room there is for agency costs (Bradley and Kelley 2008). The use of incomplete contracts thus increases the risk of agency slack (da Conceição-Heldt 2013, 24), mainly because principal-agent relations are characterized by an information asymmetry in favour of the agent. While rule-based complete contracts clearly stipulate the functions assigned to agents and anticipate almost every possible future contingency (Cooley and Spruyt 2009), incomplete contracts provide much more discretion and autonomy to agents, which allows them more leeway to deviate from the principal’s intentions. Delegation through self-regulation is in itself a form of an incomplete contract: while it is possible that the principal has set out the objectives that must be achieved, the specificity of the rules that the agents must follow and how they must act will be limited. Consequently, self-regulation provides substantial discretion to the agents and entails a large risk of agency costs.

Yet, oversight mechanisms can mitigate agency costs when delegating authority. The more discretion an agent receives through the delegation mandate, the more extensive the forms of oversight required in order to prevent agency costs (da Conceição-Heldt 2013, 25). In general, there are four main oversight instruments that can be used (Kiewiet and McCubbins 1991, 27–34). First, principals can establish thorough screening procedures before the selection of the agent to limit hidden information and adverse selection. Second, the delegation mandate can include an incentive structure that involves negative rewards or sanctions if the agent’s actions are not in line with the principal’s objectives. Third, monitoring and reporting requirements can be imposed to check the actions of the agent. Fourth, a principal can set institutional checks to reduce the agent’s possible scope of activity. However, when the instrument of self-regulation is used, the stringency of the oversight mechanisms is unlikely to be sufficient to mitigate the degree of agent discretion – and thus also the risk of agency costs. Agent selection is to a large extent self-selection and does not include a thorough screening process. In addition, while it is possible that an incentive structure with negative rewards and monitoring requirements is put in place, it is unlikely to be sufficiently strong to avert agency costs.
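This trade-off can be summarized in a stylized form – our own illustration rather than a formal model drawn from the literature cited above. Writing expected agency costs as a function of the discretion $d$ granted by the delegation contract and the stringency $m$ of the oversight instruments,

\[
AC = f(d, m), \qquad \frac{\partial f}{\partial d} > 0, \qquad \frac{\partial f}{\partial m} < 0,
\]

self-regulation sits in the highest-risk region of this function: the incomplete contract maximizes $d$, while self-selection and weak monitoring keep $m$ low.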

A mechanism of self-regulation

Based on these insights and considerations of principal-agent theory, we developed a causal mechanism for the process of self-regulation (Figure 2). When the principal decides to use the instrument of self-regulation (Step 0), it communicates the desired objectives of the process to the agents, possibly in combination with (a limited number of) oversight mechanisms to monitor the implementation by the agents (Step 1). This enables the agents to set up a self-regulatory framework that leaves them substantial discretion to perform their task and thus constitutes an incomplete contract (Step 2). In the second part of the mechanism, this high degree of autonomy leads to actions of the agent that deviate from the initial objectives of the principal (Step 3). This can be the consequence of either intentional deviating behaviour of the agent due to diverging interests (shirking) or the unintentional implementation of a policy due to miscommunication or unclear objectives in the delegation structure (slippage). Due to the self-regulatory nature of the delegation, the oversight mechanisms put in place by the principal are insufficient to identify or adjust these deviating actions (Step 4), which eventually results in agency costs (Step 5).

Figure 2. Causal mechanism of delegation and discretion through self-regulation.

The management of political advertising through self-regulation

In order to provide an explanation for the mismatch between the preferences of the European Commission and the policy conducted by Facebook based on the principal-agent framework, we will apply the developed causal mechanism to our case and empirically assess the strength of this mechanism. Figure 3 contains the application of the causal mechanism to the actions of the European Commission and Facebook. Based on our empirical findings, we conclude that the agency costs are a consequence of shirking: Facebook’s interests with regard to political advertisements differed from the preferences of the European Commission, which led to an implemented policy that went directly against the Commission’s intentions. The following paragraphs provide an overview of the various pieces of evidence that support the subsequent steps in the causal mechanism. The assessment of the various pieces of evidence can be found in Annex 2.

Figure 3. Causal mechanism of managing disinformation in political advertisements on Facebook through self-regulation.

Politics of delegation: incomplete contract

The first part of our causal mechanism, which deals with the politics of delegation, assumes that the choice of the Commission for delegation through self-regulation and the self-regulatory framework set out by the online platforms led to a delegation mandate that constituted an incomplete contract.

The European Commission indeed decided to manage disinformation in political advertisements through self-regulation in spring 2018 (Step 0). Why did the European Commission take regulatory action, and why did it opt for self-regulation instead of a legislative initiative? First, it should be recalled that such a legislative initiative was a real possibility. In November 2021, the European Commission submitted a proposal for a Regulation on transparency and targeting of political advertising with the aim of combating disinformation.[3] This shows that there was a sufficient legal basis to propose a legislative initiative in 2018, but that the Commission refrained from this option. The evidence points to a combination of media-centric and politics-centric arguments that motivated the European Commission to choose self-regulation.

With regard to media-related characteristics, the pace and complexity of technological development were considered a barrier to legislative action. In the preparatory documents of the European Commission, it was argued that a strong regulatory response would run the risk of becoming inadequate due to the fast-changing ‘evolution of technologies and digital behaviour patterns’ (European Commission 2018e, 31). In this respect, the best response was considered to be action driven by the stakeholders themselves. The complexity and fast pace of developments in the digital environment were also invoked by the Commission in its Communication on tackling online disinformation to justify its choice for the development of a Code of Practice on Disinformation by the online platforms themselves (European Commission 2018a, 6). This was linked to the problem of information asymmetries. The respondents from the European Commission argued that a self-regulatory process allowed it to tap into the expertise of the online platforms, since they understood best what the problems were and how disinformation was used on their services (Interviews 1, 2 and 3).

In addition, two more politics-centred arguments were put forward to motivate the choice for self-regulation. The first was a substantive argument: since the regulation of online political advertising was linked to the issue of freedom of speech, the choice for self-regulation posed fewer risks of jeopardizing this fundamental value. The preparatory documents of the European Commission stressed the importance of leaving the initiative to the stakeholders themselves, ‘[s]ince filtering out disinformation is difficult to achieve without hitting legitimate content, and is therefore problematic from a freedom of expression perspective’ (European Commission 2018e, 31–32). Similarly, safeguarding the freedom of expression was strongly emphasized in the proposal for the Code of Practice in the Commission’s Communication (European Commission 2018a, 8). A Commission respondent explained why the Code was considered the more suitable policy in this respect:

disinformation is inevitably tied to questions of freedom of speech, which are very sensitive. We felt that a self-regulatory process, whereby the platforms would draft commitments taking into account input from other stakeholders would be a better approach to addressing its problems in the first instance. (Interview 3)

The second argument was linked to the political decision-making process itself, although we found no evidence related to political reticence or conflicting policy proposals (see Dommett and Zhu 2022). Self-regulation was deemed the preferable policy option because it could be implemented more quickly. The main objective of the European Commission was to have measures in place in time for the electoral campaigns of the 2019 European elections (European Commission 2020a, 3, 5; 2020c, 24; 2018a, 11–12; 2018d; 2018e, 25). While a self-regulatory framework could be put in place quickly and certainly before the elections, taking a legislative initiative through the ordinary legislative procedure with subsequent implementation by the EU member states could most probably not be finalized in time. This is confirmed by respondents from the European Commission: ‘getting a Regulation on the topic would be [a] lengthy process with the other institutions involved, whereas we wanted to see action on the process quickly’ (Interview 3), and ‘it was considered that it was likely that a self-regulatory code in the first instance would be more expeditious and something we could put in place more quickly’ (Interview 1) (Evidence 0.1).

The decision of the European Commission to use self-regulation to tackle disinformation led to a transfer of the general objectives with regard to – among other things – political advertising to the online platforms, without the requirement of strong oversight mechanisms (Step 1). To this end, the Commission organized a stakeholder forum with the main goal of producing an EU-wide Code of Practice on Disinformation, drafted by the online platforms (European Commission 2018a, 8–9) (Evidence 1.1.). This approach built on the experience and templates of the 2016 Code of conduct on countering illegal hate speech online (European Commission 2020c, 67; Interview 3). The Commission merely communicated a number of general objectives to the online platforms in order to tackle online disinformation, and not how these objectives should be achieved (Interviews 1 and 2) (Evidence 1.2.). Specifically, with regard to online political advertising, the objective stated that ‘transparency about sponsored content, in particular, political and issue-based advertising’ should be ensured (European Commission 2018a, 7). In addition, the Commission aimed to monitor the implementation of the Code of Practice ‘in broad consultation with stakeholders and on the basis of key performance indicators based on the above objectives’ (European Commission 2018a, 8–9). Yet, these ‘key performance indicators’ had to be developed by the online platforms themselves, thus substantially limiting the Commission’s monitoring abilities. Both an ex-post evaluation study and an interviewee from the European Commission confirmed that these indicators were insufficient to properly monitor the implementation by the online platforms (European Commission 2020a, 16; Interview 1) (Evidence 1.3.).

During the stakeholder forum, the online platforms designed a Code of Practice that left a substantial degree of discretion for the implementation (Step 2). Evidence from EU documents and interviews indicates that the involvement of the European Commission in drafting the Code was limited (Evidence 2.1.). Participation in the Code was voluntary for the online platforms: it only applied to those platforms that agreed to subscribe to it, meaning they could ‘opt out’. The objectives of the European Commission were translated into commitments in the Code. Regarding political advertising, this included the commitment to ensure that all advertisements would be clearly distinguishable from editorial content and the commitment to enable public disclosure of political advertising (European Union 2018a, 5–6). However, the platforms could choose which of the goals they wanted to achieve, and signatories could withdraw from the Code or from specific commitments at any time. The Code explicitly stated that it ‘[did] not commit all Signatories of the Code to sign up to every commitment’ (European Union 2018a, 2) (Evidence 2.2.).

In addition, the social media platforms could not only choose which goals they would pursue, but also how they would do it. The Code did not include specific measures that needed to be taken and even emphasized the flexibility that the signatories had in the implementation of the measures to fulfil their commitments: ‘[since] the various Signatories operate differently, with different purposes, technologies and audiences, the Code allows for different approaches to accomplishing the spirit of the provisions herein’ (European Union 2018a, 2) (Evidence 2.3.). Specifically with regard to political advertising, Facebook stated it would allow ‘advertisers to run political, election related and issue ads, provided they comply with all applicable laws and processes required by Facebook’ (European Union 2018b, 2).

Furthermore, the Code of Practice did not include any stringent monitoring provisions. In line with the Commission guidelines, the Code required the development of a ‘set of Key Performance Indicators’, but these only applied ‘to the Relevant Signatories with respect to their respective commitments’ and were very broad. This included an annual publicly available report and regular meetings to assess the implementation. There was also a possibility for Signatories to signal potential non-compliance by other Signatories, but the most severe consequence was withdrawal from the Code, without any additional sanctions (Evidence 2.4.).

Consequently, the Code of Practice constituted a self-regulatory framework that only included voluntary and incomplete objectives for the online platforms, without any strong oversight mechanisms. Ex-post assessments conducted on behalf of the European Commission as well as interviews confirmed that the Code left substantial room for manoeuvre to the online platforms in how the Code would be applied and how the commitments would be implemented. The European Regulators Group for Audiovisual Media Services stated that ‘the measures of the Code are too general in terms of content and structure. […] [I]t provides space for the signatories to implement measures only partially or, in some cases, not at all’ (ERGA 2020, 3). Similarly, representatives from the media, civil society and academia also criticized the Code, arguing that ‘it contains no common approach, no clear and meaningful commitments, no measurable objectives or KPIs, hence no possibility to monitor process, and no compliance or enforcement tool’ (Sounding Board 2018) (Evidence 2.5.).

The combined evidence shows that the decision of the European Commission to delegate the task of regulating political advertisements to the online platforms themselves – including Facebook – through a Code of Practice resulted in a loose regulatory framework with unspecified objectives and instructions on implementation, and a limited monitoring structure. This left substantial room for manoeuvre to Facebook to implement its policy with regard to political advertisements without interference from the European Commission.

Politics of discretion: shirking and insufficient control

The incomplete contract subsequently led to Facebook implementing the management of political advertising in the most cost-effective way, leading to policy actions that were not in line with the Commission’s intentions (Step 3). The ban on foreign political advertisements in the EU was an extension of Facebook’s policy in the United States. In the fallout of the alleged Russian interference in the 2016 US elections, the online platform decided to require ad buyers to provide a US identification and address (Daskal 2019). This policy was extended to the EU at the beginning of 2019 in preparation for the European elections. However, while Facebook used a single-jurisdiction system in the US, it identified the individual EU member states – rather than the EU as a whole – as the relevant jurisdictions. This meant that political advertisements could only be launched in the member state in which an actor was registered, rendering transnational political campaigns impossible. This was also denounced in the joint letter of the Secretaries-General of the three main institutions: ‘[Facebook] ignores the system of shared competences between the Union level and the national level’. This also affected the European Commission itself, and demonstrates how Facebook’s actions went directly against its interests (Evidence 3.1.).

In order to establish that this was the result of shirking – i.e. the intentional deviating behaviour of Facebook – we need to find evidence that (1) the Commission communicated its preferences regarding the political campaigning of European political parties to Facebook; (2) the Commission disapproved of Facebook’s intention to introduce registration requirements in each EU country before it was implemented; and (3) Facebook took the intentional decision to implement the policy regardless of this disapproval.

On multiple occasions, the European Commission called for increased transparency with regard to political advertising, but without stipulating in detail how this should be achieved or how this should be dealt with regarding European political parties (see e.g. European Commission 2018a, 2, 4–5; 2018c, 1–2). Yet, in its Communication on tackling disinformation, the Commission did point to the need for a ‘European approach’ (European Commission 2018a, 3). Similarly, the European Commission did stipulate that European political parties should reach out to citizens and be able to conduct European campaigns, but it did not refer specifically to online campaigns (European Commission 2018c, 3; 2018d, 4) (Evidence 3.2.). Consequently, based on this prior communication, it is possible that its intentions with regard to Europarty cross-border online campaigning were not entirely clear to Facebook.

However, there is strong evidence that Facebook clarified its intention to force political advertisers – including European political parties and institutions – to register in each country in which they wanted to launch online advertisements, and that the European Commission voiced its disapproval before the policy was implemented. In its implementation reports of January to April 2019, Facebook announced that it would not allow cross-border political advertising (Facebook 2019a, 3; 2019b, 1; 2019c, 3; 2019d, 2; 2019e, 2). In these reports, Facebook also stated that it had consulted the European Commission and had been ‘in a constant dialogue with European Institutions since early February to find the best possible approach for this complex and novel situation’ (Facebook 2019e, 2). A respondent from the European Commission also acknowledged that they made their position clear to Facebook (Interview 1) (Evidence 3.3.). In other words, during the preparation of the implementation of this specific measure, Facebook must have been well aware of the disapproval of the European Commission.

Yet, despite these objections, Facebook decided to implement the policy anyway. That this was an intentional choice by the online platform was asserted by the EU institutions and acknowledged by Facebook itself. For example, in their letter to the representative of Facebook, Nick Clegg, the three main EU institutions wrote that

Facebook transposes the US single jurisdiction system to the European level, identifying the individual Member States as sole jurisdiction for the European elections. […] While the decision not to recognise this specific situation is the commercial choice of a company with regard to its clients or users, it has huge political and institutional consequences. (European Parliament, Council of the European Union and European Commission 2019, 2)

In public statements, representatives of Facebook also admitted that the decision was made intentionally:

‘We weighed the different risks and concluded that the right solution to help best guard against foreign interference is to only allow people to run advertisements in an EU country if they have passed an authorization process confirming they are a resident in that same country,’ a spokeswoman for Facebook said in a statement.

Similarly, in a telephone interview with several journalists, Nick Clegg made clear that one of the motives for implementing the pan-European campaign restrictions was Facebook’s own research (Interview 5; Facebook 2019a) (Evidence 3.5.).

In sum, the evidence suggests that Facebook was informed about the preferences of the Commission regarding political advertisements, but nevertheless persisted in its decision to introduce national registration requirements for political advertisers. The fact that other online platforms (Twitter and Google) made different choices (Evidence 3.4.) indicates that the policy outcome was not the unintended or inevitable consequence of the delegation structure, and adds to our argument that Facebook’s actions constituted a case of shirking.

In the process, it also became clear that the European Commission could not stop the implementation of the deviating outcome due to a lack of control mechanisms (Step 4). As mentioned above (cf. part 4.2.), there are four main oversight instruments that can be used: screening, negative rewards or sanctions, reporting requirements, and institutional checks to reduce the agent’s possible scope of activity. Mainly due to the self-regulatory nature of the delegation structure, these instruments were insufficient to adjust the deviating behaviour of the agent. This was also acknowledged by the European Commission in its evaluation report (Evidence 4.1.). First, with regard to screening and selection, the capabilities of the Commission were limited. By using self-regulation, the Commission had no influence on the selection, but had to rely on those online platforms that became signatories to the Code of Practice.

Second, the Code of Practice – designed by the agents – did not include any sanctioning mechanisms. The capabilities of the Commission regarding this aspect were again minimal. The only ‘stick’ that the Commission set out was the possibility of regulatory action by the end of 2018 if self-regulation was deemed insufficient, as stated in the Commission Communication on the topic: ‘The Commission will assess [the Code’s] implementation. […] Should the results prove unsatisfactory, the Commission may propose further actions, including actions of a regulatory nature’ (European Commission 2018a, 8–9). Of course, even if the European Commission were to come forward with a legislative initiative by the end of 2018, it would be too late for implementation before the 2019 European elections. A speedy and effective implementation was, after all, one of the motivations why the Commission had relied on self-regulation (cf. supra).

Third, the monitoring and reporting requirements were insufficient to prevent a deviating policy implementation by Facebook. The signatories of the Code of Practice had to develop roadmaps setting out the concrete actions and results anticipated in the run-up to the 2019 European elections (European Commission 2018b). Yet, the actual roadmap of Facebook lacked detail regarding the planned actions and did not include a ban on foreign political advertisements (see Annex 1). A second foreseen monitoring instrument was the KPIs, but these were in practice replaced by self-assessment reports (Interview 1). The absence of more specific KPIs was also an important point of criticism by the European Regulators Group for Audiovisual Media Services (ERGA 2020, 3).

Furthermore, concerning these self-assessment reports, the Code included the requirement of an annual report, but the Commission also introduced monthly reports after the creation of the Code of Practice because one report per year was not deemed sufficient to monitor the online platforms (Interview 1).[4] As noted above, Facebook mentioned the ban on cross-border campaigns in its monthly self-assessment reports and discussed the issue with the European Commission. Despite being consulted, however, the Commission was not able to ensure the possibility of transnational campaigns (Evidence 4.2. and 4.3.). In other words, the principal was notified of, but unable to avert, the deviating agent behaviour.

This eventually led to a clear situation of agency costs, since the European political parties and the EU institutions could not conduct cross-border campaigns on Facebook (Step 5). This was explicitly acknowledged by the European political parties and the EU institutions (Evidence 5.1.). In their letter to Facebook’s representative, Nick Clegg, the three main EU institutions stated that the actions of Facebook had ‘huge political and institutional consequences’ (European Parliament, Council of the European Union, European Commission 2019, 1–2). Similarly, the European political parties wrote in their letter to Facebook that ‘[t]hese rules create an unprecedented barrier for European political organisations and political parties. […] With these rules, Facebook is de facto putting barriers in place that will harm the European discourse around a European democracy’ (EPP, PES, ECRP, ALDE, EGP, PEL and EFA 2019, 1–2). These difficulties were confirmed by a senior official of the European People’s Party (Interview 4). Consequently, the actions of Facebook resulted in a situation that was detrimental to the interests of the Europarties and the EU institutions, and went directly against the preferences of the European Commission.

Conclusion and discussion

The main objective of this paper has been to explain why the European Commission opted for self-regulation to manage online political advertising and to examine the consequences of this policy choice. Many countries are considering regulatory action, and previous research has highlighted the barriers that rule-makers face in regulating digital platforms. In line with existing studies, we found that the incentive structure for regulatory action is composed of a combination of media-centric and politics-centric factors. Regarding the media-centric elements, the Commission justified its choice of self-regulation by referring to the pace and complexity of technological development in the digital environment: the online platforms allegedly had the expertise required to tackle problems with disinformation on their services.

The two politics-centric arguments identified in this paper have not been highlighted by scholars before. First, because of the sensitivity of freedom of speech, the Commission deemed self-regulation the more appropriate route to improve the transparency of online political advertising. Second, the need for swift results was a further consideration in opting for self-regulation: a legislative initiative would take too much time and would not be in place before the European elections. Consequently, while the proximity of the elections was in itself an important incentive for regulatory action by the European Commission, it also constituted an important constraint on the type of action that the Commission considered most appropriate. Future research should therefore focus not only on the factors that foster or hamper regulatory action regarding online political advertising – and platform governance more broadly – but also on the incentives and constraints that policy-makers experience regarding the type of action. The variety of policy actions taken by governments around the world can serve as an agenda for future research in this respect.

With regard to the consequences of the policy choice, this paper specifically examined the mismatch between the European Commission’s intentions for the 2019 European elections and the policy on political advertising implemented by Facebook. While the Commission aimed to improve the EU-wide dimension of these elections by fostering transnational campaigning, Facebook made such cross-border political advertising impossible for the European political parties and EU institutions. The use of a principal-agent framework in combination with a process-tracing analysis explained why and how the decision of the European Commission to use the tool of self-regulation caused this unintended policy outcome. In this respect, the findings are in line with previous studies that have pointed to problems when policy content is regulated through intermediaries (e.g. Brown 2010; Keller 2018; Korff 2014).

The self-regulatory framework materialized in a Code of Practice on Disinformation, drawn up by the online platforms themselves. The result was an incomplete contract with vague objectives and limited monitoring provisions. Specifically with regard to political advertising, the Code was limited to calls to increase transparency. The voluntary nature of the Code meant that Facebook could opt out of certain commitments and enjoyed considerable leeway in the implementation. In combination with the limited monitoring provisions, this granted Facebook a substantial degree of discretion over how the Code was eventually operationalized, and led to a policy on political advertising that went directly against the preferences of the European Commission. Facebook, following the most cost-effective strategy, replicated in the EU its policy on political advertisements from the United States, the country with the greatest leverage over the company (Kreiss and McGregor 2018, 159). Consequently, while information asymmetries have been identified as barriers to legislative action (Dommett and Zhu 2022), or as an incentive for self-regulation, our analysis shows that such asymmetries are also an important source of policy drift. The monitoring provisions were unable to reverse the decision: despite the fact that the Commission explicitly expressed a different preference, Facebook maintained its course. In contrast to previous research on the topic (see e.g. Héritier and Eckert 2008), the potential threat of legislative action was unable to change this.

While the findings of this paper are mainly limited to the case under analysis, they can serve as a source of reference for future studies. On a methodological level, the paper has shown the added value of combining a principal-agent framework with a process-tracing approach. The latter can be a useful tool to examine how principal-agent concepts – such as incomplete contracts and monitoring – work together to produce a situation of agency costs. The use of mechanistic evidence, in line with the process-tracing methodology, is important to verify the presence, interplay and impact of these concepts in an analysed case. On a more theoretical level, future research can further refine the causal mechanism of how self-regulation can lead to agency costs. Specifically with regard to the regulation of online advertising, the analysis of new cases should attempt to test and refine the mechanism in cases with different regulatory choices and contextual elements.


Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was supported by the Research Foundation Flanders (FWO) (1294122N).

Notes

1 Only after relentless pressure from the EU institutions did Facebook agree to make an exception for European political parties and EU institutions in the final weeks of the electoral campaign.

2 More specifically, the European Commission refers to Articles 114 and 224 TFEU as a legal basis for its legislative proposal.

3 Proposal for a Regulation on the transparency and targeting of political advertising (COM(2021) 731 final). The proposal is currently being discussed by the European Parliament and the Council of Ministers.

4 In total, five monthly reports were drafted from January until May 2019 (European Commission 2019).

References

  • Academic references
  • Ananny, M., and T. Gillespie. 2017. “Public Platforms: Beyond the Cycle of Shocks and Exceptions.” Paper presented at Interventions: Communication Research and Practice, 67th Annual Conference of the International Communication Association, San Diego, CA, May 25–29.
  • Beach, D., and R. B. Pedersen. 2016. Causal Case Study Methods: Foundations and Guidelines for Comparing, Matching, and Tracing. Ann Arbor: University of Michigan Press.
  • Bimber, B. 2014. “Digital Media in the Obama Campaigns of 2008 and 2012: Adaptation to the Personalized Political Communication Environment.” Journal of Information Technology and Politics 11 (2): 130–150. doi:10.1080/19331681.2014.895691.
  • Bradley, C. A., and J. G. Kelley. 2008. “The Concept of International Delegation.” Law and Contemporary Problems 71 (1): 1–36.
  • Brown, I. 2010. “Beware Self-Regulation.” Index on Censorship 39 (1): 98–106. doi:10.1177/0306422010362193.
  • Brown, N., and J. Peters. 2018. “Say This, Not That: Government Regulation and Control of Social Media First Amendment Symposium.” Syracuse Law Review 68 (3): 521–546.
  • Cooley, A., and H. Spruyt. 2009. Contracting States: Sovereign Transfers in International Relations. Princeton: Princeton University Press.
  • Cremonesi, C., A. Seddone, G. Bobba, and M. Mancosu. 2019. “The European Union in the Media Coverage of the 2019 European Election Campaign in Italy: Towards the Europeanization of the Italian Public Sphere.” Journal of Modern Italian Studies 24 (5): 668–690. doi:10.1080/1354571X.2019.1681686.
  • da Conceição-Heldt, E. 2013. “Do Agents ‘Run Amok’? A Comparison of Agency Slack in the EU and US Trade Policy in the Doha Round.” Journal of Comparative Policy Analysis: Research and Practice 15 (1): 21–36. doi:10.1080/13876988.2012.754152.
  • da Conceição, E. 2010. “Who Controls Whom? Dynamics of Power Delegation and Agency Losses in EU Trade Politics.” Journal of Common Market Studies 48 (4): 1107–1126. doi:10.1111/j.1468-5965.2010.02086.x.
  • Daskal, J. 2019. “Speech Across Borders.” Virginia Law Review 105 (8): 1605–1666.
  • Delreux, T., and J. Adriaensen. 2017. “Introduction. Use and Limitations of the Principal–Agent Model in Studying the European Union.” In The Principal Agent Model and the European Union, Palgrave Studies in European Union Politics, edited by T. Delreux, and J. Adriaensen, 1–34. Cham: Springer International Publishing.
  • Dijkstra, H. 2014. “Approaches to Delegation in EU Foreign Policy: The Case of the Commission.” In New Approaches to EU Foreign Policy, edited by M. Wilga, and I. Pawel Karolewski, 38–56. Abingdon: Routledge.
  • Dommett, K., and S. Power. 2019. “The Political Economy of Facebook Advertising: Election Spending, Regulation and Targeting Online.” The Political Quarterly 90 (2): 257–265. doi:10.1111/1467-923X.12687.
  • Dommett, K., and J. Zhu. 2022. “The Barriers to Regulating the Online World: Insights from UK Debates on Online Political Advertising.” Policy & Internet 14 (4): 772–787.
  • Elsig, M. 2007. “Delegation and Agency in EU Trade Policy-Making: Bringing Brussels Back In.” Paper prepared for delivery at the EUSA Conference, Montreal, Canada, May 17–19, 2007.
  • Flew, T., F. Martin, and N. Suzor. 2019. “Internet Regulation as Media Policy: Rethinking the Question of Digital Communication Platform Governance.” Journal of Digital Media & Policy 10 (1): 33–50. doi:10.1386/jdmp.10.1.33_1.
  • Franchino, F. 2000a. “Control of the Commission's Executive Functions: Uncertainty, Conflict and Decision Rules.” European Union Politics 1 (1): 63–92. doi:10.1177/1465116500001001004.
  • Franchino, F. 2000b. “The Commission's Executive Discretion, Information and Comitology.” Journal of Theoretical Politics 12 (2): 155–181. doi:10.1177/0951692800012002002.
  • Furness, M. 2013. “Who Controls the European External Action Service? Agent Autonomy in EU External Policy.” European Foreign Affairs Review 18 (1): 103–126. doi:10.54648/EERR2013006.
  • Gibson, R. 2015. “Party Change, Social Media and the Rise of ‘Citizen-Initiated’ Campaigning.” Party Politics 21 (2): 183–197. doi:10.1177/1354068812472575.
  • Gibson, R., and I. McAllister. 2015. “Normalising or Equalising Party Competition? Assessing the Impact of the Web on Election Campaigning.” Political Studies 63 (3): 529–547. doi:10.1111/1467-9248.12107.
  • Guess, A., and B. Lyons. 2020. “Misinformation, Disinformation, and Online Propaganda.” In Social Media and Democracy: The State of the Field, Prospects for Reform, edited by N. Persily, and J. Tucker, 10–33. Cambridge: Cambridge University Press.
  • Hawkins, D. G., D. A. Lake, D. L. Nielson, and M. J. Tierney. 2006. “Delegation Under Anarchy: States, International Organizations, and Principal-Agent Theory.” In Delegation and Agency in International Organizations, Political Economy of Institutions and Decisions, edited by D. G. Hawkins, D. A. Lake, D. L. Nielson, and M. J. Tierney, 3–38. Cambridge: Cambridge University Press.
  • Héritier, A., and S. Eckert. 2008. “New Modes of Governance in the Shadow of Hierarchy: Self-Regulation by Industry in Europe.” Journal of Public Policy 28 (1): 113–138. doi:10.1017/S0143814X08000809.
  • Hix, S., and M. Marsh. 2011. “Second-order Effects Plus pan-European Political Swings: An Analysis of European Parliament Elections Across Time.” Electoral Studies 30 (1): 4–15. doi:10.1016/j.electstud.2010.09.017.
  • Jacobs, K., and N. Spierings. 2016. Social Media, Parties, and Political Inequalities. Basingstoke: Palgrave Macmillan.
  • Keller, D. 2018. Internet Platforms: Observations on Speech, Danger, and Money (Aegis Series Paper no. 1807). A Hoover Institution essay, Stanford University, California.
  • Kiewiet, R., and M. D. McCubbins. 1991. The Logic of Delegation. Chicago: University of Chicago Press.
  • Kirk, N., and L. Teeling. 2022. “A Review of Political Advertising Online During the 2019 European Elections and Establishing Future Regulatory Requirements in Ireland.” Irish Political Studies 37 (1): 85–102. doi:10.1080/07907184.2021.1907888.
  • Koc-Michalska, K., D. G. Lilleker, T. Michalski, R. Gibson, and J. M. Zajac. 2021. “Facebook Affordances and Citizen Engagement During Elections: European Political Parties and Their Benefit from Online Strategies?” Journal of Information Technology & Politics 18 (2): 180–193. doi:10.1080/19331681.2020.1837707.
  • Korff, D. 2014. The Rule of Law on the Internet and in the Wider Digital World (Issue Paper). Strasbourg: Council of Europe Commissioner for Human Rights.
  • Kreiss, D., and S. McGregor. 2018. “Technology Firms Shape Political Communication: The Work of Microsoft, Facebook, Twitter, and Google with Campaigns During the 2016 U.S. Presidential Cycle.” Political Communication 35 (2): 155–177. doi:10.1080/10584609.2017.1364814.
  • Kreiss, D., and S. McGregor. 2019. “The ‘Arbiters of What Our Voters See’: Facebook and Google’s Struggle with Policy, Process, and Enforcement Around Political Advertising.” Political Communication 36 (4): 499–522. doi:10.1080/10584609.2019.1619639.
  • Laloux, T. 2017. “Designing a Collective Agent for Trilogues in the European Parliament.” In The Principal Agent Model and the European Union, Palgrave Studies in European Union Politics, edited by T. Delreux, and J. Adriaensen, 83–103. Cham: Springer International Publishing.
  • Lane, J., and J. Kivisto. 2008. “Interests, Information, and Incentives in Higher Education: Principal-Agent Theory and its Potential Applications to the Study of Higher Education Governance.” In Higher Education, edited by J. Smart, 141–179. Dordrecht: Springer.
  • Leerssen, P., J. Ausloos, B. Zarouali, N. Helberger, and C. H. de Vreese. 2019. “Platform ad Archives: Promises and Pitfalls.” Internet Policy Review 8 (4). doi:10.14763/2019.4.1421.
  • Lilleker, D., K. Koc-Michalska, R. Negrine, R. Gibson, T. Vedel, and S. Strudel. 2019. Social Media Campaigning in Europe. Abingdon: Routledge.
  • Marwick, A., and R. Lewis. 2017. Media Manipulation and Disinformation Online. New York: Data & Society Research Institute.
  • OECD. 2019. Regulatory Effectiveness in the Era of Digitalisation. Paris: OECD.
  • Ohlin, J. D., and D. Hollis. 2021. Defending Democracies: Combating Foreign Election Interference in a Digital Age. Oxford: Oxford University Press.
  • Pasquino, G., and M. Valbruzzi. 2019. “The 2019 European Elections: A ‘Second-Order’ Vote with ‘First-Order’ Effects.” Journal of Modern Italian Studies 24 (5): 736–756. doi:10.1080/1354571X.2019.1681706.
  • Picard, R., and V. Pickard. 2017. Essential Principles for Contemporary Media and Communications Policymaking. Oxford: Reuters Institute for the Study of Journalism.
  • Pollack, M. 1997. “Delegation, Agency, and Agenda Setting in the European Community.” International Organization 51 (1): 99–134. doi:10.1162/002081897550311.
  • Pollack, M. 1999. “Delegation, Agency and Agenda Setting in the Treaty of Amsterdam.” European Integration Online Papers (EIoP) 3.
  • Reykers, Y., and D. Beach. 2017. “Process-Tracing as a Tool to Analyse Discretion.” In The Principal Agent Model and the European Union, Palgrave Studies in European Union Politics, edited by T. Delreux, and J. Adriaensen, 255–281. Cham: Springer International Publishing.
  • Righetti, N., F. Giglietto, A. E. Kakavand, A. Kulichkina, G. Marino, and M. Terenzi. 2022. Political Advertisement and Coordinated Behavior on Social Media in the Lead-Up to the 2021 German Federal Elections. Dusseldorf: Media Authority of North Rhine-Westphalia.
  • Risso, L. 2018. “Harvesting Your Soul? Cambridge Analytica and Brexit.” In Brexit Means Brexit: The Selected Proceedings of the Symposium, edited by C. Jansohn, 75–90. Mainz: Akademie der Wissenschaften und der Literatur.
  • Rogers, R., and S. Niederer. 2020. The Politics of Social Media Manipulation. Amsterdam: Amsterdam University Press.
  • Tallberg, J. 2002. “Delegation to Supranational Institutions: Why, How, and with What Consequences?” West European Politics 25 (1): 23–46. doi:10.1080/713601584.
  • Thatcher, M., and A. S. Sweet. 2002. “Theory and Practice of Delegation to Non-Majoritarian Institutions.” West European Politics 25 (1): 1–22. doi:10.1080/713601583.
  • Wolfs, W. 2022. European Political Parties and Party Finance Reform: Funding Democracy? London: Palgrave Macmillan.
  • Wolfs, W., G.-J. Put, and S. Van Hecke. 2021. “Explaining the Reform of the Europarties’ Selection Procedures for Spitzenkandidaten.” Journal of European Integration 43 (7): 891–914. doi:10.1080/07036337.2021.1876687.
  • Zuiderveen Borgesius, F., J. Möller, S. Kruikemeier, R. Ó. Fathaigh, K. Irion, T. Dobber, B. Bodo, and C. H. de Vreese. 2018. “Online Political Microtargeting: Promises and Threats for Democracy.” Utrecht Law Review 14 (1): 82–96. doi:10.18352/ulr.420.
  • Interviews
  • Interview 1. Policy Officer in DG CONNECT (European Commission), telephone interview.
  • Interview 2. Government Affairs and Public Policy Manager at the Google Brussels Office, telephone interview.
  • Interview 3. Policy Officer in DG CONNECT (European Commission), telephone interview.
  • Interview 4. Senior Official of the European People’s Party, Brussels.
  • Interview 5. Transcript of media interview with Nick Clegg, Facebook Vice-President of Global Affairs and Communications.
  • Primary Documents
  • EPP, PES, ECRP, ALDE, EGP, PEL and EFA. 2019. Letter to Mark Zuckerberg, Brussels, 18 April 2019.
  • European Commission. 2015. Report on the 2014 European Parliament Elections, Brussels.
  • European Commission. 2018a. Communication from the European Commission: Tackling Online Disinformation – A European Approach (COM(2018) 236 final), Brussels, 17 pp.
  • European Commission. 2018b. Roadmaps to implement the Code of Practice on disinformation, Brussels.
  • European Commission. 2018c. Communication from the European Commission: Securing free and fair European elections (COM(2018) 637 final), Brussels, 12 September 2018, 10 pp.
  • European Commission. 2018d. Recommendation from the European Commission on enhancing the European nature and efficient conduct of the 2019 elections to the European Parliament (C(2018) 900 final), Brussels, 14 February 2018, 6 pp.
  • European Commission. 2018e. A multi-dimensional approach to disinformation: Report of the independent High level Group on fake news and online disinformation, Brussels, March 2018, 41 pp.
  • European Commission. 2019. Tackling online disinformation, Brussels.
  • European Commission. 2020a. Commission Staff Working Document: Assessment of the Code of Practice on Disinformation – Achievements and areas for further improvement (SWD(2020) 180 final), Brussels, 28 pp.
  • European Commission. 2020b. Annex to the Communication from the European Commission: Commission Work Programme 2021 – A Union of Vitality in a World of Fragility (COM(2020) 690 final), Brussels, 26 pp.
  • European Commission. 2020c. Study for the “Assessment of the implementation of the Code of Practice on Disinformation”: Final Report, Luxembourg, 139 pp.
  • European Parliament, Council of the European Union and European Commission. 2019. Letter to Mr Nick Clegg – Facebook (D 200794), Brussels, 16 April 2019, 2 pp.
  • European Regulators Group for Audiovisual Media Services. 2020. ERGA Report on disinformation: Assessment of the implementation of the Code of Practice. Retrieved from https://erga-online.eu/wp-content/uploads/2020/05/ERGA-2019-report-published-2020-LQ.pdf.
  • European Union. 2018a. Code of Practice on Disinformation, Brussels, 12 pp.
  • European Union. 2018b. Annex II Current Best Practices from Signatories of the Code of Practice, Brussels, 14 pp.
  • Facebook. 2019a. Facebook Baseline Report on Implementation of the Code of Practice on Disinformation, January 2019, 11 pp.
  • Facebook. 2019b. Facebook January 2019 Update on Implementation of the Code of Practice on Disinformation, January 2019, 17 pp.
  • Facebook. 2019c. Facebook February update on implementation of the Code of Practice on Disinformation, February 2019, 17 pp.
  • Facebook. 2019d. Facebook March 2019 monthly update on implementation of the Code of Practice on Disinformation, March 2019, 20 pp.
  • Facebook. 2019e. Facebook Reports on Implementation of the Code of Practice on Disinformation, April 2019, 40 pp.
  • Sounding Board. 2018. The Sounding Board’s Unanimous Final Opinion on the So-Called Code of Practice, Brussels, 24 September 2018.
  • Newspaper Articles
  • IDEA International. 2021. “First national Code of Conduct on online political advertising in the European Union signed by Dutch political parties and global online platforms.” Press Release, February 9. https://www.idea.int/news-media/news/first-national-code-conduct-online-political-advertising-european-union-signed-dutch.
  • Kayali, L., and M. De La Baume. 2019. “EU on Facebook ad rules.” Politico Europe, April 16. https://www.politico.eu/article/eu-institutions-blast-facebook-over-political-advertising-rules-social-media-european-election-system/.