Research Article

Securing the platform: how Google appropriates security


ABSTRACT

Google is developing a growing array of security products for its users, businesses, and national security actors such as the US Department of Defense. However, the company and its employees struggle over whether, and how, Google should be involved in practices of security, war, or weaponry. To unpack how Google emerges as a security actor, I bring new media studies perspectives on the socio-political roles Google plays in today’s society into critical security studies. With this interdisciplinary approach to studying Big Tech’s role in security, this article analyses how Google appropriates security throughout its ecosystem of platforms, products and projects. The article illustrates that Google’s first and foremost objective is to secure its platform by carefully balancing between being perceived as both neutral and progressive. Google thus appropriates (in)security by developing seemingly mundane and neutral security products, services and projects that align with its platform logic. In doing so, Google locks new users into its platforms, whilst reshaping (in)security issues into platform issues and identifying the platform as a public and security concern.

Introduction

In April 2018, over 3,000 Google employees sent an open letter to the company’s CEO Sundar Pichai protesting Google’s involvement in Project Maven (Shane and Wakabayashi Citation2018). As part of this pilot, Google developed artificial intelligence (AI) programmes to analyse drone footage for the United States Department of Defense (US DoD). In the letter, the employees argue that ‘this contract puts Google’s reputation at risk and stands in direct opposition to our core values. Building this technology to assist the US Government in military surveillance – and potentially lethal outcomes – is not acceptable’ (Letter to Google C.E.O Citation2018). In the following two months, a dozen employees resigned and nearly four thousand signed an internal petition asking the company ‘to immediately cancel the contract and institute a policy against taking on future military work’ (as cited by Conger Citation2018a). On 1 June 2018, Google announced that it would not pursue a follow-up contract in 2019 (Conger Citation2018b).

Shortly after the employee protests, Google published a list of seven ‘AI principles’ about how AI applications should be developed and used. Google’s CEO Sundar Pichai stated that ‘as a leader in AI, we feel a deep responsibility to get this right’ (Pichai Citation2018). AI should, for example, be ‘socially beneficial’, ‘avoid creating or reinforcing unfair bias’ and ‘incorporate privacy design principles’ (Ibid). The list of AI principles ends with a disclaimer, however: ‘while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas [including] cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue’ (Ibid). This sentiment was confirmed in late 2019, when Google’s senior management argued that it is ‘eager to do more’ for the Pentagon and that the decision to step out of Project Maven was ‘not a broader statement about our willingness or our history about working with the Department of Defense’ (Tucker Citation2019).

The clash between Google’s senior management and its employees regarding Project Maven signals the company’s struggle with whether, and how, it should be involved in practices of security, war, or weaponry. Google’s senior management wants to develop (AI-powered) security products, as the example of Project Maven shows and as this article will further corroborate. At the same time, Google has been hesitant to take on security responsibilities bestowed on it by US and EU regulators. Content moderation, most prominently, is argued to go against Google’s main commercial objective of positioning itself as an ‘open, neutral, egalitarian and progressive’ platform, as it forces Google to make inherently political choices about what is allowed on its platforms (Gillespie Citation2010, 352). The resulting ambiguity surrounding Google’s approach to security raises questions about what Google’s role in, and impact on, security will be.

Within critical security studies, researchers have extensively examined the role of private military and security companies (PMSCs) to understand the dynamic between commercialism and security (Avant and Sigelman Citation2010; Cutler Citation2009; Krahmann Citation2017; Leander Citation2005a, Citation2005b, Citation2010). This literature has illustrated how PMSCs create ‘a market for force’ (Leander Citation2005a) and simultaneously become integrated into ‘global security assemblages’ in which distinctions between public and private have become blurred (Abrahamsen and Williams Citation2009, 1; Hoijtink Citation2014). This literature has been widened to analyse ‘how and why an increasing number of private actors beyond PMSCs have come to perform various security-related functions’ (Bures and Carrapico Citation2017, 230). For these ‘non-security actors’, security is a new responsibility that is ‘grafted onto’ their daily practices (De Goede Citation2018, 2, building on Murray Li Citation2007). ‘The public/private relation here is one of friction, tension and contradiction’ (De Goede Citation2018, 26). Hospitals, for instance, need to train staff to recognise potential security/radicalisation threats while caring for their patients (Heath-Kelly Citation2017), and banks are obligated to adopt extensive procedures and detection technologies to police financial transactions (Favarel-Garrigues, Godefroy, and Lascoumes Citation2011). Over time, these ‘non-security’ actors ‘learn to see the world through a security lens’ (De Goede Citation2018, 26), albeit reluctantly. This happens through the process of appropriation – broadly understood as a process of making something one’s own – of security norms, objectives, and, at times, specific devices such as terror watch lists (Amicelle and Jacobsen Citation2016; De Goede Citation2018; Favarel-Garrigues, Godefroy, and Lascoumes Citation2008, Citation2011). Appropriation captures how non-security actors embed security into their pre-existing commercial practices. This process is a two-way street: it is a dialogue between the existing practices and the new that leaves both altered as a result (Amicelle and Jacobsen Citation2016).

Apart from recent research that focuses on the impact of Project Maven on military practices (Hoijtink and Planqué-van Hardeveld Citation2022; Suchman Citation2020), critical security studies has yet to include ‘Big Tech’ companies like Google in the debate on non-security actors. This article contributes to this debate by drawing attention to how Big Tech companies emerge as security actors. It does so by asking ‘how does Google appropriate security throughout its ecosystem of platforms, products and projects?’ This focus on Big Tech has been informed by research in new media studies that treats Big Tech companies as important socio-political actors. Big Tech companies are complex global ecosystems that ‘collectively operate an exclusive set of competing-cum-coordinating platforms that reign the core of the world’s digital information systems’ (van Dijck Citation2020a, 2). Google, for instance, has been argued to be a ‘curator of public discourse’ (Gillespie Citation2010, 347) and even a shaper of the public space (Van Dijck, Poell, and de Waal Citation2018) due to its many platforms that ‘influence the very texture of society and the process of democracy’ (Van Dijck Citation2020b, 2). Google navigates tensions within its ecosystem by deploying platform politics, which entails a discursive balancing act aimed at presenting Google as a neutral and progressive platform (Gillespie Citation2010).

In this article, I bring new media studies’ insights into Google’s global ecosystem together with critical security studies’ conceptual focus on appropriation. Appropriation draws our attention to how Google navigates (and/or capitalises on) possible tensions between its commercial logic and new security tasks. This interdisciplinary perspective allows this article to illustrate that Google’s first and foremost objective is to secure the platform. Google appropriates (in)security by developing mundane and seemingly neutral security products, services and projects that align with its platform logic, infusing (in)security issues with that logic. In doing so, Google locks new users into its platforms, reshapes (in)security issues into platform issues, and simultaneously approaches its ‘private’ platform issues as public and security concerns.

To capture the widely varied practices of Google and allow room for the range of perspectives of its employees, this article examines how Google appropriates security across its ecosystem (drawing inspiration from Ananny and Crawford Citation2018).

The argument develops as follows. First, I explain how a focus on appropriation allows for a detailed study of how Google emerges as a security actor. Second, building on insights from new media studies, I unpack Google’s commercial practices and underlying platform model. Third, I present a clustering of Google’s security work in Table 1, and then unpack for each cluster how security and commercial objectives become entangled. The conclusion reflects on the wider implications this research holds for our understanding of Big Tech and their impact on (in)security. It argues that Google’s appropriation of security reinforces the notion that its (AI-driven) security initiatives are neutral, whilst weaving itself deeper into the fabric of digital society and the security practices aimed at keeping it safe. By doing so, Google reinforces the perceived need for algorithmic security (Amoore and Raley Citation2017) and the notion that Google is a security actor that can and should regulate itself.

Table 1. Google’s security appropriations 2014–2018.

Appropriating security

To examine how Google emerges as a security actor, this section turns to literature in critical security studies on banks that are tasked with specific security responsibilities by public authorities (Bures and Carrapico Citation2017; De Goede Citation2018). Despite Google seeming less reluctant than banks, this literature provides a fruitful analytical focus to understand and capture Google’s security work: the process of appropriation.

Critical security studies scholars have examined the dynamic between the logics that drive public and private actors (Bures and Carrapico Citation2017). Where public actors deploy a ‘better safe than sorry’ logic to security, private actors act from a ‘profit first’ logic which causes them to ‘pursue different options from the ones followed by public agencies’ (Bures and Carrapico Citation2017, 234). For instance, when banks are forced to adopt a ‘law enforcement role’ (Favarel-Garrigues, Godefroy, and Lascoumes Citation2011, 182), bankers approach this role reluctantly because treating the financial practices of their clients confidentially is key to their commercial logic. Banking actors thus attempt to make these responsibilities fit their own objectives. This ‘involves a process of authorisation and appropriation’ of the ‘security lens’ (De Goede Citation2018, 26; Favarel-Garrigues, Godefroy, and Lascoumes Citation2008, Citation2011).

Contributing to this literature – which loosely refers to appropriation to describe the process of making something one’s own – Amicelle and Jacobsen foreground appropriation as ‘an analytical focus’ to examine how banks appropriate ‘sanction, watch and regulatory lists’ (Amicelle and Jacobsen Citation2016, 89–91). These lists function as security devices that ‘allow sanctions regimes and regulations on “dirty money” to be made operational on a daily basis’ (Amicelle and Jacobsen Citation2016, 90). Building on the work of Crespin and Lascoumes (Citation2000, 134), Amicelle and Jacobsen conceptualise the process of appropriation as ‘the dynamic process of dialogue and reflection between an object, [the lists], and a “space of activities”’ it is introduced to, in this case ‘finance in general, banks in particular’ (Amicelle and Jacobsen Citation2016, 91). Their research on compliance officers in the United Kingdom illustrates that terrorist watch-lists need to function alongside existing ‘filtering engines’ that flag suspicious transactions. In the process of making these two devices fit together, the security list is transformed into a ‘compliance device’ which banks can use to prove to their regulators that their practices are in line with (inter)national security regulations (Ibid). The need to show compliance with international regulations is rooted in the banks’ commercial logic, namely securing the integrity and stability of their banking practices to secure profits and to minimise reputational risks. As ‘[t]he combination of lists with high-tech filtering equipment [… helps to] systematise documentation on every flagged transaction/client’, the security list also functions as a ‘disciplinary device’ which assists state authorities in holding banks accountable (Amicelle and Jacobsen Citation2016, 91, 93). Lastly, by comparing these findings with their research in India, the authors highlight the importance of the pre-existing practices that are present in each case: ‘it is how, precisely, this “dialogue” is held that matters’ (Amicelle and Jacobsen Citation2016, 91). In India, where the list is brought into dialogue with Know-Your-Customer programmes, the list is transformed from an (inter)national security device into an ‘exclusionary and inclusionary device’ that assists bankers in determining which customers pose financial risks to their business and who receives the benefit of the doubt (Ibid). Appropriation thus serves as a helpful analytical focus for examining how reluctant non-security actors make security fit with their own commercial practices and how these are altered in turn. Examining appropriation sheds light on the dynamic between commercialism and security, or, as stated by Amicelle and Jacobsen, ‘the twin and overlapping processes of securitisation of finance and financialization of security’ (Amicelle and Jacobsen Citation2016, 102).

Like banks and other non-security actors, Google is faced with new security responsibilities that are an uncomfortable fit with its commercial objectives, such as preventing the spread of terrorist content on its social media platform YouTube and providing state authorities with data on its users. Despite important differences between the banking sector and Google, a parallel can be drawn between how these actors reluctantly appropriate security. Focusing on the process of appropriation, however, also reveals that Google is willing to be involved in certain security work. Put differently, this focus reveals how Google can capitalise on some of its security appropriations. Whether a non-security actor takes on security responsibilities as a strategy of defensive compliance or of profit maximisation (as in the case of cultural appropriation; Cattien and Stopford Citation2022), both can be considered appropriation, as each involves taking something that lies outside of one’s business model and making it fit. Understanding appropriation as a dialogue that alters both the new and the existing practices allows this article to go beyond ‘the zero-sum game’ of the ‘Googlization’ of security (Vaidhyanathan Citation2012) versus the securitisation of Google. Instead, the process of appropriation enables a nuanced study of how Google emerges as a security actor and of the implications of how security becomes entangled with its commercial practices.

Securing the business model

This section discusses the history of Google to help us understand its commercial interests and how ‘platform politics’ play a role in balancing its business model. These insights enable me to trace how Google appropriates security into these commercial practices.

Back in 1998, when Google was first launched, its search engine was based on an algorithm called PageRank, which ranked search results by analysing the link structure of the web (Page et al. Citation1998, 15). Two years later, with the launch of AdWords, Google started selling targeted advertisements. Only in 2009 did Google turn to ‘interest-based advertising’. Up until that point, Google determined the interests of the user by analysing their key search words or the content of the website the user was visiting. ‘Interest-based advertising’, by contrast, allows advertisers to target advertisements based on the user’s ‘categories of interest’, determined by analysing the various types of websites they visit over time (Wojcicki Citation2009). It is this supposed ‘ability to predict the future – specifically the future of human behaviour’ that lies at the centre of Google’s business model (Zuboff Citation2019, 2). In a critical analysis of Google’s commercial logic, which she calls ‘surveillance capitalism’, Shoshana Zuboff argues that, through Google’s extraction of value from user data, its users ‘became a means to profits in new behavioural futures markets in which users are neither buyers nor sellers nor products’ and are instead ‘a free source of raw material’ (Zuboff Citation2019, 3). This material is used ‘to predict and modify human behaviour as a means to produce revenue and market control’ (Zuboff Citation2015, 75).

Google capitalises on its expertise in large-scale data analysis across its ecosystem (Birch and Cochrane Citation2022; Wood and Monahan Citation2019), which includes an operating system (Android), a cloud hosting solution (Google Cloud Platform), a machine learning platform (TensorFlow), a smart home automation system (Google Nest Home), and a self-driving car (now the independent company Waymo). It is this ability to capitalise on large-scale data analysis that makes Google more than a social media company and that gives it financial room to invest in new products and services for new markets.

New media scholars have predominantly focused on the underlying logic of targeted advertising, as this is how Google makes most of its profit and revenue. Targeted advertising is based on a ‘three-sided market’ (Rieder and Sire Citation2014, 197), comprising the users who search for information, the content providers who post that information online, and the advertisers who want to reach the users. Google needs to cater to all three sides to keep them invested in the platform: ‘the sides are linked: if one group is absent, the demand from the others tends to disappear’ (Rieder and Sire Citation2014, 199). Some researchers, most notably Shoshana Zuboff, have questioned the equal distribution of importance between the users, content providers and advertisers. Zuboff states that users have been reduced to a resource: ‘Google is “formally indifferent” to what its users say or do, as long as they say it and do it in ways that Google can capture and convert into data’ (Zuboff Citation2015, 79). While this is an important critique, the fact remains that Google needs all three groups to keep its core platforms running (e.g. Google Search, YouTube). Despite Google’s investments in other business ventures, such as Google Cloud, which is aimed directly at businesses, Google’s advertisement platforms remain its most important revenue stream. And keeping all three groups engaged is a challenge. As new media scholar Tarleton Gillespie argues, tensions arise ‘between user-generated and commercially-produced content, between cultivating community and serving up advertising, between intervening in the delivery of content and remaining neutral’ (Gillespie Citation2010, 348). Google aims to solve these tensions by engaging in ‘platform politics’, which entails strategically positioning its services using the discursive frame of ‘the platform’. According to Gillespie, the frame of the platform lends itself well to discursive positioning as it holds four different meanings:

‘computational, something to build upon and innovate from; political, a place from which to speak and be heard; figurative, in that the opportunity is an abstract promise as much as a practical one; and architectural, in that [the platform] is designed as an open-armed, egalitarian facilitation of expression, not an elitist gatekeeper with normative and technical restrictions’. (Gillespie Citation2010, 352)

By emphasising one or multiple meanings of the word ‘platform’ Gillespie argues that ‘digital intermediaries’ can position themselves as ‘open, neutral, egalitarian and progressive support for activity’ (Ibid). He illustrates this by showing how YouTube presents itself as a ‘facilitator, supporter, [and/or] host’ instead of a ‘gatekeeper’ or ‘curator’ to its users, and as a platform of ‘commercial opportunity’ to its advertisers (Gillespie Citation2010, 352, 354). As Gillespie puts it, making use of these discursive practices represents ‘an attempt to establish the very criteria by which these technologies [platforms] will be judged, built directly into the terms by which we know them’ (Gillespie Citation2010, 359). Platform politics thus enable Google to navigate the tensions between the various sides of its market and the competing interests within its ecosystem more broadly. With security not being the focus of its business model, this article assumes that Google appropriates security from this platform perspective, whether it be to maximise commercial opportunities or minimise reputational risks. In the next section, I therefore pay close attention to how platform politics and security objectives become entangled.

Mapping Google’s security appropriations

This section examines how Google appropriates security by mapping and unpacking its security work. As the definition of security is neither static nor explicit (Aradau et al. Citation2014; Coleman and Rosenow Citation2016), this article adopts an open, explorative position that analyses how Google itself speaks and does security, rather than ruling out projects on the basis of a narrow definition of security. The act of defining security objectives, threats, and solutions is an important part of how Google appropriates security.

The mapping exercise is visualised in Table 1, which shows different ways in which Google speaks and does security, thereby appropriating security tasks and languages. Inspired by the clustering method in machine learning, which computes ‘the similarity between all pairs of examples’,Footnote1 similar products and projects, as well as the threats or security objectives they are aimed at, are clustered together. The table relies on a close reading of over 160 public statements, Google Blogs, product and project launches and their corporate websites, technical descriptions of these initiatives, and Founders’ letters published between 2014 and 2018. This period represents an important phase in which Google emerged as a security actor. In this post-Snowden period, Google was faced with a significant increase in security issues, responsibilities, and opportunities, which included a rise in data requests by (inter)national government authorities (Transparency Report Google Citationn.d.), (state-sponsored) cyberattacks, online influence campaigns, and online radicalisation content. In the same period, Google decided to become ‘AI first’ (Pichai Citation2015), with the acquisition of AI company DeepMind in January 2014 and the open-source release of Google’s AI platform TensorFlow in late 2015.

The following subsections discuss the three clusters of security appropriations. By going back and forth between their commercial and technical descriptions, I trace how Google’s platform objectives and security practices are brought into dialogue and unpack what implications this has for both Google and understandings of (in)security. Together, the following subsections show how security and commercial objectives become entangled: public security concerns are approached as platform concerns and platform concerns are reframed as matters for public concern.

Expanding cybersecurity and privacy

Google is invested in securing its individual and corporate users’ accounts from, for instance, potentially dangerous spam (Taylor Citation2007). This practice is informed by the commercial need to keep its users satisfied with its products and to maintain Google’s reputation as a capable, responsible, and neutral platform. The products listed in cluster 1 illustrate how Google locks in new users by both reinforcing the perceived need for stronger cybersecurity measures and expanding its (AI-driven) cybersecurity products by presenting them as appropriate solutions to various complex socio-political challenges. This enables Google to gloss over the complexity of these challenges and avoid engaging critically with the underlying, perhaps even systemic, causes that lead to cybersecurity issues in the first place. Most notably, Google ignores how the dominance of its own platforms and its own large-scale data collection might create privacy or other cybersecurity issues.

Essential to the products in cluster 1 is that they do not protect users against specific culprits. Instead, Google protects its users from all ‘third parties’, which include state (sponsored) attackers. Doing so reframes state actors from those who traditionally provide the security of citizens to actors who might form a serious threat. Google’s security warnings for suspected state-sponsored attacks, for instance, were introduced by stating that ‘it is our [Google’s] duty to be proactive in notifying users about attacks or potential attacks so that they can take action to protect their information’ (Grosse Citation2012).

In 2016, Alphabet launched a new sister company: Jigsaw. This company, formerly known as ‘Google Ideas’, is described as a ‘technology incubator’ with the mission to address ‘the toughest geopolitical challenges, from countering violent extremism to thwarting online censorship to mitigating the threats associated with digital attacks’ (Schmidt Citation2016). The projects launched under Jigsaw’s name, however, are mainly focused on developing cybersecurity products that lock (new) users into Google’s platforms. Password Alert, for instance, protects one’s account from phishing by sending an alert when someone has tried to use one’s password to log into another site or app. This only works if you have a Google account and are logged into Google’s browser Chrome. Project Shield offers protection from DDoS attacks to ‘news, human rights, or elections monitoring organisations and individual journalists’ (Project Shield Citation2019). As explained by the president of Jigsaw: ‘Project Shield is […] about improving the health of the Internet by mitigating against a significant threat for publishers and people who want to publish content that some might find inconvenient’ (Cohen Citation2016). To be eligible, however, you need to go through an online application process and need to have a Gmail account.

What also stands out is that almost all the products listed in cluster 1 are powered by AI. While some of the AI tools run in the background, such as Gmail’s spam filter, Google also develops tools driven by large-scale data analysis which the users of Google’s business platforms can choose to adopt themselves. For instance, in March 2018, Google launched the Cloud Security Command Center, which allows businesses to pre-emptively secure their Google Cloud Platform as it ‘offers deep insight into application and data risk so that you can quickly mitigate threats to your cloud resources across your organisation and evaluate overall health’ (Cloud Security Command Center Citationn.d.). In a similar vein, Google launched the new sister company Chronicle in 2018, which offers AI tools as solutions to cybersecurity issues for (global) businesses. Google stresses the importance of AI as a necessary, powerful tool by stating that ‘it is pretty common for hackers to go undetected for months, or for it to take a team months to fully understand what’s going on once they’ve detected an issue’ (Gillett Citation2018). To be able to counter these difficult-to-detect threats, Chronicle offers ‘planet-scale computing and analytics’ which runs on ‘the same fast, powerful, highly scalable infrastructure that powers a range of other Alphabet initiatives that require enormous processing power and storage’ (Ibid).

In summary, while Google’s cybersecurity projects have ambitious objectives, they mostly reinforce the importance of cybersecurity and allow for an expansion of Google’s cybersecurity work. Google entangles itself deeper into digital society by widening and deepening its user base, and reinforcing its perceived expertise on all matters cybersecurity. Doing so normalises Google’s involvement in these discussions, arguably making it harder to critique its suggested solutions and its digital dominance more generally.

Harnessing transparency, deflecting accountability

From 2015 onwards, Google faced an increasing number of ‘user information requests’, ‘emergency disclosure requests’ and ‘preservation requests’ from national security authorities and law enforcement agencies from across the world to aid criminal inquiries or terrorism investigations (Transparency Report Google, Citationn.d.). During this time, it became clear how Google’s and other Big Tech platforms were used in foreign influence campaigns aimed at the 2016 US presidential elections. Since then, Google has been tasked with the broad responsibility to secure the integrity of elections. While these two new responsibilities seem unrelated, the projects in cluster 2 illustrate how Google appropriates them in a similar way: Google aligns the two responsibilities with its commercial objectives by harnessing transparency.

Starting with the responsibility to disclose user information to governments, Google combines this responsibility with the need to strengthen the trust of its users, which was significantly challenged after the Snowden revelations in June 2013. Google states that in times of emergency: ‘Sometimes we voluntarily disclose user information to government agencies when we believe that doing so is necessary to prevent death or serious physical harm to someone’ (Transparency Report Google Citation2019). Google goes as far as stating that ‘[w]e might not give notice [to the user in question] when, in our sole discretion, we believe that notice would be counterproductive or exceptional circumstances exist involving danger of death or serious physical injury to any person’ (Ibid). At the same time, Google has been openly contesting overly broad data requests: ‘If a request asks for too much information, we try to narrow it, and in some cases we object to producing any information at all’ (Transparency Report Google, Citationn.d.). In 2016, Google provided ‘user information in response to 64% of those requests’ (Salgado Citation2016). Additionally, with the number of data requests steadily increasing, Google has been extending its Transparency Reports, in which the company shares data about how it shares or removes data, with or for whom, and under which circumstances. In 2015, the day after the U.S. House of Representatives passed the USA Freedom Act, Google expanded its reporting on ‘emergency disclosure requests’ to include other countries besides the US, and to report the number of ‘preservation requests’ it receives (Salgado Citation2015). A year later, Google started to publish copies of National Security Letters that were no longer under indefinite nondisclosure obligations (Salgado Citation2016).

Another way in which Google appropriates the new responsibility to share data with law enforcement and national security agencies is by lobbying for stricter regulatory frameworks. Since its first testimony before the House Judiciary Subcommittee on the Constitution, Civil Rights, and Civil Liberties in 2010, Google has argued that the Electronic Communications Privacy Act of 1986 (ECPA) needs to be updated:

If the current trajectory does not change, there will be an even more chaotic, conflicting world of expansive government access laws and overly aggressive investigative techniques that will weaken privacy protections for users and exacerbate existing tensions between governments and service providers. This could undermine the global Internet that is driving economic and social progress around the world and would ultimately undermine cooperation between law enforcement authorities and service providers (Cross Border Law Enforcement Requests White Paper Citation2017, emphasis added).

By lobbying for this legislative change, Google argues that transparency about user information requests, as well as data removal requests, is essential not only to protecting the privacy of its users, but also to the relationship between governments and their citizens, and to economic and social progress. In doing so, Google reframes what is essentially a threat to its business model as a threat to the economy and to the relationship between the public and private sector, but lays the responsibility to address these issues at the feet of governments. By harnessing transparency, Google raises the barrier for sharing its data, which not only supports the credibility of Google’s privacy commitments but also makes it easier for Google to legitimise its actions when it does need to share data with government agencies.

Google appropriates its responsibility to protect the integrity of democratic elections in the same way. It takes up the fight against (state) censorship by publishing how many requests it receives to remove content, and from whom:

Governments cite defamation, privacy, and even copyright laws in their attempts to remove political speech from our services. Our teams evaluate each request and review the content in context in order to determine whether or not content should be removed due to violation of local law or our content policies (Transparency Report Google Citationn.d.).

Taking a hard line on data removal allows Google to stand up for the rights of its users and to stress its egalitarian and progressive position, while at the same time strengthening its neutrality. The most outspoken commitment to its new responsibility is the project Protect Your Election, which aims to secure ‘reliable election information and reporting’. This Jigsaw project consists of a set of cybersecurity solutions to ‘help protect news organisations, human rights groups, and election monitoring sites […] from online threats like DDoS attacks, phishing and attempts to break into people’s private accounts’ (Dauba-Pantanacce and Albers Citation2017). Through this project, Google brings the need for transparency into dialogue with its cybersecurity practices.

Unsurprisingly, Google’s senior management’s commitment to transparency wanes when it could harm the company’s reputation. A prime example was Google’s delayed announcement of the Google+ leak, despite its earlier commitment to publish so-called ‘zero-day’ vulnerabilities. The leak in Google’s social media platform was found around the same time as Facebook’s Cambridge Analytica scandal in March 2018, but was only disclosed shortly after the Wall Street Journal published an article about it later that year. According to documents obtained by the Wall Street Journal, the leak was not reported on ‘in part because of fears that doing so would draw regulatory scrutiny and cause reputational damage’ (McMillan and MacMillan Citation2018). Another example of Google’s fickle commitment to transparency revealed itself when the Senate Select Committee on Intelligence invited the CEOs of the major social media platforms to testify in a hearing about foreign influence in the United States during the 2016 elections (MacMillan Citation2018). While the senior executives of Facebook and Twitter accepted the committee’s request, Google claimed its CEO was not able to make it. Instead, Google wanted to send its senior vice president of Global Affairs, Kent Walker. The committee refused this offer, stating that Walker was not senior enough, and left Google’s name tag and an empty chair in the room. Google’s absence was largely understood as an attempt to avoid answering questions about how its platform was misused during the elections (Ibid). So, while Google wants to provide more transparency to fit its new security responsibilities into its commercial practices, its senior management no longer frames transparency as an important objective when it might harm the company’s reputation as a harmless platform.

What can we learn from the initiatives grouped together in cluster 2, other than that protecting Google’s reputation reigns supreme? Cluster 2 illustrates how Google identifies transparency as an important good to protect, as this creates an opportunity for Google to provide (some) transparency about its data sharing practices. A closer look reveals that harnessing transparency also provides Google with ways to deflect accountability for what happens on its platforms. Especially considering that these projects and policy changes were introduced post-Snowden, and some of them after the 2016 US elections, the transparency initiatives in cluster 2 should also be understood as a way for Google’s senior management to secure its platform by restoring its legitimacy and regaining the trust of its users, content providers and advertisers.

Moderating by proxy

Before this subsection unpacks the security appropriations of cluster 3, it is important to acknowledge that when it comes to content moderation, Google’s relation to its users and its regulators is becoming increasingly complex. Marieke de Goede and Rocco Bellanova, notably, have shown how within the European Union, with the development of ‘numerous policy and legislative initiatives’, content moderation practices are marked by ‘complex and far reaching forms of public – private security co-production’ (Bellanova and de Goede Citation2021, 1325). While cluster 3 does not reflect this level of complexity, the services, products, and projects in this cluster are important to unpack, as they show a different side of Google’s appropriation of security. They show how Google reluctantly navigates a law enforcement role, which is key to understanding how and why Google is emerging as a security actor.

Faced with an increasing number of terrorist attacks across several EU member states, the European Commission (among others) has put pressure on Google, Facebook, and Twitter to moderate their platforms more proactively. In March 2018, the European Commission stated that ‘internet firms should be ready to remove extremist content within an hour of being notified’ and that it ‘would assess the need for legislation of technology firms within three months if demonstrable improvement is not made’ (Gibbs Citation2018). In response to this pressure, Google appropriates this new security responsibility by entangling it with its platform objective of presenting itself as neutral and progressive. This dialogue is an uncomfortable one, as it puts pressure on the inherent tensions within these commercial practices. Google needs to delete terrorism-related content so that it cannot be held accountable for the content published and the advertisements placed on its platform. But content moderation forces Google to decide which content is harmful and which is not – a decision that is inherently political. This new security responsibility puts strains on its relationship with users, content providers, and advertisers alike. Cluster 3 shows how Google manoeuvres past the issue of balancing progressiveness and neutrality by moderating by proxy, i.e. through others or other things.

Even though Google acknowledges that AI is ‘not a silver bullet’ when it comes to content moderation (Walker Citation2017), YouTube’s moderation efforts show that Google foregrounds AI to help remove terrorist and other harmful content. The speed of AI helps it to:

take down extremist content before it has been widely viewed […] well over 90% of the videos uploaded in September 2018 and removed for Violent Extremism had fewer than 10 views. (Transparency Report YouTube, Citationn.d.)

YouTube heavily relies on this system to remove videos. In the period July–September 2018, a total of 7,845,400 videos were removed from the platform, of which 6,387,658 were flagged by the automated system (Ibid). YouTube claims that machine learning not only helps with speed, but also improves the efficiency, accuracy, and scale of its removal efforts (The YouTube Team Citation2017b).

YouTube is also delegating moderation to its users and content providers. The project Creators for Change, for instance, amplifies ‘the voices of role models who are tackling difficult social issues with their channels’ by funding their equipment and awarding production grants (Downs Citation2016). Another project, the Trusted Flagger programme, asks ‘expert NGO’s and organisations’ to review videos, as they are ‘accurate over 90% of the time and help us [Google] scale our efforts and identify emerging areas of concern’ (Walker Citation2017). Additionally, Jigsaw developed the Redirect Method, a system that ‘uses curated video content to redirect people away from violent extremist propaganda and steer them towards video content that confronts extremist messages and debunks its mythology’, drawing on videos that already existed on YouTube (The YouTube Team Citation2017a). By leaving moderation to those who are active on the platform, Google still meets its security objective to remove harmful content, whilst making sure not to upset its relationship with content providers and advertisers, because it is others – or other things – that are left to make the political decisions about what is harmful content and what is not.

Google also navigates the tension caused by content moderation by collaborating with other public and private actors, and by making its tools available to other online moderators. An example of the latter is the Jigsaw project Perspective, an AI tool that helps news websites like the New York Times moderate their comment sections. An example of the former is Google’s effort to eradicate terrorist content from YouTube by forming the Global Internet Forum to Counter Terrorism (GIFCT) with Facebook, Microsoft, and Twitter in July 2017. Within the context of this forum, social media companies ‘refine and improve existing joint technical work, such as the Shared Industry Hash Database; exchange best practices […]; and define standard transparency reporting methods for terrorist content removals’ (GIFCT Citation2017). Note how technological solutions, most of them powered by AI, take prominence in this collaboration.

In sum, cluster 3 illustrates how Google reluctantly appropriates the need for online moderation by entangling it with its commercial objective to be seen as a responsible leader in AI, one that recognises that there are limits to technological solutions and that is willing to share its expertise with others. While Google’s initiatives in and of themselves are not problematic, together they form a practice of moderating by proxy that enables Google to position itself as a neutral facilitator. In doing so, Google can keep its new security responsibility at arm’s length and strengthen the notion that it is not, and should not be, accountable for what happens on its various platforms.

Conclusion

Existing scholarship in critical security studies has examined the socio-political implications of how non-security actors appropriate security responsibilities. This article contributes to this literature by drawing attention to Big Tech’s increasing involvement in security. More specifically, considering Big Tech’s position as complex, global ecosystems that are deeply entangled in the fabric of the information society, this paper asked: how does Google appropriate security throughout its ecosystem of platforms, products, and projects?

Rather than zooming in on Google’s more controversial involvements in security, like Project Maven (Hoijtink and Planqué-van Hardeveld Citation2022), the analytical focus of appropriation enabled this article to analyse how, in many seemingly mundane and technical ways, Google makes security its own by entangling it with its commercial practices.

Drawing on insights from new media studies that have illustrated Google’s need to position itself as a neutral and progressive platform, this article argues that Google appropriates (in)security by developing products, services, and projects that align with this platform logic. In doing so, Google entangles itself deeper into the fabric of digital society, infusing (in)security issues with its platform logic to, ultimately, secure the platform. This entails locking new users into its platforms, reshaping the social identity of security issues into commercial platform issues, and simultaneously presenting its own ‘private’ issues as public security concerns.

While this article focused on how security and Google’s commercial logic become entangled, the analysis draws our attention to two broader implications. First, Google reinforces the perceived need for ‘algorithmic security’ (Amoore and Raley Citation2017) by presenting AI as a ‘fix-almost-all’ security solution. All the appropriations captured in Table 1 illustrate how Google gravitates towards AI as its preferred way to ‘do’ security, especially when having to address a threat that sits in tension with its commercial objectives. By doing so, Google links AI to security more firmly, and more generally. While Google’s CEO states that the company will ‘promote thoughtful leadership in this area’ (Pichai Citation2018), it is side-lining an expanding literature in critical security studies and critical data studies that warns of the serious limitations of AI and its potentially harmful consequences, particularly when used for military objectives such as in Project Maven (Amoore and Raley Citation2017; Aradau and Blanke Citation2017; Suchman Citation2020). Due to AI’s inscrutability, it remains unclear how Google arrives at the products that are offered to other developers and/or used to protect online users, and how successful these products are (Gorwa, Binns, and Katzenbach Citation2020). AI-powered algorithms are also inherently political, as they have ‘generative and world-making capacities’ that ‘filter, expand, flatten, reduce, dissipate and amplify what can be rendered of a world to be secured’ (Amoore and Raley Citation2017, 5).

Second, by presenting itself as a leader in developing seemingly neutral (AI-driven) solutions, Google perpetuates the notion that it can and should regulate itself. State authorities have become more critical of Google’s socio-political role in digital society, and various regulations have been developed to limit its influence and/or assign it security responsibilities that draw Google into ‘global assemblages’ of, for example, content moderation (Abrahamsen and Williams Citation2009). That said, its expertise in cybersecurity and AI is rarely questioned. Maintaining the notion that self-regulation is appropriate could thus deprioritise critical engagement with the subtle ways in which Big Tech companies fulfil their new security responsibilities and, especially, how they influence ongoing debates on responsible AI. Only by bringing together critical security studies’ analytical focus on the role and impact of non-security actors on security with a detailed understanding of Big Tech’s global ecosystems can we unpack the broader implications of their involvement in security.

Disclosure statement

No potential conflict of interest was reported by the author.

Additional information

Funding

This work was supported by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (research project ‘FOLLOW: Following the Money from Transaction to Trial,’ Grant No. ERC-2015- CoG 682317).

Notes on contributors

Anneroos Planqué-van Hardeveld

Anneroos Planqué-van Hardeveld is a PhD candidate in the Department of Political Science at the University of Amsterdam. Her research focuses on how Google and other ‘Big Tech’ companies emerge as security actors and unpacks how their politics of knowledge and expertise in machine learning/AI shape current understandings of (in)security.

Notes

References