
The (false) promise of solutionism: ideational business power and the construction of epistemic authority in digital security governance

Pages 1305-1329 | Received 25 Apr 2022, Accepted 18 Jan 2023, Published online: 02 Feb 2023

ABSTRACT

Digital technologies are transforming security governance, bringing new risks and opportunities. The resulting uncertainty creates interpretative contests about what these new challenges are and who can – and should – address them. We argue that private actors use their ideational business power – and specifically solutionist arguments – to influence how public actors perceive digital security problems; whether they view private actors as necessary and/or effective in solving them; and whether they view public and private goals as compatible. In doing so, they influence how public actors navigate competence-control trade-offs. We substantiate this argument in two qualitative case studies on the involvement of Palantir in EU law enforcement and on the prominent role of (foreign) tech companies in the European cloud project Gaia-X. Drawing on and contributing to the literatures on (critical) security governance, competence-control theory, and ideational business power, we shed light on the ideational underpinnings of Europe’s regulatory security state.

Introduction

The rise of digital technologies is transforming security governance. As criminals and terrorists move their activities online, law enforcement and intelligence agencies encounter new problems. But digitalisation also offers security actors new sources of evidence as well as tools to organise and analyse data, from accessing security-relevant data in private company clouds to visualising money laundering networks. In the words of Europol director de Bolle, ‘digital devices are increasingly the key instruments with which crimes are planned, perpetrated and memorialised; and they often hold evidence necessary to solve a crime’ (De Bolle & Vance, 2021). In addition to such direct security challenges, there are indirect security challenges resulting from the technological leadership of non-European firms. For example, foreign technology companies operating in security-relevant areas may be subject to data access requests by geopolitical or geoeconomic rivals.[1]

Yet neither the security risks nor the opportunities are simply given. Instead, the digital transformation of security governance creates uncertainty about what technology can – and cannot – do. This sets the stage for ‘interpretative contests’ (Willers, 2021) over what exactly the new challenges of digital security governance are and who can – and should – best address them. As the critical literature on security governance has shown, private actors actively participate in these interpretative contests, inflating some risks and downplaying others (Maschmeyer et al., 2021; Willers, 2021), or framing security problems in ways that lend themselves to technological solutions (Lavorgna & Ugwudike, 2021; Leander, 2015; Martins & Jumbert, 2022). In short, they exert ‘ideational power’, understood as influencing other actors’ ‘normative and cognitive beliefs through the use of ideational elements’ (Carstensen & Schmidt, 2016, p. 322; Selling, 2021, p. 48).

At the same time, as this special issue and the broader literature on EU security governance document, private (and often non-European) companies have come to play an increasingly important role in ‘co-producing’ (Bellanova & de Goede, 2022) European (digital) security governance. For example, public security officials increasingly foster data-driven security tools to ‘enhance law enforcement capacity in digital investigations’ (European Commission, 2020, p. 11). This raises the question of how we can understand public-private interactions in the uncertainty-laden area of digital security governance. Our argument proceeds in two steps, drawing on and combining the literatures on competence-control theory (Abbott et al., 2020) and ideational business power (Carstensen & Schmidt, 2016; Selling, 2021).

First, we argue that whether public actors rely on private companies in digital security provision depends on how they navigate competence-control trade-offs (Abbott et al., 2020, p. 4). On the one hand, public actors often need the competences of private intermediaries to address digital security challenges, given their technical expertise and control over digital infrastructures (European Commission, 2020, p. 6; cf. Mügge, 2023). On the other hand, security officials need to control private actors to make sure they act in line with public interests and values. For example, Europol’s reliance on big data analyses and (often commercial) digital security tools has already created tensions with European data protection authorities (Fotiadis et al., 2022).

Second, however, we depart from the – at least implicit – assumption of competence-control theory that both competence and the need for control are simply given. Instead, we argue that the competence-control trade-off is ideationally mediated. On the one hand, it is unclear whether digital security problems always lend themselves to technological solutions, whether the technological solutions private companies promise are adequate, or whether private companies can deliver on their promises. On the other hand, knowledge asymmetries prevent public security actors from always monitoring or even understanding what private companies do (Taeihagh et al., 2021). Whether they rely on private companies thus also depends on whether they perceive them – or ‘trust’ them – to ultimately share their interests or not (Carrapico & Farrand, 2021). In short, both beliefs about the competence of private actors and beliefs about the difficulties of controlling them are subject to ideational influence, specifically under conditions of uncertainty.

Private actors have strong economic incentives to try and influence these beliefs, which we conceptualise as ‘ideational business power’ (Selling, 2021). Ideational business power is a form of ideational power and can be defined as the use of ideas – ‘diagnoses of problems, priorities and solutions’ (Carstensen & Schmidt, 2016, p. 447) – by private actors to ‘persuade policymakers to accept and adopt their views’ (Selling, 2021, pp. 51–52; cf. Carstensen & Schmidt, 2016). We further argue that in exerting their ideational business power, private companies strongly rely on solutionist ideas that depict security problems as technological problems and posit a compatibility between the economic interests of private actors and the broader political or normative interests of public actors (Martins et al., 2021; Morozov, 2013; Nachtwey & Seidl, 2020).

Thus, the core argument of this contribution is that in order to understand the rise and limits of the regulatory security state (RSS) in Europe, as this special issue sets out to do, we need to understand how private tech companies use their ideational business power to try and co-construct epistemic authority: the recognised belief that they are not only ‘most competent to produce a security-relevant good’ (Kruck & Weiss, 2023, p. 1213) but can also do so without compromising broader public goals. We substantiate this argument through the empirical analysis of two cases that have not yet received significant attention in the academic literature: one on the involvement of the data analytics company Palantir in European law enforcement (Holst et al., 2021); and one on the role of (foreign) tech companies in the European cloud project Gaia-X (Autolitano & Pawlowska, 2021).

We use policy and company documents, journalistic reporting, documents obtained through freedom of information requests, and interviews with key stakeholders to show how tech companies systematically use their ideational business power to influence public actors’ beliefs about their competences and the necessity of controlling them. We thus pursue an ideational explanation, which centres on the ideas (e.g., beliefs) through which actors interpret the world around them (Jacobs, 2015; Parsons, 2007, p. 98). In doing so, our focus is on comprehensively documenting the – often secretive – use of ideational business power. In addition, we also attempt to shed light on its effects, given that the uncertainties surrounding digital security governance should create space for ideational influences (Parsons, 2007, p. 101). ‘Tracing the effects of ideas’, however, comes with considerable methodological challenges (Jacobs, 2015), specifically in a secretive policy area like security governance. While carefully reflecting on these challenges and acknowledging certain limitations, we are nonetheless confident that we can at least demonstrate the plausibility and likelihood of such ideational effects.

Our paper thus contributes to three different literatures. First, we add to the literature on regulatory governance by showing how neither competence nor epistemic authority is simply given; both are instead ideationally (co-)constructed. We demonstrate the ideational mediation of competence-control trade-offs (Abbott et al., 2020) by tracing the ideational construction of epistemic authority by private actors. By drawing on insights from science and technology studies and critical security studies, we also contribute a more critical reading of epistemic authority in the RSS: while the RSS framework posits that security in Europe is characterised by the reliance on rule-making and epistemic authority (Kruck & Weiss, 2023), it does not outline how actors come to be in (or fail to reach) a position of epistemic authority (cf. Bode & Huelss, 2023; Dunn Cavelty & Smeets, 2023).

Second, we add to the literature on the ideational dimension of security governance by more systematically theorising the trade-offs public actors face when involving private actors in ‘co-producing’ (Bellanova & de Goede, 2022) security decisions. We thus demonstrate the usefulness of incorporating arguments from regulatory governance – specifically competence-control theory (Abbott et al., 2020) – into (digital) security research (see also Bellanova & de Goede, 2022). In addition, we break new empirical ground by analysing two sensitive and under-researched but highly important cases of digital security governance.

Third, we contribute to the literature on the sources and uses of tech business power (Culpepper & Thelen, 2020; Kemmerling & Trampusch, 2022; Seidl, 2022) by showing how tech companies exert ideational business power to get involved in the provision of public services. We also show how tech companies are particularly well positioned to exert ideational business power in the uncertainty-laden context of digital security governance. Lastly, we shed light on the ideational origins of ‘institutional business power’ (Busemeyer & Thelen, 2020) and the role of solutionist ideas therein (Kruck, 2016; Nachtwey & Seidl, 2020).

We begin by briefly surveying the literature on the role of ideational elements in digital security governance. Building on this, we then elaborate our theoretical argument, which focuses on the role of ideational business power – and specifically solutionist ideas – in influencing both sides of the competence-control trade-off in digital security governance. After discussing our empirical strategy and case selection, we present and discuss our case studies on digital law enforcement and cloud infrastructure. The contribution concludes with a reflection on the implications of our argument and findings for the rise and limits of the RSS.

The ideational dimension of security governance

From vulnerabilities in critical infrastructure to the usability of big data solutions, actors and institutions struggle to navigate a changing security landscape (Lewallen, 2020, p. 1). The uncertainty about the new security risks and opportunities digital technologies bring (Willers, 2021) sets the stage for interpretative contests in which actors try to advance their preferred problem definitions. The existing literature has extensively documented this ideational dimension of security governance.

First, it has shown how security actors influence and often ‘inflate risk representations’ (Willers, 2021, p. 10). Maschmeyer et al. (2021), for example, document how commercial cyber threat reporting is systematically biased to over-report cyber risks to actors that can afford commercial cyber defence, while under-reporting those that affect poorer civil society actors. Similarly, Aradau and Blanke (2015, p. 2) show how public security officials use the ‘supposed “novelty”’ of digital security governance to justify the need for new competences. Security actors thus engage in the self-interested ‘management of uncertainty’ (Willers, 2021, p. 11) by actively putting a spotlight on certain risks and not others.

Second, security scholars have shown how security problems are often defined in ways that lend themselves to technological solutions, usually provided by private companies (Leander, 2015). The technology industry, in the words of an EU official, often presents practitioners ‘with a puzzle that they (the industry) themselves then solve in front of our eyes’ (quoted in Martins & Jumbert, 2022, p. 1440). In other words, they define security problems in ways that match ‘their prearrived solutions’ (Mehta, 2011, p. 32). For example, Lavorgna and Ugwudike (2021) show how tech companies successfully pushed ‘optimistic’ frames about data-driven technologies for crime prevention and control – at the expense of more balanced or critical ones. Such optimistic frames rely on a problem definition that prioritises ‘crime control imperatives’ (Lavorgna & Ugwudike, 2021, p. 11), which allows their proponents to emphasise the ‘technological superiority’ and ‘vital utility’ of data-driven technologies in crime control (Lavorgna & Ugwudike, 2021, p. 8).

Third, the solutions presented often imply that technological fixes will address not only security problems but also broader political or normative concerns that security officials might have. For example, Martins and Jumbert (2022) show how the challenges of border management are increasingly framed in technological terms, such that ‘surveillance becomes the needed solution and drones the most adapted vehicle to carry out this surveillance’ (Martins & Jumbert, 2022, p. 1437). They also show how private firms depict migrants as both a risk and at risk to argue that surveillance technologies can not only help control migration flows but also improve search and rescue missions (Martins & Jumbert, 2022, p. 1433). In addition, surveillance technologies offer solutions to several problems at once, from border management to environmental protection to fisheries control (Martins & Jumbert, 2022, p. 1438).

Security ‘problems’, as this quick survey of the literature shows, are thus not simply ‘out there’, demanding specific solutions. Rather, they are crafted in ways that make certain ‘solutions’ seem necessary, effective, and in line with public interests and values. The literature on security governance thus helps us understand the ideational construction of security risks and opportunities, and hence how the rise of the RSS is influenced – accelerated, expanded, and sometimes even made possible – by ‘battles of interpretation’ (Willers, 2021, p. 4) over the nature of (digital) security challenges. Purely materialist explanations, by contrast, obscure the politics behind problem definitions, as companies are ‘assumed to provide solutions to pre-given problems, not to problems produced in and by the processes through which companies assist their (public or private) clients in identifying risks and providing suggestions for how to mitigate them’ (Leander, 2015, p. 153).

The ideational construction of competence and control

But how exactly do problem definitions affect public-private interactions in security governance? How can we conceptualise the trade-offs public security actors face in deciding whether or not to rely on (foreign) private companies? And how should we analyse the strategies private actors use to influence how public actors navigate these trade-offs? The literature on security governance lacks a sophisticated theory of public-private interaction. To better understand when, why, and to what extent public actors rely on private actors in providing public security governance, we therefore draw on competence-control theory (Abbott et al., 2020). Competence-control theory posits that governors need to rely on intermediaries, as ‘no governor has the capabilities to achieve its governance goals single-handedly’ (Abbott et al., 2020, p. 4). But while governors want competent intermediaries that help them achieve their governance goals (e.g., providing security), they also want to control them, that is, ‘shape and constrain the behavior of intermediaries so that they pursue the governor’s goals and cannot subvert them’ (Abbott et al., 2020, p. 4). These two objectives are in conflict, as more control hampers competence and more competence makes control more difficult. This constitutes the ‘governor’s dilemma’.

We see this dilemma play out in digital security governance too. On the one hand, technical expertise and capacity are concentrated among private technology firms, and governors cannot easily monitor or even understand what these firms do (Taeihagh et al., 2021, pp. 2–3); on the other hand, control might be necessary to make sure the interests of public and private actors are aligned and the latter do not prioritise their interests over those of the former. Public security actors thus need to rely on the competences of private technology firms to achieve their goals, drawing on their ‘expertise, credibility, legitimacy, and operational capacity’ (Abbott et al., 2020, p. 4); but they must also control these firms, i.e., make sure that they act in accordance with public goals while giving them enough leeway to operate effectively.

While competence-control theory allows us to think much more systematically about the trade-offs public actors face in deciding on the role of private actors in digital security governance, it lacks an explicit theory of how ideational processes influence these trade-offs. This is problematic, as the ambiguity and uncertainty that characterise digital security governance (Aradau & Blanke, 2015; Willers, 2021) are such that ‘even rational [actors] depend to some degree on interpretive filters to organize their preferences, priorities, and problems’ (Parsons, 2007, p. 98; cf. Beckert, 1996). Combining insights on the ideational dimension of security governance (see above) with the literature on ideational (business) power (see below), we therefore propose to ‘ideationalise’ the governor’s dilemma.

Private actors, we argue, will heavily invest in their ‘epistemic authority’, that is, their ‘reputation for [technological] expertise’ and ‘(perceived) effectiveness’ (Kruck & Weiss, 2023, p. 1213) as well as their perceived ability to provide effective solutions that do not undermine broader political or normative concerns. Thus, while we agree that the role of private actors in security governance depends in important ways on their epistemic authority (Kruck & Weiss, 2023), we stress that private actors co-construct this epistemic authority. They do so by influencing public actors’ beliefs both in the necessity and effectiveness of private solutions and in the compatibility of public and private interests or values.

We conceptualise such influence as ‘ideational business power’ (Carstensen & Schmidt, 2016; Selling, 2021). Ideational business power is exercised when ‘organized interests try to persuade policymakers to accept and adopt their views through the use of ideas’ (Selling, 2021, pp. 51–52).[2] Based on this conceptualisation, and the above-mentioned research on the ideational dimension of security governance, we expect private security actors to use their ideational business power to influence (i) how public actors perceive digital security problems; (ii) whether they view private actors as necessary and/or effective in solving them; and (iii) whether they view public and private goals as compatible.

Actors may draw on both cognitive and normative arguments in exerting ideational (business) power: cognitive arguments are meant to offer plausible and coherent problem definitions and solutions; normative arguments, by contrast, are meant to offer more generally accessible problem definitions and establish the conformity of solutions with the values of a given community (Carstensen & Schmidt, 2016, p. 324). Building on the growing literature on (techno-)solutionism (Csernatoni, 2020; Martins et al., 2021; Morozov, 2013; Nachtwey & Seidl, 2020), we argue that tech companies will often advance both cognitive and normative arguments through solutionist ideas. Solutionism is based on the notion that there is not only a technological solution to every social problem but also a business case for solving such problems: one can make money while ‘making the world a better place’ (Morozov, 2013; Nachtwey & Seidl, 2020). By drawing on solutionist beliefs that are widely held by both public and private actors (Bode & Huelss, 2023; Csernatoni, 2020; Martins et al., 2021), tech companies can more convincingly (i) depict digital security problems as amenable to technological solutions; (ii) convince public officials that their technical prowess is essential – or even indispensable – to solve these problems (the promise of the technical fix); and (iii) reassure public officials that their goals are aligned because technology makes it possible to reconcile private profits and public benefits (the promise of the win-win).

While most companies try to influence the beliefs of public actors, tech companies enjoy a number of advantages that make them particularly well positioned to exert ideational business power successfully. First, they operate in a context of high uncertainty and novelty, as digital technologies change quickly and are poorly understood. We know that ideas are most powerful when both problems and potential solutions are clouded in uncertainty (Beckert, 1996; Parsons, 2007), and tech companies have already been shown to exploit this uncertainty (Seidl, 2022). Second, in digital security governance, there is much potential for ‘epistemic arbitrage’, whereby actors ‘can mediate between different pools of knowledge for strategic advantage to position themselves and their preferred skill set and knowledge as the best way to address problems’ (Seabrooke, 2014). Tech companies can use their enormous advantages in technical expertise to shape how security officials – who are suddenly in dire need of this expertise – come to think about the uses of digital technology in security governance (Bode & Huelss, 2023). Third, tech companies are well versed in combining their ideational business power with other sources of business power, such as their large financial resources, key infrastructural position, and direct connection to users (Culpepper & Thelen, 2020; Kemmerling & Trampusch, 2022; Seidl, 2022). This allows them to back their ideational claims with significant resources, which they can use to influence public opinion directly, sponsor conferences, or fund research or policy institutes that help circulate and bolster those claims (Bodó et al., 2020).

Case selection, data & empirical approach

To examine how these dynamics play out in practice, we selected cases from (data analytics) software and (cloud) infrastructure, both key areas of digital security governance. First, we focus on digital law enforcement, and specifically the role of the US company Palantir. Second, we investigate the role of tech companies in the cloud project Gaia-X. Neither of these cases has received much attention in the academic literature, despite being typical and prominent cases of the growing involvement of private tech companies in the emerging field of digital security governance. However, the sensitive nature of these cases makes it challenging to obtain access to relevant information. Moreover, substantiating an ideational approach comes with additional difficulties of measurement and inference (Jacobs, 2015).

We therefore followed Jacobs’ (2015, p. 41) advice to take an ‘expansive’ approach to empirical data collection, drawing on a wide variety of data sources from different empirical contexts. Focusing on how both private and public actors communicate – internally or externally – about digital security problems, solutions, and their justifications, we analysed a large number of newspaper articles, official documents, press releases, company publications such as blog entries or earnings reports, and documents obtained through freedom of information requests. In addition, we triangulated our analysis of these documents with ten interviews with experts, representatives of private companies, and policy officials, conducted between December 2021 and April 2022 and lasting approximately 45–60 minutes each (for more details, see the online appendix).

Despite these diverse data sources, which shed light on different perspectives, the opacity of security governance imposes certain limitations on our empirical analysis, particularly regarding inferences about the effects of private ideational business power on the decisions of public actors. This is further complicated by the fact that existing studies on our cases are few and far between, which makes it difficult to pitch our ideational explanation against existing alternative (e.g., materialist) explanations. It is therefore important not only to acknowledge potential limitations but also to carefully reflect on the nature and specific challenges of ideational explanations (Jacobs, 2015). Following Jacobs (2015, p. 43), we define an ideational explanation as one ‘in which the content of a cognitive structure influences actors’ responses to a choice situation, and in which that cognitive structure is not wholly endogenous to objective, material features of the choice situation being explained’.

To show how both competence and control perceptions are ideationally mediated, we first need to establish whether and how companies use their ideational business power. Finding ‘significant verbal references to the ideational constructs hypothesized’ is in many ways ‘necessary for the survival of an ideational explanation’ (Jacobs, 2015, p. 54). Establishing the existence of such communicative evidence is therefore an important task in itself, especially in a secretive policy area like security governance. We also formulate clear theoretical expectations about the – in our case: solutionist – content of private actors’ ideational strategies. If private companies attempt to influence public actors’ perception of security problems and solutions, we expect them to actively and systematically advance (solutionist) cognitive and normative arguments about their capabilities and their commitment to values in security governance. By contrast, alternative, materialist explanations would expect discussions to at most revolve around price-performance metrics, while having a hard time explaining why companies invest in ideational lobbying at all (cf. Mehta, 2011, p. 24).

But while understanding the ideational strategies of private companies is the primary focus of our analysis, we are, second, also interested in the effects of these strategies: if and when private companies’ epistemic authority is actually recognised. Drawing these inferences is challenging, not only because ideas are often hard to observe in an unbiased way, but also because it is difficult to disentangle material and ideational factors (Jacobs, 2015, p. 45; Parsons, 2007, p. 98). We therefore put particular emphasis on whether private and public communication ‘match’, whether diverse actors acknowledge the presence of ideational influence, whether actors’ behaviour in ‘choice situations’ (Jacobs, 2015, p. 48) is characterised by elements of surprise, whether public actors fundamentally change their beliefs in a way that is consistent with ideational lobbying, or whether ideational change (continuity) occurs in spite of continuity (change) in the non-ideational context. By contrast, alternative, materialist explanations expect public actors to have a relatively clear understanding of digital security problems and their solutions, and to hire (and fire) private actors based only on what private actors do – not on what they say.

Case study I: Palantir & digital law enforcement in Europe

Palantir’s ideational business power

Palantir has gained a substantial foothold in Europe’s public sector landscape, receiving sizeable contracts and access to sensitive data in high-profile areas of immediate security relevance. Europol, for example, signed a contract with Palantir in 2012 through the consultancy Capgemini; according to an internal document, Palantir had made the ‘best offer’, worth a total of 7.5 million euros (27 January 2021). Europol used Palantir’s software Gotham for counter-terrorism purposes from 2016 until at least 2021. Journalistic reporting and internal documents show that Palantir has been extremely active in arranging meetings with high-ranking EU officials, including the EU Counterterrorism Coordinator de Kerchove in March 2019, the former EU Commissioner Bieńkowska in 2016, and Commission President von der Leyen in Davos in 2020. Palantir thus has an unusual ‘proximity to power’ (in ‘t Veld, cited in Holst et al., 2021), which it actively cultivates and which puts it in an ideal position to exert ideational business power.

The use of solutionist arguments

In line with our expectations, Palantir frequently employs solutionist frames such as the promises of the win-win and the technical fix in exerting its ideational business power. Palantir’s CEO Alex Karp, for example, sees Palantir in the business of ‘resolving the contradiction, which does not exist anyway, between data protection and the fight against terrorism’ (Karp, 2022, our translation). In this view, Palantir has both the technical competences to solve digital security problems and the ability to do so without violating shared interests and norms. Palantir repeatedly stresses its technical ability to provide effective and privacy-compatible solutions, both resulting from decades of experience working with public security actors (Interview 6). Concerning its competences, Palantir argues that law enforcement officers often don’t know ‘what is actually technologically possible, what technology can already do to address their problems’ (Interview 6). Its visualisation tools, for example, can ‘augment intuition in ways that excel sheets, for example, can’t’ (Interview 6), allowing officials to tackle security challenges in novel ways. Given how pressing today’s security challenges are, the public sector could never develop similar competences in time – even if it were possible, it would take decades and ‘we might all be dead in 20 years’ (Interview 6).

At the same time, years of experience and constant vigilance have made the company particularly good at ‘reconciling law enforcement needs with data protection requirements’ (Interview 6). Palantir has made it a central part of its sales pitch that its significant experience in working with sensitive data has pushed it to build technology ‘trusted by the world’s most stringent – and skeptical – data protection regimes’ (Palantir’s website). This reputation-through-experience provides Palantir with an important competitive advantage (Interview 1). Interestingly, Palantir emphasises – rather than downplays – the difficulties of getting data analytics right. But it presents its own technological solutions as the way to address these difficulties, based on its competence and normative commitments. In a private presentation to the European Data Protection Supervisor (EDPS), for example, Palantir stressed that its system does not ‘undermine fundamental rights’ and outlined its ‘philosophy of privacy and social liberties engineering’, depicting the lack of data protection as a technical problem of ‘fragmented data landscapes’. Palantir has also set up an in-house Privacy and Civil Liberties engineering team ‘dedicated to working for the common good and doing what’s right’ (Palantir’s website). And even if its software is misused by rogue officials, log files and built-in restrictions provide technological solutions that minimise these risks as well (Interview 6).

In sum, there is clear evidence that Palantir strongly draws on the solutionist promise of the win-win to convince regulators that their interests are aligned, and it does so in both public and private settings. It assures data protection officials that there is no conflict between its commercial interests and data protection. And it even argues that the two objectives are complementary, as Palantir’s technology transforms data protection into a commercial ‘opportunity’. There exists, in the words of Palantir CEO Alex Karp (2022), a ‘fortunate union (…) between our business and moral objectives’. At the same time, Palantir uses the promise of a technical fix in a modified form. Only Palantir’s sophisticated products – which cannot easily be replicated by private competitors or public actors – allow data analytics to play a considerable role in ‘meaningfully strengthen[ing] national security’ (Palantir’s website). In all this, Palantir actively engages in problem definition by seeking to provide ‘public actors with a clear understanding of the challenges police are confronted with today: with new forms of crime but also much more data to fight crime’ (Interview 6).[3]

The recognition of epistemic authority

We thus find strong communicative evidence for Palantir’s use of solutionist arguments to frame security problems in ways that lend themselves to technological solutions, to highlight the ‘competence’ of Palantir in providing such solutions, and to alleviate control concerns by stressing the compatibility of public and private interests. Palantir has also been and continues to be very successful at marketing its products to European security actors, despite strong criticisms from various sides (Holst et al., 2021; in’t Veld, 2020). Palantir continues to receive lucrative contracts and has, according to a Commission official, ‘managed to position itself as an indispensable partner’ of European security officials (Interview 10). This, in itself, however, is not definitive evidence of the effective use of ideational business power. So why do we believe that public actors were influenced by Palantir’s use of ideational business power in their navigation of competence-control trade-offs?

First, many of our interviewees, including public officials, explicitly noted that Palantir is very good at exerting ideational influence: that they are ‘extremely effective marketers and sellers’ (Interview 10), ‘very good at selling themselves’ (Interview 4), or even ‘the best bullshitters in the digital era’ (Interview 10). Such statements make little sense in a world in which decisions about the involvement of private actors in security provision are arrived at through rational assessment alone (cf. Parsons, 2007, p. 98). We also see Palantir’s ideational success reflected in private communication. For example, DG GROW noted in an internal protocol of a meeting with Palantir in 2016 that it had come to believe ‘that the role of public authorities is to create the preconditions for firms to lead innovation in this field [i.e., big data]’ (22 September 2016).

In addition, there is strong evidence that public officials lacked a clear understanding of what Palantir’s technologies can (seemingly) do. As one EU official noted, public security officials are often ‘mesmerised by [Palantir’s] new technologies’ (Interview 4). Instead of asking ‘what these technologies can actually solve’, they buy into Palantir’s sales pitch and only later seriously think about what exactly ‘we can use [these technologies] for’ (Interview 4). In other words, EU security officials seek competences in digital technologies without always knowing the specific problems they want solved, reflecting an almost ‘magical’ (Interview 5) or ‘ceremonial’ (Fourcade & Healy, 2017, p. 16) belief in the power of data analytics. Brayne (2021) makes similar observations with regard to the Los Angeles Police Department’s use of Palantir’s software, where some officers are frustrated about their superiors being ‘easily distracted by the latest and greatest shiny object [even though] they don’t know what it takes to get that integrated (…). They just want it’. In much the same vein, it is an important sign of ideational influence that Palantir’s promise to provide effective security solutions that meet the highest data protection standards often ‘surprises’ (Interview 6) law enforcement officials. Palantir ‘often confronts the argument that this or that cannot be done with software “because of data protection”’; however, it can then plausibly argue that this ‘simply is not true if you have the right tooling’ (Interview 6). Alternative, materialist explanations have a hard time explaining public officials being mesmerised or surprised by Palantir’s technologies, as they expect public actors to have a relatively clear understanding of what (Palantir’s) technologies can and cannot do.

Second, some security actors’ performance expectations were disappointed, but this did not lead others to stop acquiring Palantir’s products. On the one hand, in an internal document, the Commission notes that because of ‘repeated performance failures’ Europol decided not to extend its contract with Palantir, as the company could not be expected to meet ‘Europol’s evolving business requirements’. While Europol still ‘holds the licenses for the Palantir software’ and used the software at least until 2021, it plans to build a ‘future-proof data repository architecture, under the in-house lead of the organisation’ (27 January 2021). In other words, Europol no longer believes in the necessity and effectiveness of Palantir’s software and may thus resort to a more capacity-based approach. This not only demonstrates that the involvement of private actors in digital security governance depends on whether security officials believe them to be competent and trustworthy – if this trust ‘fades’, cooperation falters too (Carrapico & Farrand, 2021). It also shows that there is much uncertainty around what technology, and what private companies, can and cannot do. This, in turn, is at least indirect evidence that beliefs about competence can influence decisions in ways that cannot be ‘read off’ from an ‘objective situation’ (cf. Beckert, 1996; Parsons, 2007).

On the other hand, though, Palantir’s repeated performance issues at Europol (Holst et al., 2021) have not deterred other European security officials from acquiring the company’s services, including in countries such as Denmark, France, Germany, the Netherlands, and Norway. The Bavarian police, for example, recently signed a framework contract with Palantir worth $26.2 million that can serve as the basis for additional contracts by other law enforcement agencies in Germany. The fact that this contract was signed despite unaddressed normative concerns and knowledge about previous performance issues speaks to the continued success of Palantir’s ideational business power even in the face of new information about its performance. In repeated ‘choice situations’, police and law enforcement agencies have thus responded in line with Palantir’s problem definitions and proposed solutions. It is also worth noting that the security officials in charge of the framework contract strongly rely on Palantir’s cognitive and normative arguments, stating that ‘traditional databases are no longer helpful’, that the risk of abuse is minimal since ‘everything is logged’, and that ‘citizens expect the police to know what we know’ (Mersi & DPA, 2022, our translation). This underscores the plausibility of our argument that how public actors navigate competence-control trade-offs is affected by ideational business power, and not simply based on (past) performance.

Case study II: Gaia-X & cloud security in Europe

Cloud companies’ ideational business power

The growing market share and technological dominance of foreign cloud providers have raised security concerns among European officials, particularly regarding cloud providers’ vulnerability to data access requests and cyberattacks (European Commission, 2020, pp. 7–8). By setting up a decentralised and interoperable platform for different service providers, Gaia-X aims to offer an alternative to the ‘full package-services’ (Interview 2) provided by the dominant companies. Established in June 2020 as a Belgian non-profit and now comprising more than 300 members, the project represents one of the most concrete manifestations of Europe’s quest for digital sovereignty (Autolitano & Pawlowska, 2021).

Public actors have earmarked significant funding for cloud infrastructures, with the European Commission announcing plans to invest €2 billion (2021), Germany €175 million, and France €150 million over the coming years. Gaia-X, however, is a privately run project: 20 of the 26 members of the Board of Directors are private representatives (Gaia-X, 2021). While excluded from major decision-making roles, US and Chinese tech companies have quickly begun to shape debates about what Gaia-X is meant to solve, and have also become ‘dominant in the technical groups’ (Goujard & Cerulus, 2021). Thus, despite Gaia-X’s initial ambitions, private companies, including foreign ones, have become strongly involved in the project, putting them in a strong position to exercise ideational business power.

The use of solutionist arguments

Interestingly, Gaia-X was initially framed by the German Economic Ministry as a more ambitious ‘AI-Airbus’ project. As a federated data infrastructure, it was meant to establish data sovereignty and help German companies develop artificial intelligence products. Soon enough, however, the project’s ambition was dialled down. Tech companies had fiercely opposed the initial plans. Sabine Bendiek, former head of Microsoft in Germany, problematised the idea as inefficient and stressed the importance of using the ‘technologically best and safest’ and ‘globally leading solutions’ (Bendiek, 2019). An Amazon spokesperson suggested that the benefits were limited, as the plan ‘restricts freedom of choice, flexibility, and ability to scale globally, without increasing security’ (cited in Stupp, 2019). In short, tech companies quickly tried to convince public actors that their competences were necessary for the project to succeed, even though their inclusion ran counter to its initial sovereigntist and neo-mercantilist ambitions.

To accommodate such ambitions, tech companies put considerable emphasis on depicting their goals as compatible or even symbiotic with European values (Klynge, 2020, p. 2; Peterson, 2020). They used solutionist rhetoric to highlight the compatibility of public benefits and private profits (the promise of the win-win). Amazon, for example, states that ‘AWS stands ready to help play its part in powering Europe’s new digital leadership’ (Peterson, 2020), while Microsoft stresses that ‘[a] key part of our responsibility is to support European efforts to foster global competitiveness and promote local economic opportunities’ (Klynge, 2020, p. 1). Google, meanwhile, commits to ‘Cloud. On Europe’s terms’ (Johnson & Valente, 2021), and Palantir highlights that the ‘values at the core of the GAIA-X project are, indeed, central to our own mission as a company’ (Palantir, 2021).

Tech companies thus used both cognitive and normative arguments to make the case that Europe can achieve its digital sovereignty only – or at least most effectively – by having access to their technical competences (e.g., flexible and scalable solutions). Amazon, for example, argued that the ‘best way to ensure successful European companies is by allowing them access to the world’s most advanced technology’ (Peterson, 2020). And for Palantir, ‘digital sovereignty means that public and private institutions should be able to use today’s cutting-edge technology to process data to remain competitive tomorrow – all while retaining full control of their data and decisions’ (Palantir, 2022). In short, tech companies used solutionist arguments to depict themselves as the solution to European concerns around data sovereignty and the competitiveness of European firms, and to depict their interests as compatible with those of European officials (and companies), trying to counter or assuage Europe’s new-found ‘regulatory mercantilism’ (Farrand & Carrapico, 2022).

The recognition of epistemic authority

We have thus again found significant verbal references to solutionist arguments, a necessary condition for our ideational explanation (Jacobs, 2015, p. 54). But what is the evidence that private companies’ use of ideational business power has affected public decisions around Gaia-X?

First, the fact that foreign tech companies have come to play a major role in a project that was meant to address their economic dominance (BMWi, 2019, p. 6) seems at least counter-intuitive for alternative, materialist explanations. As noted, the German Economic Ministry quickly changed its initial plans for a European ‘AI-Airbus’ and more or less embraced the arguments put forward by foreign tech companies. It not only stated that it was necessary to include US firms to realise the global market potential of the project (BMWi, 2020, p. 6). It even emphasised that the mere interest of hyperscalers was proof of the project’s attractiveness (BMWi, 2020, p. 6). As a German official put it, ‘we, perhaps naively, thought we could develop our own cloud’ (Interview 9).

The fact that publicly communicated goals were walked back and even reversed in such short order shows that the public actors behind Gaia-X changed their beliefs about what would be possible without involving non-European companies. While (the realisation of) objective constraints certainly played a role in this discursive about-face, it is highly plausible that private companies’ use of ideational business power did so, too. For one, the legal and technical obstacles that stand in the way of the initial ambition for Gaia-X already existed when the project was announced. It seems unlikely that it was only fundamentally new ‘objective’ information that created scepticism as to the ability of public ‘bureaucrats’ to run a cloud service (Interview 9), concerns about the major costs of the development of genuinely European capacity (Interview 9), or potential scalability problems resulting from the exclusion of major market participants (Interview 7).

However, the timeline of belief change is consistent with the argument that private companies have ideationally co-opted the neo-mercantilist and sovereigntist ambitions that initially motivated Gaia-X. After all, it was precisely when or after private companies began to publicly and privately criticise the project that its initial ambition was dialled down. Moreover, public actors, as we have seen, often echoed the statements of private companies, further indicating ideational influence. Lastly, the German Economic Ministry continues to see a role for industrial policy in nurturing European cloud providers (BMWi, 2020, p. 6), which was key to the initial ambition behind Gaia-X. This is evidence that public actors have not changed their mind about the role of the state in cloud infrastructure in general, but did so in response to successful lobbying in the particular case of Gaia-X. Likewise, the European Alliance for Industrial Data, Edge and Cloud promises to be ‘much more stringent’ with regard to the involvement of non-European companies, including ‘with respect to compliance and risks of unlawful transfer of data’ (Interview 7, Interview 8). Such discursive divergence across cases in which the same or similar objective constraints should apply also speaks in favour of our ideational explanation, and is not easily accounted for by alternative, materialist explanations.

Second, one indication that private companies successfully shaped public actors’ perceptions of how much their goals are aligned is the limited degree of control exercised over Gaia-X. Several interviewees highlighted the limited ambitions of public actors to steer the project. One even stressed that Gaia-X ‘was not taken out of our hands; rather, we actively gave it out of our hands’ (Interview 9). With private actors in charge, public actors ‘somehow disengaged’ (Interview 7). As a result, Gaia-X has become firmly entrenched as a project of the industry for the industry (Schmidt-Holtmann, cited in Voß & Punz, 2021) – while benefiting from public funding. This is noteworthy as it allows private companies to shape the technical operationalisation of European values through standards. Gaia-X’s plans to create a certification label to confirm cloud services’ adherence to European values (Goujard & Cerulus, 2021) might even serve as a ‘European stamp of approval’ (Interview 2), while security concerns around data access by foreign agencies are pushed to the side. Palantir, for example, has already called Gaia-X a potential ‘fast track to trust’ (Palantir, 2021). Thus, despite several interviewees pointing to competing ideas about Gaia-X’s character among public and private actors (Interviews 1, 6, 9), private actors ultimately received much leeway, suggesting they were rather successful in depicting public and private goals as compatible.

Third, Gaia-X has received much criticism, with multiple actors stating that it was ‘badly handled’ (Interview 10), suffered from ‘internal discrepancies’ (Deutsche Telekom AG, 2021), and progressed slowly (Interview 1). In addition, the relative success of the ideas of foreign tech companies certainly created frustration on the part of European companies (Goujard & Cerulus, 2021). The Commission itself, while looking at Gaia-X with ‘great interest’ (Interview 8), also kept a certain distance from the project, criticising that it was opened up without strong conditions (Interview 7). Recently, even the German government quietly pulled its money from five of the 16 projects previously selected for funding. Many of Gaia-X’s problems are a not entirely surprising result of including dominant non-European cloud providers, and some have even criticised these companies for trying ‘to scupper the project, slow it down or discredit it; [to] want it not to succeed, under the pretense of participating’ (quoted in Westendarp & O’Brien, 2022). The fact that, despite these problems, public actors have so far barely criticised the involvement and role of private companies in Gaia-X is another indication that these companies were successful at making themselves seem indispensable to the project’s success.

In sum, we argue that public actors, in various ‘choice situations’, decided to, first, reverse their initial plans of a sovereigntist and European cloud initiative, second, barely exercise control over private companies, and third, limit their public criticism of a potentially failing project. In the absence of contextual changes (e.g., continued dominance of foreign cloud companies in the European market, constant public capacities), one is hard-pressed to explain these developments without taking the effects of ideational business power into account.

Conclusion

As digital technologies transform security governance, creating new challenges but also providing new tools to address them, security officials face a ‘governor’s dilemma’ (Abbott et al., 2020). Much of the technical competence to use digital technologies for security purposes is concentrated among private and often foreign tech companies, but harnessing these companies requires control, i.e., making sure that they act in line with public interests and values. In this contribution to the special issue on the rise of the regulatory security state in Europe, we have suggested that while it is very useful to think about digital security governance in terms of competence-control trade-offs (Abbott et al., 2020), both sides of these trade-offs are ideationally mediated. We have shown how companies use their ‘ideational business power’ (Carstensen & Schmidt, 2016; Selling, 2021) to frame security problems in ways that serve their interests. In particular, they use the solutionist promise of the technical fix to recast security problems as amenable to technical solutions which (only) they can offer; and they use the solutionist promise of the win-win to depict their private profits as entirely compatible with public values. Despite certain limitations with regard to the inferences we can draw (Jacobs, 2015), we have also traced the effects of ideational business power, demonstrating at least the plausibility of the view that private companies were successful in getting public actors to accept their problem definitions and recognise their epistemic authority.

In concluding, we want to reflect on the implications of our argument and findings for understanding the rise and trajectory of the RSS in Europe. First, while we also observe a ‘growing centrality’ (Martins & Jumbert, 2022, p. 1442) of expert and specifically technical knowledge in security governance, the epistemic authority of technology firms is not simply a function of their objective level of technical competence. Rather, it is influenced by ideational power that affects both beliefs about competence and the need for control. Second, while we see the ‘growing presence of hybrid rule’ (Martins & Jumbert, 2022, p. 1442) in EU digital security governance, this move from the positive to the regulatory security state is conditional on whether public actors believe that private actors are competent and broadly share their goals. If this is no longer the case due to performance or trust issues, some – although not all – security officials might resort to more ‘positive’ or ‘mercantilist’ security approaches (Carrapico & Farrand, 2021; Farrand & Carrapico, 2022). Given its dependence ‘on artificial intelligence software developed by non-European businesses’, for example, the Commission wants Europol to be ‘able to help developing new technologies that match law enforcement needs’ (27 January 2021). This shows, as others in this special issue have also highlighted, that regulatory and positive security approaches can co-exist in Europe (Dunn Cavelty & Smeets, 2023; Mügge, 2023). However, as we have seen, even in an increasingly ‘hostile’ regulatory environment (Farrand & Carrapico, 2022), foreign private companies can still successfully exert their ideational business power and play a central role in public security provision.

Finally, while ideational business power has its limits in view of performance and trust issues, it at least helps private actors get a foot in the door. This can – but does not have to – set in motion ‘material’ and ‘ideational’ ‘feedback loops’ (Kruck, 2016, p. 759) or a ‘co-constitutive loop of reinforced legitimacy and power’ (Bode & Huelss, 2023, p. 1247). On the one hand, once public security providers delegate key tasks to private firms, they may become dependent on the continued provision of these services. This is because rebuilding capacity ‘becomes more difficult over time as path dependency and lock-in effects increase the economic and political transaction costs implied in such a shift’ (Busemeyer & Thelen, 2020, p. 457). This gives business actors ‘institutional power’ (Busemeyer & Thelen, 2020) over public actors. On the other hand, once private solutions are adopted, they may mobilise defenders among security officials themselves to the extent that they are viewed as improvements and become part of officials’ daily routines (Brayne, 2021). Moreover, once techno-solutionist ideas are recognised, their proponents come to increasingly ‘co-define’ security problems and solutions and thus shape how public actors think about digital security governance (Bellanova & de Goede, 2022). For example, the Commission’s proposal for a new Europol mandate has fully accepted and doubled down on the necessity and importance of a big-data approach to law enforcement, which might provide new opportunities for private companies in the future.

Thus, as the RSS rises in digital security governance, it becomes increasingly important to understand both its ideational underpinnings and its institutional consequences. While our contribution focuses on two under-researched cases, we believe our argument travels to other areas of (digital) security governance, such as cybersecurity (Carrapico & Farrand, 2021; Dunn Cavelty & Smeets, 2023) or artificial intelligence (Bode & Huelss, 2023; Mügge, 2023), as well as to other cases of public-private security partnerships, such as the contested cooperation between European intelligence services and the Israeli digital security company NSO Group, the involvement of Chinese tech giant Huawei in Europe’s 5G rollout, or the growing use of artificial intelligence software in border surveillance, as evidenced by Frontex’s renewed contract with the Israeli company Windward. Studying such cases, however, also comes with inherent limitations imposed by the secrecy and opacity of (digital) security governance. These limitations are not just methodological. As de Goede et al. (2019) remind us, they are also a mode of power: allowing law enforcement to deny or ignore information requests, or to respond only sporadically or curtly – not just to researchers but also to members of the European Parliament (Interview 4). As private meaning-making processes further entrench the perceived necessity and normative compatibility of technological solutions in public security governance, it becomes all the more important to understand the underlying decisions as fully as possible.


Acknowledgements

We would like to thank the participants of the Special Issue Workshop in Munich in November 2021 and the participants of the 2022 ECPR-SGEU panel on the regulatory security state for their helpful comments. We are particularly indebted to the editors of the special issue – Andreas Kruck and Moritz Weiss – for their exceptional support throughout. We also want to thank Ludek Stavinoha for important early pointers, Philipp Genschel for his very astute comments on our paper, and three anonymous reviewers for challenging but constructive comments. We are also very grateful to our interviewees who kindly shared their insights. Lastly, we want to thank the Fritz Thyssen Foundation for generous funding.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Correction Statement

This article has been corrected with minor changes. These changes do not impact the academic content of the article.

Additional information

Notes on contributors

Anke Sophia Obendiek

Anke S. Obendiek is a postdoctoral researcher at the Centre for European Integration Research, Department of Political Science, University of Vienna.

Timo Seidl

Timo Seidl is a postdoctoral researcher at the Centre for European Integration Research, Department of Political Science, University of Vienna.

Notes

[1] For example, through the Clarifying Lawful Overseas Use of Data (CLOUD) Act, US law enforcement authorities can force US companies to provide access to data stored in the European Union (EU), even though this potentially violates European data protection rules.

[2] Tech companies have strong incentives to invest in and exert ideational business power. Not only is the public security sector a lucrative market in the short run; companies might also increasingly become (or be seen as) indispensable co-providers of public security, boosting their economic prospects in the long run (Busemeyer & Thelen, 2020; Kruck, 2016).

[3] Palantir also effectively reinforces its ideational influence through the strategic use of testimonials. As one interviewee from a competitor noted, Palantir systematically uses existing customers, such as public bodies that have already used Palantir products, as ‘influencers’ (Interview 1). For example, internal emails show that the US Centers for Disease Control and Prevention proactively contacted its European counterpart to advertise Palantir’s services, highlighting their important role in ‘critical data integration, analytics, & decision support’ (23 March 2018).
