
Colonising the narrative space: unliveable lives, unseeable struggles and the necropolitical governance of digital populations

Pages 2398-2418 | Received 17 Mar 2023, Accepted 01 Jun 2023, Published online: 05 Jul 2023

ABSTRACT

Social media platforms play a critical civic role during times of conflict, war, and crises as spaces for people to document and share content that publicises instances of human rights violations, graphic violence, and material destruction. Such content not only functions as primary material in the narration of social and political realities, but also operates as an intense site of control when platforms, state, and non-state actors seek to censor it for various political and opaque reasons. This article argues that the gradually increasing intertwinement of corporate-government power, and the asymmetrical application of power to govern our lives in platform spaces, constitutes the necropolitical governance of digital populations. I delineate how the contours of platform necropolitics manifest through asymmetrical content moderation processes, platform policies, and alternative enforcement systems, and describe its operational registers: acts of commission (overenforcement), omission (underenforcement), and exceptionalism (extraordinary exceptions). The article’s theoretical resourcefulness and analytical significance are demonstrated through three qualitative case studies: the 2021 Israeli–Palestinian conflict, ‘rest of the world’ countries, and the 2022-ongoing Russian–Ukrainian War. The article problematises conventional understandings of necropolitics while providing a nuanced conceptualisation of the narrative and curatorial power of corporate-government assemblages of control for digital subjects – especially those most at-risk.

Introduction

In times of conflict, war, and other crises, social media platforms (‘platforms’ hereafter) perform a critical civic role as spaces for eyewitness testimony, news and information, humanitarian efforts, collective action, and legal accountability. Yet, platforms can also operate as battlegrounds as people struggle against content moderation tools and governance structures to give visibility to content that would otherwise remain largely obscured from public view: content that is graphic, violent, or contentious but that nonetheless constitutes a legitimate form of expression. Such content is primary material in the narration of social and political reality, as well as an intense site of control and censorshipFootnote1 (Andén-Papadopoulos, Citation2020). Prosecutors, researchers, journalists, and activists alike increasingly depend on content collected from platforms as evidence and archive of human rights abuses and war crimes (Banchik, Citation2021). The censorship of such content not only jeopardises the potential for prosecution and retribution, but also shapes the (in)visibility of struggles, historical memory, and how political futures are determined.

Platforms are ill-equipped to make subjective and important socio-political decisions about moderating problematic content, decisions that must weigh situational context, cultural nuance, and the public’s right to know against possible harms (Helberger et al., Citation2018; York & Zuckerman, Citation2019), especially in countries of the Global South (Article 19, Citation2022). When moderating content at scale (Gillespie, Citation2020; Gorwa et al., Citation2020), automated processes are unable to interpret what constitutes violence, or who is perpetrating it. Deploying automated tools to detect complex problematic content is a governance strategy that presents numerous challenges (Roberts, Citation2019) and ‘frequently results in dangerous errors that often harm the most vulnerable among us’ (Suzor, Citation2019, p. 100). Yet, platforms are escalating the deployment of automated tools as a form of technological solutionism to augment human moderation and detect problematic content (see Clegg, Citation2022a). Such enforcement measures can not only lead to wrongful acts of overenforcement, but also deny adequate, democratic deliberation and contextual decision-making in the review of content (Banchik, Citation2021).

Platforms govern through their interior politics and are governed through the exterior politics of governments and non-state actors (Gorwa, Citation2019b; Suzor, Citation2019). In response to calls to eliminate terrorist, violent, and harmful content, platforms face increasing pressure from a range of actors to moderate information and access to social spaces in often conflicting ways. This has led to a series of individual policy changes, technological solutionisms, as well as broader industry governance initiatives (e.g., the 2017 Global Internet Forum to Counter Terrorism, Meta’s Oversight Board) and formal and informal regulatory agreements (e.g., the 2019 Christchurch Call) (see Gorwa, Citation2019b; Klonick, Citation2018; Suzor, Citation2019). Such measures are habitually articulated as responses to promote security and safety online; yet, state and political actors often mobilise the spectre of terror and violence to exert more control, with deleterious effects for free expression (Khan, Citation2022; York, Citation2021).

The interior politics of platforms are increasingly difficult to separate from exterior geopolitical influences (Goldsmith, Citation2018a; Gorwa, Citation2019a). Decisions made by and on platforms are enacted in distinct ways that play out to create different effects – and can have life and death consequences – across different geopolitical regions. What is at stake in this manifestation and enactment of power, and how are the implications best theorised and analysed? This question animates the article’s theoretical contribution and comparative analysis.

Content moderation processes and governance policies are well recognised in scholarly literature as opaque and controversial operations of platform power. Yet little research has explored how the interior politics of territorially bounded platform corporates correlates with the exterior politics of nation states and territorial governmental coercion in the moderation of content in the context of conflict, war, and civic contestation. I engage with these political forces to argue that the gradually increasing intertwinement of corporate-government power, and the asymmetricalFootnote2 application of power to govern lives in platform spaces, constitutes the necropolitical governance of digital populations. Or, as I have termed it elsewhere, this assemblage of power constitutes platform necropolitics (Lewis, Citation2023, in press), an emergent and troubling development of ‘digitally mediated violence: when platforms exercise the power to effectively let digital subjects live or to kill contentious content and voices of dissent’.

Bringing into conversation scholarship from digital media and communication, critical platform studies, internet governance, political theory, and decolonial critique, the article provides a nuanced understanding of the implications of content moderation and governance mechanisms for digital populations – especially those most at-risk. I proceed by first introducing the approach and methods of the study. Secondly, I trace the conceptual contours of platform necropolitics. Thirdly, I describe the registers through which platform necropolitics manifests: acts of commission, omission, and exceptionalism. I then demonstrate the operative dynamics of these registers through three qualitative case studies to establish the plausibility of the argument and analysis. These case studies focus on acts of (1) commission through governance overenforcement during the May 2021 Israeli-Palestinian conflict, (2) omission through underenforcement in ‘rest of the world’ countries (e.g., Syria, Kenya, and India), and (3) exceptionalism, where normative platform moderation rules are suspended in service of western exception and extraordinary measures during the 2022-ongoing Russian–Ukrainian War. I conclude by drawing out the implications of the analysis and of necropolitical governmentalities for digital futures, democracy, and human rights.

Approach and methods

The premise of this article is not to argue that all forms of expression on platforms should be unmoderated. Nor is it to offer a comprehensive account of the implications of platform governance and moderation, a legal critique, or a manifesto for how free expression should be moderated on platforms. These aspects are beyond the scope of this article, and other scholars have already provided this important expertise. This article argues that the opaque and asymmetrical enactment of platform governance and content moderation cannot be understood solely as a sociotechnical issue. It must also be understood as a geopolitical issue, with critical comprehension of how contemporary corporate-government configurations of necropolitical power manifest, for whom, and to what ends. This is the prism through which the argument and analysis proceed.

Case study research is conducted on Meta’s on-platform actions – across Facebook, Instagram, and WhatsApp – in three differing contexts and diverse geographical and political environments, and traces how platform power is enacted through governance processes, policies, and moderation practices. While the theoretical component of the article refers to corporates and platforms in the broad sense, the qualitative analysis focuses its attention on actions taken by Meta across its platforms. The space of one article does not afford scope for in-depth theoretical contribution, a multiple-case study design, and multi-platform comparative analysis all at once. Consequently, a comparative analysis of Meta’s actions was selected as the focal point because its platforms are dominant across all three contexts, its power as a ‘digital giant’ (Gorwa, Citation2019b) is central in social and political debates about platform governance, and its impacts demonstrate the most pressing threats to human rights globally (Kaltheuner, Citation2022). The study was motivated by significant examples of asymmetrical content moderation, as demonstrated by acts of overenforcement, underenforcement, and exceptionalism.

The study draws on qualitative ‘data thickening’ methods (Latzko-Toth et al., Citation2017) and combines multiple methodological strands, layers of data collection, and the interpretation of social media trace data. To situate everyday and particular experiences within the context of Meta’s practices – and the broader networks of power within which they are embedded – I adopt a discourse analytic lens combining close textual analysis, digital ethnography, and case study approaches. Platforms’ routinised logics of secrecy create significant impediments to systematic and qualitative inquiry about the practical ramifications of content moderation policies and practices (Díaz & Hecht, Citation2021; Gillespie, Citation2022). To navigate the limitations of privileged research access, I verify and present details using publicly available evidence. To contextualise platform governance, a critical review of Meta’s corporate communications, policy documents, transparency reports, and public statements was conducted between 2021 and 2023. A supplemental analysis of relevant reports from Meta’s Oversight Board, independent audits, leaked internal documents, civil society and legal reports, coverage in the technology press, and trade publications was also undertaken. To contextualise the implications of content moderation, the analysis relies on primary findings that form part of a larger study of content moderation examining the Israeli-Palestinian conflict on Instagram between 7 May and 30 December 2021, comprising 85 public pro-Palestinian accounts manually collected through a snowballing technique (Lewis, Citation2023, in press). This is supplemented with secondary user reports on Meta’s platforms gathered through citizen science methods by civil society researchers and investigative journalists.

Necropolitical governance on platforms

Mbembe’s (Citation2019) conceptualisation of necropolitics and necropower directs our attention to the contemporary processes and asymmetrical relations through which forms of political and social power subjugate ‘life to the power of death’ (p. 92). Necropolitics is understood as the ultimate expression of sovereignty that manifests as the power and capacity ‘to dictate who may live and who must die’ (p. 11). Death and making die are the operations of power and modalities of control that structure living. Mbembe argues that the power and ubiquity of digital technologies, expanding computational infrastructures, and automated processes reconfigure people as digital subjects and, henceforth, enact power over the living as the absolute form of sovereign power and the automation of society – a capability that engenders asymmetrical impacts for the majority world.

I extend the meaning of necropolitics and necropower to foreground my conceptualisation of platform necropolitics (Lewis, Citation2023, in press), reformulating it as a contemporary corporate-government assemblage of power that manifests as the ultimate expression of sovereignty through which powers of violence and exception are exercised in platform spaces for the necropolitical management of digital populations and their discursive articulations. It is a phenomenon that manifests in various situations, yet most visibly when private corporations work in coordination with governments and oppressive political actors to determine the means of access to platform spaces and to expedite the censorship of content deemed problematic, particularly content that constitutes a legitimate form of political expression and does not violate platform policies or community guidelines. This has specific consequences for freedom of expression and human rights as it governs the conditions for whose voice and what content are (or are not) given the right to life online. Through this lens, sovereignty means the capacity to define whose life matters, whose struggles are seen, who are recognised as rights-bearing citizens with the ability to speak, and who are relegated as undesirable others and denied the permission to narrate.

Themes of violence and exception acquire reenergised force in platform ecosystems, where sovereignty consists in the power to control the narrative and govern flows of ‘dangerous’ information, and where the actor ‘making that determination need not be a state sovereign at all’ (Cohen, Citation2019, p. 109). If bodies are the indispensable political medium and the digital extends bodies in social spaces (Kraidy, Citation2016), ‘dangerous’ bodies represent an existential threat, rendering them and their discursive articulations sites for necropolitical control in platform spaces. Dynamics of competition and collusion manifest between corporate and government actors as they assert authority to designate which bodies and what forms of speech define the boundaries of the political and threaten the social order: threats in response to which legal measures, regulatory policies, and moderation practices may be modified or suspended in favour of self-serving extraordinary measures.

Sovereignty is not rehearsed here as a traditional account of power based on state privilege or located exclusively within the boundaries of the nation-state. Berlant’s (Citation2007) concept of sovereignty is operationalised as the practical power to govern the mediating conditions and repertoires of violence that control the forms and futures of social realities and political possibilities in platform spaces. This articulates the modern functions of sovereignty, its spectrum of tools, and the various figures of power that come into being, collude, and compete for ultimate governmentality authority. This focus more readily accounts for the ways in which the boundaries of life-exhausting and death-making conditions are determined through political economies of governmentality and indifference. These forms of ‘practical sovereignty’ (Berlant, Citation2007, p. 755) manifest through conditions of boundary-making and governmentality that are habitually enacted in spaces of everyday ordinariness and give rise to motives and temporalities of ‘slow death’ (Berlant, Citation2007, p. 759). Slow death renders real the always already precariousness of life, its vulnerability to disposability and destruction as positioned under regimes of exhaustive capital.

Slow death manifests not in time-based violent events like military encounters; it becomes visible in temporal environments where the qualities and contours ‘in time and space are often identified with the presentness of ordinariness itself’ (Berlant, Citation2007, p. 759). That is, where everyday spaces of living, interacting, reproducing the social, and redefining the political are brought into proximity. Berlant (Citation2007) distinguishes between environment and event: environment describes space temporally, as a dialectical process of mediating spatial practices in patterns of undramatic observability, while event is a genre rendered into being by the intensities and nature of impact. I apply this analytical lens to consider how letting-live and making-die are brought into proximity yet remain differentially distributed modalities of being. These operations manifest in the environment of platforms as everyday social worlds that are modulated in real time by algorithmically managed checkpoints that govern the speech and action of billions of users. They are enacted opaquely, as exceptional yet everyday conduits that work to regulate death-logics in ways that evade spectacular forms of public observability (Rieder & Hofmann, Citation2020).

Platforms function as environmental infrastructures that mediate the conditions of violence by rendering some lives unliveable through temporal practices and governance intensities that enact forms of borderisation with projectilic affect (Kraidy, Citation2017). Borderisation is the process by which sovereign powers modulate particular spaces into impassable zones for certain populations of people (Mbembe, Citation2019), thereby creating differential conditions of being and discriminatory categories of people – separating forms of desirable and undesirable life. Platforms enact borderisation through escalating forms of data-driven logics, surveillance, and policing of digital subjects under the spectre of security and safety, and via exceptional powers to eliminate perceived threats and manage digital populations based on political and sovereign objectives. These logics of governmentality take digital subjects into their fold, not by inflicting corporeal bodily violence but by inflicting normative modalities and techniques of everyday violence that orient life toward death by rendering it unseeable, unspeakable, disposable, and displaceable. Such logics are not disconnected from the flesh and blood of the body, as online violence and transgressions often spill over into offline real-world violence and death. These are extraordinary operations of necropolitical governance that subjugate vast digital populations to unprecedented forms of social (in)existence and that asymmetrically confer upon particular categories of people the ‘status of the living dead’ (Mbembe, Citation2019, p. 92), while creating conditions of social belonging and reproduction for others.

Necropower differentially saturates some lives as grievable (Butler, Citation2009): exemplary (normatively white) bodies at the locus of crisis, which privileges exceptional measures to enable lives that must be lived. This stands in contrast to surplus (normatively racialised) bodies that are subjugated and rendered into a state of crisis ordinariness – ungrievable lives positioned outside the locus of humanity and subjected to forms of violence deployed in service of security and safety. Consequently, the preservation of particular digital subjects and the reproduction of desirable forms of social existence in platform spaces depend on the creation of ‘death-worlds’ (Mbembe, Citation2019, p. 92) and forms of ‘operational enclosure’ (Andrejevic & Volcic, Citation2020) that orient undesirable others toward escalating forms of control and containment in ways that inscribe them into temporalities and spatialities of disconnection and disenfranchisement.

Taken together, this reorientates a critical theorising of platforms as technologies of necropolitical power and traces the contours of the multiple modalities and operations of necropolitics in platform spaces. It problematises conventional understandings of necropolitics as manifesting largely in states and spaces of territorial exception – such as the plantation, the colony, refugee camps, and the Occupied Palestinian Territories – by suggesting unarticulated dimensions of necropower and examining contemporary spaces in which it is operative, encompassing a broader range of tools and techniques of violence that extend beyond its conventional associations with corporeal death. Platforms, thus, are expanded spaces of exceptionality that operate as lawless intermediaries (Suzor, Citation2019; Zuboff, Citation2019a) when they function to (re)produce spaces of insecurity. From this perspective, and following Bargu (Citation2019, pp. 6–7), I argue necropolitics readily manifests in platforms – where competing claims for and articulations of sovereignty play out between people, intermediaries, and state and non-state actors to different ends – as always already spaces that can be converted ‘into sites of exceptionality’ and where different tools and tactics are selectively deployed ‘according to the meaning and valorisation of its targets’.

Interior and exterior platform necropolitical governmentalities

Interior politics refers to the various ways platforms – their ideological conventions, design and affordances, social logics, economic imperatives, technological infrastructure, and rulemaking impetuses – condition a particular order and possibility of things (Gillespie, Citation2010, Citation2018). That is, platforms organise our social, cultural, and economic life in distinct ways (Dijck et al., Citation2018; Helberger et al., Citation2018) that determine access to the material means of existence and expression for digital populations (York & Zuckerman, Citation2019; Zuboff, Citation2019b). Platforms not only allocate particular public values and civic discourse, but they also actively shape and enforce them through cultural norms and free expression values (Klonick, Citation2018; York, Citation2021). As such, they become arbiters of human rights to the point where the very ‘right to have rights’ (Arendt, Citation2004, pp. 376–377) and ‘the right to a future tense as a condition of a fully human life’ (Zuboff, Citation2019a, p. 332) is under siege for some.

Platforms’ interior politics are enacted in various and significant ways. For example, when moderation policies and practices are enacted asymmetrically or inconsistently (see Tynan, Citation2022), they can produce harmful outcomes and lead to explicit forms of physical and structural violence (Article 19, Citation2022; Díaz & Hecht, Citation2021). Content moderation can result in forms of censorship and infringement of human rights via practices that encompass mechanisms including user reports, human reviewers, and automated processes. This can lead to the deletion or reduced visibility of content, suspension or deplatforming of users and accounts, and the limiting of affordances including liking, commenting, following, and sharing. The political nature of platform decision-making extends beyond content moderation to encompass broader implications for accessibility and people’s livelihoods. For example, censorship can disguise itself in the form of geoblocking, which not only places boundaries on informational access but can limit access to crowdsourcing and financial services in ways that engender disparate impacts (McDonald, Citation2022). Platforms can enact geoblocking in collaboration with censorious country objectives, or independently restrict access to information and services in ways that avoid expressed responsibility and obscure publicly visible forms of access denial (McDonald, Citation2022).

Exterior politics articulates the ways state and political actors are increasingly asserting authority over platforms through formal and informal governance structures (Gorwa, Citation2019b), differing legal terrains and policy approaches (Klonick, Citation2018; Suzor, Citation2019), backdoor agreements to control global communications (Greenwald, Citation2023; Klippenstein & Fang, Citation2022), and efforts to affect sovereign regimes of control (Goldsmith, Citation2018b). It is a phenomenon that was amplified by the US domestic and global ‘war on terror’ that emerged in the aftermath of 11 September 2001 – a significant period that witnessed the embedding of technological imperialism into US policy (Vaidhyanathan, Citation2018) – and has enlarged against a backdrop of global governance initiatives applied broadly to combat violence and extremism online (York, Citation2021) and to take aim at ‘dangerous’ forms of information and activity (Cohen, Citation2019). These measures have given rise to powerful forms of legal enforcement and security apparatuses that can be interpreted as serving US foreign policy interests and national security imperatives (Cohen, Citation2019; Goldsmith, Citation2018a).

Governments and political actors intervene in and control the flow of information on platforms through various mechanisms, including formal takedown requests (Bischoff, Citation2021), internet referral units (Eghbariah & Metwally, Citation2021), and the gaming of platforms’ flagging systems by posing as users to trigger the content removal process (Banchik, Citation2021; York & Zuckerman, Citation2019). Governments also subject platforms to pressure by imposing liability and regulatory compliance, leveraging bargaining powers like taxation and economic costs, or threatening to block platform access and services at the country level (Eghbariah & Metwally, Citation2021). Platforms are central to ‘many different struggles to control the internet’ (Suzor, Citation2019, p. 93), and have proven themselves ‘far more adept at serving government demands than those of their own users’ (York, Citation2021, p. 35).

Dynamic patterns are emerging through the complex interior-exterior interplay and intertwining of state and private economic power in ways that are reshaping entitlements to platform access, information, and expressed rights and freedoms, as well as practical enforcement realities across a variety of contexts. On the one hand, this leads to forms of collaboration and new forces of social control; on the other, it produces competing economic, social, and political interests (Törnberg, Citation2023) in which claims over sovereignty and governmentality come into conflict as platforms and governments compete to shape market interests and geopolitical tools, and to exert influence in particular disputes according to desired political goals.

Yet, as Cohen (Citation2019, p. 137) argues, platforms ‘enjoy increasing autonomy both to define the terms of their own compliance with mandates promulgated by state actors and to create and refine their own operational arrangements’. This constitutes platforms’ enactment of practical sovereignty as they exploit mechanisms of self-regulation over content to protect commercial interests even when immune from liability under legal regimes. Germany’s 2017 Network Enforcement Act (NetzDG)Footnote3 is a notable example: it requires platforms to make their own determinations, according to German law, on the legality of content and to remove ‘illegal content’ within 24 hours or face fines of up to €50 million. Critics argue such measures are vague and overreach by turning ‘private companies into overzealous censors to avoid steep fines, leaving users with no judicial oversight or right to appeal’ (Human Rights Watch, Citation2018, para. 2). This highlights current power struggles between corporate and government actors, and the reality that platforms exert almost unfettered, sovereign power at global scale (Vaidhyanathan, Citation2018). Such forms of practical sovereignty increasingly grant platforms the power to determine the bounds of their logics of operational and regulatory compliance, and exceptions to ordinary practices, principles of regulatory process, and accountability on a case-by-case basis. The current platform environment is characterised by dynamic power struggles among the dominant interests – where platforms have positioned themselves ‘as both essential partners and competing sovereigns in the quest to instantiate states of exception’ (Cohen, Citation2019, p. 122).

Registers of platform necropolitics

Platform necropolitics manifests through the registers of commission, omission, and exceptionalism.

Commission refers to a platform’s intentional or unintentional acts or functions of governance and content moderation that lead to instances of overenforcement against its stated policies, processes, and human rights obligations, and that produce discriminatory and harmful outcomes for people, including acts that constitute violations of freedom of expression.

Omission denotes a platform’s failure to invest in equitable governance structures and adequate content moderation resources, and its failure to enforce stated policies and processes in line with human rights obligations, whether intentionally or unintentionally, resulting in outcomes where rules are underenforced and people are harmed.

Exceptionalism delineates a platform’s exceptional acts or functions of governance and content moderation that are enacted to claim sovereignty or uniqueness in a singular context which exempts particular digital subjects/populations/contexts from the normative rules that apply to all other users/contexts, and that advances a version of policies and processes to serve particular interests, values, and actors.

Below, I demonstrate the operative logics of these registers through empirical engagement with Meta’s content moderation policies and their implementation in different contexts.

Commission: 2021 Israeli-Palestinian conflict

Hostilities in the ongoing Israeli-Palestinian conflict intensified during May 2021 following the violent repression of protests against the forced eviction of Palestinian refugee families living in the East Jerusalem neighbourhood of Sheikh Jarrah, and increased when Israeli police brutally stormed and blockaded the Al-Aqsa Mosque. Israeli forces and Jewish Israeli settlers also waged brutal attacks against Palestinians across the occupied territories of Jerusalem, the West Bank, and Gaza. In retaliation, Hamas and the Palestinian Islamic Jihad armed group launched attacks against Israel. The violence manifested across Meta’s platforms as people shared imagery of human rights violations, material damage, and armed conflict with global audiences. Yet, content and accounts were routinely removed and restricted from platforms through opaque and controversial mechanisms: visibility of and access to content were reduced and restricted, hashtags hidden, live video streams blocked, participatory features disabled, and archived content deleted. This affected people’s ability to share lived experiences, exercise their right to freedom of expression, and participate in political discussion of public importance.

7amleh (Citation2021) – the Arab Center for Social Media Advancement – documented 500 digital rights violations against pro-Palestinian content on platforms, with 50% reported on Instagram and 35% on Facebook. 7amleh (Citation2021) also received 40 reports of hate speech and incitement to violence against Palestinians and Arabs online, including reports of Israeli extremist groups mobilising lynch mobs on WhatsApp – incitement that led to the killing of two Palestinians during this period. Digital rights organisation Access Now (Citation2021) received hundreds of reports that Meta’s platforms were suppressing pro-Palestinian content, and the Sada Social Center (Citation2021) for the Defense of Palestinian Digital Rights recorded 770 violations, with 45% reported on Facebook and 13% on Instagram. On Instagram, I documented 324 user reports of overenforcement, with the majority of content removed or restricted for alleged hate speech/symbols, graphic violence, and in accordance with Meta’s Dangerous Individuals and Organisations Policy (DIO) (see Lewis, Citation2023, in press; see Meta, Citation2023).

In one instance, Instagram labelled the Al-Aqsa Mosque, one of Islam’s holiest sites in Jerusalem, as a terrorist organisation resulting in the removal and blocking of content tagged with #AlAqsa and its Arabic counterparts. Meta attributed the censorship to technical and human moderation issues (Mac, Citation2021).

In another, a Facebook post featuring the Izz al-Din al-Qassam Brigades, the military wing of the Palestinian group Hamas, was removed for violating Meta’s DIO standards. It was originally shared by the verified Al Jazeera Arabic Page and reshared by a user in Egypt with the caption ‘Ooh’ in Arabic added. Meta’s Oversight Board reviewed the appealed case and found the content did not violate the company’s DIO rules as it did not contain praise, support, or representation of the al-Qassam Brigades or Hamas. Facebook reversed its original decision but was unable to explain ‘why two human reviewers originally judged the content to violate this policy, noting that moderators are not required to record their reasoning for individual content decisions’ (Oversight Board, Citation2021, para 5). Leaked documents of Meta’s DIO policies reviewed by The Intercept found that ‘al-Qassam does not appear on the DIO list’ (Biddle, Citation2021a, para. 37).

Various governance policies and content moderation practices are used to make determinations about content appropriateness on Meta’s platforms. Perhaps the most opaque and controversial set of rules in relation to the ongoing Israeli-Palestinian conflict concerns Meta’s problematic history of moderation aligned to its hate speech policy, specifically in relation to the word ‘Zionist’, and its DIO and Violence and Incitement policies (BSR, Citation2022). Critics argue these policies have been enacted in ways that further censor legitimate political speech and suppress criticism of the State of Israel amid ongoing Israeli abuses and violence (see Biddle, Citation2021b; York, Citation2021).

The Cyber Unit of Israel’s State Attorney’s Office regularly requests that platforms remove or restrict access to pro-Palestinian content and accounts deemed problematic or in violation of local law. It reported 1,340 content removal submissions from security agencies during May 2021, a 360% increase from 2020 (United States Department of State, Citation2022). Israel’s Defence Minister also met with executives from Meta in May and requested that pro-Palestinian content considered by Israel as inciting be removed (Akkad, Citation2021). The amount and nature of Israeli government requests to censor content during this period were not publicly reported via Meta’s Transparency Center (see Meta, Citation2021). When asked by its Oversight Board (Citation2021, para 8) if it received ‘official and unofficial requests from Israel to remove content’ related to the conflict, Meta declined to provide information. However, historical records demonstrate Meta has complied, controversially, with up to 90% of Cyber Unit content takedown requests (Human Rights Watch, Citation2022, para 5).

Business for Social Responsibility (BSR) conducted an independent review of Meta’s actions during May 2021, finding they had an ‘adverse human rights impact on the rights of Palestinian users to freedom of expression, freedom of assembly, political participation, and non-discrimination, and therefore on the ability of Palestinians to share information and insights about their experiences as they occurred’ (BSR, Citation2022, p. 4). These violations were enacted via acts of ‘overenforcement (erroneously removed content and erroneous account penalties, including false strikes that impacted visibility and engagement) and underenforcement (failure to remove violating content and failure to apply penalties to offending accounts)’ (BSR, Citation2022, p. 4). Findings indicate Arabic content experienced higher overenforcement rates on a per-user basis, while potentially violating Arabic content received significantly higher proactive detection rates than Hebrew content. BSR attributed the findings to Meta’s policies and practices, which incorporate certain legal obligations relating to designated foreign terrorist organisations, and to the existence of an Arabic hostile speech classifier but no Hebrew equivalent, concluding that outcomes resulted in ‘various instances of unintentional bias’ where Meta’s actions led to ‘different human rights impacts on Palestinian and Arabic speaking users’ (BSR, Citation2022, p. 8). Human Rights Watch (HRW) (Citation2022) said that while BSR found evidence of unintentional bias, HRW had appealed to Meta for years regarding the disparate impact of its content moderation on Palestinians. Therefore, while bias began unintentionally, ‘after knowing about the issues for years and not taking appropriate action, the unintentional became intentional’ (Human Rights Watch, Citation2022, para 4).

Meta relies on the US government to define terrorism, and censors content from groups and individuals designated on the State Department’s Foreign Terrorist Organisations or Specially Designated Global Terrorists lists (see Biddle, Citation2021a; Díaz & Hecht, Citation2021; York, Citation2021) – lists dominated by Muslim, Arab, and South Asian groups and individuals. Yet, York (cited in Biddle, Citation2021a, para. 48) argues Meta ‘isn’t just abiding by or replicating US policies, but is going well beyond them’ and, as Biddle (Citation2021a, para. 28) demonstrates, Meta applies an ‘expansive definition of dangerous’ to individuals, groups, and content that disparately impacts Arab regions and Muslim peoples. Leaked documents reveal Meta includes ‘hundreds of groups and individuals not designated or sanctioned by the US government’ (Patel & Dwyer, Citation2021, para 4), including Palestinian aid and human rights organisations (Biddle, Citation2021a). Meta reasons it is legally required to remove such content from its platforms – an argument legal specialists say is false (see Cohen, Citation2019; Díaz & Hecht, Citation2021; Patel & Dwyer, Citation2021; York, Citation2021), and one Meta leverages to censor content at its discretion, possibly to avoid material support liability (Patel & Dwyer, Citation2021). Beyond Israel’s strong political and military ties to the US, Israel is a significant contributor of advertising revenue for Meta (Shtaya, Citation2022). Israel also has a controversial history of designating legitimate Palestinian human rights groups as terrorist organisations and criminalising public discourse about them (Krauss, Citation2021). Yet, it remains unclear whether Meta considers itself legally bound to comply with Israel’s designations and removes disfavoured groups and content, or whether Meta is commercially incentivised to do so. Either way, such moves constitute an assault on civil society and a violation of freedom of expression.

Omission: ‘Rest of the world’ countries

Acts of moderation overenforcement are evident beyond Palestine and across the Arab world. A pronounced example occurred in the context of the Syrian Civil War when Meta used automated mechanisms to tackle so-called extremism, which resulted in the erasure of vital documentation of protest, chemical weapons attacks, and human rights abuses and war crimes (Asher-Schapiro, Citation2017). Such content is often the only form of evidence attesting to the fact that a crime was committed, and acts as a counternarrative to hold false truths and propaganda to account. On the other hand, acts of underenforcement and failed human rights due diligence on Meta’s platforms have led to hate speech, incitements to violence on ethnic and religious grounds, and the killing of people (Haugen, Citation2021; Roberts, Citation2019) in countries like Myanmar (see Amnesty International, Citation2022), India (see Brown, Citation2022), Ethiopia (see Robins-Early, Citation2021), and Palestine (see Business for Social Responsibility, Citation2022), among other countries largely in the Global South.

An investigation by Global Witness (Citation2022) and Foxglove Legal in the lead-up to the 2022 Kenyan election found Meta’s content moderation system failed to detect hate speech ads on Facebook in Swahili and English, the country’s two official languages. The findings followed similar patterns to those found in Myanmar and Ethiopia, yet raised significant concerns about Meta’s automated moderation capabilities in English for the first time. Twenty ads featuring real-life hate speech examples in English and Swahili were submitted to Facebook’s ad library. All of the ads violated Facebook’s policies, qualifying as hate speech and calls to ethnic-based violence, yet were approved with one exception: English-language hate speech ads were initially rejected for violating Facebook’s Grammar and Profanity policy. The researchers made minor changes to the ads, which were then approved. After being alerted to these failings, the company publicly announced it was intensifying efforts ahead of the election to limit hate speech and incitements to violence. The researchers submitted further ads to Facebook containing hate speech in Swahili and English, only to find the ads were once again approved. Nanjala Nyabola, a Kenyan technology and social researcher, said Meta’s content moderation failures are the result of a ‘deliberate choice to maximise labour and profit extraction, because they [Meta] view the societies in the Global South primarily as markets, not as societies’ (Wadekar, Citation2022, para 3).

The Facebook Papers (Citation2021) – a series of leaked Meta documents disclosed by whistleblower Frances Haugen to the Securities and Exchange Commission in October 2021 – revealed Meta advanced its operations in developing and non-English-speaking countries without investing in protections comparable to those in the US, leaving users in those regions vulnerable to hate speech, disinformation, and harm that extended to offline violence (Zakrzewski et al., Citation2021). The Washington Post reviewed internal Meta documents and found a vast disparity between the experience of using Facebook in India and what people in the US typically encounter. A 2020 summary (Zakrzewski et al., Citation2021, para 12) found that the US accounts for less than 10% of daily Facebook users, yet the company’s budget for combatting hate speech and misinformation skewed in favour of US interests, with 84% of its ‘global remit/language coverage’ allocated there, leaving just 16% for ‘rest of world’ countries.

When the historical India-Pakistan conflict intensified in the disputed Kashmir region in 2019, violence also played out on Facebook as incitements to violence against Kashmiris, anti-Muslim hate speech, and propaganda aligned to Indian Prime Minister Narendra Modi became dominant across Meta’s platforms. Despite warnings from civil society groups in 2018 and 2019 that unchecked hate speech could lead to large-scale civic violence in India, and Meta’s assurances that it would invest in robust moderation mechanisms in the country, findings (Zakrzewski et al., Citation2021, para 18-19) reveal that during the 2020 Delhi riots calls to violence remained on Facebook despite being flagged, and graphic images that falsely depicted violence enacted by Muslims were labelled as factchecked and remained on the platform. More than 53 people were killed and hundreds injured during the violence, the majority of whom were Muslim.

In India, the Washington Post found harmful Hindu-nationalist pages and groups associated with the Bharatiya Janata Party enacted anti-Muslim hate speech and incitements to violence on Facebook. Yet, such groups and content were not flagged because of ‘political sensitivities’ and because there were no ‘algorithms that could detect hate speech in Hindi and Bengali’ (Zakrzewski et al., Citation2021, para 41), despite these being the fourth- and seventh-most spoken languages globally, respectively. Meta content policy documents from 2020 conceded the company ‘routinely makes exceptions for powerful actors when enforcing content policy’ (Zakrzewski et al., Citation2021, para 42), citing India as an example.

Meta’s struggles to moderate problematic content in some of the most at-risk regions are part of an ongoing pattern of failures to protect people in vulnerable countries and contexts. The company played a significant role in stoking ethnic conflict during Ethiopia’s 2020–2022 Tigray War (Robins-Early, Citation2021), inciting genocide against Rohingya Muslims in Myanmar (Amnesty International, Citation2022), and fomenting religious sectarianism and violence in Iraq and Yemen (Scott, Citation2021), among many other examples. Collectively, the moderation failings point to deep geographic, linguistic, and cultural inequities; an overreliance on automated tools that are ill-equipped to moderate complex content with nuance and that often result in overenforcement errors; the underenforcement of policies and failure to remove violating content; and a lack of priority given to addressing problematic issues in countries where Meta has fewer market and political incentives.

Exceptionalism: 2022 (ongoing) Russian–Ukrainian war

The ongoing Russia–Ukraine War began in February 2014 when Russia annexed Crimea, and escalated in February 2022 when Russia launched a full-scale military invasion of Ukraine. The invasion was internationally condemned and multiple countries imposed sanctions against Russia. The war has resulted in tens of thousands of deaths and created more than 8 million Ukrainian refugees. The hostilities mark a geopolitical flashpoint, with competition between the US, its western allies, and Russia intensifying, and tens of billions of dollars in military assistance provided by the US to restore Ukraine’s territorial integrity and sovereignty (Masters, Citation2023). The conflict is also playing out online as content from battlefields, bombings, drone footage, eyewitness testimony, political contestation, propaganda campaigns, and solidarity voices compete in a battle over narrative control. Meta, along with other platforms, has become an important stakeholder in the conflict by shaping information warfare through exceptional decision-making powers that extend to policy changes, content moderation, and access to platform spaces. By extension, Meta has become a powerful tool of the US government and military apparatus (Papaevangelou & Smyrnaios, Citation2022) by determining what is and is not permissible as free expression, and for whom, on its platforms based on US foreign policy interests (Biddle, Citation2022a).

Citing internal Meta policy materials, The Intercept (Biddle, Citation2022a) reported Meta enacted a policy exception in late February 2022 allowing praise of the Azov Battalion – a Ukrainian neo-Nazi military unit, listed on the highest tier of Meta’s DIO policy since 2019 for its serious offline harms and violence – for which praise, support, or representation was previously banned under its rules. According to the policy materials, Meta said it would ‘allow praise of the Azov Battalion when explicitly and exclusively praising their role in defending Ukraine or their role as part of the Ukraine’s National Guard’ (Biddle, Citation2022a, para 3), while praise of violence and representation of the group’s uniforms and banners remained banned as hate symbol imagery. This follows similar policy exceptions made by Meta in the context of anti-government protests in Iran in July 2021, when the company allowed people to post content containing the call ‘death to Khamenei’ for a two-week period (Franceschi-Bicchierai, Citation2021), and in Iraq in 2017, when Meta enacted exceptions to allow calls for, or the glorification of, violence against the Islamic State (Oremus, Citation2022).

On 2 March 2022, the Council of the European Union (EU) issued a regulatory amendment prohibiting the distribution of content from Russian state news services Russia Today and Sputnik. Meta then restricted access to Russian state media channel pages and demoted the visibility of content on its platforms across the EU, United Kingdom, and Ukraine. On 10 March, Reuters (Vengattil & Culliford, Citation2022, para 1) reported Meta had ‘temporarily’ amended its hate speech policies to allow ‘Facebook and Instagram users in some countries to call for violence against Russians and Russian soldiers’ in the context of the invasion. The policy changes were applied to Armenia, Azerbaijan, Estonia, Georgia, Hungary, Latvia, Lithuania, Poland, Romania, Russia, Slovakia, and Ukraine.

Meta also allowed some posts that called for ‘death to Russian President Vladimir Putin or Belarusian President Alexander Lukashenko’ (Vengattil & Culliford, Citation2022, para 2). A Meta spokesperson told Reuters it had made exceptional allowances to enable political expression that would normally violate hate speech and violence and incitement rules, such as ‘death to the Russian invaders’ (Vengattil & Culliford, Citation2022, para 3). Reuters, citing internal Meta emails, reported ‘calls for the leaders’ deaths will be allowed unless they contain other targets or have two indicators of credibility, such as the location or method’ (Vengattil & Culliford, Citation2022, para 4). Meta said its ‘spirit-of-the-policy allowance’ was enacted in the specific context of the Russian invasion of Ukraine because the term ‘Russian soldiers’ was being used as a proxy for the Russian military (Vengattil & Culliford, Citation2022, para. 10).

Following Meta’s actions, Russia’s embassy in the US called on Washington to stop Meta’s ‘extremist activities’. In a Twitter message the embassy said, ‘users of Facebook and Instagram did not give the owners of these platforms the right to determine the criteria of truth and pit nations against each other’ (Vengattil & Culliford, Citation2022, para 6). Russia banned access to Facebook and Instagram on 4 and 14 March, respectively.

Meta’s series of content policy revisions resulted in internal confusion, especially among moderators, and with platform users. It also led to grave mistakes across Meta’s platforms, including a group called the Ukrainian Legion running ads (later removed) to recruit foreigners to the Ukrainian Army, a violation of international law (Mac et al., Citation2022).

Meta reversed course. Meta’s president of global affairs, Nick Clegg, issued a policy update to staff on 13 March stating that ‘calls for the death of a head of state’ were banned, that exceptional moderation rules were not to be ‘interpreted as condoning violence against Russians in general’ (Milmo, Citation2022, para 3-4), and that the policy exception ‘allowing threats of violence against the Russian military only applied in Ukraine’ (Milmo, Citation2022, para 8). On Twitter, Clegg (Citation2022b) said Meta’s policies were enacted in ‘extraordinary and unprecedented circumstances’ to protect ‘people’s rights to speech as an expression of self-defence in reaction to a military invasion of their country’. He argued that if Meta applied its ‘standard content policies without any adjustments’ it would be ‘removing content from ordinary Ukrainians expressing their resistance and fury at the invading military forces, which would rightly be viewed as unacceptable’ (Clegg, Citation2022b).

Yet Meta did not articulate the basis on which it had taken the political stand, nor did it clearly define the criteria regarding what were permissible as calls for self-defence from invasion. Meta’s exceptional measures to enable Ukrainians to express their right to national self-defence have not been granted to people experiencing wars, conflict, and crises in other countries and contested regions. Nor has Meta articulated how such measures taken in Ukraine might be applied in other conflict settings, like the Occupied Palestinian Territories or the ongoing Yemeni Civil War, for example. Meta has not equally demonstrated its willingness to support Yemenis’ and Palestinians’ right to freedom of expression and calls for self-defence in resistance to a foreign military force – against the Saudi Arabia-led military intervention in Yemen or Israel’s illegal occupation of Palestinian territory, respectively.

Conclusion

The analysis demonstrates that Meta exerts almost unfettered, sovereign power and has played an active role in radically altering global public discourse and human rights across a diversity of contexts through actions that constitute double standards. Meta’s shaping of not only the social worlds and knowledge bases of global populations, but also the narratives of conflict and contestation, presents a significant threat to democracy and collective deliberation (Vaidhyanathan, Citation2018, p. 247). Meta’s power to determine the boundaries of free expression, and for whom, in differing contexts is both a sign and a warning of the consolidated control it has garnered over the flow of global communications (Biddle, Citation2022b) and its expanding role as a powerful geopolitical actor. Such forms of power and governance impose penalties not only on content, but on people (Ahn et al., Citation2022) whose lives and lived experiences become the locus of control and containment. This brings into being capital-imperial modes of visibility and visuality that facilitate control over resources so as to structure the terms of access and means of production as the right to existence (Mirzoeff, Citation2011) in platform spaces.

Beyond the singular context of Meta, the battle to control the narrative dimensions of conflict, war, and contestation in platform spaces has become as central and intense as the physical acts that typically govern it. Platforms afford people a space to tell their stories to global audiences and build alliances that can pressure international actors to respond with political and material support, while also functioning as strategic tools and optic devices that can directly impact events on the ground and the dynamics of conflict (Patrikarakos, Citation2017; Zeitzoff, Citation2018). Platform necropolitical governmentalities threaten the fundamental right to ‘freedom of opinion and expression, including the right to seek, receive and disseminate diverse sources of information’, a right that becomes especially paramount ‘in times of crises and armed conflict as a precious ‘survival right’ on which people’s lives, health, safety, security, and dignity depend’ (Khan, Citation2022, para 6).

I have argued that platform content moderation is shaped by a variety of stakeholders and extant political forces (Gorwa, Citation2019b), and is enacted within a broader landscape of power, insecurity, surveillance, and censorship (Roberts, Citation2019). While beyond the scope of this article to address, such dynamics cannot be viewed in isolation from historical actions taken by both democratic and tyrannical actors to influence global media and information operations for political causes. Added to this is the role of media institutions in conflict settings in supporting state agendas and shaping social consensus in support of particular interests. Nor can the implications of how closely aligned big tech is with the agenda of the US security state and its allies be overstated – the interconnections between technology and defence are deep, storied, and increasingly fused. Contemporary state informational strategies on platforms are central to the changing dynamics of modern warfare (Banchik, Citation2021; Patrikarakos, Citation2017). Platforms are commonly mobilised as spaces, tools, and discourses of militarist engagement and military operations, including PR, surveillance, counterinsurgency, and as archives of human rights abuses (Kunstman & Stein, Citation2015). Platforms thus not only moderate, but also extend theatres of civil discontent, conflict, war, and military operations by making important decisions about what types of content and which forms of speech are permitted. In doing so, platforms can create spaces of conflict, insecurity, and warring narratives, and reshape geopolitical struggles by intervening in matters of the political and redefining what constitutes the political for particular digital subjects and entire digital populations. This not only problematises the way we understand the political functions, ends, and implications of platforms; it also grants platforms the power to dictate what counts as fact and truth, which shapes the way we understand and bear witness to history (or not), and the capacity for public and political response to it. Such decision-making by platforms is not only profoundly political, but also reveals the ways platforms make value judgments and moral choices about what constitutes newsworthiness, civic interest, and civilised discourse (Klonick, Citation2018). Importantly, it grants platforms the power to determine which lives are worthy of the right to have a right to life online and which are not. It also extends the already existing application of western imperialism through technological tools and speech norms that prioritise western values, economic imperatives, and political discourses (York, Citation2021).

The purpose of this article has been to reformulate Mbembe’s (Citation2019) notion of necropolitics and necropower as a contemporary assemblage of corporate-government power, which I have defined as platform necropolitics. Platform necropolitics manifests through the complex interior-exterior interplay and intertwining of private and state political power, taking shape as the ultimate expression of sovereignty through which powers of violence and exception are exercised in platform spaces for the necropolitical management of digital populations. I have traced its operational registers, understood as acts of commission, omission, and exceptionalism, which manifest in asymmetrical ways in different contexts. The findings reveal platform necropolitics to be more than simply a sociotechnical process; rather, it is a geopolitical process involving the reconfiguration of spatial and political relations of control over access to information and rights to expression of all kinds. The necropolitical governance of digital populations determines how, when, and for whom the boundaries and conditions of letting-live and making-die are rendered into being. The intensities and nature of such logics are enacted in the spatial temporality of platforms in ways that manifest as exceptional yet everyday spectrums of violence that take life into their fold, displace, and orient some lives toward forms of slow death through habitual operations of power that evade responsibility and public observability.

I conclude by drawing one final implication from my theorising and analyses. The algorithmically modulated forms of borderisation I have outlined provide an important lens for critically interrogating the implications of current platform configurations. Yet, increasing forms of operational, or environmental, governance over digital subjects, as well as entire populations, will intensify as the malleability of our platform environment merges with new forms of augmented, virtual, and extended reality technologies, like the so-called metaverse. Borderisation in digital futures will manifest new intensities of operational power and give rise to contested spatial struggles and geopolitical fractures where competing sovereigns collaborate and compete for governmentality over virtual worlds and, thus, over people and political realities. Future research needs to attend to these significant implications.

Acknowledgements

I would like to thank Mark Andrejevic and David Myles, whose feedback on an earlier version of this article greatly improved the quality and depth of the work. I would also like to thank the anonymous reviewers of this article who were generous with their time and provided thoughtful and constructive feedback. Finally, I wish to thank the special issue journal editors, Andrew Iliadis, Eugenia Siapera, and Tanya Lokot, for inviting me to contribute to this important body of work.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Notes on contributors

Kelly Lewis

Dr Kelly Lewis is a Postdoctoral Research Fellow in the Australian Research Council Centre of Excellence for Automated Decision-Making and Society (ADM+S), and the Emerging Technologies Lab at Monash University. Her research focuses on the social, political, cultural, and economic implications of digital media technologies, platforms, and data cultures, as well as new and innovative digital methods and critical approaches for studying them. Kelly's interdisciplinary work has a particular focus on investigating paradigms of power asymmetries, discrimination, violence, and political (in)visibility that manifest through opaque relations, logics, and data flows.

Notes

1 Censorship is mobilised broadly to refer to the removal or suppression of any form of expression or actor considered objectionable, harmful, sensitive, or inconvenient.

2 I mobilise asymmetrical content moderation and governance to refer to the ways policies and practices can be enacted so as to differentially discriminate against users, content types, cultural contexts, and political situations.

3 The Digital Services Act will supersede Germany’s 2017 Network Enforcement Act – in application throughout Europe from 17 February 2024.

References