
The political economy of digital data: introduction to the special issue

Pages 439-446 | Received 17 Jan 2020, Accepted 26 Jan 2020, Published online: 10 Feb 2020

ABSTRACT

In an era where digital data are becoming an increasingly important element in the production of knowledge, wealth, and power, it takes radical solutions to ensure that digital data are not used merely to increase power and profits for the privileged. As the contributions to this Special Issue show, it also takes new regulatory approaches, institutions, and research fields to ensure that the political economy of digital data contributes to the justice and wellbeing of people and societies. Rather than merely analysing the shortcomings of the current situation, we need visions and instruments to build new institutions: institutions in and through which human expertise, experience, and interaction are seen as equally important as high-tech precision; where new norms and policy instruments ensure that the benefits of data use accrue to society at large, and in particular to the marginalised and vulnerable; and where the datafication of the bodies, lives, and practices of people who have no realistic chance to opt out is recognised and condemned for what it is: robotic brutality (Mick Chisnall).

In November 2019, the world woke up to a Nightingale. Rather than the song of a bird, however, it was the news of a massive data leak that had reached people's social media. The second largest healthcare provider in the United States, Ascension, had transferred medical files of millions of patients to the technology giant Google without the consent of these patients (Copeland 2019) – in a transaction dubbed "Project Nightingale". The files contained, among other things, lab reports and full medical records linked with personal identifiers. That the healthcare provider had not even bothered to encrypt the files when handing them over to Google made it a particularly outrageous data breach, but possibly not the worst. Google, in turn, had just come out of another scandal in England, where a hospital trust had given access to over a million patient files to Google's artificial intelligence (AI) outfit "DeepMind" – again without asking patients for their consent, or consulting with patient representatives. DeepMind had promised to develop software to detect kidney disease. Although software that could do so was never successfully developed (Powles 2019), the patient data that the hospital trust had handed over has remained in Google's possession.

The scandals around data breaches are only the most visible part of a much less conspicuous, structural shift that has gone on in the landscape of healthcare institutions in the last 15 years. With the increasing digitisation of healthcare, technology companies have joined healthcare providers, pharmaceutical companies, manufacturers, and regulators as key players in the healthcare domain. In fact, technology companies often perform several roles simultaneously: They produce the devices that are used for patient monitoring as well as the software to do so. Some of them own the data collected with these tools. In the United States, they now also act as healthcare providers: Amazon recently joined forces with JP Morgan and Berkshire Hathaway to create a venture to "transform health care" for the millions of employees that these companies have between them (Haven Healthcare 2019). And technology companies have become important players in research as well; the founders of Facebook, Google, and Microsoft – to name only some of the most powerful – have become forces to be reckoned with in the funding landscape across the globe. They run research institutes and provide research funding in areas that they consider important, thus setting and controlling research agendas – without the public accountability requirements that traditional research funders are facing (e.g. McGoey 2015). Last but not least, private philanthropies and foundations also steer and shape research by setting up institutes and providing endowed chairs. While private funding in universities is nothing new, the arrangements that we currently see are unprecedented in terms of both the scale and the force of private funding (Popkin 2019). They now determine what is researched and how, and what gets published (Van Dijck, Poell, and De Waal 2018; Bero 2019).

These examples from the field of healthcare illustrate how new constellations of actors and power characterise even those fields that some of us have considered to be relatively insulated from surveillance capitalism, to use Shoshana Zuboff’s term (2019). In her recent book on this topic, Zuboff argues that the ability to predict – and ultimately shape – human behaviour has become the new paradigmatic form of capitalist value creation. Predicting human behaviour allows companies to produce and sell to people exactly what they want, and, by doing so, to create and shape new consumer needs. Even for those who are critical of the mantra that “data is the new oil” [1], it is clear that data has taken on the role of a key asset in economic knowledge production (see Birch et al., in this volume).

Not only in the business sector, but also in government and public administration, the ability to predict people’s behaviour with the tools of digital monitoring – via people’s mobile phones, home assistants, and systems such as face recognition and geolocation tracking – renders populations more controllable. The usual reference in this context is to the social credit programme in China, which is unique in that a government-run programme integrates information from various sources to create a unified score. It is particularly worrisome in the sense that the state, an entity that is supposed to protect the welfare and wellbeing of its citizens, is using a technology that stigmatises and excludes. But in other respects, the Chinese effort is not a singular one; consumer scoring companies in many other parts of the world also bring together information from different sources and contexts and sell it to banks and other businesses, helping them decide whether or not to sell someone insurance, or whether to market certain goods to them (Dixon and Gellman 2014). Such decentralised scoring practices deserve our concern too: They place people behind a “one way mirror” (Wellcome Trust 2016) where institutions can monitor them without being watched back. If we cannot see what data and information about us “big brother and company man” (Kang et al. 2012) are using, and what conclusions they reach, then it is impossible for us to correct mistakes, or adjust our practices so that the information obtained about us does not affect us negatively.

In theory, the European Union’s (EU) data protection framework, in the form of the General Data Protection Regulation (GDPR), should change this situation for EU residents. It aims to give people more control over their own data, including the right to know what data is processed about them. The idea that the GDPR serves as a protective shield from the negative effects of data use in the digital era, however, is challenged by Luca Marelli, Elisa Lievevrouw, and Ine Van Hoyweghen in their contribution to this Special Issue. Looking at digital health technologies specifically, they argue that the GDPR is “misaligned with the surge in big data practices and digital health technologies”. The GDPR does not, for example, address the opacity of many digital health technologies, where it is often impossible for data subjects or decision makers to know how an algorithm arrived at a specific conclusion. Moreover, with its assumption of linear data use – namely that a person consents to having their data used by a specific data processor for a specific purpose – the GDPR also envisages potential harms in such a linear manner. Its regulatory approach is thus not suited to protect people from the harms that can occur from predictive analytics, where insights obtained from other people’s data are applied to a person (e.g. a person’s risk of falling ill is seen as particularly high because they have a characteristic that in other people was found to correlate with higher disease risk). Neither is the GDPR fit to protect people and societies from systemic harms, such as the increase of power differentials between privileged and marginalised people and institutions. The “notice-and-consent” model, the authors argue, cannot effectively mitigate the power asymmetries and divides between data subjects and data users, especially in the context of increasingly diffuse digital health technologies.

Despite its shortcomings, the fact that Europe at least seeks to take the protection of personal data seriously is lauded by many critical data studies scholars and activists. For others, particularly for industry actors, the GDPR is the paradigmatic example of “too much regulation” that hampers innovation (for a critical perspective see Yeung, Howes, and Pogrebna, in press). While this type of argument is refuted by scholars such as Mariana Mazzucato (2015), these same scholars do not challenge the value of innovation in itself. In their contribution to this Special Issue, Kean Birch, Margaret Chiappetta, and Anna Artyushina do exactly that. They trace a shift from innovation-as-entrepreneurship to innovation-as-rentiership: The former aims at making a contribution to the real economy, whereas the latter “is defined by the extraction and capture of value through different modes of ownership and control over resources and assets” (Birch, Chiappetta, and Artyushina 2020). Digital data, these authors argue, have become a key asset in innovation-as-rentiership (see also Birch 2017a, 2017b). In the domain of intellectual property (IP) protection, for example, the focus of legal protections has shifted from the IP itself to protecting the financial investments in IP assets. This, in turn, means that the people who benefit from this protection are not the innovators themselves or society at large, but investors and shareholders. Birch and colleagues conclude that, rather than a solution to contemporary global challenges, innovation lies at the root of these problems. Research on the political economy of digital data – and on digital practices more widely – should thus “examine the policy implications of this pursuit of economic rents” (Birch, Chiappetta, and Artyushina 2020) as a deliberate research strategy.

Mick Chisnall’s paper offers a new perspective on another kind of harm emerging from the current political economy of data. He uses the notion of “digital slavery” (Rogerson and Rogerson 2007) to argue that digital practices involve at least two processes of alienation: One refers to the fact that people’s digital selves – namely the data and information that represent their bodies, actions, and preferences – are typically owned by third parties rather than by the people themselves. The other consists of the “removal of an individual’s ability to govern her own life” (Chisnall 2020) – whereby appropriation is a productive praxis of the self. Going beyond the important but by now well-known critique of the digital exploitation of people who are enlisted in value creation for for-profit corporations, Chisnall uses an example from public administration. He shows how digital architectures and processes take away people’s ability to shape their own trajectories through the system. They also remove points of resistance, exchange, and recourse. People who are suspected by government of having wrongfully claimed welfare support, for example, must follow rigidly defined processes in which they have no control over their own information and futures. The automated nature of these processes represents, in Chisnall’s words, a form of “robotic brutality” that shares characteristics with the constraints that human slaves have been exposed to. Similar to the case of physical slavery, the relationship between the digital slave and her master is characterised by the possibility of random violence on the side of the master, the feeling of utter powerlessness on the side of the slave, and the absence of a redeeming other. The reasoning, pleading, and pain of the slave are not responded to, because the slave is a thing to be controlled, an asset rather than a person. And both physical and digital slavery were – and are, respectively – justified by a rhetoric of capitalist efficiency and order. Upon reading Chisnall’s thought-provoking paper, one is left wondering: Given that the state does not allow people to sell their physical bodies, why is it that others are allowed to buy and sell our digital bodies?

Joanna Redden, Lina Dencik, and Harry Warne then extend the critique of digital capitalism deeper into the domain of government and public policy. With its emphasis on prediction and pre-emption, Redden and colleagues argue, the logic of data accumulation lures governments into the provision of more targeted services. The authors look at three examples of the datafication and digitisation of child welfare in different regions of England. Using the notion of data assemblages (Kitchin and Lauriault 2014), which takes data systems to be complex assemblages of people, political, social, and legal trajectories, infrastructures, and practices of sense-making, Redden and colleagues show how the three data analytics systems, despite their different approaches and configurations, all raise a number of similar problems: lack of clarity regarding consent to data processing, the transparency and intelligibility of decision-making systems, and the quality of decisions (including false positives and false negatives in risk scoring). Drawing upon interviews with data system developers, managers, and other stakeholders, these authors find that the datafication of child welfare also seems to have a de-humanising effect on the system as a whole: Traditionally, an emphasis on professional judgement in administrative decision making has been key to good administrative practice (see also Wagenaar 2004). Replacing human professional judgement with machine decision-making transforms the meaning and the skills associated with being a social worker. If automated risk scoring enters an assessment process that used to be the responsibility of a human – even if the machine is “merely” an aide, not the final decision maker – then the relationship between the service provider and the service user is affected as well. Redden and colleagues’ paper is a compelling call for researchers and policy makers to evaluate the effects of data- and algorithm-supported decision making on resource allocation, and on the lives of citizens, instead of accepting the narrative of efficiency and precision in public service provision.

The concept of precision, in turn, is the subject of a paper by Declan Kuch, Kalervo Gulson, and Matthew Kearnes. Precision, these authors argue, has emerged “as an increasingly common descriptor of specific domains of scientific and technological research and deployments in advanced industrial nations” (Kuch, Gulson, and Kearnes 2020). Kuch and colleagues analyse the domains of medicine, agriculture, and innovation as three fields within which precision is currently heralded as a “disruptor” of outdated structures and practices. Against the glitzy promises of precision, these old structures and practices appear to be blunt, ineffective, and wasteful. But the rhetoric of the cost-effectiveness of precision, Kuch and colleagues propose, also conceals a deeper change in economic and political rationales. Doing “precision agriculture”, for example, entails a different way of knowing the land, plants, and animals than other types of farming. For farmers, precision farming means delegating the knowledge of what the land, the plants, and the animals need to remote sensors and algorithms, which then determine the right dose of water, feed, and fertiliser for each plot and part. In this way – so the promise of precision goes – the resources needed to create the output are minimised. Precision thus becomes the epitome of post-industrial, digital rationalisation: The physical farm is taken apart into digital data points and infrastructures that come to be seen as more valuable – and thus ultimately “truer” – than the blunt, imprecise, messy physical world. The process of datafying and digitising farming, however, is not merely a translation of all things analogue into the digital world. It datafies and digitises selectively: It captures some parts of reality – namely those that matter for profitable production – and moves others out of sight. Precision agriculture, just like precision medicine and precision education, is a form of worlding (Friese 2013): Rather than merely digitising and automating old practices and phenomena, it creates a new world that suggests ubiquitous control, while ironically removing humans ever further from actually exercising it.

In my own contribution to this Special Issue, I look at the practice of using patient data to “nudge” people to adopt healthier lifestyles. Nudging was famously defined by Richard Thaler and Cass Sunstein as making changes in the “choice architecture” that render certain options more appealing without making other options prohibitively expensive or onerous (Thaler and Sunstein 2008, 6). Nudging has been hailed as a particularly attractive addition to the toolbox of policy makers because of its focus on the so-called demand side of policy. Tackling the practices of people at the individual level, so it is argued, is an inexpensive and effective way to address problems, with the added benefit that it does not impose the value judgements and preferences of authorities onto people. Contrary to these assumptions, nudging is value-laden in at least two ways: First, as critical scholarship has argued, the choice of nudging as a policy instrument absolves public policy from its responsibility to create good outcomes and moves this responsibility onto the shoulders of individuals (see e.g. Pykett 2012). Nudging draws attention away from social determinants and other structural and institutional factors that shape the very problem that is to be solved. Second, by assuming that nudging helps people to do what they “truly” want to do (or what they should want to do, if they were rational), it brackets off questions about the values and goals that nudging is used for. The increasing amounts of data generated in the healthcare domain, I argue, should thus not be used to tackle people’s practices at the individual level, but to create better institutions: They should help us to learn where specific services are needed, what types of patients may be most in need of support, and how infrastructures and services could be changed to better support people in leading healthy lives. Instead of following the fashionable trend of changing the demand side of policy by directly addressing individual behaviour, we should organise the supply side of policy in such a way that our policies and institutions do not stigmatise and divide, but foster solidarity, especially with the least well off.

In an era where digital data are becoming an increasingly important element in the production of knowledge, wealth, and power, it takes radical solutions to ensure that digital data are not used merely to increase power and profits for the privileged. As the contributions to this Special Issue show, it also takes new regulatory approaches, institutions, and research fields to ensure that the political economy of digital data contributes to the justice and wellbeing of people and societies. Rather than merely analysing the shortcomings of the current situation, we need visions and instruments to build new institutions: institutions in and through which human expertise, experience, and interaction are seen as equally important as high-tech precision; where new norms and policy instruments ensure that the benefits of data use accrue to society at large, and in particular to the marginalised and vulnerable; and where the datafication of the bodies, lives, and practices of people who have no realistic chance to opt out is recognised and condemned for what it is: robotic brutality.

Acknowledgements

I am grateful to Ine van Hoyweghen and Kean Birch for very helpful comments on the draft of this Introduction.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Notes on contributor

Barbara Prainsack is a Professor at the Department of Political Science at the University of Vienna, and at the Department of Global Health & Social Medicine at King’s College London. Her work explores the social, regulatory and ethical dimensions of biomedicine and bioscience, with a focus on personalised and “precision” medicine, citizen participation, and the role of solidarity in medicine and healthcare.

Notes

1 The mantra about data being the new oil is problematic because it assumes that, just like oil, data is “just out there” and belongs to the companies applying their skill and effort to get it “out of the ground” and make it ready for consumption. It thus renders invisible the work that people, as patients and citizens and in other capacities, have done to create the data, and the public and other investments that have created the infrastructures for data curation (see Prainsack 2019a). The comparison between data and oil is problematic also in that the value of oil rests in its material substrate, which does not apply to digital data. Digital data are also multiple in that they can be in several places at the same time (Prainsack 2019b).

References