
Oil crisis: the political economy of digital data. Conclusion of the special issue

Pages 563-566 | Received 17 Jan 2020, Accepted 26 Jan 2020, Published online: 04 Feb 2020

ABSTRACT

It is a truism to say that we live in a digital era. Advances in digital technologies in recent decades have changed personal, political, and social practices. Digital tools – and with them, new capabilities for the generation, storage, analysis, and exploitation of data – have also had profound effects on the political economy, including the financialisation of our economy and the movement of data, money, and companies across national borders. At the same time, economic, political, and social factors shape what digital data are generated and how, how they are stored and used, and for whose benefit. This concluding chapter to the Special Issue on the Political Economy of Digital Data gives an overview of the contributions' main findings and suggestions.

Rather than focusing on ways in which we can make data production and use “more ethical” within existing political, social, and economic structures, the papers in this Special Issue foreground these very structures as the subjects of their analyses. They treat the institutions and structures that cause or maintain asymmetries of power and resources as objects of digital data governance, rather than regarding them as externalities. Taking this perspective, the contributions to this Special Issue challenge some of the assumptions and policies of the digital data economy. In their paper on “Datafied child welfare services: unpacking politics, economics and power”, Joanna Redden, Lina Dencik, and Harry Warne go against the grain of a fundamental assumption of policy makers who try to “do more with less” in light of cost-cutting imperatives and austerity politics: Looking at the effects of the digitization and automation of child welfare administration in England, where algorithmic systems are expected to replace human judgement in risk prediction and intervention choice (and to save cost and time in the process), the authors find no evidence that automation makes child welfare systems more effective. Moreover, the reliance on algorithmic decision making seems to have unintended consequences, including the disengagement of case workers. Redden and colleagues’ work thus raises the question of whether it may be better not to digitize and automate these systems at all, instead of merely asking how we can make algorithmic governance more “ethical”.

Mick Chisnall, in his paper “Digital Slavery, Time for Abolition?”, suggests that the appropriation of people’s digital bodies and selves by third parties should be treated as a form of contemporary slavery. Following Chisnall’s rationale, if we are serious about condemning all forms of slavery, we must create policies that make it impossible for corporations and other institutions to effectively own people’s digital bodies and selves to the extent that people are no longer able to author their own lives. The more administrative and commercial practices move into the digital domain, and the more people depend on digital systems even for the satisfaction of their fundamental needs and interests, the more important it becomes that digital data use does not shift agency and resources away from people and communities towards commercial enterprises and rent-seekers. In sum, Mick Chisnall’s piece challenges the assumption that current practices of data appropriation by corporations are a problem that can be solved by enhancing data protection and individual privacy.

Kean Birch, Margaret Chiappetta, and Anna Artyushina challenge another core assumption of the digital economy, namely that innovation is always a good thing. In their paper titled “The problem of innovation in technoscientific capitalism”, they posit that innovation in the digital data economy often serves the purpose of rentiership instead of contributing to the real economy or creating public value in other ways (e.g. through the provision of public services). Besides problematizing the extreme profits that corporations make on the basis of people’s data, this insight also sheds a different light on the current trend to celebrate the state as a motor of innovation (e.g. Mazzucato 2015). How can we change the meaning and practice of innovation in such a way that it serves the public good again? And how can digital data practices contribute to that? Birch and colleagues open our gaze towards these larger questions, which ultimately call for a reconsideration of what kind of value our policies and digital practices can and should create.

My own contribution to this Special Issue, titled “The value of healthcare data: To nudge, or not?”, zooms in on the public value aspect of the digital data economy. I do this through the case of nudging: With the increasing availability of digital data in healthcare and medicine, in conjunction with the growing popularity of behavioural interventions in public policy more broadly, there is an increasing lure to use data and information from the healthcare field to “nudge” people into making healthier decisions. By addressing the challenges of rising disease burdens and costs “on the demand side” of healthcare policy, its proponents argue, nudging is a win-win: it brings down healthcare costs and, at the same time, helps people do what they want to do (or should want to do). This line of argument, however, misses the point that behavioural interventions are not value-neutral at all: By choosing the instrument of nudging, policy makers not only shift the responsibility for outcomes from public and other collective institutions onto the shoulders of individual people, but they also move out of sight the structural and systemic factors that contributed to creating the problem in the first place. Consequently, instead of using healthcare data to try to change “behaviour” at the individual level, we should use these data and information to create better institutions. Ultimately, this implies a shift in our assumptions about the value of data use: While many policies and practices currently assume that the value of data is highest when it gets individuals to do something – to click on an ad, to buy a service, to pick different types of food in the supermarket – the value of data could be argued to be much higher when it helps to change the political, economic, and social circumstances that shape people’s practices.

This insight also draws attention to the need to create new nomenclatures and categories that capture the needs and interests that people have as individuals, but also as members of families, communities, and even planetary systems. Data protection frameworks in the paper age have focused on protecting the rights and interests that people have as individuals, but not those that emerge from people’s embeddedness in communities. When the use of a person’s data is concerned, not only should her individual privacy and autonomy be protected, but also her right to shape the world in which the families and communities she is part of live. This is one of the reasons that current data protection frameworks, such as the General Data Protection Regulation (GDPR), do not and cannot go far enough, as argued by Luca Marelli, Elisa Lievevrouw, and Ine Van Hoyweghen. In their chapter on “Fit for purpose? The GDPR and the Governance of European Digital Health”, these authors call into question the ability of the Regulation to adequately capture the practices and stakes of digital health technologies. Key principles of the GDPR, such as purpose limitation, data minimization, and transparency, sit uneasily with practices such as machine learning, where the objective is to have as much good data as possible and to be able to repurpose datasets across contexts. The understanding of privacy and other rights as merely individual rights also fails to capture the relational nature of digital data, in that they can disclose information about more than one person (see also Taylor 2012; Prainsack 2019). Moreover, the harms of data practices can affect people other than the data subjects themselves. Marelli and colleagues take digital health technologies as an example to make an argument whose relevance goes far beyond the health domain: If the GDPR’s nomenclature – and the ontology of digital data practices that it seeks to represent – is from the paper age, how can it effectively protect people from harms in the digital era? How can it unfold public value without posing undue threats to the rights and interests of individual people and societal groups?

Part of an answer to this question can be found in Declan Kuch, Kalervo Gulson, and Matthew Kearnes’ contribution (“The promise of precision: Datafication in medicine, agriculture and education”). In each of the three fields of practice, the authors show how a commitment to “precision” among policy makers – and often also practitioners – changes the way in which the field is known and acted upon. Besides moving power and agency out of the hands of practitioners and into the hands of large institutional actors and companies, precision practices turn digital data into a uniform currency that renders diverse practices and knowledge commensurable and computable. In other words, precision helps to make practices in fields as diverse as healthcare, education, and agriculture legible (Scott 1998) by creating a uniform language that is no longer controlled by local practitioners but by the producers of the instruments and standards used in the field. In the domain of healthcare, this new ruling class comprises healthcare corporations and technology companies; in education, it includes large publishers that license educational materials; and in agriculture, the corporations that produce and license crops and feed, and that manufacture devices for smart farming, are among the most powerful of these new rulers.

What unites all six contributions to this Special Issue is that they not only seek to analyse practices and policies to diagnose problems, but also lay the foundations for solutions. In some cases, the solution starts with a different way of posing the problem: When the problem is no longer framed as one of protecting individual privacy, but instead as one of collective action, this creates space for policies and regulation that treat people not merely as data subjects but as members of families and communities who all have rights and interests in data use. In other instances, the very categories of our thinking and acting need to change to allow for policies that contribute to greater social and economic justice. Perhaps we need a notion as radical as digital slavery to ensure that digital data, which is celebrated as “the new oil”, does not merely grease the palms of those who least need it.

Disclosure statement

No potential conflict of interest was reported by the author.

Notes on contributor

Barbara Prainsack is a Professor at the Department of Political Science at the University of Vienna, and at the Department of Global Health & Social Medicine at King’s College London. Her work explores the social, regulatory and ethical dimensions of biomedicine and bioscience, with a focus on personalized and “precision” medicine, citizen participation, and the role of solidarity in medicine and healthcare.

References

  • Mazzucato, Mariana. 2015. The Entrepreneurial State: Debunking Public vs. Private Sector Myths. London: Anthem.
  • Prainsack, Barbara. 2019. “Logged out: Ownership, Exclusion and Public Value in the Digital Data and Information Commons.” Big Data & Society 6 (1): 1–15. doi: 10.1177/2053951719829773
  • Scott, James C. 1998. Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed. New Haven: Yale University Press.
  • Taylor, Mark. 2012. Genetic Data and the Law: A Critical Perspective on Privacy Protection. Cambridge, UK: Cambridge University Press.