
Introduction to thematic section on ‘social theory in an age of machine learning’

The proliferation of machine learning (ML) systems, which are algorithmic assemblages designed to extract patterns from data and make predictions, is visibly transforming society and everyday life. OpenAI’s GPT-4 represents the latest development in this field, but even less remarkable ML systems have made significant inroads into important societal domains over the past decades. For instance, scholars have explored the deployment of ML systems in areas such as credit scoring (Kiviat 2019; Rona-Tas 2020), insurance (Cevolini and Esposito 2020), criminal justice (Brayne and Christin 2021), self-driving cars (Bissell et al. 2020; Stilgoe 2018), social media (Fourcade and Johns 2020), warfare (Scharre 2018), and automated trading (Hansen 2020; Hansen and Borch 2021).

A substantial and growing body of literature has examined the societal effects of these systems. Concerns have been raised about their potential biases (Zou and Schiebinger 2018), their contribution to racial and social inequalities (Benjamin 2019; Eubanks 2018; Noble 2018), and their transformative impact on subjectivity, everyday life, and labour markets (Shestakofsky 2017; Wajcman 2019). Scholars have also discussed the opacity of ML systems and the broader epistemological, ethical, and political implications they entail. These discussions have touched on established notions of accountability, expertise, liability, and more (Amoore 2020; Brighenti and Pavoni 2021; Burrell 2016; Coeckelbergh 2020; Collins 2018; Fazi 2020; Pasquale 2020; Svetlova 2021).

Simultaneously, there is a growing recognition, partially fuelled by these studies, that the rise of ML may have profound implications for social theory. On the one hand, ML’s use as a new methodological tool holds the promise of uncovering patterns in data that could prompt a re-evaluation of established concepts used to describe the social world. While this promise may not yet be fully realized, some scholars are optimistic about ML’s potential to generate theories by extracting non-linear patterns in data (Edelmann et al. 2020; Evans and Aceves 2016). On the other hand, the functioning of ML systems necessitates a reconceptualization of human-centred social theory (Airoldi 2022; Borch 2023; Esposito 2017; Yolgörmez 2021). In certain domains, the actionable predictions of ML systems not only inform human decision-making but replace it entirely (Borch and Min 2023). This distinction sets them apart from previous algorithmic systems and raises questions about accountability, control, ethics, liability, and politics. Additionally, it prompts inquiries into the ways in which their use may reshape human-machine configurations and potentially replace social relationships (Borch 2022).
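
The methodological promise mentioned above can be illustrated with a short sketch. The following Python example is purely illustrative and is not drawn from this thematic section or the cited studies: the variables, the simulated data-generating process, and the choice of a random-forest model are hypothetical stand-ins for the kind of non-linear, interaction-driven patterns that conventional linear specifications in social research can miss.

```python
# Illustrative sketch only: hypothetical variables and a simulated
# data-generating process, used to show how a non-linear learner can
# surface a pattern (here, an interaction effect) that a linear model misses.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Hypothetical survey-style predictors; the outcome depends on the
# *interaction* of education and network size, not on either alone.
education = rng.normal(size=n)
network_size = rng.normal(size=n)
income = rng.normal(size=n)
outcome = education * network_size + 0.1 * income + rng.normal(scale=0.5, size=n)

X = np.column_stack([education, network_size, income])
X_train, X_test, y_train, y_test = train_test_split(X, outcome, random_state=0)

# A purely linear specification cannot represent the interaction and
# therefore explains almost none of the out-of-sample variance.
linear = LinearRegression().fit(X_train, y_train)
print("linear R^2:", round(r2_score(y_test, linear.predict(X_test)), 2))

# A flexible, non-linear learner recovers much of the structure; the analyst
# can then inspect it (e.g. via feature importances or partial dependence)
# and ask what concept or theory would account for the pattern.
forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print("forest R^2:", round(r2_score(y_test, forest.predict(X_test)), 2))
print("feature importances:", forest.feature_importances_.round(2))
```

The point of such an exercise is not the particular model but the workflow it hints at: a flexible learner detects structure that a standard specification overlooks, and the analyst then interprets that structure theoretically, which is the move that Edelmann et al. (2020) and Evans and Aceves (2016) see as promising.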

This thematic section comprises three papers that contribute to the rethinking of social theory in light of ML, fitting into the broader context of the special issue on Political arithmetic: old and new. Each paper draws on different theoretical resources and approaches the connection between social theory and ML in distinct ways. They either reconsider established notions to comprehend ML or analyse the implications of ML for specific theoretical traditions.

In the paper titled ‘Hyperproduction: a social theory of deep generative models’, Fabian Ferrari and Fenwick McKelvey focus on synthetic media or deep generative models, such as ChatGPT and DALL-E, which create text, images, videos, and more. They argue that these models, closely tied to video game engines, should be understood as a form of hyperproduction – a circular dynamic blurring the boundaries between input and output. Moreover, they suggest that this hyperproduction is rooted in rentier capitalism, where the owners of data infrastructure extract rents for its use.

Richard Groß and Susann Wagenknecht take a different approach in their paper ‘Situating machine learning: On the calibration of problems in practice’. They centre their analysis on pragmatist theorization, drawing on Dewey’s concept of the situation to explore ML as a practice. The paper examines the situated problems that arise when ML is deployed in art and science, including those that emerge in the training of models.

Dalia Mukhtar-Landgren and Alexander Paulsson, in their paper ‘From senses to sensors: autonomous cars and probing what machine learning does to mobilities studies’, investigate the use of ML-powered sensors in autonomous vehicles, focusing on how such sensors are intended to replace human senses in navigating roads. One crucial issue they raise is the role of autonomy and its significance within mobilities studies, where questions about autonomy assume a new form in the ML era, with human drivers transformed into mere passengers.

Needless to say, the potential connections between ML and social theory extend beyond the papers featured in this thematic section. However, we hope that future research exploring the social theory implications of ML will build upon the foundation laid by these papers, leveraging the questions they raise and the propositions they put forth.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Notes on contributors

Christian Borch

Christian Borch is Professor of Sociology at the Department of Sociology, University of Copenhagen. His recent work focuses on automated securities trading and its implications for social theory. His latest book is Social Avalanche: Crowds, Cities and Financial Markets (Cambridge University Press, 2020). He is currently working on a book titled Making Autonomous Markets: How Machine Learning Is Reshaping Financial Markets (under contract with Stanford University Press) and is co-editor, with Juan Pablo Pardo-Guerra, of the Oxford Handbook of the Sociology of Machine Learning (forthcoming with Oxford University Press).

References

  • Airoldi, Massimo. 2022. Machine habitus: toward a sociology of algorithms. Cambridge: Polity Press.
  • Amoore, Louise. 2020. Cloud ethics: algorithms and the attributes of ourselves and others. Durham and London: Duke University Press.
  • Benjamin, Ruha. 2019. Race after technology: abolitionist tools for the New Jim code. Cambridge: Polity Press.
  • Bissell, David, Thomas Birtchnell, Anthony Elliott, and Eric L. Hsu. 2020. Autonomous automobilities: The social impacts of driverless vehicles. Current Sociology 68, no. 1: 116–34.
  • Borch, Christian. 2022. Machine learning and social theory: collective machine behaviour in algorithmic trading. European Journal of Social Theory 25, no. 4: 503–20.
  • Borch, Christian. 2023. Machine learning and postcolonial critique: homologous challenges to sociological notions of human agency. Sociology, doi: 10.1177/00380385221146877.
  • Borch, Christian, and Bo Hee Min. 2023. Machine learning and social action in markets: from first- to second-generation automated trading. Economy and Society 52, no. 1: 37–61.
  • Brayne, Sarah, and Angèle Christin. 2021. Technologies of crime prediction: The reception of algorithms in policing and criminal courts. Social Problems 68, no. 3: 608–24.
  • Brighenti, Andrea Mubi, and Andrea Pavoni. 2021. On urban trajectology: algorithmic mobilities and atmocultural navigation. Distinktion: Journal of Social Theory, doi:10.1080/1600910X.2020.1861044.
  • Burrell, Jenna. 2016. How the machine ‘thinks’: understanding opacity in machine learning algorithms. Big Data & Society 3, no. 1, doi:10.1177/2053951715622512.
  • Cevolini, Alberto, and Elena Esposito. 2020. From pool to profile: social consequences of algorithmic prediction in insurance. Big Data & Society 7, no. 2, doi:10.1177/2053951720939228.
  • Coeckelbergh, Mark. 2020. AI ethics. Cambridge, Massachusetts: The MIT Press.
  • Collins, Harry. 2018. Artifictional intelligence: against humanity’s surrender to computers. Cambridge: Polity Press.
  • Edelmann, A., T. Wolff, D. Montagne, and C.A. Bail. 2020. Computational social science and sociology. Annual Review of Sociology 46, no. 1: 61–81.
  • Esposito, Elena. 2017. Artificial communication? The production of contingency by algorithms. Zeitschrift für Soziologie 46, no. 4: 249–65.
  • Eubanks, Virginia. 2018. Automating inequality: How high-tech tools profile, police, and punish the poor. New York: St. Martin’s Press.
  • Evans, James A., and Pedro Aceves. 2016. Machine translation: mining text for social theory. Annual Review of Sociology 42, no. 1: 21–50.
  • Fazi, M. Beatrice. 2020. Beyond human: deep learning, explainability and representation. Theory, Culture & Society, doi:10.1177/0263276420966386.
  • Fourcade, Marion, and Fleur Johns. 2020. Loops, ladders and links: The recursivity of social and machine learning. Theory and Society 49: 803–32.
  • Hansen, Kristian Bondo. 2020. The virtue of simplicity: On machine learning models in algorithmic trading. Big Data & Society 7, no. 1, doi:10.1177/2053951720926558.
  • Hansen, Kristian Bondo, and Christian Borch. 2021. The absorption and multiplication of uncertainty in machine-learning-driven finance. The British Journal of Sociology 72, no. 4: 1015–29.
  • Kiviat, Barbara. 2019. The moral limits of predictive practices: The case of credit-based insurance scores. American Sociological Review 84, no. 6: 1134–58.
  • Noble, Safiya Umoja. 2018. Algorithms of oppression: How search engines reinforce racism. New York, NY: New York University Press.
  • Pasquale, Frank. 2020. New laws of robotics: defending human expertise in the Age of AI. Cambridge, Massachusetts: The Belknap Press of Harvard University Press.
  • Rona-Tas, Akos. 2020. Predicting the future: Art and algorithms. Socio-Economic Review 18, no. 3: 893–911.
  • Scharre, Paul. 2018. Army of none: autonomous weapons and the future of war. New York and London: W. W. Norton & Company.
  • Shestakofsky, Benjamin. 2017. Working algorithms: software automation and the future of work. Work and Occupations 44, no. 4: 376–423.
  • Stilgoe, Jack. 2018. Machine learning, social learning and the governance of self-driving cars. Social Studies of Science 48, no. 1: 25–56.
  • Suchman, Lucy. 2007. Human-Machine reconfigurations: plans and situated actions. Cambridge: Cambridge University Press.
  • Svetlova, Ekaterina. 2021. AI meets narratives: The state and future of research on expectation formation in economics and sociology. Socio-Economic Review 20, no. 2: 841–61.
  • Wajcman, Judy. 2019. The digital architecture of time management. Science, Technology, & Human Values 44, no. 2: 315–37.
  • Yolgörmez, Ceyda. 2021. Machinic encounters: A relational approach to the sociology of AI. In The cultural life of machine learning: An incursion into critical AI studies, eds. J. Roberge, and M. Castelle, 143–66. Cham: Springer International Publishing.
  • Zou, James, and Londa Schiebinger. 2018. AI can be sexist and racist - it’s time to make it fair. Nature 559: 324–6.
