Contextual social valences for artificial intelligence: anticipation that matters in social work

Pages 1110-1125 | Received 19 Sep 2022, Accepted 14 Jun 2023, Published online: 25 Jul 2023
ABSTRACT

In pilot trials, Finnish caseworkers in child welfare services used an AI tool predicting severe risks faced by their clients. Based on interviews with the caseworkers involved, this article draws on those trials to discuss AI valences, or the range of expectations of AI’s value and performance, in social work and beyond. While AI travels across sites of application and sectors of society, its value is often expected to come from the production of anticipatory knowledge. The predictive AI tool used by Finnish caseworkers offers an example: it turned past data about clients into predictions about their future, with the aim of authorizing present interventions to optimize the future. In the pilot trials, however, AI met the practice of social work. In contrast to generic expectations of predictive performance, caseworkers had contextual expectations for AI, reflecting their situated knowledge about their field. For caseworkers, anticipation does not mean producing pieces of speculative knowledge about the future. Instead, for them, anticipation is a professional knowledge-making practice, based on intimate encounters with clients. Caseworkers therefore expect AI to produce contextually relevant information that can facilitate those interactions. This suggests that for AI developments to matter in social work, it is necessary to consider AI not as a tool that produces knowledge outcomes, but as one that supports human experts’ knowledge-making processes. More broadly, as AI tools enter new sensitive areas of application, instead of expecting generic value and performance from them, careful attention should be paid to contextual AI valences.

Acknowledgements

I am grateful to two anonymous ICS reviewers and colleagues at Datafied Life Collaboratory and Nordic ADM Network for comments and suggestions.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was supported by the Research Council of Finland grant ‘Re-humanizing automated decision-making’ [Grant Number 332993].

Notes on contributors

Tuukka Lehtiniemi

Tuukka Lehtiniemi is an economic sociologist interested in automated decision-making, human participation in AI systems, the data economy, and how the uses we invent for digital technologies are shaped by how we imagine the economy to work. He works as a postdoctoral researcher at Centre for Consumer Society Research, University of Helsinki [email: [email protected]].