
Contextual social valences for artificial intelligence: anticipation that matters in social work

Pages 1110-1125 | Received 19 Sep 2022, Accepted 14 Jun 2023, Published online: 25 Jul 2023

ABSTRACT

In pilot trials, Finnish caseworkers in child welfare services used an AI tool predicting severe risks faced by their clients. Based on interviews with the caseworkers involved, this article draws on those trials to discuss AI valences, or the range of expectations of AI’s value and performance, in social work and beyond. While AI travels across sites of application and sectors of society, its value is often expected to come from the production of anticipatory knowledge. The predictive AI tool used by Finnish caseworkers offers an example: it turned past data about clients into predictions about their future, with the aim of authorizing present interventions to optimize the future. In the pilot trials, however, AI met the practice of social work. In contrast to generic expectations of predictive performance, caseworkers had contextual expectations for AI, reflecting their situated knowledge about their field. For caseworkers, anticipation does not mean producing pieces of speculative knowledge about the future. Instead, for them, anticipation is a professional knowledge-making practice, based on intimate encounters with clients. Caseworkers therefore expect AI to produce contextually relevant information that can facilitate those interactions. This suggests that for AI developments to matter in social work, it is necessary to consider AI not as a tool that produces knowledge outcomes, but as one that supports human experts’ knowledge-making processes. More broadly, as AI tools enter new sensitive areas of application, instead of expecting generic value and performance from them, careful attention should be paid to contextual AI valences.

1. Introduction

It’s possible to set high-risk alarms. […] By finding risk factors correlating with for example truancy or child custody, we can in the best case intervene very early on. This makes both human and economic sense.

A few years ago, Big Data was expected to improve vital human services such as welfare, social care, and healthcare. Similar attention now focuses on artificial intelligence (AI). The above excerpt comes from a 2018 press release announcing an ‘AI model’ predicting marginalization for Finnish children and youth ‘before actual problems emerge’. Tapping into a combined administrative database, a machine learning model identified features correlating with outcomes such as mental health issues, child custody, and substance abuse. According to the model, hundreds of such features (interpreted as ‘risk factors’) existed, including dental health, parents missing maternity clinic appointments, or several x-rays to the ankle, foot, or wrist – a list well aligned with the belief that with suitable methods, large datasets generate insights that would otherwise be impossible to obtain (boyd & Crawford, Citation2012). If several risk factors co-occurred, this would indicate a high-risk case. The development did not stop there, and by 2021, caseworkers in child welfare services used a pilot trial version of an AI tool that predicted the risk that an individual child would face emergency placement or be taken into custody.

In this article, the Finnish pilot trials serve as a point of departure to discuss the social valences of AI in social work and beyond. Following Fiore-Gartland and Neff (Citation2015), social valences refer to expectations about the value and performance of AI across social settings. The notion of valence encompasses the idea that a range of expectations emerges in discourses and practices when objects, such as AI tools, are embedded in different social contexts. As I will discuss, however, while AI travels across sites of application and sectors of society, expectations of AI’s value and performance tend to be generic and very similar. Expectations about technology carry out performative work by shaping technological development in the present (Borup et al., Citation2006), suggesting that similar valences mediate similar performances for AI across social settings. In this article I examine the Finnish AI tool’s pilot trials as an opportunity to outline more contextual AI valences, rooted in the field of social work. My broader aim is to discuss the possibilities for more nuanced and contextual AI developments.

The excerpt with which I opened the article offers an example of a generic AI valence: AI’s performance in predictive knowledge-making is expected to improve social outcomes. This predictive AI valence can be located at the intersection of two qualities characterizing the present moment. Adams and colleagues (Citation2009) outline the first as the anticipatory regime, an orientation toward the future that ‘pervades the ways we think about, feel and address our contemporary problems’ (p. 248). The future, the authors describe, is inexorably coming and therefore demands action, which turns anticipation into an imperative, an obligation to be informed and secure the best possible future. Anticipation so permeates our lives that it is hard to imagine what departures from the anticipatory regime might look like (Mackenzie, Citation2013). The second quality is the widespread techno-optimism (Tutton, Citation2021) about AI. Like the future, AI is imagined as something that is inevitably coming, a force that will thoroughly disrupt society (Bareis & Katzenbach, Citation2021; Ruckenstein, Citation2022). AI, so this narrative goes, increases efficiency, transcends human rationality, and rids us of ideologically mired decision-making. Techno-optimism results in confidence that AI provides generic solutions and advances overall positive social change. When the anticipatory regime and techno-optimism meet, the result is an AI valence that manifests as concrete predictive tools in fields ranging from finance (Fourcade & Healy, Citation2013) and security (Aradau & Blanke, Citation2017) to policing (Andrejevic, Citation2017) and social work. In fact, social work appears particularly suited to predictive tools: it is an anticipatory practice to begin with (Pink, Ferguson, et al., Citation2022), and child welfare, for example, involves assessing children’s future safety and risk.

When I began to examine the Finnish AI tool more closely, I had a different, more critical AI valence in mind, informed by a stream of literature examining the social impacts of predictive AI tools. Across application fields, predictive models are involved in classificatory tasks of determining eligibility for services, risk of harm, or propensity for misconduct. These models, by design, favor and normalize some people, behaviors, and practices at the expense of others. In fields like social welfare, education, healthcare, and criminal justice this has been shown to strengthen and introduce biases, lead to errors, and amplify inequalities and discriminatory practices (e.g., Benjamin, Citation2019; Eubanks, Citation2018; Ferguson, Citation2017; Marjanovic et al., Citation2021). The critical AI valence with which I initially approached the case stemmed from the expectation that predictive tools would introduce these problems into Finnish social work as well.

Significantly, the predictive and critical valences are similarly abstract and generic: both consider AI as a technology that anticipates the future, and both treat fields of application as landing sites (Pink, Ruckenstein, et al., Citation2022) on which AI makes an impact, whether positive or negative. When AI meets the practice of social work, however, a departure from these generic AI valences becomes possible. When I began talking with caseworkers involved in the pilot trials, what I encountered stood in stark contrast to the usual expectations. Unlike the caricatural techno-optimist or the quintessential AI critic, the caseworkers expect something specific and highly contextual from AI. As I will discuss, their expectations about AI reflect their situated knowledge about social work. For them, anticipation is not about predictions, or pieces of speculative knowledge about the future. Instead, anticipation is a professional knowledge-making practice based on intimate encounters with clients.

In the following sections, I interrogate the Finnish pilot trials of the AI tool to explore contextual AI valences. As Markham (Citation2021) has persuasively argued, success in imagining technological alternatives is elusive, as current trajectories tend to be naturalized to the extent that they seem inevitable, and it is difficult to consider future digital services without reproducing the hegemonic trends. The pilot trials, however, offer a work-around. While predictive AI tools are in general widespread, they are a novelty in Finnish social work. As Wise (Citation1998) observes, ‘it is perhaps when technologies are new, when their (and our) movements, habits and attitudes seem most awkward and therefore still at the forefront of our thoughts that they are easiest to analyze’ (p. 411). The pilot trials, then, constitute an empirical probe that triggers a moment when alternative expectations about AI are observable: before predictive practices settle in, it is still unclear how to integrate AI into actual work practices, and caseworkers can consider what AI could do for them.

I begin by discussing anticipation and AI in social work. I then move on to the Finnish pilot trials and examine the expectations for AI’s performance based on interviews with professionals involved. As suggested above, viewed from within social work, anticipation is not something to be simply performed by an AI tool. Anticipation that matters in social work is a process, or a practice involving caseworkers’ encounters with clients in what are often difficult life situations. When the value of AI is considered in terms of casework, AI needs to be relevant in these client interactions. Contextual AI valences, then, emerge in relation to casework practice and mediate caseworkers’ professional needs to locate AI-produced information situationally and temporally in clients’ lives. This suggests that anticipation should be considered not as an outcome but as a process: whereas the AI tool was designed to perform anticipation on behalf of caseworkers, what caseworkers expect are tools that support their own anticipatory practices.

2. Demanding anticipation

Anticipation, following Adams and colleagues (Citation2009), is a way of actively orienting oneself temporally: one inhabits the present while adjusting actions toward the future. The future, they argue, bears on the present. It is unknowable, and any knowledge about it is speculative, but its weight still cannot be ignored; the future affects the present by demanding action now. Adams and colleagues refer to a moral injunction as an obligation to know the future, and to be informed about possibilities to optimize the future in the present (Adams et al., Citation2009). Anticipation, therefore, demands standing ready, making choices in the face of speculative knowledge, and securing the best possible outcomes (Tutton, Citation2011). What results is a flattening of temporal horizons, so that knowledge about the future authorizes preparatory and preemptive action in the present. In practical terms, knowing the future relies on abduction (Adams et al., Citation2009), the Peircian form of reasoning in which observations about the known past give shape to the most likely versions of the unknown future. Through abduction, ‘action in the now is framed by imaginations and probabilistic calculations of what might be in the future’ (Montgomery, Citation2017, p. 250).

Within the last decade, the demand for knowledge about the future has increasingly been met by large datasets and the methods of predictive analytics. These methods appear to ‘harbor new capacities of peering into the future and revealing the unknowns to be tamed and governed’ (Aradau & Blanke, Citation2017, p. 374). Among them is machine learning, a method that involves ‘transforming, constructing or imposing some kind of a shape on the data and using that shape to […] predict what is happening or what will happen’ (Mackenzie, Citation2015, p. 433). In machine learning terms, datasets contain features, people are representable as feature combinations, and relevant differences between people are differences in those combinations. Prediction equals searching for feature combinations that appear together with whatever is predicted (Mackenzie, Citation2015). Machine learning provides methods that are agnostic as to field of application and can move seemingly effortlessly between problems, and the anticipatory regime’s demand for methods claiming to know the future has made prediction one of its more pervasive uses (Mackenzie, Citation2013).

The quote opening this article depicts a canonical application area for these predictive methods: AI helps triage cases, identifies targets for intervention, enables preemption of harm, and ultimately averts human and economic costs. This suggests that risk assessment can easily be cast as a machine-learning problem. Data stored in social care registers and other databases makes it possible to fashion clients as feature combinations and to predict harm by identifying combinations that coincide with variables that suggest harm. From this perspective, it is not surprising that child welfare services are increasingly considered a suitable application area for predicting severe harm (Cuccaro-Alamin et al., Citation2017; Drake et al., Citation2020; Gillingham, Citation2019; Redden et al., Citation2020). Motivations for predictive models include making decisions more systematic and less error-prone, and increasing operational efficiency as resources diminish (Amrit et al., Citation2017; Elgin, Citation2018; Vaithianathan et al., Citation2013). Moreover, AI sits comfortably in the social work context, as similar motivations underlie the use of systematic tools based on theoretical models, expert consensus, or actuarial calculations (Keddell, Citation2019), which have been employed to assess and anticipate abuse and neglect in the field since the 1980s (Doueck et al., Citation1993). The turn to AI in social work exemplifies how machine learning has ‘revitalized the promise of prediction across social, political and economic worlds’ (Aradau & Blanke, Citation2017, p. 374).
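
To make this framing concrete, the sketch below shows, in a minimal and hypothetical form, what casting risk assessment as a machine-learning problem involves: clients rendered as feature combinations drawn from administrative records, and a model fitted to past outcomes that scores new cases. The synthetic features, labels, and model choice are my illustrative assumptions, not a description of any actual child welfare system.

```python
# A minimal, hypothetical sketch of risk assessment cast as a supervised
# learning problem, in the generic sense described above. Feature values,
# labels, and the model choice are illustrative assumptions only; they do
# not describe the piloted Finnish tool.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row renders a client as a "bundle of features" drawn from
# administrative records (e.g., counts of service contacts or notifications).
X_train = np.array([
    [2, 0, 1, 4],
    [0, 1, 0, 1],
    [5, 3, 2, 7],
    [1, 0, 0, 2],
])
# Label: whether a severe outcome was later recorded for the case (1) or not (0).
y_train = np.array([0, 0, 1, 0])

model = LogisticRegression().fit(X_train, y_train)

# A new case, likewise reduced to a feature combination, receives a risk score.
new_case = np.array([[3, 2, 1, 5]])
risk = model.predict_proba(new_case)[0, 1]
print(f"Estimated probability of the predicted harm: {risk:.2f}")
```

The point of the sketch is the framing rather than the particular model: whatever the records contain becomes a feature, and risk becomes whatever has co-occurred with those features in past data.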

If social work provides canonical examples of using AI for anticipation, it is also exemplary of AI’s problems. Scholars have highlighted AI’s adverse effects on decision-making and how, in austerity conditions, AI tools begin to replace the work of human experts (Eubanks, Citation2018). The social work literature reiterates well-established ethical and practical concerns: the need for transparency into predictive processes (Church & Fairchild, Citation2017; Devlieghere & Gillingham, Citation2021; Keddell, Citation2019), privacy breaches, and a lack of consent (Keddell, Citation2015). Taking a cue from critical data studies, social work scholars question the very possibility of accurate and objective data-based predictions (Gillingham & Graham, Citation2017), a paramount concern when the consequences of errors could be catastrophic and mere inclusion in a risk group could devastate lives (Gillingham, Citation2019; Keddell, Citation2015). Far more accuracy is thus demanded than in less sensitive contexts. Critics have also pointed out that social care data may be unsuitable for abductive purposes: administrative databases contain only known cases, include only what was recorded, and are the product of idiosyncratic recording practices (Gillingham, Citation2020). Predictions, therefore, contain errors and omissions, amplify existing biases, and are based on a limited understanding of risk. Practitioners using predictive tools have identified additional practical problems, such as integrating AI’s predictions of long-term risk into social work decision-making, which often concerns short-term safety (Kawakami, Citation2022). Responding to these shortcomings, recent scholarship in human–computer interaction has begun to work toward designing predictive tools that better support human decision-making (Cheng et al., Citation2022), while still allowing for professional discretion (Saxena et al., Citation2021).

With predictive tools, AI’s value is primarily related to its performance in producing weighty pieces of knowledge about the future. Yet, this focus on prediction may be regarded in terms of fields of application adapting to predictive promises rather than AI adapting to the demands of those fields (Mackenzie, Citation2013). Prediction is what AI is expected to do, precisely because that is what we imagine it doing. To explore how to turn the tables, so that demands for AI’s performance would emerge from within the fields of application, I examine caseworkers’ expectations for AI in the context of the pilot trials in Finland.

3. Material and method

The pilot trials were conducted in a regional organization providing social welfare and healthcare services. The organization had experimented with predictive models since 2018, motivated by a long-standing desire in Finnish care politics: early identification of heavy users of care services (Pesonen et al., Citation2023). Cost-saving is an ever-present concern for care organizations. In Finland, around 80% of care costs are attributed to 10% of service users (Liukko, Citation2020), suggesting that if those 10% were identified early on, preventive measures – and cost-cutting – would be more effective. The organization had identified heavy-user segments, including patients with several emergency visits in mental health clinics, recurring emergency contacts with social care, patients re-admitted in hospital wards, substance-abusing youth, and, in child welfare services, cases of emergency placement and child custody (Pesonen et al., Citation2023). The organization partnered with an AI company to build models predicting which customers would, in the future, belong to those segments. Modeling work in the child welfare context was considered sufficiently promising, and pilot trials started in 2021.

The piloted tool, which was trained to predict the risk that a child would need emergency placement or be taken into custody months or more in the future, was built with data that the organization had already gathered on patients and clients. This meant using electronic health records and social services’ client records, which include, for example, contacts with healthcare, medical tests, diagnoses, operations and prescriptions, contacts with social workers and counselors, child welfare notifications, social care interventions, and treatment plans. Notably, this excluded data on welfare benefits, which in Finland are administered by a different authority. The tool was piloted by a team of caseworkers who meet with families to assess child welfare notifications, evaluating the situation and the need for social protection. In client interactions, caseworkers would ask for consent to participate in the trial. If consent was obtained, caseworkers could view a binary risk classification indicating whether the risk was elevated, and a list of risk factors contributing to this assessment. This user interface was integrated into the information system they already used. While the tool predicted risk for a particular child, the unit of analysis was the family: predictions employed data on the child’s and parents’ social and healthcare histories. At the pilot stage, the tool’s development remained an ongoing and open-ended work in progress. Predictive tools have clearly defined roles in other contexts (e.g., Eubanks, Citation2018), but here the caseworkers could decide for themselves how they used the tool and the information it provided.
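
As a rough illustration of the caseworker-facing output described above – a binary indication of elevated risk together with a list of contributing risk factors – the sketch below shows one possible way of structuring that view. The field names, threshold, and example factors are hypothetical assumptions for illustration and are not taken from the piloted system.

```python
# A hypothetical sketch of the kind of output the text describes caseworkers
# seeing: a binary "elevated risk" classification plus the record-derived
# risk factors behind it. Names and the threshold are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class RiskAssessment:
    elevated_risk: bool        # binary classification shown to the caseworker
    risk_factors: list[str]    # variables that contributed to the prediction


def to_caseworker_view(probability: float, contributing_factors: list[str],
                       threshold: float = 0.5) -> RiskAssessment:
    """Reduce a model's probability and factor list to the binary view
    described in the text (the threshold value is a hypothetical choice)."""
    return RiskAssessment(
        elevated_risk=probability >= threshold,
        risk_factors=contributing_factors,
    )


# Example: a family-level prediction reduced to the form shown in the interface.
view = to_caseworker_view(0.72, ["repeated emergency contacts",
                                 "missed clinic appointments"])
print(view.elevated_risk, view.risk_factors)
```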

Fiore-Gartland and Neff (Citation2015) suggest in their study on data valences across health and wellness communities that social valences for data can be empirically observed when people talk about what data can do and will do within a social system. Following this cue, I examine AI valences in social work based on in-depth, semi-structured interviews with professionals conducted in two stages. Before the pilot, I interviewed caseworkers and an IT professional knowledgeable about the tool, with a focus on expectations about AI’s performance in social work. After the pilots had concluded, I interviewed professionals to learn about their experiences of using the tool. Both stages involved a limited number of suitable interviewees. During the pre-pilot interviews, it became clear that the further I moved from the small group of caseworkers involved in the tool’s development, the less my informants talked about context-specific expectations and the more they resorted to generic AI talk. Being educated professionals, caseworkers can connect broader AI developments with their profession. However, generic AI talk was not within my research interest. The pilot trials themselves involved only a small team of caseworkers, with whom I had not spoken at the pre-pilot stage.

Within these limitations, the primary data consists of 12 interviews with both junior and seasoned professionals, carried out over video connection and typically lasting between 60 and 90 minutes. I transcribed the interviews verbatim for analysis. The presented quotations are pseudonymized and translated from Finnish. Overall, they reflect shared sentiments, but I have chosen the quotes that most poignantly express the topic at hand. I have used background material as secondary data: news articles, training videos, reports, and presentations, publicly available or shared by the informants. These materials provided a detailed view of the workings of the AI tool.

My initial analytical strategy was to open-code the material with Atlas.ti, focusing on expectations of what AI does or ought to do in social work. I then iteratively located similarities and cross-cutting themes, a process that suggested an analytical focus on the tension between predictive technologies and caseworkers’ knowledge-making practices, in which anticipation played a role. Further iteration with this focus led to the three analytical themes presented below: informational needs in casework, contextualizing predictions, and the consequences of predictions on casework practice.

4. Toward contextual AI valences

4.1. Caseworkers, not AI tools, produce knowledge about clients

When considering what AI could do for them, caseworkers talked about information needs reflecting a well-known source of trouble in their work: information piles up over the years in information systems. What is knowable in principle is unknowable in practice, because organizational and IT system boundaries prevent easily locating and accessing information. Clients may therefore slip through the net or, conversely, sometimes receive overlapping help across the care system, causing unnecessary burden. Therefore, the caseworkers I interviewed before the pilot expressed hopes that the AI tool could help locate essential information:

Client relationships might develop over years and span several services. It’s time-consuming to keep up with the twists and turns. [AI] could help for sure, this is a role the tool could take. (Mikael)

Its value could come from finding things that I can’t find myself, so that these cases would not arrive on my table a year later and be more severe. It’s easier to deal with them now. (Maria)

The pilot trials revealed that AI could meet these needs, if designed properly. Besides offering risk predictions, the AI tool showed caseworkers a list of risk factors, or variables from health and social care databases, that would sometimes contain relevant information. Leo provided an example when he described how talks with clients often drag on until a key issue is identified, and wondered whether AI could offer a shortcut:

[The tool] might show a parent’s positive tox screen even if the notification was on an unrelated issue. […] This takes the discussion to a new level; we could talk about substance use, about risks to their child. […] Typically, we learn about such things indirectly, if at all.

While the tool showed these variables based on how they contributed to predictions, it nevertheless unearthed contextually relevant details, proving that AI could sift through client information and bring relevant bits of it to caseworkers’ attention. Even if these details could be useful in casework, the line between useful and excessive information was very thin, as Emma explained:

The child’s behavior was why we met, and I checked [the tool] and saw the mother’s health issues over the years. There was absolutely no reason to see that history. Think about it: if there’s a notification on your child, should we see your whole patient history?

This highlights how ethical and professional justifications for viewing historical information about clients depend on its relevance in the context of case assessment. The tool can have inadvertent effects simply by displaying existing information. This left Emma wondering, ‘How could [the AI tool] even recognize relevant information? How would it identify what’s relevant for me and show only that?’

In terms of actual predictions, caseworkers were acutely aware that the tool might miscategorize clients, but accuracy did not emerge as a pressing issue. Caseworkers considered information produced by the tool to be potentially useful, but primarily because it came from an external source and was thus ‘a fact on the table’. It was, after all, indisputable that AI categorized someone as belonging to a risk group. For caseworkers, however, this was a fact about the tool and not about the client. Caseworkers saw themselves – not the AI tool – as producers of knowledge about clients, although the tool could help in producing that knowledge, as Leo said: ‘We could bring a printout or show it on a computer […] we could share that we’re now using this tool, and this is the output. What can you tell us about it?’ In a similar vein, Sonja compared risk predictions to a client’s bank statement, which was something that she was accustomed to using as an informational tool:

We would look at the bank statement together. I already know the money was gambled away, but that fact is now concretely on the table, and we can look at it together. I think [AI’s predictions] could be used similarly, like ‘Hey look, this information came from the computer’, and we could start figuring it out.

Caseworkers, then, expect the AI tool to provide information to facilitate dialogues in casework, which could lead to new knowledge. Here, caseworkers expect AI to help by organizing, processing, and making visible client information or care histories that would otherwise be laborious to locate. The role for an AI tool, then, is not to produce projections about the future, but to offer windows to the past and the present.

4.2. Caseworkers consider clients to be people in situations, not bundles of features

As a profession and practice, social work relies on the ‘person-in-situation’ framework, which was already present in the field’s earliest writings (Cornell, Citation2006). The unit of analysis is a human being situated in the context of the surrounding social world. Social problems are not just qualities of a person but also emerge in context, and this interplay is considered when helping a client. The framework implies that the production of anticipatory knowledge involves the consideration of an individual’s qualities situated in a unique social context. For caseworkers, AI is not up to this task. While its predictions assume that similar clients are also similar when it comes to risks, caseworkers insisted in interviews that clients have individual strengths, coping skills, and stressors that cannot be reliably deduced in this way:

Meaningful risk depends on the family. In one family, a relatively minor issue causes a lot of pressure, and they end up with us. Another family appears the same, but they’re okay. (Leo)

Two children from a family might be completely different. […] In the exact same circumstances, they cope differently. Growing up in a substance abuse family, for instance, doesn’t affect everyone the same way. It’s not a mold that forms a person. (Heidi)

Inherent qualities, then, can help a client cope in situations where others encounter problems. Coping also involves what caseworkers called protective factors, or the complex and situational support offered by the surrounding social world. Leo illustrated this point: ‘Protective factors could include grandparents living next door, the presence of godparents in the family’s life, or the existence of any close network’. Because protective factors aid in coping, they should inform risk considerations. However, these factors are not usually contained in databases and therefore cannot be part of the AI tool’s predictions. Producing accurate data on protective factors would also be difficult, as Leo made clear: ‘They are quick to change, say, when there’s a sudden conflict between the people involved’.

Beyond the invisibility of coping skills and protective factors to the AI tool, an additional limitation is attributable to AI’s reliance on historical data to know the future. The caseworkers insisted that while a person’s health and care histories certainly bear on the present, they do not turn into future problems along a predictable trajectory:

A parent might have a history of substance abuse, but sometimes people get a grip. But AI only sees the history. We can’t assume that we need to act now based on it. We need to give people the opportunity to change the course of their lives. (Mikael)

In the abductive approach to data-based knowledge-making, all available historical data are sifted through to uncover sources of information that contain predictive value – the more data, the better the predictions. Sonja, in contrast, insisted on approaching clients solely in the present: ‘I’m not interested in their burnout years ago if it doesn’t bear on the present. My goal is to meet the person in their present situation’. Sonja also considered focusing on the present as an ethically sound working method, as anything else might lead to biases and unfounded conclusions:

It felt stigmatizing to sift through that client history. My method is to meet clients without looking at their files too closely because that sets up prejudices. I felt [AI] was feeding my prejudice.

The issue here thus relates to machine learning models’ reliance on regarding people as ‘materialized as a bundle of features’ (Mackenzie, Citation2013, p. 398): predictive models recast people as collections of traits and past events, and abductive logics assume that today’s person is a product of their past. Heidi, a caseworker, stressed the need to carefully situate any information produced by AI: ‘We need to think holistically. Even if risk is brought to our attention, suggesting that help is needed, we need to judge if there’s a [real] risk’. The bundle-of-features approach is particularly prone to miss capacities to cope; as such, it threatens to reduce a person to what Heidi called ‘a list of faults’, stressing risk at the expense of protection from it.

4.3. Caseworkers situate predictions within client relationships

When discussing the motivations underlying the development of AI tools, several informants brought up the theme of heavy service users introduced above. In child welfare services, besides their individual-level and family-level consequences, custody cases burden a chronically under-resourced system. Lina, a data analytics professional, considered the combined economic and human costs of custody to be extremely high: ‘Personally, I think if we could prevent even some custody cases with the tool, it would be absolutely worth the price tag’.

As suggested above, in the anticipatory regime, preventing harm involves abductive reasoning to know the future in order to act on that knowledge in the present. Accordingly, the AI tool was designed to predict severe risk – emergency placement of a child or taking them into custody – and thus to identify those children who should be targeted with social care interventions. When the aim is to anticipate the costliest harms and preemptively focus resources, predicting severe risk makes sense. Yet caseworkers, as discussed above, expected the information provided by the AI tool to serve their professional knowledge-making practice, rather than justify a possible intervention. The context in which caseworkers used the tool was meeting with clients to assess their need for social protection. Here, child custody is rarely an immediate concern. As Emma explained: ‘all of it – provisions of support, placement in non-institutional care – there’s so much before a custody process’. According to caseworkers, when it comes to facilitating interactions with clients, prediction of extreme risk tended to be too far removed from a family’s understanding of their current situation to be practically useful. Sometimes the tool’s predictions were even counterproductive in this respect. Leo described what happened when he asked a client for consent to use the tool:

A child welfare notification is already a big issue for the family, [and] it can be devastating to talk about checking the risk for emergency placement or custody. One family was so furious that we considered calling security. They were really upset, saying ‘you absolutely will not use [AI]’.

Caseworkers thus needed to carefully consider how the tool would affect relationships with clients. Leo said that after bad experiences, he often refrained from using the tool: ‘I asked [consent from] just a small number of families. We didn’t want to annoy [the clients], and their understanding of the tool and our ability to explain it were not sufficient’. If the tool were to predict something, caseworkers suggested, predictions should concern something contextually relevant to a client’s current situation. That way, they thought, the tool would facilitate rather than harm interactions. Since caseworkers were assessing the need for social protection, a possibility they mentioned was predicting whether the family were likely to become child welfare clients in the future. This is a step that has specific legal meaning in the Finnish system, and something that clients could immediately relate to. As Leo said, ‘I would imagine that the parents would then consider [AI] our everyday tool’. This underlines what caseworkers expected the AI tool to do: instead of serving as a source of anticipatory knowledge about the client, it should be an interaction device and an informational tool for client work that helps in the process of determining a client’s service needs.

5. Discussion

By design, the AI tool piloted by Finnish caseworkers reflects valences rooted in expectations of anticipatory performance. The tool is a response to the moral injunction to know the future as it employs the abductive performance of machine learning, turning past data into future predictions and authorizing present interventions to optimize the future. The experiences of social work professionals using similar AI-based tools for clearly defined decision-making tasks (Kawakami, Citation2022) appear to align with those of Finnish caseworkers. Across the board, AI’s predictions omit the rich contextual information available to caseworkers, and predictive targets are not easily aligned with caseworkers’ objectives. If the aim were to better support human decision-making with predictive AI tools, in other words to consider AI’s value in terms of ‘augmenting’ human capabilities with predictions (e.g., Cheng et al., Citation2022; Kawakami, Citation2022), nuanced design implications could be drawn from the Finnish caseworkers’ experiences.

My aim, however, has been to depart from generic expectations of AI’s predictive value and to describe contextual AI valences by examining what caseworkers consider to be desirable from their own perspective. Even if casework is inherently anticipatory, the Finnish caseworkers did not expect a tool for anticipating the future. Instead, drawing on their encounters with the AI tool, they began to form expectations for AI rooted in social work, in which AI’s value would be in improving the outcomes of casework. To delineate this contextual AI valence, it is helpful to conceptually separate the process of producing anticipatory knowledge from the resulting knowledge. The difference lies in whether AI is geared to serve the process or the outcome. For caseworkers, social work is about producing knowledge about clients, with client interactions at the professional core: these interactions, not an AI tool, are what enable caseworkers to form a deeply situated view of each client’s problems and needs. An AI tool that predicts severe harm overrides client interaction and communication, circumvents the process of knowledge production, and thus treads on the caseworker’s professional turf. Distilling a client’s entire situation into a risk measure does not help caseworkers form an understanding of that person’s situation so much as it jumps to the conclusion. Thus, the piloted AI tool fails to complement professional expertise in the process of making knowledge, instead replacing professional judgment with an external form of knowledge-making. For caseworkers, an AI tool that predicts severe risk does not amplify or augment human knowledge-production as much as it is a proxy (Collins & Kusch, Citation1998) that produces knowledge on behalf of humans – an attempt to replace human expertise and deskill the professional.

The fact that predictive risk tools can be ineffective, problematic, and outright harmful (Gillingham, Citation2019; Kawakami, Citation2022; Keddell, Citation2019) has inspired calls to imagine alternative uses for data-driven technologies in the context of child welfare (Stapleton et al., Citation2022). The examination of contextual AI valences offers a starting point for developing such alternatives: for AI tools to matter for casework, they should support casework practice. Based on the above analysis, AI should produce contextually relevant information that facilitates client interactions. This requires a specific temporal orientation; not predictions of the future as such but bringing a future orientation into the client’s present context. In general, anticipation flattens timescales by making the future actionable in the present (Adams et al., Citation2009), and in casework in particular, predictions too far removed from the present are not actionable. In fact, what the empirical material has underlined is that from a casework perspective, risk predictions and even risk factors can be harmful, as when they are contextually unjustified or irrelevant, too abstract for clients, or too far removed from casework practice.

Overall, the caseworkers regarded the AI tool similarly to how Suchman viewed 1980s copying machines: ‘It was as if the machine were tracking the user’s actions through a very small keyhole and then mapping what it saw back onto a prespecified template of possible interpretations’ (Citation2007, p. 11). Here, the keyhole peers into the client’s social care and healthcare history, and the mapping performs the move from historical data to future prediction – a key claim of predictive analytics (Aradau & Blanke, Citation2017) and a reason why it suits an anticipatory regime (Mackenzie, Citation2013). For caseworkers, however, AI’s version of abduction is deeply problematic: it assumes that clients with similar histories continue to display similarity in the future and ignores other types of knowledge about people’s lives. The person-in-situation framework of social work, in contrast, encourages caseworkers to treat clients as individuals facing different situations, being surrounded by different circumstances, and having or lacking distinctive coping skills. To caseworkers, AI’s keyhole view of clients appears skewed and even biased: the predictions ignore the possibility that people can and do depart from their historical trajectories, however problematic. Unlike the AI tool, caseworkers do not view a client’s future as determined by the past – indeed, their professional commitment is to support departures from it. Given the person-in-situation framework and the wide array of resources caseworkers draw on, AI cannot meet the demand for knowledge about a unique person in a specific context. Experts in social work also more generally recognize AI’s limits in forming a holistic understanding of a case, and the need to consider the context carefully (Kawakami, Citation2022). Data-driven processes are generally imagined as mitigating the effects of subjective human judgment (Pääkkönen et al., Citation2020), but in social care, human reasoning is precisely what AI tools lack.

On one level, this analysis corroborates AI critiques: caseworkers’ expectations for AI underline how the tool emphasizes certain ways of knowing at the expense of others, provides a limited view into people’s lives, and can produce suspect information as a result. Notably, however, the Finnish caseworkers were not worried about errors in AI’s predictions. The way they talked about encounters with AI in the interviews suggests that they felt they had freedom to determine how to use the tool and could therefore keep it at arm’s length. During the pilot trials, they experienced no demands to follow AI’s predictions and, unlike in situations where AI tools have a direct role in decision-making (Keddell, Citation2019), they could not be held responsible for its errors. In contexts where AI tools are deeply integrated into social work practices, workers report experiencing organizational pressures to use them, even when the tools’ limitations in factoring in information about clients and their lives are well known (Kawakami, Citation2022). The ease with which Finnish caseworkers shrugged off AI’s predictions has to do with their sufficiently independent professional role. As educated professionals, they were able to reflect on how AI tools could slot into their own knowledge-making practices and had the authority to make decisions on whether and how the tool could serve their needs. Without institutional pressures to use AI in a predetermined manner, they were able to consider the value of AI from premises that matter for them.

The caseworkers’ limited concern over predictive accuracy also matches the role they imagined the AI tool to have. When an AI tool primarily produces anticipatory knowledge to support decisions (Cheng et al., Citation2022), it ends up feeding fully or partially automated processes and therefore contributes to decisions or interventions (Keddell, Citation2019). When an AI tool instead supports the human professional’s anticipatory process, everything consequential continues to happen in client interactions. AI feeds not into automated processes but into human-to-human ones, and people – both caseworkers and clients – can step in, consider the credibility and relevance of AI’s outputs, correct inaccuracies, or disregard AI altogether if necessary. When AI remains in a supportive role, facilitating the processes of knowledge production rather than producing knowledge, concerns over accuracy do not take center stage; biases and errors do not lead to life-changing injustices but instead undermine the usefulness of the tool.

6. Conclusion

The confidence that AI will result in positive social change comes with a well-recognized problem: it brackets future imagination and distracts from potentially beneficial non-technical approaches. This article has suggested a different problem: our imagination about technology is limited because context-agnostic technical solutions are favored at the expense of contextual ones. My analysis of the pilot trials has shown the contextual expectations about AI that are rooted in social work. Of broader interest are the implications of this research for critical analysis of AI developments.

Contextual AI valences broadly concern AI’s correct positioning alongside human expertise in knowledge production. In social work, the production of anticipatory knowledge is at the core of caseworkers’ profession. They expected AI to help with knowledge production, but by design, the AI tool bypassed this human process and sought to replace it with a different one. This means that to steer AI developments, including in fields beyond social work, it is necessary to consider AI not as a tool that produces knowledge outcomes but as one that supports human knowledge-making processes. Whereas AI projects typically carry the aura of disruption and overcoming human limits, contextual AI valences suggest something much more modest and mundane: AI should produce information that supports human professionals. This implies a need to scale down AI-related expectations, a development that would move against the constant scaling-up of expectations that technology companies are currently promoting.

Furthermore, this research underlines the need for careful contextualization of AI developments when critically examining them. The austere welfare environment continues to push toward cost-savings by means of automation, and AI tools continue to proliferate and land in sensitive application areas. The Finnish organization I examined offers an example: since the pilot trials, it has continued experimentation with predicting future heavy users of services. Well-known AI harms – including biases, errors, and discrimination – remain undeniably relevant focal points for attention. The tendency of AI companies and projects to indiscriminately ‘revolutionize’ fields of application should, however, encourage critical analysts to step outside of this logic and consider what AI does and ought to do, contextually and domain-specifically. Careful empirical scholarship exploring AI valences is urgently needed for AI developments to matter for practices in sensitive and intimately human contexts.

Acknowledgements

I am grateful to two anonymous ICS reviewers and colleagues at Datafied Life Collaboratory and Nordic ADM Network for comments and suggestions.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was supported by Research Council of Finland grant ‘Re-humanizing automated decision-making’ [Grant Number 332993].

Notes on contributors

Tuukka Lehtiniemi

Tuukka Lehtiniemi is an economic sociologist interested in automated decision-making, human participation in AI systems, the data economy, and how the uses we invent for digital technologies are shaped by how we imagine the economy to work. He works as a postdoctoral researcher at the Centre for Consumer Society Research, University of Helsinki.

References

  • Adams, V., Murphy, M., & Clarke, A. E. (2009). Anticipation: Technoscience, life, affect, temporality. Subjectivity, 28(1), 246–265. https://doi.org/10.1057/sub.2009.18
  • Amrit, C., Paauw, T., Aly, R., & Lavric, M. (2017). Identifying child abuse through text mining and machine learning. Expert Systems with Applications, 88, 402–418. https://doi.org/10.1016/j.eswa.2017.06.035
  • Andrejevic, M. (2017). To preempt a thief. International Journal of Communication, 11, 879–896.
  • Aradau, C., & Blanke, T. (2017). Politics of prediction. European Journal of Social Theory, 20(3), 373–391. https://doi.org/10.1177/1368431016667623
  • Bareis, J., & Katzenbach, C. (2021). Talking AI into being: The narratives and imaginaries of national AI strategies and their performative politics. Science, Technology, & Human Values.
  • Benjamin, R. (2019). Race after technology: Abolitionist tools for the New Jim code. Polity Press.
  • Borup, M., Brown, N., Konrad, K., & Van Lente, H. (2006). The sociology of expectations in science and technology. Technology Analysis & Strategic Management, 18(3–4), 285–298. https://doi.org/10.1080/09537320600777002
  • boyd, d., & Crawford, K. (2012). Critical questions for big data. Information, Communication & Society, 15(5), 662–679. https://doi.org/10.1080/1369118X.2012.678878
  • Cheng, H.-F., Stapleton, L., Kawakami, A., Sivaraman, V., Cheng, Y., & Qing, D. (2022). How child welfare workers reduce racial disparities in algorithmic decisions. Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems.
  • Church, C. E., & Fairchild, A. J. (2017). In search of a silver bullet: Child welfare’s embrace of predictive analytics. Juvenile and Family Court Journal, 68(1), 67–81. https://doi.org/10.1111/jfcj.12086
  • Collins, H., & Kusch, M. (1998). The shape of actions: What humans and machines can do. MIT Press.
  • Cornell, K. L. (2006). Person-in-situation: History, theory, and new directions for social work practice. Praxis, 6(4), 50–57.
  • Cuccaro-Alamin, S., Foust, R., Vaithianathan, R., & Putnam-Hornstein, E. (2017). Risk assessment and decision making in child protective services: Predictive risk modeling in context. Children and Youth Services Review, 79, 291–298. https://doi.org/10.1016/j.childyouth.2017.06.027
  • Devlieghere, J., & Gillingham, P. (2021). Transparency in social work: A critical exploration and reflection. The British Journal of Social Work, 51(8), 3375–3392. https://doi.org/10.1093/bjsw/bcaa166
  • Doueck, H. J., English, D. J., DePanfilis, D., & Moote, G. T. (1993). Decision-making in child protective services: A comparison of selected risk-assessment systems. Child Welfare, 72(5), 441–452.
  • Drake, B., Jonson-Reid, M., Gandarilla Ocampo, M., Morrison, M., & Dvalishvili, D. (2020). A practical framework for considering the use of predictive risk modeling in child welfare. The ANNALS of the American Academy of Political and Social Science, 692(1), 162–181. https://doi.org/10.1177/0002716220978200
  • Elgin, D. J. (2018). Utilizing predictive modeling to enhance policy and practice through improved identification of at-risk clients: Predicting permanency for foster children. Children and Youth Services Review, 91, 156–167. https://doi.org/10.1016/j.childyouth.2018.05.030
  • Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. Macmillan.
  • Ferguson, A. G. (2017). The rise of big data policing: Surveillance, race, and the future of law enforcement. New York University Press.
  • Fiore-Gartland, B., & Neff, G. (2015). Communication, mediation, and the expectations of data: Data valences across health and wellness communities. International Journal of Communication, 9, 1466–1484.
  • Fourcade, M., & Healy, K. (2013). Classification situations: Life-chances in the neoliberal era. Accounting, Organizations and Society, 38(8), 559–572. https://doi.org/10.1016/j.aos.2013.11.002
  • Gillingham, P. (2019). Can predictive algorithms assist decision-making in social work with children and families? Child Abuse Review, 28(2), 114–126. https://doi.org/10.1002/car.2547
  • Gillingham, P. (2020). The development of algorithmically based decision-making systems in children’s protective services: Is administrative data good enough? The British Journal of Social Work, 50(2), 565–580. https://doi.org/10.1093/bjsw/bcz157
  • Gillingham, P., & Graham, T. (2017). Big data in social welfare: The development of a critical perspective on social work’s latest “electronic turn”. Australian Social Work, 70(2), 135–147. https://doi.org/10.1080/0312407X.2015.1134606
  • Kawakami, A. (2022). Improving human-AI partnerships in child welfare: Understanding worker practices, challenges, and desires for algorithmic decision support. Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems.
  • Keddell, E. (2015). The ethics of predictive risk modelling in the Aotearoa/New Zealand child welfare context: Child abuse prevention or neo-liberal tool? Critical Social Policy, 35(1), 69–88. https://doi.org/10.1177/0261018314543224
  • Keddell, E. (2019). Algorithmic justice in child protection: Statistical fairness, social justice and the implications for practice. Social Sciences, 8(10), 281. https://doi.org/10.3390/socsci8100281
  • Liukko, E. (2020). Monialaisesti palveluja tarvitsevien tunnistaminen sosiaali- ja terveydenhuollossa [Identifying people in need of multisectoral services in social and health care]. Policy Brief, 21/2020, Prime Minister’s Office.
  • Mackenzie, A. (2013). Programming subjects in the regime of anticipation: Software studies and subjectivity. Subjectivity, 6(4), 391–405. https://doi.org/10.1057/sub.2013.12
  • Mackenzie, A. (2015). The production of prediction: What does machine learning want? European Journal of Cultural Studies, 18(4-5), 429–445. https://doi.org/10.1177/1367549415577384
  • Marjanovic, O., Cecez-Kecmanovic, D., & Vidgen, R. (2021). Algorithmic pollution: Making the invisible visible. Journal of Information Technology, 36(4), 391–408. https://doi.org/10.1177/02683962211010356
  • Markham, A. (2021). The limits of the imaginary: Challenges to intervening in future speculations of memory, data, and algorithms. New Media & Society, 23(2), 382–405. https://doi.org/10.1177/1461444820929322
  • Montgomery, C. M. (2017). From standardization to adaptation: Clinical trials and the moral economy of anticipation. Science as Culture, 26(2), 232–254. https://doi.org/10.1080/09505431.2016.1255721
  • Pääkkönen, J., Laaksonen, S.-M., & Jauho, M. (2020). Credibility by automation: Expectations of future knowledge production in social media analytics. Convergence: The International Journal of Research Into New Media Technologies, 26(4), 790–807. https://doi.org/10.1177/1354856520901839
  • Pesonen, K., Korpela, J., Vilko, J., & Elfvengren, K. (2023). Realizing the value potential of AI in service needs assessment: Cases in child welfare and mental health services. Proceedings of the 56th Hawaii International Conference on System Sciences.
  • Pink, S., Ferguson, H., & Kelly, L. (2022). Digital social work: Conceptualising a hybrid anticipatory practice. Qualitative Social Work, 21(2), 413–430. https://doi.org/10.1177/14733250211003647
  • Pink, S., Ruckenstein, M., Berg, M., & Lupton, D. (2022). Everyday automation: Setting a research agenda. In S. Pink, M. Berg, D. Lupton, & M. Ruckenstein (Eds.), Everyday automation. Experiencing and anticipating automated decision-making (pp. 1–19). Routledge.
  • Redden, J., Dencik, L., & Warne, H. (2020). Datafied child welfare services: Unpacking politics, economics and power. Policy Studies, 41(5), 507–526. https://doi.org/10.1080/01442872.2020.1724928
  • Ruckenstein, M. (2022). Time to re-humanize algorithmic systems. AI & Society.
  • Saxena, D., Badillo-Urquiola, K., Wisniewski, P., & Guha, S. (2021). A framework of high-stakes algorithmic decision-making for the public sector developed through a case study of child-welfare. Proceedings of the ACM on Human-Computer Interaction, 5(2), 1–41. https://doi.org/10.1145/3476089
  • Stapleton, L., Lee, M. H., Qing, D., Wright, M., Chouldechova, A., Holstein, K., Wu, Z. S., & Zhu, H. (2022). Imagining new futures beyond predictive systems in child welfare: A qualitative study with impacted stakeholders. Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency.
  • Suchman, L. A. (2007). Human-machine reconfigurations: Plans and situated actions. Cambridge University Press.
  • Tutton, R. (2011). Promising pessimism: Reading the futures to be avoided in biotech. Social Studies of Science, 41(3), 411–429. https://doi.org/10.1177/0306312710397398
  • Tutton, R. (2021). Sociotechnical imaginaries and techno-optimism: Examining outer space utopias of Silicon Valley. Science as Culture, 30(3), 416–439. https://doi.org/10.1080/09505431.2020.1841151
  • Vaithianathan, R., Maloney, T., Putnam-Hornstein, E., & Jiang, N. (2013). Children in the public benefit system at risk of maltreatment. American Journal of Preventive Medicine, 45(3), 354–359. https://doi.org/10.1016/j.amepre.2013.04.022
  • Wise, J. M. (1998). Intelligent agency. Cultural Studies, 12(3), 410–428. https://doi.org/10.1080/095023898335483