
Proceduralisation of decision-making processes: a case study of child welfare practice

ABSTRACT

This article examines the use of a standardized assessment framework, the Kvello Assessment Framework (KF), and how it guides assessment work, professional discretion and the knowledge base in child welfare practice. The KF is explored as an example of a standardized tool; it is a non-manual-based assessment tool commonly used in Norway. The data stem from fieldwork in two child welfare offices and client documents from one of these offices, which were analysed using thematic analysis. The findings show that the use of the assessment tool led to proceduralisation of assessment work in two areas. First, the tool set requirements for what to focus on and which activities to use to obtain information. Second, it imposed procedural requirements of form-filling, which in turn placed interpretive demands on the professionals, whereby interpretations were presented as conclusions. The findings also identified gaps in the professionals’ chain of argument. Based on these findings, I argue that use of this tool influences professionals’ discretionary activity, as it leads to standardization of decision-making and a narrow knowledge base. The tool may increase the level of transparency of decision-making, and thus function as an instrument of control in association with accountability. However, the use of a standardized assessment tool does not seem to enhance child welfare professionals’ analytical skills and thus does not solve the challenges of child welfare practice. The article discusses how these shortcomings may lead to biased assessments, and emphasizes the importance of a transparent decision-making process.

Introduction

Identifying children at risk and making decisions accordingly is considered paramount in child welfare work (Munro Citation2011). Yet the decision-making process is complex and filled with uncertainty and inadequacies (Fluke et al. Citation2020). In this regard, child welfare services (CWS) have been criticized for lack of competence and systematization in their assessment work (e.g. Vis, Lauritzen, and Fossum Citation2019) and lack of transparent arguments for their decisions (MCF Citation2020; Munro Citation2011). These criticisms are often linked to professionals’ use of discretion, and are considered a threat to democratic accountability (Brodkin Citation2008). In response to such criticism we have witnessed increased use of rule-following approaches and risk assessment tools (e.g. Munro Citation2011; Sørensen Citation2018; Vis, Lauritzen, and Fossum Citation2019). This article aims to examine the use of a standardized assessment framework, and how this influences child welfare professionals’ decision-making processes.

Standardization is driven by ideals of uniformity, objectivity and quality control, streamlining processes to ensure efficient, transparent and accountable services (Timmermans and Berg Citation2003). Brunsson and Jacobsson (Citation2000) see standards as explicit and formalized rules, which are connected to norms and function as instruments of control. Moreover, standardization has commonly been seen as a means to regulate public services (Noordegraaf Citation2015).

In the Scandinavian countries and other Western countries, the development of standardized assessment tools in CWS has accelerated alongside the movement towards evidence-based practice (Timmermans and Berg Citation2003). The aims are to enhance the use of scientific knowledge (Bergmark and Lundström Citation2011), to support frontline professionals in dealing with uncertainty and risky situations in decision-making (Bartelink et al., Citation2015; Ponnert and Svensson Citation2016) and to ensure legitimate and accountable services (Devlieghere, Bradt, and Roose Citation2018; Skillmark and Oscarsson Citation2020). This has led to a debate about the position of systematic approaches in CWS and their influence in social work practice (Skillmark and Oscarsson Citation2020; Sletten and Ellingsen Citation2020; Vis, Lauritzen, and Fossum Citation2019). For example, critics have argued that standardized assessment tools de-professionalize social work (e.g. Ponnert and Svensson Citation2016; White, Hall, and Peckover Citation2008) by relying too much on guidelines and checklists (Almklov, Ulset, and Røyrvik Citation2017). Additionally, the tools restrict professionals’ actions, oversimplify complexities, and fit poorly with social work because they overlook social and structural dimensions of life (Broadhurst et al. Citation2010; Stanley Citation2013). Moreover, the development of standardized practices is argued to be a strategy for auditing and promoting accountability in CWS (Brodkin Citation2008).

At the same time, extensive literature has identified various challenges when using experience-based approaches associated with intuitive reasoning (or ‘gut feeling’) (Munro Citation2011; Munro and Hardie Citation2018), or tacit knowledge (Polanyi and Sen Citation2009 [1966]). An example of such challenges is that social workers seek to confirm what they already ‘know’ or assume, which may cause a cascade effect of ‘errors’ (e.g. Benbenishty, Osmo, and Gold Citation2003; Gambrill Citation2005). However, confirmation biases and errors may arise not just from the individual professional’s use of discretion, but from an interaction of multiple factors (Munro and Hardie Citation2018). In this sense, social workers’ decisions are influenced by several factors such as case characteristics, personal preferences and organizational and external factors (Benbenishty, Osmo, and Gold Citation2003; Møller Citation2021). The argument is that decision variability is related to context (Fluke et al. Citation2020), and should not be considered an isolated event (Møller Citation2021). Similarly, professionals may use their discretion to tinker with tools in various ways to make them fit their practice (Skillmark and Oscarsson Citation2020; Sletten and Bjørkquist Citation2020).

Despite growing interest in decision-making in CWS, there is limited research on how different factors (e.g. contextual, systemic and biases) influence decision-making processes (Fluke et al. Citation2020). How standardized tools are used in practice, and in turn, their impact on practice, has also been understudied (Gillingham et al. Citation2017). As described by Møller (Citation2021), there has been much focus on decision output, and decision-making has thus been viewed as an isolated event. By disregarding the steps leading to the actual decisions, there is a potential for oversimplifying the complexity of the decision-making process. In this study on how CWS professionals interact with the standards, a micro-level perspective is applied with a focus on how procedural assessment frameworks guide and influence decision-making practices. This article pursues the following question: How does the Kvello Assessment Framework tool (KF) influence CWS decision-making processes?

Standardization and the KF assessment tool

Although recent policy initiatives have aimed to ensure a more uniform practice in assessment work (Havnen et al. Citation2021), there are currently no national guidelines on how to conduct assessments in Norwegian CWS. Nevertheless, about 50% of Norwegian CWS have adopted the KF in various forms (Vis, Lauritzen, and Fossum Citation2019), which in this study constitutes an example of a standardized assessment form with embedded ‘procedural standards’ (Timmermans and Berg Citation2003). When CWS offices use standards to create a more uniform practice, CWS professionals, as frontline workers, need to translate and adapt such guidelines into practice (Lipsky Citation2010). According to Lipsky, translating ideals into practice is often difficult due to lack of resources and limitations of the work structure.

Timmermans and Berg (Citation2003, 26) differentiate between four subtypes of standards: design standards, terminological standards, performance standards and procedural standards. Procedural standards imply guidelines for predetermined courses of action, such as describing how professionals should carry out their assessment work, and thus their decision-making process, where professional knowledge is embedded in procedures (Brunsson and Jacobsson Citation2000). Although these standards are interrelated, this article focuses on procedural standards, as these attempt to direct the professional’s behaviour and may therefore cause tensions between professional practice and standardization’s quest for rationality, transparency, objectivity and accountability (Timmermans and Berg Citation2003).

The KF is a non-licensed standardized assessment framework for use in decision-making, developed by a Norwegian psychologist (Kvello Citation2015). Using a systematic approach, it aims to identify children at risk, limit arbitrariness, and improve professional reasoning and decision-making through enhanced competence and transparency (Kvello Citation2015). The KF shares similarities with the Swedish BBIC (‘Children’s Needs in the Centre’) and the Danish ICS (Integrated Children’s System) (Havnen et al. Citation2021). It entails the use of guidelines and a checklist on how to conduct assessments that are linked to scientific evidence, and how to report on these (Kvello Citation2015); however, it does not qualify as an evidence-based programme (Kjær Citation2019).

In addition to a textbook (Kvello Citation2015), the KF consists of an electronic form with predetermined boxes for different areas to assess by using three sources of information: i) dialogue with child and parents, ii) information from external parties (e.g. school, doctor), and iii) observation of parents and child. The broad areas to assess are: living situation, health of child and parents, the child’s development, ability and opinions, parental functioning, parents’ ability to understand the child (mentalization), child-parent interaction, and risk and protective factors (Kvello Citation2015). The KF thus directs the professionals’ actions, where the theoretical knowledge is stored in the procedures of the standards (Brunsson and Jacobsson Citation2000). Accordingly, the standardized assessment form aims to make the decision-making process more predictable, as process standards are coupled with outcomes.

Moreover, the framework recommends conducting a mentalization interview, which is a certified method. Kvello (Citation2015) has also provided a checklist of the most relevant factors (32 detailed risk factors and 10 broad protective factors), which aims to determine whether there is a cumulative risk based on the number and intensity of the risk factors. This focus, together with the numerical dominance of risk factors, indicates a strong emphasis on risk in the assessments. However, determining cumulative risk in child welfare in general on the basis of the KF is ambiguous and therefore contested (Kjær Citation2019). Additionally, there is no manual describing how to use the framework (and the checklists), which is considered a limitation (Vis et al. Citation2020).

Previous research on standardization and decision-making in CWS

Previous studies on standardized tools show conflicting findings regarding their fitness for their purpose (Benbenishty, Osmo, and Gold Citation2003; Sletten and Bjørkquist Citation2020; Sørensen Citation2018). On the one hand, research shows that social workers find the tools supportive, suggesting that this increases their sense of competence and contributes to a common language (Gillingham et al. Citation2017; Sletten and Ellingsen Citation2020; White, Hall, and Peckover Citation2008). Studies also find that assessment becomes more structured and focused (Sletten and Ellingsen Citation2020; Vis, Lauritzen, and Fossum Citation2019), enhancing CWS professionals’ analysis of complex cases (Bartelink et al., Citation2015).

On the other hand, standardized tools are found to be time-consuming, leading to more information and long reports (Sletten and Ellingsen Citation2020; Vis, Lauritzen, and Fossum Citation2019; White, Hall, and Peckover Citation2008). Shaw et al. (Citation2009) found that social workers obtained different information when using the same assessment tool, demonstrating variation in the information on which assessments are based. Furthermore, White, Hall, and Peckover (Citation2008) argue that the tools exert descriptive and interpretive demands on CWS professionals, described as a ‘descriptive tyranny’. They may place an administrative burden on professionals and function as a control mechanism (Almklov, Ulset, and Røyrvik Citation2017). At the same time, CWS professionals, by exercising discretion, commonly modify the tools to fit their particular context (e.g. Skillmark and Oscarsson Citation2020; Sletten and Bjørkquist Citation2020).

Moreover, studies found that risk assessments fail to nuance the level of risk on a case-by-case basis, as the social worker needs to tick off information based on a form (Gillingham Citation2019). Others find that vague risk factors cause confusion among CWS professionals (Sletten and Ellingsen Citation2020; Vis, Lauritzen, and Fossum Citation2019). Further, guidelines on how to weight the factors are limited (Sørensen Citation2018). Risk may also be assessed differently in different contexts (e.g. Fluke et al. Citation2020), which makes it difficult to establish standardized guidelines to determine a child’s level of risk (Thoburn Citation2010). Research also suggests that standardized instruments may not necessarily lead to greater consensus than intuition in determining risk (Bartelink et al., Citation2015).

As shown, research on decision-making and standardization is conflicting and these studies offer important insights into how standardized assessment tools may influence child welfare practice. To complement existing research, the present study contributes in-depth knowledge on how standardized assessment tools guide professionals’ decision-making processes in Norway.

The conceptual framework: the concept of profession in frontline practice

Analytical perspectives and concepts used in this study draw on the theory of profession (Freidson Citation2001; Molander Citation2016).

Discretion is considered unavoidable in decision-making about a child or family, which calls for applying general knowledge to a particular case (Freidson Citation2001; Lipsky Citation2010). Discretion refers to an area of delegated power where professionals exercise choice between permitted alternatives of action based on their own judgment (Molander Citation2016). Molander (Citation2016) distinguishes between two dimensions of discretion: discretionary space and discretionary reasoning. Discretionary space refers to a structural dimension of discretion that constitutes an entrusted, but restricted, area (e.g. through laws and standards) for professionals to exercise discretion. In the debate about standardization in social work practice, standardization is claimed to restrict professionals’ ability to use discretion (Ponnert and Svensson Citation2016). However, this is contested, as standards need to be interpreted into the local context (Molander Citation2016).

Discretionary reasoning refers to an epistemic dimension of discretion. This denotes a cognitive activity performed by professionals through use of their expert knowledge and skills when making reasoned decisions under conditions of uncertainty (Molander Citation2016). Professional knowledge in this sense is commonly equated with practice wisdom (Freidson Citation2001), and resonates with tacit knowledge: ‘we can know more than we can tell’ (Polanyi and Sen Citation2009 [1966]). Moreover, discretionary reasoning is a form of practical reasoning based on professionals’ own judgment, in order to determine what ought to be done in a particular case. Discretion is therefore bounded by various normative expectations and by the context, which are considered burdens on discretionary activity. Knowledge embedded in standardized tools involves formal knowledge that is codified and explicit, and is thus a mechanism for transparency and accountability to ensure the predictability linked to decision outcomes (Brunsson and Jacobsson Citation2000). Accordingly, focusing on the discretionary activities of CWS professionals and what guides them in their reasoning will provide insight into how the KF assessment framework influences the decision-making process.

Method

This article uses a qualitative case study design (Yin Citation2014) to examine how a standardized tool (the KF) influences CWS decision-making processes. Standardized practice in CWS constitutes the case, of which the KF assessment framework is an example, hence an ‘exemplifying case’ (Bryman Citation2016). The study was conducted in two local child welfare offices in different regions of Norway; ‘Office A’ had used the KF for about a decade, while ‘Office B’ had recently started to use it. Moreover, A was a large office with a specialized approach, while B was a medium-sized office with a semi-generalist approach. The combination of these variations increased the likelihood of identifying patterns (Braun and Clarke Citation2006) in the practices that emerged from the CWS professionals’ use of the KF tool.

Participants and data collection

Access to the offices was granted by the management staff. Thirty-two CWS professionals who used the standard KF tool (20 from Office A and 12 from Office B), including seven in management positions, consented to participate in this part of the study. They had worked in the CWS for between one and 20+ years. All except one held a bachelor’s degree in social work, and some had additional education.

The data in this article draw on fieldwork (45 days) and client documents (n = 15). The latter were connected only to Office A due to restricted approval. The fieldwork was carried out at the two offices over 12 months (April 2017 to March 2018), and included participant observation and interviews (Spradley Citation2016). I participated in day-to-day activities, internal meetings and six client meetings, and conducted interviews with the CWS staff and managers. In addition, I attended training and guidance given by Kvello in both offices. Data were recorded as handwritten notes the same day, and some informal talk was recorded and transcribed verbatim. This enabled reflection and sampling that revealed new areas for further attention. In Office A, I was provided with my own office in the same corridor as the CWS professionals, which enabled me to encounter key informants (Bryman Citation2016). The fieldwork in Office A afforded valuable knowledge of the standardized tool, which made the subsequent fieldwork in Office B more concentrated in terms of participating in scheduled meetings, in addition to making the interviews more focused. Focus areas in the observations were how the standardized tool was present in the participants’ daily work, who used it and how. CWS professionals spend much of the day on casework, which enabled me to talk to them in the role of ‘conversation partner’. These conversations dealt with their assessment procedure, including what type of information they sought and their experiences of filling out the KF form. I therefore gained access (Bryman Citation2016) to what guided their assessment work, as they willingly shared ‘backstage’ information.

From Office A, 15 case assessment reports based on the KF were randomly selected. With support from the manager, the first five reports from three sub-teams in Office A that were completed in May 2017 were included. The reports provided important insights into how the CWS staff used the KF form in decision-making processes. This included the type of information emphasized, sources of information, and how the information was presented and interpreted. The purpose was to explore the CWS professionals’ focus and how this was expressed in the reports, considering that documents contain the writers’ point of view (Bryman Citation2016). Being present in the offices over time, observing and talking with professionals, together with document analysis, enhanced my understanding of how they used the KF tool and thus its influence on decision-making. The purpose of this design was to capture both formal and informal practice and possible discrepancies between these.

Data analysis

The various data sources generated thick data, which were analysed using thematic analysis (Braun and Clarke Citation2006), supported by NVivo 11. The dataset was analysed to search for patterns of common meanings (Krippendorff Citation2019). In focusing on how the standardized tool influenced decision-making processes, and hence the professionals’ doings and sayings, it was important to consider how the tool was actually used by the practitioners in their context, and how it was represented in their daily talk and activities, and in the documents. Coding and categorization emerged from alternation between an inductive data-driven approach (Bryman Citation2016), based on fieldwork data and documents, and a more deductive approach, based on the theory of profession and the concept of procedural standardization, and thus linked to theory (Yin Citation2014). To limit potential misinterpretation, I discussed the data and their categorization with other researchers during the analysis. The analysis resulted in 24 categories, which were carefully reviewed and refined, resulting in two broad themes: i) requirements of the tool and ii) gaps in the chain of argument.

Ethics

This study was approved by the Norwegian Centre for Research Data (project number 53005, dated 16 March 2017). All staff members were informed about the study and all participants signed a written consent form. Moreover, all parents whom I encountered in client observation provided oral consent and received oral and written study information. For the included documents, which are highly sensitive case files, special approval was granted by the Norwegian Directorate for Children, Youth and Family Affairs. Due to the ethical challenges of using such documents, the number of documents was restricted and limited to only one office, and they were anonymized beforehand by the CWS.

Strengths and limitations

The small sample of documents and the fact that they came from only one office may be regarded as a limitation. However, considering the ethical challenges involved in using client documents, it is a strength that I was granted access to them. Furthermore, the fieldwork was completed by the time I gained access to the documents. It would have been interesting to ask the participants to reflect upon some of the findings from the documents, which would have nuanced the findings further. Finally, this study did not include the perspectives of service users, which could have shown the extent to which the tools influence client involvement in the CWS. Nevertheless, few studies have followed casework ethnographically, which is a strength of this study.

Findings

Two themes were seen to be prominent in the analysis. The first concerns how the tool determined the CWS professionals’ actions. The second deals with how the tool led to gaps in their chain of argument, and thus the process of formulating a basis for their decisions. These two themes will be elaborated in more detail below.

Requirements of the tool for courses of action

The findings revealed patterns of procedural standardization in courses of action in two areas: firstly, in the process of gathering information about the family situation, and secondly, in reporting and interpreting the information obtained. The former involved the professionals’ tasks and focus of attention, and the latter how information was systematized and understood. These patterns were identified in data from both offices.

Task and focus requirements

Based on the KF, essential activities for obtaining information about the family situation are observations, mentalization interviews and risk assessments, which involve requirements as to what to focus on and look for. Such activities were found to be key aspects of the professionals’ daily work in both offices.

Several participants regarded observation as a source of valuable information, particularly when assessing parent-child interaction. Here, attachment and mentalization were strongly emphasized; however, parents were not necessarily told that they were being observed:

The caseworker states that the mother brought her toddler to the meeting, which enabled her to observe the interaction between mother and child. She says that she paid attention to how the mother responded to the child in this situation, which she feels could be a stressful setting. She points out that the mother did not help the child, which could be related to her culture. (…) She explains that she checked the mother’s mentalisation skills, and therefore asked her to describe her child in 3-5 words. (…). She reports not being satisfied with the mother’s reply, emphasising that the mother struggled to give a good description of the child. (Field notes, conversation with R5)

Even though the professional acknowledges that the mother’s reaction may be related to culture or stress, she still reasoned with reference to the mother’s mentalization skills. Parental mentalization abilities were a recurring theme in the professionals’ observations of parents. This was also prominent in the documents and in client meetings where mentalization interviews were conducted. However, it was common to exercise discretion to alter the interview by using only a selection of the mentalization questions with the parents. In several cases, parents had difficulty in answering such questions, which professionals sometimes related to their culture. However, the mentalization interview and questions were perceived by the CWS professionals to aid their professional judgment regardless of cultural background, and thus the tool guided their reasoning and production of knowledge of the families.

Risk and protective factors were regularly mentioned in talk about assessment work. In case discussions, comments on risk factors were more frequent than comments on protective factors. In some cases, participants emphasized that there were no protective factors, as a statement of fact. Risk and protective factors were ticked off in all documents but one; however, it varied whether these had been further assessed. Some were also concerned about the risk assessment and staff paying too much attention to risk factors:

It’s very easy to put divorced parents as a risk, but this isn’t necessarily a risk (…) In their reports, some caseworkers just list the risk and protective factors without further descriptions (…) and say that it looks more like an assembly line. (Field note from conversation with supervisor R11)

Considering the numerical dominance of risk factors described in detail, they may be easier to detect than protective factors. As the findings demonstrate, risk factors are on the CWS professionals’ agenda and are more commonly addressed in their assessments. A risk-dominated language thus shapes the professionals’ reasoning and their understanding of family situations.

Form-filling requirements

The other area of procedural standardization concerned how the professionals adhered to the way the information was structured in the predetermined categories in the forms, such as living situation, or risk and protective factors. Descriptive requirements directed how the information obtained was presented in written reports. Additionally, there is some evidence that these form-filling requirements placed interpretive demands upon the professionals. For example, parent-child interaction, mentalization and risks were commonly assessed, and conclusions were sometimes presented as facts. However, practical reasoning, with descriptions of how these were assessed, was often lacking, and thus subjective normative elements and informal practices were omitted. The following field note extract exemplifies this; here, three participants filled out the form together:

They start by ticking off type of housing and then they describe its size and how long the family have lived there. Participant A asks whether they need to put down all this information; participant B replies ‘Yes, we do’, with no further elaboration. Participant C, who is filling out the form on the computer, asks A how the atmosphere was in the home. A replies: ‘That’s speculation’. C emphasises that it is important to remove speculations, but how this is done is not elaborated. C then asks about the children’s room. A describes the children’s room and how she perceived it and repeats that these are speculations. C writes the information in the form. A adds that she felt concerned about the child, but does not state what that entailed. (…) At the end of the meeting they emphasise the importance of not basing the information on speculation. (Field note from a group meeting with R20, R23 and R27)

Although one CWS professional questions parts of the form and mentions concerns about speculative responses, the information is not presented as interpretation in the form. Hence, the professionals yield to the requirements of the form, and thus the various perspectives are not included. Moreover, the professionals’ concern, which may be tacit, is not accounted for. Further, this also illustrates, as supported in the documents, that the reasons for their actions and interpretations are not stated.

However, the form-filling requirements did also focus attention on the child by making the child’s voice more explicit, which may strengthen the involvement of children in CWS work. This suggests that such requirements can enhance children’s participation, at least in terms of listening to children’s views on their situation. However, there was no clear pattern in the documents as to how or whether the child’s voice was weighted in the assessments, except for some examples where the child’s descriptions conflicted with those of the parents, and were then given more weight.

Gaps in the chain of argument

Another strong and consistent theme throughout this study is the lack of transparency of the reasoning on which conclusions were based. When the participants discussed their cases in groups, informal conversations or in consultation after a client meeting, suggestions were put forward without any articulation of the arguments leading to the suggested conclusion. In this sense, they were exercising discretion, but without making their reasoning explicit. This is illustrated by the following example from an investigative team discussing new cases transferred from the intake team for further investigation:

The child welfare professionals are discussing a case involving a family with three children with a concern for only one of the children. One participant reads from the intake report which concludes that the case needs to be further investigated, for all three children. The investigative team questions the decision that all three children need to be included in the investigation, which was not explained in the document. The participant reads on and states that the report recommends issues the family needs to work on [suggestion of measures]. Another participant says: “Well, then the case is already concluded, so what’s the point of investigating it”. A third participant replies that this happens quite often. (Field note, from intake meeting, Office A).

This shows that the reasons for the decision to investigate all three children were not made explicit, and it was thus difficult to determine the nature of the case. Further, as seen throughout the fieldwork, measures are often suggested before a case is fully investigated. Accordingly, conclusions are presented without knowledge of what arguments or information they are based on, and thus the professionals define the family’s needs without making this explicit. This suggests use of tacit knowledge in order to arrive at a justified conclusion. These findings also relate to another finding indicating that the professionals struggled to make explicit how they interpreted the information obtained, as explained by one of the supervisors:

When they analyse, they’re supposed to state the reason for their opinion, e.g. why they believe that a risk is present (…) and how the child is affected by this risk factor. (…) However, several of the professionals struggle to differentiate between the analysis of the risk and protective factors and the overall assessment (R18).

Lack of transparent reasoning behind their analysis was also found in the documents. Participants provided detailed information about the family and child, but it was challenging to discern how these thick descriptions were interpreted and assessed, thus leaving a gap in their reasoning. Similarly, inconsistency was detected between the description of the family situation, the CWS assessment of the situation and their conclusion. For example, topics that were described were not necessarily assessed and vice versa, and in some documents, new information was presented in the conclusion. Moreover, one document stated that the child had special needs in the descriptive section. However, the nature of these special needs was not described. Later in the document, a report from the school said the child did not have any special needs, and there was no mention of the child’s special needs in the assessment section. The conclusion section, however, stated that the child had special needs, but without mentioning the basis for this conclusion. Further, how conflicting opinions of the child were assessed was not made explicit in the report. The same tendencies were found in other documents, suggesting regular gaps in the professionals’ chain of argument. The above findings demonstrate that a synthesis between the rich descriptions obtained, the risk and protective factors, and conclusions based on practical reasoning, is not accounted for. This may derive from tacit knowledge; however, when discretionary power is exercised, the decisions lack transparency. Overall, the findings show that part of the decision-making process and the CWS professionals’ focus of attention becomes standardized when using the tool; here, psychological knowledge seemed to be the preferred knowledge base.

Discussion

This study examines how use of the standardized KF influences the decision-making process in CWS, with an emphasis on discretion and professional knowledge. The analysis shows examples of proceduralisation of decision-making practice, that is, of professionals’ actions and focus, when the KF is used. Moreover, the use of a standardized assessment tool has several, even conflicting, implications for the decision-making process, which will be discussed in the following.

Standardization of actions and increased control

The findings show how the CWS professionals’ actions become standardized when following the procedures. This is particularly seen in their process of gaining information about the family situation, e.g. the types of information they pursue and their activities in collecting this information, such as talking with the child. These activities are explicitly expressed and visible in their reporting. This suggests that the requirements of the KF tool, and thus the procedural practices (Timmermans and Berg Citation2003), enhance transparency of their activities in assessment work. This corresponds with the argument that standardized assessment tools, at least in some sense, help to make the entire decision-making process in CWS more transparent and explicit (Devlieghere, Bradt, and Roose Citation2018; Ponnert and Svensson Citation2016). This form of transparency may be coupled with audit and accountability in terms of following procedures (Devlieghere and Gillingham Citation2020). Since the KF provides rules for assessment work, it functions as a form of regulation, and thus a tool of procedural accountability (Brunsson and Jacobsson Citation2000; Timmermans and Epstein Citation2010). These developments are referred to as a new mode of accountability, as they entail making the entire process accountable to a third party (Timmermans and Berg Citation2003). Consequently, this may influence the structural dimension of discretion (Molander Citation2016), as a standardized assessment framework adds new rules to decision-making practices. Some scholars have raised concerns that this limits frontline discretion that may be tacit (Brodkin Citation2008). In turn, this may restrict professionals’ body of knowledge, which needs to be both formal and tacit (Freidson Citation2001; Polanyi and Sen Citation2009 [1966]). However, as pointed out by Timmermans and Berg (Citation2003), even the strictest guidelines allow for the use of professional discretion. Nevertheless, stricter guidelines can be understood as the creation of new social structures that favour explicit codified knowledge (Sletten and Ellingsen Citation2020) and enable increased control of professional practice (Brunsson and Jacobsson Citation2000). Accordingly, this may weaken professionals’ discretionary power and the requirement of individualization (Molander Citation2016).

Since the CWS holds authority over others, transparency may also be considered important to enable service users to understand CWS work and the processes leading to its decisions. Following Blomgren and Sahlin (Citation2017), procedural standards may be understood as a quest for transparency to enhance user involvement, which is strongly coupled with democratic accountability, unlike the efficiency focus found in managerial reforms. Although procedural standards aid transparency of activities for managers and other professionals, as seen in this study, they do not seem to make assessments more transparent for service users. Examples are gaps in professionals’ arguments or lack of information to service users that they were being observed or that their mentalization skills were being assessed, and thus the professionals define the parents through their discretionary power (White, Fook, and Gardner Citation2006). Following Molander (Citation2016, 25), this may constitute a normative problem, referred to as ‘burdens of discretion’, in which professionals’ reasoning may be exposed to bias. Further, assessment tools have been found to strengthen the professional’s role through the use of a more professional vocabulary (Gillingham et al. Citation2017; Sletten and Ellingsen Citation2020). Consequently, this may increase the professionals’ discretionary power and thus make decision-making practice even less transparent to parents and children, who find it difficult to understand the terminology used. Therefore, transparency may be an important contribution to making social work practice more accessible to service users (Devlieghere, Bradt, and Roose Citation2018), and thus to avoiding deceiving parents (Gambrill Citation2005) through biased judgments that may prevent equal treatment of families (Molander Citation2016). Yet unless decision-making practices are made explicit to service users, transparency will vary according to the audience, and will therefore only be present to a certain degree (Devlieghere and Gillingham Citation2020). Although professionals’ actions become standardized as they seem to demonstrate rather strong loyalty to the tool, this study shows that procedural standards only to some extent function as a tool for democratic accountability (Brodkin Citation2008).

Standardization of knowledge

From a knowledge perspective, standardized tools such as the KF contain focus requirements to produce knowledge about the family situation, which is essential in making decisions, hence what and how we know. The present findings concur with previous research that shows that professionals favour using the knowledge base often embedded in standardized tools, namely psychological knowledge (Sletten and Ellingsen Citation2020; Stanley Citation2013). This seems to become reinforced by the language of the tools that enables professionals to make this knowledge explicit, and thus influences decision-making practices. In this way, the professionals follow the rules of the standard, which makes knowledge production in CWS become standardized, as knowledge is stored in the standard rules, such as prediction of risk in risk assessment (Brunsson and Jacobsson Citation2000). Hence, risk and mentalization seem to have become a gold standard for measuring parenting abilities that may pose new normative constraints on the professionals’ reasoning (Molander Citation2016). This suggests that their discretion is affected by the standard that in turn shapes their interpretation of the family situation. Further, the formal knowledge embedded in the standards is what counts as legitimate knowledge, and thereby the CWS professionals’ position as experts is under pressure. Consequently, there is a potential for overlooking other factors that may influence the family situation, and here the professionals may adhere to a narrow knowledge base in their reasoning. These findings concur with a recent study that found that reliance on risk assessments could potentially overlook risk-reducing factors (Krutzinna and Skivenes Citation2021). The fact that formal written knowledge is more easily stored may undermine other forms of knowledge that are harder to translate into specific rules, such as tacit knowledge and knowledge of particular cases (Brunsson and Jacobsson Citation2000; Noordegraaf Citation2015). Accordingly, there is a risk of adopting a narrow approach in knowledge production in CWS (Havnen et al. Citation2021; Stanley Citation2013), which is reinforced by increased demands for accountability (Munro Citation2011). From a decision-making perspective, accountability is essential as procedural standards influence professionals’ discretionary reasoning (Molander Citation2016), and may therefore lead to biased decision-making (Munro and Hardie Citation2018).

Reasoning and handling of uncertainty

CWS professionals found that the tool generated thick descriptions of the family situation, which was linked to the form-filling and descriptive requirements posed by the tool. However, only parts of their activities and viewpoints were reported. For example, they did not report on the considerations they took in relation to individual clients, and the families’ responses were not always accounted for. Moreover, there was inconsistency in the information analysed, where reasons for the statements were not presented. Hence, how they interpreted the information obtained and how they handled different perspectives was not made explicit. Accordingly, there were gaps in their reasoning in their reporting of assessments. Professionals tend to rely on tacit knowledge and intuitive reasoning when exercising discretion (Hammond Citation1996). However, the amount of information generated by the tool makes it challenging to determine what information is essential to a given case (Vis et al. Citation2020). These findings are in keeping with the criticism of the Norwegian CWS by the European Court of Human Rights and the Norwegian Supreme Court, which pointed out that the CWS lacked clear arguments leading to their conclusions, and that conflicting viewpoints were not assessed (MCF Citation2020). A response to this criticism tends to involve increased use of standardized CWS assessment tools to ensure qualified decision-making and accountability. Moreover, from a decision-making perspective it is a common perception that more information generates good decisions, particularly in cases of uncertainty (Brunsson and Brunsson Citation2015), as found in CWS decision-making practice (Fluke et al. Citation2020). However, according to Brunsson and Brunsson (Citation2015), this is a misconception and may even increase decision-makers’ level of uncertainty. This is because decision-makers may find it challenging to handle large amounts of information, to make sense of the information and to deal with conflicting viewpoints and perspectives, as seen in this study. Consequently, errors may occur due to uncertainty (Fluke et al. Citation2020). In line with Brunsson and Brunsson’s (Citation2015) argument, the use of a procedural assessment framework may in fact not produce the desired effects, i.e. less uncertainty, improved quality of discretionary reasoning and thus better qualified decisions.

Conclusion

This study shows that the use of a standardized assessment tool results in proceduralisation of CWS assessment work, as CWS professionals’ actions and knowledge become standardized. To some extent, this increases the level of transparency and accountability, thus enabling increased control over professional practice with the potential to limit the professionals’ discretionary space. It is not necessarily a question of whether or not we should use such tools. However, as this research has shown, a standardized assessment tool alone does not solve the challenges of CWS practice nor prevent discretionary biases, despite the aim of the tool to improve professionals’ epistemic discretionary reasoning. It may in fact create new challenges. This study demonstrates that CWS professionals prefer the knowledge base of the standards, in which risk and mentalization become the gold standard in CWS decision-making practice. The problem arises if one uses standardized tools blindly and disregards rival perspectives, without critically reviewing potential biases and conclusions deriving from the standardized procedures. Analysing information is a complex task. However, the use of standardized assessment tools does not seem to enhance CWS professionals’ analytical skills nor enable them to articulate their reasoning. Considering that professionals are entrusted with discretionary power, one should expect their discretionary activity to be made explicit, since they have an obligation to others. The aim is not to avoid use of tacit knowledge, but, in line with Molander (Citation2016), I argue for a greater focus on reflective activity in order to enhance professionals’ discretionary reasoning.

Acknowledgments

I would like to thank Professor Ingunn T. Ellingsen at the University of Stavanger, Professor Catharina Bjørkquist at Østfold University College and my anonymous reviewers for helpful comments and suggestions.

Disclosure statement

No potential conflict of interest was reported by the author(s).

References

  • Almklov, P. G., G. Ulset, and J. Røyrvik. 2017. “Standardisering Og Måling I Barnevernet [Standardisation and Measurement in Child Welfare].” In Trangen Til Å Telle: Objektivering, Måling Og Standardisering Som Samfunnspraksis [The Need to Count: Objectification, Measurement and Standardisation as a Societal Practice], edited by T. Larsen and E. Røyrvik, 153–183. Oslo: Scandinavian Academic Press.
  • Bartelink, C., T. A. Van Yperen, and I. J. Ten Berge. 2015. “Deciding on Child Maltreatment: A Literature Review on Methods that Improve decision-making.” Child Abuse & Neglect 49: 142–153. doi:10.1016/j.chiabu.2015.07.002.
  • Benbenishty, R., R. Osmo, and N. Gold. 2003. “Rationales Provided for Risk Assessments and for Recommended Interventions in Child Protection: A Comparison between Canadian and Israeli Professionals.” British Journal of Social Work 33 (2): 137–155. doi:10.1093/bjsw/33.2.137.
  • Bergmark, A., and T. Lundström. 2011. “Guided or Independent? Social Workers, Central Bureaucracy and evidence-based Practice.” European Journal of Social Work 14 (3): 323–337. doi:10.1080/13691451003744325.
  • Blomgren, M., and K. Sahlin. 2017. “Quests for Transparency: Signs of a New Institutional Era in the Health Care Field.” In Transcending New Public Management, edited by P. Lægreid and T. Christensen, 167–190. London: Routledge.
  • Braun, V., and V. Clarke. 2006. “Using Thematic Analysis in Psychology.” Qualitative Research in Psychology 3 (2): 77–101. doi:10.1191/1478088706qp063oa.
  • Broadhurst, K., C. Hall, D. Wastell, S. White, and A. Pithouse. 2010. “Risk, Instrumentalism and the Humane Project in Social Work: Identifying the Informal Logics of Risk Management in Children’s Statutory Services.” British Journal of Social Work 40 (4): 1046–1064. doi:10.1093/bjsw/bcq011.
  • Brodkin, E. Z. 2008. “Accountability in street-level Organizations.” International Journal of Public Administration 31 (3): 317–336. doi:10.1080/01900690701590587.
  • Brunsson, N., and B. Jacobsson. 2000. A World of Standards. Oxford, UK: Oxford University Press.
  • Brunsson, K., and N. Brunsson. 2015. Beslutninger [Decisions]. Oslo: Cappelen Damm akademisk.
  • Bryman, A. 2016. Social Research Methods. 5th ed. Oxford, UK: Oxford University Press.
  • Devlieghere, J., L. Bradt, and R. Roose. 2018. “Creating Transparency through Electronic Information Systems: Opportunities and Pitfalls.” The British Journal of Social Work 48 (3): 734–750. doi:10.1093/bjsw/bcx052.
  • Devlieghere, J., and P. Gillingham. 2020. “Transparency in Social Work: A Critical Exploration and Reflection.” The British Journal of Social Work. doi:10.1093/bjsw/bcaa166.
  • Fluke, J. D., M. López López, R. Benbenishty, E. J. Knorth, and D. J. Baumann. 2020. “Advancing the Field of decision-making and Judgment in Child Welfare and Protection: A Look Back and Forward.” In Decision-making and Judgment in Child Welfare and Protection. Theory, Research, and Practice, edited by J. D. Fluke, M. L. López, R. Benbenishty, E. J. Knorth, and D. J. Baumann, 301–317. New York, NY: Oxford University Press.
  • Freidson, E. 2001. Professionalism: The Third Logic. Cambridge: Polity Press.
  • Gambrill, E. D. 2005. “Decision Making in Child Welfare: Errors and Their Context.” Children and Youth Services Review 27 (4): 347–352. doi:10.1016/j.childyouth.2004.12.005.
  • Gillingham, P., P. Harnett, K. Healy, D. Lynch, and M. Tower. 2017. “Decision Making in Child and Family Welfare: The Role of Tools and Practice Frameworks.” Children Australia 42 (1): 49–56. doi:10.1017/cha.2016.51.
  • Gillingham, P. 2019. “Can Predictive Algorithms Assist decision-making in Social Work with Children and Families?” Child Abuse Review 28 (2): 114–126. doi:10.1002/car.2547.
  • Hammond, K. 1996. Human Judgement and Social Policy: Irreducible Uncertainty, Inevitable Error, Unavoidable Injustice. Oxford: Oxford University Press.
  • Havnen, K., S. Fossum, C. Lauritzen, and S. A. Vis. 2021. “How Does the Kvello Assessment Framework Attend to Important Dimensions of the Children’s Needs and Welfare? A Comparison with the BBIC and the ICS Frameworks for Child Welfare Investigations.” Nordic Social Work Research 1–13. doi:10.1080/2156857X.2021.1891959.
  • Kjær, A.-K. B. 2019. “Risikovurderinger I Barnevernet – Hva Innebærer Det Og Når Trengs Det? [Risk Assessments in Child Welfare - What Do They Mean and When are They Needed?].” Tidsskrift for familierett, arverett og barnevernrettslige spørsmål 17 (2): 131–149. doi:10.18261/.0809-9553-2019-02-0.
  • Krippendorff, K. 2019. Content Analysis: An Introduction to Its Methodology. 4th ed. Los Angeles, CA: SAGE.
  • Krutzinna, J., and M. Skivenes. 2021. “Judging Parental Competence: A cross-country Analysis of Judicial Decision Makers’ Written Assessment of Mothers’ Parenting Capacities in Newborn Removal Cases.” Child & Family Social Work 26 (1): 50–60. doi:10.1111/cfs.12788.
  • Kvello, Ø. 2015. Barn I Risiko: Skadelige Omsorgssituasjoner [Children at Risk: Harmful Care Situations]. 2nd ed. Oslo, Norway: Gyldendal akademisk.
  • Lipsky, M. 2010. Street-level Bureaucracy: Dilemmas of the Individual in Public Services. 30th anniversary expanded ed. New York: Russell Sage Foundation.
  • MCF. 2020. Informasjonsskriv Om Behandlingen Av Barnevernssaker - Nye Avgjørelser Fra Høyesterett [Information Letter on the Processing of Child Welfare Cases: New Decisions by the Supreme Court]. Oslo, Norway: Ministry of Children and Families.
  • Molander, A. 2016. Discretion in the Welfare State: Social Rights and Professional Judgment. Abingdon, UK: Routledge.
  • Munro, E. 2011. The Munro Review of Child Protection: Final Report, a child-centered System. Vol. 8062. London, UK: Department of Education.
  • Munro, E., and J. Hardie. 2018. “Why We Should Stop Talking about Objectivity and Subjectivity in Social Work.” The British Journal of Social Work 49 (2): 411–427. doi:10.1093/bjsw/bcy054.
  • Møller, A. M. 2021. “Deliberation and Deliberative Organizational Routines in Frontline decision-making.” Journal of Public Administration Research and Theory 31 (3): 471–488. doi:10.1093/jopart/muaa060.
  • Noordegraaf, M. 2015. Public Management: Performance, Professionalism and Politics. London: Palgrave Macmillan.
  • Polanyi, M., and A. Sen. 2009 [1966]. The Tacit Dimension. Chicago: University of Chicago Press.
  • Ponnert, L., and K. Svensson. 2016. “Standardisation—the End of Professional Discretion?” European Journal of Social Work 19 (3–4): 586–599. doi:10.1080/13691457.2015.1074551.
  • Shaw, I., M. Bell, I. Sinclair, P. Sloper, W. Mitchell, P. Dyson, J. Clayden, and J. Rafferty. 2009. “An Exemplary Scheme? An Evaluation of the Integrated Children’s System.” The British Journal of Social Work 39 (4): 613–626. doi:10.1093/bjsw/bcp040.
  • Skillmark, M., and L. Oscarsson. 2020. “Applying Standardisation Tools in Social Work Practice from the Perspectives of Social Workers, Managers, and Politicians: A Swedish Case Study.” European Journal of Social Work 23 (2): 265–276. doi:10.1080/13691457.2018.1540409.
  • Sletten, M. S., and C. Bjørkquist. 2020. “Professionals’ Tinkering with Standardised Tools: Dynamics Involving Actors and Tools in Child Welfare Practices.” European Journal of Social Work 1–12. doi:10.1080/13691457.2020.1793114.
  • Sletten, M. S., and I. T. Ellingsen. 2020. “When Standardization Becomes the Lens of Professional Practice in Child Welfare Services.” Child & Family Social Work 25 (3): 714–722. doi:10.1111/cfs.12748.
  • Spradley, J. P. 2016. The Ethnographic Interview. Long Grove, IL: Waveland Press.
  • Stanley, T. 2013. “‘Our Tariff Will Rise’: Risk, Probabilities and Child Protection.” Health, Risk & Society 15 (1): 67–83. doi:10.1080/13698575.2012.753416.
  • Sørensen, K. M. 2018. “A Comparative Study of the Use of Different risk-assessment Models in Danish Municipalities.” British Journal of Social Work 48 (1): 195–214. doi:10.1093/bjsw/bcx030.
  • Thoburn, J. 2010. “Achieving Safety, Stability and Belonging for Children in out-of-home Care: The Search for ‘What Works’ across National Boundaries.” International Journal of Child and Family Welfare 13 (1): 34–49. Retrieved from: https://www.scopus.com/record/display.uri?eid=2-s2.0-85055407863&origin=inward
  • Timmermans, S., and M. Berg. 2003. The Gold Standard. Philadelphia, PA: Temple University Press.
  • Timmermans, S., and S. Epstein. 2010. “A World of Standards but Not A Standard World: Toward A Sociology of Standards and Standardization.” Annual Review of Sociology 36 (1): 69–89. doi:10.1146/annurev.soc.012809.102629.
  • Vis, S. A., C. Lauritzen, and S. Fossum. 2019. “Systematic Approaches to Assessment in Child Protection Investigations: A Literature Review.” International Social Work. doi:10.1177/0020872819828333.
  • Vis, S. A., Ø. Christiansen, K. J. S. Havnen, C. Lauritzen, A. C. Iversen, and T. Tjelflaat. 2020. Barnevernets undersøkelsesarbeid-fra Bekymring Til Beslutning. Samlede Resultater Og Anbefalinger [The Investigative Work of the Child Welfare Service: From Concerns to Decisions. Overall Results and Recommendations]. Tromsø, Norway: UiT The Arctic University of Norway.
  • White, S., J. Fook, and F. Gardner. 2006. Critical Reflection in Health and Social Care. https://ebookcentral.proquest.com/lib/hiof-ebooks/detail.action?docID=295530
  • White, S., C. Hall, and S. Peckover. 2008. “The Descriptive Tyranny of the Common Assessment Framework: Technologies of Categorization and Professional Practice in Child Welfare.” British Journal of Social Work 39 (7): 1197–1217. doi:10.1093/bjsw/bcn053.
  • Yin, R. K. 2014. Case Study Research: Design and Methods. Thousand Oaks, CA: Sage.