Research Article

Sphere transgressions in the Dutch digital welfare state: causing harm to citizens when legal rules, ethical norms and quality procedures are lacking

Lieke Oldenhof, Margot Kersing & Liesbet van Zoonen
Received 03 Mar 2023, Accepted 17 May 2024, Published online: 01 Jul 2024

ABSTRACT

Welfare states across the world increasingly experiment with the use of big data and algorithms in the name of efficiency gains and fair decision-making. However, recent public scandals in various countries show persistent problems with how digital welfare states operate at the cost of vulnerable populations. Problems include systemic forms of discrimination, increasing levels of surveillance, stigmatization, and restricted access to public benefits. Rather than viewing these problems as instances of technical implementation hurdles, we argue that digital welfare states currently operate in an institutional void in which legal, ethical, and quality procedures are lacking or ill-equipped to address new challenges posed by digital technologies.

Based on a secondary analysis of documents, we show how this institutional void empirically manifests itself in the Dutch welfare state by zooming in on two subcases that sparked public controversy in recent years: the childcare benefits scandal, which centered on fraud detection, and the Top 400/Top 600 program, which was set up with the aim of crime prevention. By analyzing these cases, we show how sphere transgressions – understood as the encroachment of digital logic into the sphere of social welfare – can have detrimental consequences for citizens when there is an institutional void. We end with reflections on how to fill the current institutional void and identify ‘soft signals’ that could be used as pointers to recognize the potential undesirable consequences of new sphere transgressions.

1. Introduction

Across the world, we see the rapid rise of digital welfare states in which digital technologies and algorithms are used to automate decisions about the allocation of welfare benefits and predict risks of welfare fraud (Eubanks, Citation2017; Hansen et al., Citation2018; Kersing et al., Citation2022; Ratcliffe, Citation2019; Vogl et al., Citation2020; Zouridis et al., Citation2020). Although the digitalization of welfare states is usually justified in terms of greater efficiency, personalization, and fairer decision-making (Jeffares, Citation2020; OECD, Citation2016), recent public scandals in various countries show otherwise. In the UK, the introduction of the Universal Credit System led to large-scale maltreatment of citizens whose benefits were unrightfully terminated or postponed (United Nations, Citation2019). In the US, screening tools that ranked families according to their likely risk of child neglect and abuse generated racially biased results and a high number of wrong assessments (Eubanks, Citation2017). In India, low-income benefit claimants were refused access to subsidized food due to administrative glitches in a large biometric identification system, which reportedly led to citizens dying of starvation (Ratcliffe, Citation2019).

Given these public scandals, there are growing concerns about the consequences of decision-making in digital welfare states for vulnerable citizens who are increasingly under surveillance (Alston, Citation2019; Dencik et al., Citation2018; Eubanks, Citation2017). Recently, the United Nations Special Rapporteur on extreme poverty and human rights, Philip Alston, warned that there is a ‘grave risk of stumbling zombie-like into a digital welfare dystopia’ (Citation2019, p. 2), in which citizens are increasingly surveilled and restricted in their access to benefits without the possibility of opting out of the digital system or obtaining due process when they are unjustly labeled as having committed fraud. In this development, digital welfare states seem to operate in a ‘human rights free zone’ (Citation2019, p. 13). In this zone, (local) governments are often keen to conduct data experiments in cross-organizational networks and public–private partnerships without a clear democratic mandate and institutional checks and balances (Dencik et al., Citation2018; Grimmelikhuijsen & Meijer, Citation2022; Postma & De Oude, Citation2021; Van Zoonen, Citation2020). The scope of these data experiments ranges from the building of data warehouses and dashboards that transform the work of street-level bureaucrats into screen-level work to predictive analytical tools to prevent fraud (Kersing et al., Citation2022; Van Zoonen, Citation2020; Well et al., Citation2023).

Rather than viewing recent public scandals resulting from data experiments as implementation problems that can be overcome by technical fixes in the system, we argue that these problems are more fundamental in nature and deserve analytical attention beyond the technical (Peeters & Widlak, Citation2018). According to various authors, current welfare states operate in an ‘institutional void’ in which legal, ethical, and quality procedures are lacking or insufficient to address the new challenges posed by the introduction of digital technologies (Alston, Citation2019; Dencik et al., Citation2018; Postma & De Oude, Citation2021; Rekenkamer Rotterdam, Citation2021; Van Bueren & Klievink, Citation2017; Van Zoonen, Citation2020). The concept of institutional void, which was originally developed by Hajer (Citation2003) to analyze how constitutional rules of classical modernist political institutions are no longer providing answers to new wicked challenges, has been used by other scholars to study socio-technical developments, such as digitalization, that tend to ‘move faster than existing institutions can keep up with’ (Van Bueren & Klievink, Citation2017, p. 3). In a situation of an institutional void, the growing influence of the digital into public welfare may potentially cause harm to citizens. Drawing on the work of Walzer and Sharon (Walzer, Citation1983; Sharon, Citation2021a) on sphere transgressions, and Schildt’s notion of the logic of digitalization (Schildt, Citation2022), we understand the encroachment of the digitalization logic as a transgression into the sphere of welfare. There is a broad variety of potential detrimental consequences of such a sphere transgression that may occur in the case of an institutional void, such as increasing levels of surveillance of vulnerable populations, discrimination, stigmatization, and restricted access to benefits (Alston, Citation2019; Amnesty International, Citation2021; Eubanks, Citation2017; Giest, Citation2021; Well et al., Citation2023). However, it is still unclear to what extent and how precisely these consequences are connected to current institutional voids in welfare states.

To better understand how institutional voids manifest themselves in concrete settings and how sphere transgressions – in a situation of an institutional void – can cause harm, we zoom in on the digital welfare state in the Netherlands as an empirical case. Compared to more extreme cases such as the United States and India (Eubanks, Citation2017; Ratcliffe, Citation2019), at first sight the Netherlands could be considered a less salient case due to its reputation of being a welfare state with sufficient checks and balances in place. However, there is increasing criticism of the digitalization of the Dutch welfare state as a consequence of various public scandals that have also been picked up internationally (Amnesty International, Citation2021; Giest, Citation2021; Henley & Booth, Citation2020). It is therefore worthwhile to investigate the Dutch welfare state in light of these recent public scandals. Although a focus on public controversies can create a bias toward extreme situations, it can at the same time be analytically productive because controversies shed light on existing norms, struggles, and bureaucratic practices that normally stay under the radar (Pinch & Leuenberger, Citationn.d.).

Our analysis of the Dutch digital welfare state is based on a document analysis of two subcases that have received public attention: (1) the childcare benefits scandal, and (2) the Top 400/Top 600 program, which identifies youth at risk of criminalization. Although the intensity of the public debate varies (the first case being the most discussed), each subcase has been discussed in political debates at the national and/or local government level. In both cases, public organizations took the initiative to implement digital technologies. Due to a lack of transparency, it is unclear to what extent these public organizations have worked together with tech companies in designing and implementing the digital technologies that are central to the two subcases. For the document analysis, we made a secondary analysis of existing reports, news items, and academic studies. Based on this analysis, we (a) sketch the recent institutional context of the Dutch digital welfare state, (b) give insights into how the logic of digitalization transgresses into the sphere of welfare in the two subcases, (c) show in which ways an institutional void has manifested itself in the two subcases and how this causes harm to citizens, (d) reflect on how to fill the institutional void, thereby potentially reducing the harmful consequences of sphere transgressions, and (e) identify potential pointers for recognizing undesirable consequences of new emerging sphere transgressions early on, which can be relevant beyond the Dutch setting.

2. Sphere transgressions, the logic of digitalization and institutional voids

The notion of sphere transgression was originally developed by Walzer, who based his theory of justice and equality on the autonomy of spheres in which different social goods – such as education, welfare, wealth, friendship, and political power – are distributed. Walzer argues that a just society is one where an advantage in one sphere cannot be converted into dominance in another; transgressions between spheres can be considered a form of tyranny (Walzer, Citation1983; Sharon, Citation2021a). Building on Walzer’s conceptualization, Sharon further develops the concept of sphere transgressions to analyze the transgression of the digital sphere into the sphere of health. She argues that advantages in the sphere of digital goods are currently being converted into advantages in other spheres such as health, medicine, and politics by Big Tech companies (Sharon, Citation2021a). An example of such an advantage is that ‘technical expertise – in terms of data collection, data analytics and infrastructure development – which confers them (companies) a clear and legitimate advantage in the sphere of digital goods, is currently converted into advantages in other spheres’ (Sharon, Citation2021b, p. 50).

Compared to healthcare, the argument of sphere transgressions may apply even more to the sphere of social welfare, which is less institutionally regulated when it comes to the use of data. In this article, based on empirical cases in the Dutch welfare state, we investigate how a sphere transgression is taking place in the sense that advantages in the digital sphere, such as efficient standardization or automated decision-making, are converted into advantages in the sphere of welfare (e.g., smarter forms of surveillance), thereby crowding out practices and values that are central to the sphere of welfare, such as personalized care, face-to-face encounters, individual attention, and the need to empower people to the best of their abilities.

Since the digital sphere is a relatively new and broad concept, we operationalize the practices, norms, and values of this sphere by using the logic of digitalization (Schildt, Citation2022). In line with Schildt (Citation2022, p. 235), we define the logic of digitalization as ‘involving a new set of interconnected managerial beliefs and norms, organizational practices, and diverse material and social structures that together complement and challenge the established logics in organizations and institutional fields.’ The logic of digitalization can be characterized by two central organizing principles: ‘The pursuit of digital omniscience – the efforts to represent and conceive the world through digital data – and digital omnipotence – the efforts to bring activities inside and outside organizations under the control of information systems’ (ibid.). When these two organizing principles are present in practice, it can be argued that digital logic manifests itself as a sphere transgression into the sphere of welfare. This raises the question whether existing rules and regulations in the sphere of welfare are fit to deal with the new digital logic and are able to protect existing values of social welfare, such as person-centered care, empowerment, do no harm, and individual face-to-face attention.

As Hajer (Citation2003, p. 175) has argued, the concept of institutional void can shed light on situations in which ‘there are no clear rules and norms according to which politics is to be conducted and policy measures are to be agreed upon’ and where existing rules and norms no longer seem to provide answers to new challenges in our societies. Sphere transgressions, here understood as the encroachment of the logic of digitalization into the sphere of social welfare, can cause harm in a situation of an institutional void. We operationalize the concept of institutional void, following Van Zoonen (Citation2020), by asking whether existing legal, ethical, and quality frameworks are able to address the new challenges posed by the use of digital technologies. We examine the legal and ethical frameworks, referring to the lacking (legal) ‘rules’ and (ethical) ‘norms’ in Hajer’s definition of the institutional void (Citation2003). Additionally, we include quality frameworks because, as Van Zoonen (Citation2020) argues, adhering to data quality frameworks is necessary for responsible data use. We used the following operational definitions of frameworks: (a) legal frameworks refer to the rule of law and basic principles of administrative law (e.g., the obligation to motivate decisions, the prohibition of arbitrariness, proportionality, and due diligence) as well as the existence of effective complaint procedures; (b) ethical frameworks refer to Human Rights Treaties and internal ethical procedures of organizations; and (c) quality procedures refer to protocols for how data should be collected, analyzed, and shared within and between organizations and with the public in an accountable way. Data scientists work with quality measures stating that data must be FAIR (findable, accessible, interoperable, reusable) and FACT (fair, accurate, confidential, transparent), that the infrastructure must be ROBUST (resilient, open, beneficial, user-oriented, secure, trustworthy), and that data projects should meet the human-oriented data standard of SHARED values (an abbreviation referring to the principle that data projects should not reinforce existing inequalities and should support citizens from diverse backgrounds in a positive way) (Van Zoonen, Citation2020). When quality measures like these are in place, they can help, in combination with legal and ethical frameworks, to limit – or to a certain extent even fill – an institutional void.
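
To illustrate how this operationalization might be applied in practice, the sketch below encodes the three frameworks as a simple audit checklist in Python. The item wording, structure, and scoring function are our own illustrative assumptions, not a validated instrument.

```python
# Illustrative sketch only: one possible encoding of the legal, ethical, and
# quality frameworks described above as an audit checklist. Item wording and
# scoring are assumptions for illustration, not a validated instrument.

INSTITUTIONAL_VOID_CHECKLIST = {
    "legal": [
        "decisions affecting citizens are motivated",
        "no arbitrariness: like cases are treated alike",
        "sanctions are proportional and due diligence is observed",
        "effective complaint procedures exist",
    ],
    "ethical": [
        "human rights risks assessed before deployment",
        "internal ethical review procedures followed",
    ],
    "quality": [
        "FAIR: data are findable, accessible, interoperable, reusable",
        "FACT: analyses are fair, accurate, confidential, transparent",
        "ROBUST: infrastructure is resilient, open, beneficial, "
        "user-oriented, secure, trustworthy",
        "SHARED: the project does not reinforce existing inequalities",
    ],
}

def void_report(answers: dict[str, list[bool]]) -> dict[str, float]:
    """Share of checklist items met per framework; consistently low shares
    across all three frameworks signal a potential institutional void."""
    report = {}
    for framework, items in INSTITUTIONAL_VOID_CHECKLIST.items():
        met = answers.get(framework, [False] * len(items))
        report[framework] = sum(met) / len(items)
    return report
```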

3. Context of the digital welfare state in The Netherlands

The Dutch welfare state is experiencing a significant shift toward automated decision-making and Artificial Intelligence in public service delivery. Various bureaucratic organizations, including departments of work and pension, social security agencies, and tax authorities, rely on inter-organizational data exchange about citizens to manage citizens’ benefit eligibility and detect potential welfare fraud (Henley & Booth, Citation2020). This digital transformation has led to the emergence of a ‘digital cage,’ in which both citizens and bureaucrats are caught by the disciplining logic of digitalization (Peeters & Widlak, Citation2018).

Although human decision-making of ‘frontline’ bureaucrats based on individual cases and face-to-face interactions remains crucial in certain welfare sectors like youth care, social care, and social support, the digitalization of work is becoming more prevalent as tools such as data dashboards and risk profiling tools are gaining ground (Hupe, Citation2022; Kersing et al., Citation2022; Van Zoonen, Citation2020). In this development, discretion of frontline bureaucrats is gradually being redirected toward judging ‘complex’ cases that do not fit into predetermined categories and require more information and human judgment (Giest & Klievink, Citation2022; Peeters & Widlak, Citation2018).

Recent policy changes have created momentum for further digitalizing the Dutch welfare state on a local level. Due to recent decentralizations of social care and support (2015), Dutch municipalities have become responsible for ensuring support for vulnerable youth, older persons, and people with mental or physical challenges. Municipalities need to carry out these new responsibilities under challenging circumstances: with reduced budgets in times of public sector austerity and personnel shortages (Linthorst & Oldenhof, Citation2020). While initially the decentralizations were framed as an opportunity for personalized services (Oldenhof & Linthorst, Citation2022), increasingly face-to-face encounters are replaced by digital solutions due to rising caseloads and personnel shortages (Kersing et al., Citation2022).

Despite governments and tech companies positively framing the use of digital technologies as a ‘win-win’ situation for cost savings, efficiency gains, and personalized services for citizens, many smaller municipalities lack the in-house capacity and technical know-how to design and operate digital technologies (VNG, Citation2022). It is questionable whether they are in a position to critically question and reflect on this positive framing of the digitalization of local welfare. Smaller municipalities often lack the expertise to formulate the right legal, organizational, and ethical conditions in their contracts with private companies to ensure control over the development of technologies and the safeguarding of public values (VNG, Citation2022). Moreover, local governments themselves have also acknowledged that the institutional embedding of the use of digital technologies, such as algorithms, is still lacking. For example, the municipality of Rotterdam, the second largest city in the Netherlands, notes that ‘working with algorithms is a relatively new terrain. This implies that the methods and safeguards have not fully operationalized yet’ (Rekenkamer Rotterdam, Citation2021, p. 13).

Even though the national government and larger municipalities have introduced an algorithm register and set up algorithm oversight boards, this does not cover the whole breadth and impact of the digitalization currently taking place in the sector. Moreover, attention to safeguarding values in governance usually focuses on privacy (via the GDPR), neglecting other crucial values such as fairness and legitimacy (ibid.).

In this Dutch policy context of increasing digitalization and still limited institutionalization of new quality procedures and ethical and legal frameworks, a variety of public controversies have emerged in recent years that require further attention.

4. Methods

We selected the childcare benefits scandal and the Top 400/600 as cases for further analysis. Both cases (a) are located in the domain of Dutch welfare and social care, (b) have experienced recent public controversy due to the use of digital technologies and (c) are information rich due to public debates on the local and/or national level.

The two subcases have similarities, yet also contain sufficient variety, as they bring to light different risks and harms of sphere transgressions, here understood as the encroachment of digital logic into the world of social welfare. The Top 400/600 shows how the logic of digitalization encroaches into the sphere of welfare through the spillover of system-level data exchange between multiple organizations (police, social work, local government) into street-level surveillance of young people. Furthermore, the case shows a unique blurring of boundaries between the logics of care and punishment: information collected by street coaches is used to help youngsters but can also criminalize them. Whereas in the Top 400/600 case the inputs and workings of the algorithm were to a large extent visible to programmers and civil servants, this was not the case in the childcare benefits scandal. The black box algorithm in that case could change its workings autonomously and independently from its programmers, which had important consequences for transparency and accountability.

We conducted desk research based on a secondary analysis of available documents. Different types of sources were used to triangulate different standpoints in the debate and represent the views of the various actors: firstly, academic sources, such as academic articles, academic reports, and studies; secondly, reports by knowledge institutes, monitoring bodies of the government, or collectives of researchers, journalists, and activists; thirdly, newspaper items from several Dutch and international newspapers and websites; and fourthly, the category ‘other’ for all remaining sources, such as government websites, websites of Dutch civil rights organizations, news items of collectives, video material, etc. Table 1 provides an overview of the number of sources per case categorized by type of source. The document analysis took place between October 2022 and March 2023.

Table 1. Overview of sources per case.

For each case, we first present a short description of the main events. Secondly, we give insights into how the transgression of the logic of digitalization into the sphere of welfare takes place. To describe the extent to which algorithms led to automatic decision-making or merely supported the human discretion of bureaucrats, we used the conceptual distinction between decision aids and decision arbiters (Elyounes, Citation2021). Lastly, we show in which ways the institutional void manifested itself in each institutional setting and how it caused harm to citizens. In the discussion, we reflect on how to fill the institutional void, thereby reducing undesirable consequences of sphere transgressions, and we identify potential pointers for recognizing undesirable consequences of new emerging sphere transgressions early on.

5. Case analysis

5.1 Dutch childcare benefits scandal

Between 2005 and 2019, the Dutch Tax Administration wrongly accused 26,000 parents, often with dual nationality, of benefit fraud on the basis of algorithmic decision-making (Amaro, Citation2021; Henley, Citation2021).

Childcare benefits, introduced in 2004 under the Childcare Act, are overseen by the Ministry of Social Affairs and Employment. However, the responsibility for their implementation falls under the Tax Administration, which is part of the Ministry of Finance (Frederik, Citation2020). In 2006, the Netherlands Court of Audit warned of problems with the General Act on Means-tested Benefits because it did not include a hardship clause that would allow exceptions to be made should the prescribed procedures prove unreasonable or unfair (Netherlands Court of Audit, Citation2020).

In reaction to fraud committed by a number of Bulgarian migrants (Frederik, Citation2020; RTL Nieuws, Citation2015), a Fraud Management Team (CAF) was formed to identify fraudulent childminding agencies and fraudulent benefits recipients. After the use of algorithmic tools, 2,200 families received the label ‘Deliberate intent/Gross negligence’, most of them having dual citizenship. The childcare allowance of entire groups of parents was stopped without individual assessment or explanation. Families had to pay back the allowances and were refused personal payment arrangements.

Sphere transgression

The Tax Administration used a risk-detection algorithm to process social security documents from parents. If the algorithm found small administrative shortcomings, e.g., a wrongly filled box or an omitted signature, childcare allowances were automatically discontinued (University of Antwerp, Citationn.d.). Additionally, a self-learning, black box, risk-scoring algorithm autonomously selected childcare allowance recipients for further audits. It derived risk factors from the analysis of known positive and negative fraud cases and independently changed how it worked without explicit instructions from its programmers. The inputs and workings remained invisible to the civil servants using the system (Amnesty International, Citation2021; University of Antwerp, Citationn.d.).
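
To make this two-part mechanism concrete, the sketch below reconstructs the described behavior in Python. It is a minimal illustration under stated assumptions, not the Tax Administration's actual code; the field names, threshold, and generic classifier interface are our own.

```python
# Minimal reconstruction of the behavior described above, NOT the Tax
# Administration's actual code: field names, the threshold, and the generic
# opaque classifier are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Application:
    signature_present: bool
    all_boxes_filled: bool
    features: list[float] = field(default_factory=list)  # inputs to the opaque model (assumed)

def administrative_check(app: Application) -> bool:
    # Any small administrative shortcoming (a wrongly filled box, an omitted
    # signature) triggered automatic discontinuation of the allowance,
    # without individual assessment or explanation.
    return app.signature_present and app.all_boxes_filled

def select_for_audit(app: Application, black_box_model, threshold: float = 0.8) -> bool:
    # The self-learning risk scorer selected recipients for further audits.
    # Because retraining shifted its internal weights, neither programmers
    # nor civil servants could say why a given parent was flagged.
    risk = black_box_model.predict_proba([app.features])[0][1]  # scikit-learn-style API (assumed)
    return risk >= threshold
```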

The implementation of these algorithms demonstrates the transgression of the logic of digitalization as it shows the pursuit of digital omniscience to represent and conceive the world through digital data. Furthermore, it demonstrates the pursuit of digital omnipotence by seeking control over the activities inside (work processes of civil servants) and outside (citizens) of the organization through information systems.

Despite this sphere transgression, authorities downplayed the importance of digital logic in decision-making. For example, the Tax Administration claimed manual checks were conducted before adding someone to the list and that allowances were not automatically terminated. Additionally, the Ministry of Finance stated that people who were suspected of fraud were not indefinitely treated as such but only for a specific period that a civil servant should manually set, extend, and stop in the system (NOS, Citation2021). Formally, this system can therefore be viewed as a decision-aid system that allows for human discretion to some extent.

However, in practice it appeared to work primarily as an arbiter system, meaning that the system makes the decision (Elyounes, Citation2021), with bureaucrats seldom questioning it. A notable exception is a civil servant at the Tax Administration who wrote a critical report about the lack of fairness in decisions to automatically terminate citizens' benefits without proper explanation or attention to proportionality. Despite sharing this critical report internally, the civil servant's concerns were consistently dismissed: ‘don't get involved in policy matters, mind your own work.’ And: ‘even if you are right, you will never win this. You will never win against the government.’ (Klein, Citation2019b). The automatic cessation of benefits without individual assessment, together with the fact that both programmers and civil servants did not know the system's exact workings and seldom questioned it, indicates that a formal decision-aid system was de facto used as a decision-arbiter system.
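
One way to make this aid-versus-arbiter distinction empirically tractable is to examine how often human reviewers actually overturn the system's recommendation. The sketch below illustrates such a diagnostic; the log field names are hypothetical and introduced purely for illustration.

```python
# Illustrative diagnostic with invented log fields: if human reviewers almost
# never overturn the system's recommendation, a formal decision aid functions
# de facto as a decision arbiter (cf. Elyounes, 2021).

def is_de_facto_arbiter(decision_log: list[dict], max_override_rate: float = 0.01) -> bool:
    """Each log entry is assumed to hold 'system_recommendation' and
    'final_decision' fields (hypothetical names); the log is non-empty."""
    overrides = sum(
        entry["final_decision"] != entry["system_recommendation"]
        for entry in decision_log
    )
    return overrides / len(decision_log) < max_override_rate
```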

Institutional void

Throughout this case, the logic of digitalization challenges the established logics of the welfare sphere, such as responsive support and attention to personal circumstances, due to an institutional void in which legal, ethical, and quality frameworks were not sufficiently able to deal with the risks of algorithmic decision-making.

Legally, the Tax Administration violated the obligation to motivate decisions by using a self-learning black box algorithm. This obstructed accountability and transparency because programmers, civil servants, and victims did not know exactly how it worked. Consequently, it remained unclear who could be held accountable for decisions based on this algorithm.

Furthermore, the absence of a hardship clause led to the infringement of the principle of proportionality: small mistakes had big consequences because, if the algorithm found small administrative shortcomings, benefits were automatically discontinued (University of Antwerp, Citationn.d.).

The Tax Administration also infringed on the due process principle by terminating benefits without giving reminders or a second chance to provide the correct information (Klein, Citation2019a). Some parents were unaware of being labeled as having committed fraud, impeding their ability to contest it (NOS, Citation2021).

Ethically, the Tax Administration did not assess human rights risks before implementing the risk classification algorithm. The Dutch Data Protection Authority described its practices as unlawful, discriminatory, and improper, finding three violations of the GDPR: the processing of dual nationality, the use of first nationality in risk classification models, and the use of first nationality for organized crime detection (Hofs, Citation2021; Markus, Citation2020).

Furthermore, emails, meeting minutes, and work instructions surfaced in which Tax Administration civil servants made racist statements. In 2022, the State Secretary for Finance acknowledged this institutional racism: ‘These practices around the allocation of benefits could result in different groups of citizens having a higher chance of being selected for a manual assessment, and thus led to an unintentional inequality in treatment between people. (…) this applied (…) to Dutch people with low incomes and single people, but it also applied to people with a different nationality (…) This therefore qualifies as a form of institutional racism as described by The Netherlands Institute for Human Rights’ (Van Rij, Citation2022, p. 4).

The quality procedures were deficient due to opaque data analysis using the black box algorithm. The algorithm could autonomously select childcare allowance recipients for further audits, and it could change its workings autonomously and independently from its programmers. This led to transparency and accountability issues because it was unclear how and why recipients were selected for audits. This contradicts the FAIR and FACT principles, which state that data applications should be accessible and transparent.

Moreover, the quality procedures were deficient because using dual nationality as a fraud proxy reinforced existing inequalities. This contradicts the FACT principle, because dual nationality is an inaccurate proxy for fraud. It also contradicts the human-oriented SHARED values, which refer to the principle that data projects should not reinforce existing inequalities and should support citizens from different backgrounds in a positive way.
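
An inequality-reinforcing effect of this kind can be surfaced with a simple disparity check on selection rates, as sketched below; the record field names are assumptions introduced for illustration.

```python
# Illustrative fairness check with assumed field names: compares how often
# dual-nationality and other applicants were selected for audit. A ratio far
# from 1.0 indicates the proxy reinforces existing inequalities, contrary to
# the SHARED values described above.

def selection_rate(records: list[dict], dual_nationality: bool) -> float:
    group = [r for r in records if r["dual_nationality"] == dual_nationality]
    return sum(r["selected_for_audit"] for r in group) / len(group)

def disparity_ratio(records: list[dict]) -> float:
    # Values well above 1.0 mean dual-nationality applicants are flagged
    # disproportionately often relative to other applicants.
    return selection_rate(records, True) / selection_rate(records, False)
```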

The growing influence of digital logic – left unchecked by existing rules and norms – resulted in far-reaching undesirable consequences for welfare recipients, such as financial hardship, stress, mental health problems, and the out-of-home placement of children (Parlementaire ondervragingscommissie Kinderopvangtoeslag, Citation2020).

5.2 Top 400/Top 600

The city of Amsterdam uses predictive algorithmic programs, the Top 400 and the Top 600, to prevent crime incidents (Controlaltdelete, Citationn.d.; Uchida, Citation2014).

Launched in 2012, the Top 600 program, developed in collaboration with the police and social services, aims to reduce the number of high-impact crime incidents by intervening in the lives of individuals aged 21 or older who are perceived as ‘high risk’ by the algorithm. Criteria for inclusion are a prior arrest as a high-impact crime suspect, contact with the Public Prosecution Service, and having been sentenced to a punishment (Gemeente Amsterdam, Citationn.d.a).

The program expanded in 2015 with the Top 400, which identified a group of minors under 16 displaying behavior causing public nuisance without having committed serious offenses. The Top 400 is a risk modeling and profiling system run by the municipality together with the police, prosecutors, youth work, street coaches, neighborhood organizations, and social services (Fair Trials, Citation2021; Jansen, Citation2022; Gemeente Amsterdam, Citation2016, Citationn.d.b). Street coaches from welfare organizations collect information about specific youth at the municipality’s request. When these youngsters are discussed by social care workers, the police, the probation office, and other agencies are also present. This shows a clear blurring of boundaries between the logics of care and punishment (Fair Trials, Citation2021).

Sphere transgression

This case shows the growing influence of digital logic in the sphere of welfare. System-level data exchange practices between multiple organizations (police, social work, local government) can be seen as the pursuit of digital omniscience because there is an effort to represent and conceive the world through digital data.

The spillover of these system-level data exchange practices into increasing street-level surveillance of young people indicates the pursuit of digital omnipotence because of the increasing efforts to bring the daily lives of young people under the control of information systems (Schildt, Citation2022). Formally, the risk classification model is designed as a decision aid that still allows for human judgment by civil servants. However, when civil servants act upon the outcomes of these models as if they were true facts (Elyounes, Citation2021), the models may de facto become decision arbiters, leading to potential stigmatization of youngsters as criminals by default. The norms and values of the welfare sphere, such as social support and inclusion, appear to be used instrumentally for the purposes of crime prevention and surveillance. Earlier research by De Koning (Citation2017) describes how police officers respond to complaints about nuisance: ‘They ask for their IDs, call into the precinct, and hear that four of them are in the Top 600. The uniformed police will then probably say, “Hey guys, you’re in the Top 600. We don’t have anything on you now, but mind you, we’re keeping an eye on you”. So, yes, they do get that stamp among police officers, and I can imagine they don’t like it’ (ibid., p. 549).

Institutional void

Existing legal, ethical, and quality frameworks fail to counterbalance the negative consequences of the institutional logic of digitalization in this case.

Legally, the municipality and the police infringed on the principle of proportionality, because including youngsters who did not actually commit a crime on the Top 400 list may subject them to severe sanctions and interventions.

In 2016, the municipality also violated the obligation to motivate decisions by failing to explain to parents why the algorithm identified 125 minors as potentially ‘high risk’ and included them on the Top 400 list (Controlaltdelete, Citationn.d.; Jansen, Citation2022). Furthermore, the municipality violated the principle of due process because parents who requested help from the municipality often received no response or were misinformed. Consequently, they were unable to contest the municipality's decision and defend themselves (Peled, Citation2022).

Ethical procedures were lacking, as no assessment of human rights risks was conducted. More than one third of the individuals on the Top 600 list are of Moroccan descent (NPO 3, Citation2020), residing predominantly in areas with large Dutch-Moroccan populations (De Koning, Citation2016, Citation2017; NL Times, Citation2019). A 2022 report by the Parliamentary Committee highlighted that ‘people with an ethnic origin have an increased chance of being stopped by the police and therefore have more often a “note” in their police file’ (Eerste Kamer der Staten-Generaal, Citation2022a, Citation2022b). Consequently, they are disproportionately affected by measures involving police files, including the Top 400 and 600 (Controlaltdelete, Citationn.d.; Jansen, Citation2022). Civil rights organizations warn that structural discrimination on ethnic and socio-economic grounds will find its way into these systems (Fair Trials, Citation2021).

Quality procedures for the Top 400 data analysis appear inadequate because they are not in line with the FACT principle: they are not accurate, as there is merely a focus on correlations between variables and not on causal relationships. The Top 400 uses non-criminal data such as ‘serious care signals’ in its risk models, including absence from school or not finishing school; involvement in domestic violence as a victim, witness, or suspect; high-risk people in the environment; as well as mere suspicion of involvement with crime without actual evidence (Gemeente Amsterdam, Citationn.d.b). Minimal contact with the police, varying from ID checks to arrests, even without any conviction, can place individuals on the list (Controlaltdelete, Citationn.d.; Fair Trials, Citation2021; Jansen, Citation2022). A concerned lawyer warned that the system has become a self-fulfilling prophecy: being listed increases the risk of being arrested, and even if the police find no incriminating evidence, individuals still get a note in their file. The more notes one receives, the higher on the list one gets (Fair Trials, Citation2021; Groenendaal, Citation2014). The Top 400 and Top 600 thus fail to adhere to the FACT and SHARED principles, resulting in unfair consequences for listed individuals. They deeply impact youngsters’ lives, subjecting them to police and enforcement action such as arrest and regular home checks, constant surveillance, and stigmatization (De Koning, Citation2017; Peled, Citation2020, Citation2022).
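
The self-fulfilling dynamic the lawyer describes can be made explicit in a toy simulation: listing raises the probability of police contact, every contact adds a note even without evidence, and accumulating notes pushes one onto (and up) the list. All parameters below are invented for illustration and do not model the actual system.

```python
# Toy simulation of the feedback loop described above; all probabilities are
# invented for illustration and do not model the actual Top 400/600 system.
import random

def notes_after(years: int = 5, base_contact_p: float = 0.05,
                listed_contact_p: float = 0.25, list_cutoff: int = 2) -> int:
    """Number of police-file notes one (innocent) youngster accumulates."""
    notes, listed = 1, False          # a single ID check yields a first note
    for _ in range(years * 12):       # monthly time steps
        contact_p = listed_contact_p if listed else base_contact_p
        if random.random() < contact_p:
            notes += 1                # every contact adds a note, even without evidence
        listed = listed or notes >= list_cutoff  # more notes push one onto the list
    return notes
```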

6. Conclusion and discussion: filling the institutional void and identifying pointers for spotting undesirable consequences of new emerging sphere transgressions

Across the world, digital welfare states are experiencing challenges with using digital technologies in inclusive ways (Eubanks, Citation2017; Ratcliffe, Citation2019; United Nations, Citation2019). The Dutch welfare state is no exception. In fact, the analysis of two recent controversial cases in the Dutch digital welfare state – the childcare benefits scandal and the Top 400/Top 600 – reveals a clear institutional void. Existing legal, ethical, and quality frameworks did not sufficiently address the new challenges posed by the use of algorithms to detect the risk of welfare fraud or criminal behavior. First, legal principles of administrative law, such as the imperative to motivate decisions to citizens and enact due diligence, are increasingly difficult to adhere to due to the use of black box systems that contain self-learning algorithms, as in the case of the Dutch tax authorities. Since civil servants cannot assess the accuracy of the input and the fairness of the output of these systems, legal principles risk becoming empty shells. This risk increases even further when automated decision systems that are initially presented as decision aids for civil servants become de facto decision arbiters once civil servants can no longer understand their workings and implications (Elyounes, Citation2021). Second, the cases show that ethical frameworks were not used proactively to assess the risks, such as discrimination, of using new algorithms and risk models. Third, quality frameworks for assessing data quality (i.e., the timeliness, completeness, and correctness of the dataset) did not appear to be in place or were ineffective. As the Top 400/Top 600 case shows, for example, incorrect proxies were used that led to inconclusive correlations, and inclusion criteria were broadened beyond the safety domain, to such an extent that even victims of crimes, rather than only perpetrators, were included in the risk models.

Because of the current institutional void in the Dutch digital welfare state, the digital sphere transgression had detrimental consequences for vulnerable citizens who depend on the state for benefits. They experienced increased levels of surveillance, discrimination, stigmatization, wrongful termination of benefits, and newly acquired debts due to pay-back policies based on inaccurate information. In the case of the benefits scandal, these harms have been publicly acknowledged by politicians, judges, and policymakers. In the Top 400/Top 600 case, however, key politicians have continued to support the use of risk models to prevent criminal behavior despite concerns about the discrimination and stigmatization of young people who have not committed serious crimes.

To prevent the harmful impact of digital sphere transgressions on the lives of vulnerable citizens, it is necessary to fill the current institutional void. This cannot be done with a quick fix, as institutions change incrementally. It is therefore crucial that various actors in the field engage in institutional work by (re)developing legal, ethical, and quality frameworks and embedding these frameworks in daily organizational routines and norms (Lawrence & Suddaby, Citation2006). This institutional embedding ensures that frameworks do not become paper tigers but are used as living documents that enable reflection on the consequences of sphere transgressions and can be mobilized by actors as a counterbalance against harmful consequences.

When developing and institutionally embedding legal, ethical, and quality frameworks, it is crucial to take into account the relation between government and citizens and to invest in the different forms of legitimacy defined by Grimmelikhuijsen and Meijer (Citation2022): input legitimacy (translating the preferences of citizens as expressed in democratic processes into the design and use of algorithms), throughput legitimacy (setting up and adhering to legal and fair processes to ensure fairness about the way outcomes are achieved), and output legitimacy (ensuring that the use of algorithms contributes to the realization of values that citizens find important). To ensure these different forms of legitimacy, the fraud prevention logic that currently dominates many digital welfare states needs to shift toward more service-oriented values and the social rights of citizens. In this light, the UN Special Rapporteur Alston recently urged governments to change course: instead of ‘obsessing about fraud, cost savings, sanctions, and market-driven definitions of efficiency, the starting point should be on how welfare budgets could be transformed through technology to ensure a higher standard of living for the vulnerable and disadvantaged’ (Citation2019, p. 2). To ensure that the preferences and values of citizens are taken into account, it is necessary to go beyond the usual methods such as surveys and democratic elections, in which data transitions are usually not a prominent topic on the political agenda. Alternative, more experimental methods include qualitative data dialogs, games to increase data awareness, virtual data walks, and data commons (Van Zoonen, Citation2020). Some municipalities and public service providers have already started to work with these methods, although this is still at a very early stage and needs to be developed further.

Even when the institutional void is successfully addressed and reduced in the coming years, there is always a risk of undesirable consequences of new emerging sphere transgressions, as digital technologies rapidly develop in unforeseen ways. Based on the analysis of the two Dutch subcases, we can distill early ‘soft signals’ that pointed toward emerging problems in the system yet were ignored for too long, leading to full-blown public controversies. These soft signals can be used as pointers for recognizing undesirable consequences of new sphere transgressions early on. First, when civil servants experience an implicit ‘this does not feel right’ feeling, as we have seen in the childcare benefits case, this may be considered a soft signal pointing toward potentially unethical routines or decision-making in the organization. Second, when citizens experience the bureaucratic system as a digital cage rather than a helping support system (see, for example, Peeters & Widlak, Citation2018), as we have seen in both cases, this can be considered a soft signal too. This soft signal becomes a hard signal when the burden of proof is placed on the shoulders of citizens to challenge decisions without information on the grounds on which those decisions were made in the first place, thereby making complaint procedures ineffective (see also Eubanks, Citation2017). Third, boundary blurring between domains, such as care and crime prevention in the Top 400/Top 600 case, can be a soft signal. When actors in digital welfare states are able to recognize soft signals like these and act upon them, much potential harm to citizens can be prevented.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Notes on contributors

Lieke Oldenhof

Lieke Oldenhof is an Associate Professor in the anthropology of the changing welfare state at the Erasmus School of Health Policy & Management. She investigates the rise of the digital welfare state and the implications for civil servants and citizens. LinkedIn: https://www.linkedin.com/in/liekeoldenhof/

Margot Kersing

Margot Kersing is a PhD student at the Erasmus School of Health Policy & Management and investigates the use of (big) data in social policy. LinkedIn: https://www.linkedin.com/in/margotkersing/

Liesbet van Zoonen

Liesbet van Zoonen is a Professor of Sociology at Erasmus University and a director of the Centre for BOLD Cities, which coordinates multidisciplinary research projects about digitalization in the public sector. LinkedIn: https://www.linkedin.com/in/liesbet-van-zoonen-4955b343/

References