Policing and Society
An International Journal of Research and Policy
Volume 33, 2023 - Issue 3

Articles

Enacting criminal futures: data practices and crime prevention

Pages 333-347 | Received 31 Mar 2022, Accepted 05 Aug 2022, Published online: 24 Aug 2022

ABSTRACT

This paper analyzes how in predictive policing, data ‘make’ criminal futures that can be subjected to crime prevention. In doing so, it shows that the production of specific accounts of the future is predicated on the data practices of police departments. Specifically, the analysis identifies two sets of data practices that play a key role in how predictive policing brings specific criminal futures into being and renders them amenable to crime prevention measures. The first one pertains to the production of crime data and draws attention to the digital devices and interfaces that co-constitute how criminal phenomena are recorded and classified. The second one relates to ‘quality control’, i.e. the corrections and updates that keep on transforming crime data even after their original production. Theoretically, the analysis builds on Law and Urry’s concept of ‘enactment’ and substantiates how data-driven analytics perform fluid versions of social realities and simultaneously subject them to interventions. The conceptual lens of enactment allows us to understand how seemingly mundane activities such as producing reports from crime scenes and amending case files throughout an investigation are performative in the sense that they contribute to the making of crime forecasts and corresponding prevention measures. The criminal futures that the police target in an effort to render crime prevention more effective and efficient should thus be understood as contingent on the everyday practices of field officers and analysts.

Police work has become strongly reliant on knowledge production by means of digital data. While datafication can be traced back to the implementation of information and communication technologies in police departments around the world from the 1990s onwards (Ackroyd et al. Citation1992; Chan et al. Citation2001; Manning Citation2008), analytical capacities have more recently taken on a novel dynamic in the nexus of increased computing power, storage, information availability, and algorithmic means for intelligence extraction. One of the pertinent developments in recent years has been predictive policing, i.e. the idea of mobilising pattern recognition in data to anticipate and prevent crime (Perry et al. Citation2013). Geared either towards the assessment of individual persons allegedly prone to commit crimes (or become victimised) or towards the identification of spaces vulnerable to increased criminal activity, the operational rationale of predictive policing is to enable targeted approaches to crime prevention and raise the overall efficiency and effectiveness of police work (Beck and McCue Citation2009).

Increasingly sophisticated and automated forms of crime analysis have drawn renewed attention to the data that form the basis for knowledge and intervention in police work. Critical data scholars have shown how the form and informational value of data depend on the context of their production and that they are subject to ongoing changes and modifications (Gitelman Citation2013; Kitchin Citation2014). This renders crime analysis and resulting crime prevention measures contingent on how the police produce crime data and interact with them. Criminological research, despite a long-standing concern with crime data, has only recently begun to engage with data processes in police organisations and their implications for how the police perceive the world and act within it. This paper analyzes how in predictive policing, data ‘make’ criminal futures that can be subjected to crime prevention. In doing so, it argues that the production of specific accounts of the future is predicated on the data practices of police departments. Theoretically, the analysis builds on the concept of ‘enactment’ (Law and Urry Citation2004) and substantiates how data-driven analytics perform fluid versions of social realities and simultaneously subject them to interventions (Ruppert Citation2011).

Specifically, the analysis identifies two sets of data practices that play a key role in how predictive policing brings specific criminal futures into being and renders them amenable to crime prevention measures. The first one pertains to the production of crime data and draws attention to the digital devices and interfaces that co-constitute how criminal phenomena are recorded and classified. The second one relates to ‘quality control’, i.e. the corrections and updates that keep on transforming crime data even after their original production. The conceptual lens of enactment allows us to understand how seemingly mundane activities such as producing reports from crime scenes and amending case files throughout an investigation are performative in the sense that they contribute to the making of crime forecasts and corresponding prevention measures.

The paper proceeds as follows. After a brief review of data and methodology, it first engages with the criminological literature that has investigated the relation between crime and data, as well as the ways in which police organisations produce and handle data. It then introduces conceptual work that draws attention to how data enact specific realities and suggests an analytical focus on data practices. The empirical section finally reconstructs how, in the case of predictive policing, digital devices and interfaces as well as updates and corrections enact particular criminal futures and subject them to crime prevention measures.

Data and methodology

The empirical data analyzed for this paper were collected in the context of a multi-year research project on shifting practices of knowledge production and intervention in police work (Egbert and Leese Citation2021). Between 2016 and 2019, the experiences of 11 police departments in Germany and Switzerland implementing and using predictive policing software were covered using a triangulation of qualitative methods (expert interviews, ethnographic observations, document analysis). Overall, a total of 62 semi-structured expert interviews with police officers (strategic, tactical, and operative level) as well as industry representatives were conducted. Audio recordings of all interviews were transcribed or, in cases where no recording was permitted, detailed notes were produced. Interviews were complemented with 40 field protocols from ethnographic research (mostly shadowing software operators and other persons involved in the predictive policing process) as well as 350+ public and police-internal documents on predictive policing (e.g. user manuals, internal guidelines, press releases, legal documentation, etc.).

The resulting dataset of interview transcripts, field protocols and documents was coded in-vivo using qualitative data analysis software (MaxQDA), i.e. topics and themes discussed by respondents were taken as a starting point for the formation of categories across the material. During multiple rounds of coding, these categories were refined, including the creation of sub-categories. The resulting code tree contained 3,555 coded segments distributed among eleven main categories and two levels of sub-categories. This structure allowed for the clustering of thematic blocks that represent the experiences, opinions, and most notably practices of police departments around data, algorithmic modes of crime analysis, and changing forms of policing and crime prevention. As per agreement with research participants, all data have been anonymized to prevent the identification of individuals or organisations. References to empirical data throughout the manuscript are labelled ‘I’ (interview), ‘P’ (protocol) and ‘D’ (document) and numbered to indicate the corresponding file.

It should be noted at this point that there was some expected variance between the researched police departments in terms of organisational structure, available resources and capacities, as well as policies that informed strategic, tactical, and operational preferences regarding crime analysis and crime prevention. This is due to the historical developments that shape police departments and have often resulted in idiosyncratic organisational structures, and moreover due to the federal political structure of both Germany and Switzerland that subjects police departments to political programmes in accordance with state-level governments and policies. Moreover, there was some minor variance in regard to the actual predictive policing software that was used. During the research period, most of the involved departments used PRECOBS by German manufacturer IfmPt, which at the time was the only commercially available, off-the-shelf German language software package. And while some police departments were experimenting with alternative tools developed in-house, the majority of the empirical data relate to the actual everyday use of PRECOBS. The software will be described in more detail below.

Importantly, variance in structure can be expected to have an effect on practice. However, the empirical data clusters suggest similar challenges and strategies across all involved police departments, resulting in robust cross-cutting themes of data practices. Thus, although there can be no claims as to the generalizability of the findings, consistent experiences reported and encountered across different police organisations indicate that the practices described here and their theoretical and conceptual implications can inform criminological research on a broader scale.

Crime and data

As an academic discipline, criminology has always been attentive to how crime is turned into data, and how aggregate statistics in turn inform political preferences and policing strategies. The idea of measuring crime through the systematic production of data in fact goes back to the nineteenth century, when ‘moral statisticians’ Quetelet (Citation1842) and Guerry (Citation1833) developed the idea that moral behaviour could be politically regulated once sufficient amounts of data would reveal patterns and regularities in ‘socially deviant’ activity. This idea still very much persists, as official crime statistics are politically used to identify particularly worrying developments, to demonstrate to the general public that crime fighting or prevention strategies are successful, or to allocate resources to particular programmes in an evidence-based fashion (Maguire Citation2012: 208).

At the same time, criminologists have been mindful of the fact that crime data can never be an objective mirror of an external reality, but that they are always contingent on the social processes that bring them into being (Coleman and Moynihan Citation1996; Haggerty Citation2001; Hope Citation2013; McCabe and Sutcliffe Citation1978). To be turned into data and become treatable in bureaucratic ways (Harper Citation1991), crime needs to be (1) uncovered, (2) classified, and (3) recorded – with each step involving numerous choices and rationales (Skogan Citation1974: 26). The uncovering of crime is, for instance, complicated by variations in reporting behaviour by the population and the simple fact that many crimes are never detected by the police (Biderman and Reiss Citation1967; MacDonald Citation2002). The classification of crime is complicated by both technical and political considerations that determine how a particular offence becomes classified and thus how it appears in reports and statistics (Ward et al. Citation2021). And the recording of crime is complicated by discretionary (and potentially biased) choices of police officers as to whether officially record an incident or not in the first place (Farrington and Burrows Citation1993; Varano et al. Citation2009). Due to these issues, crime statistics and crime data have for a long time been subject of fierce scholarly debates (Maltz Citation1977), with some pleading not to use them at all within research contexts (Beattie Citation1960) and others arguing to acknowledge their social constructedness and contextualise them accordingly (Black Citation1970).

The long-standing debate about the relationship between crime and data has been given new impetus by the increasing digitisation of police work since the 1990s. New-found capacities to produce, store, and analyze larger quantities of data with more speed and in a more fine-grained fashion have sparked a turn towards data-driven managerialism in police departments (Bratton and Malinowski Citation2008; Walsh Citation2001) as well as a renewed focus on crime analysis to support patrolling and crime prevention strategies (Beck and McCue Citation2009; Perry et al. Citation2013). In light of these tendencies, police departments around the world have, as Maguire (Citation2012: 227) puts it, developed ‘an almost unquenchable thirst for information about crime’ as input for data-driven analytics, hoping to open up new ways of knowing crime and acting upon it.

Scholars have taken this turn towards digital data in everyday police work as a starting point for inquiries into how datafication transforms rationales as well as strategic and tactical approaches to policing. This literature, first of all, points out how data-driven crime analysis reinforces a larger trend towards prevention and pre-emption, aiming to anticipate crime and stop it from actually happening (Egbert and Krasmann Citation2020; Wilson Citation2018). Equally, attention has been paid to the wider societal repercussions of data-driven police work, especially with regard to how technologically mediated knowledge could potentially reinforce and rationalise existing problems such as racialized police practices or surveillance capacities (Brayne Citation2021; Ferguson Citation2017; Hannah-Moffat Citation2019). Some even go as far as considering predictive policing and other data-fuelled analytical applications as an incubator that is likely to pave the way for the establishment of powerful cross-cutting data platforms and mining applications in law enforcement (Egbert Citation2019). In light of these concerns, Bennett Moses and Chan (Citation2018) have called for a critical reflection of the theories and assumptions that data and algorithmic crime analysis are based on, including the fact that data and resulting analytical knowledge should not be mistaken as neutral and objective.

Besides studies of the larger trajectories of police and society vis-à-vis digital data, criminological literature inspired by critical data studies (Dalton and Thatcher Citation2014) has also started to investigate the ways in which data come to matter in everyday police work. Sanders and Condon (Citation2017) have for instance analyzed how the use of digital data in crime analysis is determined not only by technical features and platforms but also by socio-organisational factors and the subjective labour of analysts within algorithmic environments. Chan et al. (Citation2022) similarly have highlighted how datafication transforms the epistemic foundations of criminal intelligence but keeps clashing with entrenched cultural and political characteristics of police organisations. And Christin (Citation2017) has diagnosed several forms of resistance against the implementation of algorithmic assessment tools in criminal justice.

Moreover, Egbert and Leese (Citation2021) have shown how data-driven crime analysis in predictive policing is predicated upon the translation processes that make analytical output legible and actionable across functionally separated divisions within larger police organisations. And others have demonstrated how algorithmic rationalities might clash with more traditional imaginaries of police work as ‘craft’ (Ratcliffe et al. Citation2020) and how the role of the analyst in law enforcement is changing rapidly, requiring new competencies that include data literacy and technical skills (Weston et al. Citation2020). Finally, Kaufmann and Leese have retraced how the police handle data from their original production through preparatory and analytical work and until their eventual discarding due to the loss of informational value over time (Kaufmann Citation2020; Kaufmann and Leese Citation2021).

These contributions have several things in common. First of all, they foreground how data-driven analytics are unlikely to revolutionise police work, but rather must become aligned with professional routines and established processes to gain traction and provide novel capacities for action and knowledge (Huey et al. Citation2021; Sandhu and Fussey Citation2021). Secondly, they shift the analytical focus slightly away from data themselves and instead foreground their embeddedness within larger socio-technical structures. In doing so, they open up an analytical perspective on the ‘journey’ that data experience on their way through and between organisations (Bates et al. Citation2016). And finally, they foreground that the question how data come to matter should be treated as one that is grounded in practice. This paper contributes to this literature by showing the centrality of practices in the making of crime forecasts and corresponding prevention measures. To do so, it turns to sociological and critical data studies literature that has explored the performative effects of data practices in other domains and has demonstrated how data simultaneously make worlds and subject them to interventions.

Studying police work through data practices

Sociological literature on data practices starts from the same assertion as much criminological work on the relation between crime and data, i.e. that data are not a neutral representation of an external reality, but that they are socially constructed (Gitelman Citation2013). Data, in other words, are mediated by technological factors as much as by cultural, organisational, economic, and not least political considerations. The specific ways in which data are brought into being are thereby key in what a dataset eventually looks like and what it tells us about the world. It would, however, be misleading to limit the social constructedness of data to their production. Rather, they keep on transforming through their exposure to the activities of those who work with them, including transmission, merging with data from other sources, consolidation and ‘cleaning’, analysis, and eventually deletion (Bates et al. Citation2016; Kitchin Citation2014, Citation2021).

To understand how data transform knowledge and action, they thus need to be studied as embedded within social contexts. To do so, scholars have drawn on the notion of practice, i.e. the routinised forms of behaviour and tacit knowledges through which social settings are produced and maintained. As a key ‘site of the social’ (Schatzki Citation2002), practices can be defined as ‘embodied, materially mediated arrays of human activity centrally organised around shared practical understanding’ (Schatzki Citation2001: 11). While there are many different approaches to and conceptualizations of practice in the literature, a common denominator revolves around the shared skills or understandings that constitute practices and that are the result of interactions in particular settings that become stabilised over time. Importantly, practices can be mediated by technical or natural artifacts that enable certain ways of saying and doing (Pickering Citation2001).

The particular advantage of a practice approach lies in its focus on the common implicit understandings of the actors involved in a specific activity. These implicit understandings and resulting actions do not necessarily match overarching institutional or organisational policies or rules, but rather draw attention to the role of ‘individuals, (inter)actions, language, signifying systems, the life world, institutions/roles, structures, or systems in defining the social’ (Schatzki Citation2001: 12). The study of practices thus provides a window into solidified rationales and forms of meaning-making within a given social group, in this case patrol officers and crime analysts. A focus on their practices accounts for the seemingly banal, often informal and routinised ways in which humans produce data and interact with them (Ruppert Citation2011). In professional organisations, in science, or in public administration, data practices are for instance often relegated to the technical realm, where operations such as data cleaning or consolidation are regarded as techno-bureaucratic activities that are left to those engaging with data on an everyday basis (Edwards Citation2010; Pelizza Citation2016).

The importance of practices becomes even more pertinent when taking into consideration the performative dimension of data, i.e. how they ‘make’ social realities by constituting knowledge about them. Crucially in this regard, Law and Urry (Citation2004) have developed the notion of ‘enactment’ to account for the simultaneous process of bringing phenomena into being and subjecting them to intervention. While not contesting the material reality of the world, Law and Urry’s (Citation2004: 396) point is that social constructs (e.g. poverty, crime, populations) do not exist independent of the ways in which they are described and measured, meaning that they are only ‘made in the relations of investigation’ and thus require close attention to how these investigations are conducted. As they argue, ‘the social knowledge of both ‘lay’ and ‘professional’ agents permeates the social world, making it and remaking it’ (Law and Urry Citation2004: 393), pointing for example to methods of data gathering and statistical analysis that bring phenomena such as suicide rates into being and at the same time render them governable and amenable to interventions. Methods should thus not be misunderstood as neutral tools in creating knowledge about the world, but rather as having ontological effects in that they constitute social phenomena that would otherwise not exist. Taking Law and Urry’s work as a starting point, critical data studies scholars have explored how data-driven analytics – understood in the vein of a scientific method to create knowledge – are productive of social phenomena (Ruppert Citation2013; Savage Citation2013). Studies have for example shown how populations (Ruppert Citation2011; Scheel Citation2020), identities (Pelizza Citation2020), or migratory movements (Scheel et al. Citation2019) are brought into being and regulated through digital data.

Historically, data have always been a key instrument in statecraft, with tools such as censuses and aggregate statistics being used to craft bureaucratic representations of the world and subject it to techniques of administration and government (Desrosières Citation2002; Scott Citation1998). Digitisation has, however, scaled these processes up and rendered them much more dynamic, i.e. the continuous production, exchange, and analysis of digital data today provides a more nuanced, but also more volatile constitution of phenomena to be governed. As Ruppert (Citation2011) has argued with regard to censuses in the digital age, today’s statistical instruments thus enact fluid versions of ‘the’ population that are both conceptually and epistemically unstable as they hinge on the practices employed in creating and modulating population data. In her words, population should thus be understood as ‘not a singular entity but an outcome of multiple practices’ (Ruppert Citation2011: 224), and analytical attention should be shifted from ontology (i.e. what does the population look like?) to practices (i.e. how was a specific version of the population enacted?).

Moreover, scholars engaging the performative dimension of digital data have explored the mutually constitutive relationship between social realities and digital devices (Amicelle et al. Citation2015; Amoore and Piotukh Citation2016; Ruppert et al. Citation2013). For Ruppert (Citation2012), the digital tools (i.e. databases, algorithms) that private and public organisations use for purposes of managing data and producing knowledge serve similar epistemic functions to methods in scientific contexts: they attempt to capture social phenomena to render them knowable and analyzable – and in doing so, they actively enact particular realities that are in turn contingent on the devices used and their capacities.

These considerations are important in the context of research on how the police attempt to know and prevent crime through data. Digitisation has given rise to broader transformations of police work, notably turning the police into knowledge workers (Ericson and Haggerty Citation1997) that operate in a scientifically informed fashion (Ericson and Shearing Citation1986). These tendencies have been reinforced by the turn to digital devices (i.e. managerial tools such as COMPSTAT or algorithmically supported crime prevention tools such as predictive policing) that rely on establishing statistical (and theory-backed) relations between socio-environmental variables and the occurrence of crime (Kaufmann et al. Citation2019). Police departments today in fact largely perceive the world through data and advanced analytics, which in turn pre-structure the concrete ways in which they attempt to produce and maintain social order. There is thus a direct link between the practices that make and organise crime data and the ways in which these data constitute the criminal futures that inform crime prevention. Data practices in police work should thus be understood as a key entry point for understanding how knowledge and action are constituted and how society is policed.

Analytically, a focus on practices has two major implications. Firstly, it requires attending to data activities, i.e. the ways in which the police ‘collect, store, retrieve, analyse, and present data through various methods and means to bring those objects and subjects that data speaks of into being’ (Ruppert et al. Citation2017: 1). And secondly, attention also needs to be given to the digital devices used in this context. With regard to police work, this means that we need to engage with the techno-scientific ways in which the police produce and handle crime data. Methodologically, it is thus imperative to create a first-hand perspective on how the police ‘do’ data work in culturally and organisationally entrenched ways. Methods of qualitative empirical research are usually considered a prime means to account for the seemingly banal, everyday activities that constitute data practices. The empirical material that this paper is based on allows for an account of these practices, providing a detailed (yet necessarily locally delimited) understanding of how the police make and remake data on an everyday basis.

Enacting criminal futures

The rationale of predictive policing is to provide insights into alleged criminal futures and subject these futures to targeted crime prevention measures. To do so, the researched police departments mostly used the software package PRECOBS by German manufacturer IfmPt that specialises in residential burglary analysis and forecasts (Balogh Citation2016; Schweer Citation2015, Citation2018). The theoretical model mobilised by the software is predicated upon near-repeat victimisation theory, i.e. the assumption that there is an increased likelihood for (serial) follow-up activity after a successful burglary (Farrell Citation1995; Polvi et al. Citation1991; Townsley et al. Citation2003). Analytically, PRECOBS uses comparatively few data points. The main variables for the identification of future burglary risk are the time of the incident, modus operandi, haul, type of housing, street address, and GIS coordinates.

Based on these variables, the PRECOBS algorithm searches for patterns in current crime data that could indicate serial burglary activities and produces spatio-temporal risk estimates (i.e. increased likelihood of further burglaries within a limited timeframe and radius). These parameters are then operationalised in the form of targeted crime prevention, including for instance adjusted patrol routes, control activities, or awareness campaigns for residents. Although the theoretical model, the algorithm, as well as the data used for analysis are in the case of PRECOBS arguably rather plain, the software is used as a key tool for the redistribution of resources in order to render crime prevention more efficient and effective. It mainly does so by providing significant increases in speed and scale, analyzing crime data in an automated fashion and generating and updating situational pictures without much delay (Okon Citation2015).
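The near-repeat logic that underpins such risk estimates can be illustrated with a minimal sketch. This is a simplified illustration of near-repeat victimisation screening in general, not a reconstruction of PRECOBS’s actual implementation; the record fields, the 500-metre radius, and the seven-day window are assumptions chosen for the example.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from math import hypot

# Hypothetical, simplified incident record; field names are illustrative.
@dataclass
class Burglary:
    when: datetime
    x: float  # GIS easting in metres
    y: float  # GIS northing in metres

def near_repeat_alerts(incidents, trigger, radius_m=500.0, window_days=7):
    """Return past incidents falling within the spatio-temporal
    neighbourhood of a newly reported trigger incident. A non-empty
    result would flag the trigger's surroundings as a risk area."""
    window = timedelta(days=window_days)
    return [
        i for i in incidents
        if i is not trigger
        and abs(trigger.when - i.when) <= window
        and hypot(trigger.x - i.x, trigger.y - i.y) <= radius_m
    ]

base = datetime(2023, 1, 1, 12, 0)
a = Burglary(base, 0.0, 0.0)
b = Burglary(base + timedelta(days=2), 300.0, 100.0)
c = Burglary(base + timedelta(days=30), 100.0, 0.0)

# b lies ~316 m from a and two days later: a near-repeat pattern.
# c is spatially close to a but falls outside the time window.
hits = near_repeat_alerts([a, b, c], trigger=b)
```

The sketch makes the paper’s point tangible: whatever ends up in the `when`, `x`, and `y` fields of each record, i.e. the product of the data practices discussed below, directly determines which risk areas are enacted.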

From an operational point of view, police departments use risk estimates produced with PRECOBS to guide the behaviour of street level patrol officers. Patrol officers are briefed about the results of the analytical process and instructed to treat risk areas preferentially when carrying out patrol activities. In practical terms, data analysis thus constitutes the distribution of policing within space and time, notably defining which areas or neighbourhoods receive increased attention. Moreover, the notion of alleged risk space has been shown to have effects on the behaviour of patrol officers and their interactions with their environment based on higher levels of alertness and suspicion (Egbert and Leese Citation2021: 159). The data-driven construction of particular social realities (i.e. criminal futures) thus shapes the ways in which society is being policed and the ways in which the police interact with the general public.

In doing so, it is exemplary of what Ericson and Shearing (Citation1986) have described as the ‘scientification’ of police work. Data-driven crime analysis follows the rationale of the scientific method by producing data, analyzing these data in relation to theory and other empirical evidence, learning something new about the world, and eventually modifying the world according to desired outcomes. In line with Law and Urry’s (Citation2004) argument about how social worlds (in this case: particular criminal futures) are enacted through methods, and moreover in line with Ruppert’s (Citation2011) argument that data-driven analytics create social phenomena and simultaneously subject them to possible interventions (i.e. the possibility to act upon those criminal futures before they materialise), the following empirical analysis directs attention to the data practices that inform crime analysis and enable crime prevention. The analysis identifies two important sets of practices that pertain to how crime data are produced and modified. The first one relates to the technologically mediated ways in which police officers craft data from crime scenes and highlights the role of digital devices and interfaces. The second one relates to quality control activities, i.e. how crime data are after their original production subjected to changes in the form of updates and corrections.

Digital devices and interfaces

Today, police departments usually create data about crime by means of digital devices. Rather than producing pen and paper reports from crime scenes or interviews and turning those into digital versions at a later point in time, available information is captured directly on laptop computers, tablets, or smartphones and transmitted to the central database instantly. Interviewees consistently highlighted how accelerated data production through digital devices and interfaces is key in predictive policing, as only timely analysis would enable police departments to implement targeted crime prevention measures in a meaningful way (I02; I09; I24; I26; I44; I51). Central for such acceleration are the interfaces on digital devices that guide police officers through the process of filling out mandatory fields, choosing the right categories within complex classification systems, and submitting the completed report right away. As discussed earlier, the ways in which crime is datafied and classified have been subject to substantial debate in criminology. Less attention has, however, so far been paid to how the translation of criminal phenomena into data is co-constituted by the design and properties of the graphical user interfaces that police officers see on their devices.

Crime data are usually produced in the field. In practice, this means that in response to a call for service, a patrol car is dispatched, and officers record all available information at the crime scene. From these data, a digital record is created that serves as the central reference point for further investigations as well as for algorithmic forms of crime prediction (I23; I46). Digital user interfaces in this context serve two main purposes. The first one is to ensure the standardised form of the resulting data, enabling comparative analyses and pattern recognition. To achieve standardisation, police departments use pre-defined information categories within which criminal phenomena are fitted, resulting in sometimes highly complex and fine-grained classification systems. Bowker and Star (Citation1999: 10) have described such classification systems as ‘set[s] of boxes (metaphorical or literal) into which things can be put to then do some kind of work – bureaucratic or knowledge production’. In providing these ‘boxes’, classification systems offer consistent and mutually exclusive organising principles within a closed ecosystem that provides a frame of reference for any further interaction with the data that they contain (Bowker and Star Citation1999: 10-1).

In this regard, interfaces on digital devices are no different from paper-based reporting forms. They do, however, offer an additional benefit that goes beyond acceleration: interfaces can be designed in a way that forces officers to complete all mandatory fields before finalising a report and submitting the produced data to the central database. This is considered key in the context of predictive policing, as software tools such as PRECOBS can only effectively search for patterns when datasets are as complete as possible. As several analysts detailed, their departments had, in the context of the implementation of predictive policing software, specifically redesigned the user interfaces of portable devices to ensure the completeness of crime data for ensuing algorithmic data analysis (P07; P70; P77; I09). One senior analyst added that interfaces were perceived to close the professional gap between analysts and field officers – two groups that allegedly ‘aren’t always on the same page – the officer wants to be done with the case, whereas the analyst wants the best possible data’ (I07).

Rather than smoothly aligning those groups and providing more accurate and complete data, respondents did, however, point out how the use of digital devices and interfaces in practice produces a number of unforeseen effects. One such effect pertains to conflicts between classification systems and design. Classification, as discussed above, is a key principle for how crime can be captured and subjected to (bureaucratic) forms of knowledge production. In predictive policing, the classification of crime has considerable repercussions, as categories are used in modelling particular forms of offender behaviour that can be identified in the form of patterns. One interviewee explained the practical intricacies of classification in the production of crime data as follows:

We have categories for ‘residential burglary’, for ‘armed residential burglary’, for ‘organized residential burglary’, for ‘organized larceny in a residence’, and so on. And now I have 50 cases where the categories point towards residential burglary, and out of those 50 maybe 17 actually have been residential burglaries […]. Larceny in an apartment could be residential burglary – so maybe the field officer used the wrong category because he didn’t remember 436000 and he didn’t have the time for a search. And so he just used 400000, which is simply larceny. But in category for location it says apartment. Now I need to look into the free text description: maybe it was the brother who smashed the piggy bank, technically that’s larceny because the money was protected against theft – and it was also in the apartment. But maybe the door was kicked in – then it would have been a burglary. (I50)

As this statement illustrates, finding the ‘correct’ categories for empirical phenomena is likely to be difficult and error-prone in the first place. But such complexity becomes additionally amplified by usability issues that interviewees highlighted. One issue that was perceived as particularly problematic was the drop-down menus that offer categories to select from in the form of a list that pops up when clicking on a particular reporting field. With regard to the above quote, the field for ‘type of offence’ would thus result in a long list of categories and sub-categories to pick from. Faced with time constraints, officers often tend to simply pick the very first entry from a drop-down menu or the broadest and most generic category available – the latter sparing them the extra effort of looking for a more fine-grained sub-category. As several interviewees put it, patrol officers in their experience simply considered it a nuisance to scroll through a lengthy list of categories, often with numerous sub-categories, in order to find the most fitting one (I07; P77).

The result of such workarounds is crime data that appear complete on the surface yet might lead to distorted analytical results due to presumed misclassifications. This phenomenon ties in with what Huey et al. (Citation2021) have called ‘irrationalities of rationality’ in the ways in which the police produce and analyze data. As they have shown, efforts to render data processes more efficient and in turn increase calculability and predictability curiously tend to produce behaviour and practices that contradict these goals, for example short-cuts and trade-offs that affect data quality and are likely to backfire and create even more paperwork for both reporting officers and auditors (Huey et al. Citation2021: 13). Most notably, however, Huey et al. (Citation2021: 13) point to the ‘trickle effect’ of data irrationalities, illustrating how misguided data processes become perpetuated when sketchy data lead to ill-advised decision-making. Understood through the notion of enactment, apparently mundane and banal data practices that are co-constituted between interfaces and the behaviour of police officers matter because, once scaled up and part of the analytical baseline for predictive policing, they ‘make’ particular versions of criminal futures. Crime prevention measures then become directly affected by the performative effects of crime data that are in turn mediated by design and usability on digital devices.

From the police point of view, this is, needless to say, considered problematic. After all, the rationale that underpins predictive policing is to render crime prevention more effective and efficient. Policing the ‘wrong futures’ would undercut these aims and instead bind resources in spaces unlikely to be affected by near-repeats. Applied criminology has in this context already proposed strategies to mitigate misclassifications and thus raise the effectiveness of data-based crime prevention programmes (Loftin et al. Citation2014; Nolan et al. Citation2011). In everyday practice, as interviewees pointed out, the most widespread way to sort out potentially erroneous crime data is, however, to simply perform several instances of ‘quality control’ before and during analysis. The next section accordingly engages with how quality control in the form of updates and corrections modifies data in both form and content.

Updates and corrections

The notion of ‘data quality’ usually refers to how well data are structured and whether they are regarded as accurate and up-to-date (Batini and Scannapieca Citation2006; Wang et al. Citation2002). For the police, major quality criteria pertain to the timeliness, consistency, validity, completeness, and accuracy of crime data (P49). Only if all (or most) of these dimensions are regarded as sufficient can data be used for analyses and statistical aggregation in a meaningful way. The problem with crime data, however, is that they are notorious for failure in several or even all of these categories, and today’s widespread reliance on data for knowledge and action in police work has scaled up concerns about the reliability of crime data (Cope Citation2008; Maltz Citation1999; Santos Citation2013). This has major implications for data-driven crime analysis, as knowledge created on the basis of ‘faulty’ data would inevitably lead to equally faulty analytical insights. Interviewees in this sense referred to the IT principle of ‘garbage in, garbage out’ when speaking about data quality in the context of predictive policing (I05).

Data quality in crime data is additionally complicated by the very nature of criminal investigations. Information about criminal events is by definition limited and fragmented, and can often only be substantiated and validated after lengthy investigations. In predictive policing, where swift crime analysis and the timely implementation of prevention measures are considered key, this means that there is an ingrained trade-off between speed and data quality. Respondents highlighted how crime data, once produced in the field and submitted to the central database system, are in fact not considered fit for further analysis right away. Rather, they need to be subjected to several forms of treatment before they can be processed in any way. For this purpose, although a certain level of quality control is expected from anyone involved in data handling (I50), most departments have designated quality control officers whose task it is to double-check all produced crime data on a daily basis before they are potentially greenlighted for further analytical tasks. One officer described this process as follows:

Most burglary data are produced between 4:00 pm and 2:00 am. During these times, we usually only have very limited capacities at the station. There’s one supervisor who’s in charge, and he’s also responsible for quality control. But there’s usually a lot going on at the station, so he won’t really get around to that. So de facto quality control starts during the day shift, from 7:00 am onwards […] and is usually finished around 9:00 am, maybe 9:30. (I03)

As part of quality control activities, crime data are likely to be modified along two dimensions. The first one pertains to completeness. Although, as discussed above, digital interfaces make it in many cases impossible to complete a report without filling out all mandatory fields, other non-mandatory variables might have been left empty. This is usually the case when not all relevant information is yet available right after a residential burglary has been discovered and residents call the police. As one analyst explained while showcasing an incomplete dataset:

Here we have some empty fields in the case file. Those are missing because we don’t have information on them yet: the haul, the things that have been stolen – in most cases this is added later, because during original data production there is no definite knowledge about that. So we capture that at a later point. (I26)

Completeness of crime data is, however, an important factor in pattern recognition. For instance, a key variable in PRECOBS is the ‘modus operandi’ of an incident, i.e. the way in which an offender got access to an apartment or a building. The rationale behind the use of this variable is the assumption that professional serial burglars use specific methods that form an identifiable pattern in the data. The use of specific tools or techniques can, however, often only be determined by forensics specialists and must thus also be added at a later point in time. In the meantime, a missing value means that the algorithm misses key information that determines whether a particular offense could be part of a pattern in the analyzed data or not.

A second, arguably even more pertinent way of data modification during quality control pertains to potential re-classification. As ongoing investigations might reveal hitherto unknown facts, cases might ‘mutate’ (I07) from one type of offense to another. Another analyst gave the following example of such re-classification:

You start with the original data, and it’s possible that at some point in time, there will be an amendment, some corrections. But because our people work shifts, it might be the case that you get new information only three days later. And all of a sudden, it’s no longer a residential burglary case, but property damage. So we need to take this into account with regard to the situational picture, we need to re-evaluate whether risk indicators are still valid. (I76)

Such mutations tie in with what Ruppert (Citation2011: 224) has described as the fluidity of social phenomena made up through data. Crime, in other words, can indeed shape-shift as a direct result of data practices in the form of updates and corrections to already existing records. This is why some police departments opted to take account of the dynamic changes in the database through multiple analytical updates per day (I80).

Finally, updates and corrections crucially interfere with the timeliness dimension of crime data. As predictive policing is about intervention into ongoing criminal activity, speed is considered paramount – both in terms of data analysis and the implementation of operational crime prevention measures. Analysts thus seek to work with incoming crime data as soon as possible, as otherwise presumed burglary series might already be over once actionable intelligence about them becomes available. Quality control practices do, however, at times stand in stark contrast with the ambition of acceleration, as the complex interplay between investigations, available information, and rather banal organisational matters such as shift schedules can considerably delay the modification of data. As one analyst detailed:

I need these changes as quickly as possible. As soon as our colleagues have new information, they need to make amendments. Time of an offense, that’s a classic. During investigations, we can often approximate the time frame, or even determine a more or less precise point in time. But our guys don’t change the data in the central system. The case file still says ‘weekend’. These things are important. (I51)

As these considerations imply, data quality should not be mistaken for an objective or even absolute notion. Even if we were to assume that, in theory, crime data could flawlessly capture criminal realities given enough time and resources – something they can in principle not do due to the social constructedness of crime in the first place – the empirical material suggests that in everyday police work data quality is defined by the practices of those involved in rendering data fit for analysis. Notably, the police are aware of the many uncertainties surrounding crime and thus choose to consider data quality as sufficient once data appear trustworthy enough to be analyzed. Trustworthiness, as Kitchin (Citation2021: 9) has argued, can be understood as a practical pinnacle of data quality. It is concerned less with the actual quality of data in (pseudo-)objective terms than with the question whether data are good enough for the task at hand.

In predictive policing, as we have seen, trustworthiness pertains not only to the potential ‘trickle effect’ whereby inaccuracies are passed on from data production to analysis and then eventually to decision-making and action (Huey et al. Citation2021), but it is also crucially affected by time constraints in crime analysis. As predictive policing rests on the acceleration of crime analysis in order to provide timely knowledge for crime prevention, it might be necessary to analyze ‘bad’ data sooner than would be desirable from an epistemic point of view (Egbert and Leese Citation2021). In the trade-off between speed and certainty, speed and resulting actionability might in the end prevail, providing a version of a criminal future that might be a bit ‘off’ from an analytical perspective – but still preferable to no version of a criminal future and untargeted crime prevention.

This reveals a striking mismatch between an organisational understanding of data quality and its actual practices. From a professional point of view, as Pelizza (Citation2016: 37) points out, ‘high quality data are assumed as neutral and ahistorical representations of states of the world, a Platonic substance to which actual data used in everyday practices should tend.’ Practice, as shown, does not, however, match such a Platonic ideal but rather reconstructs data quality through a series of sometimes unstructured and erratic updates and corrections that go on until timing dictates to run analyses and turn crime data into actionable criminal futures.

Conclusions

This paper has analyzed how data practices play a key role in how criminal futures are made and subsequently inform crime prevention measures. In explicating the socio-technically mediated ways in which the police make and remake data, it has shown how seemingly mundane practices such as producing crime data through digital devices and interfaces, as well as quality control processes in the form of updates and corrections, render crime data and subsequently analytical outcomes fluid. The criminal futures that the police target in an effort to render crime prevention more effective and efficient should thus be understood as contingent on the everyday practices of field officers and analysts that have not yet received sufficient attention from criminologists.

A practice approach, as has been shown throughout the analysis, helps us to spell out and analytically capture how data enact specific realities and thus pre-structure possible modes of governance and intervention. Criminal futures and ensuing crime prevention measures should in this context be considered ‘a volatile and contingent accomplishment that hinges on mutable data practices whose operation and maintenance requires continuous work’ (Ruppert and Scheel Citation2021: 38). The ways of producing data through digital devices and interfaces, in combination with later quality control activities that subject crime data to ongoing corrections and updates, directly impact how the police – mediated through algorithmic forms of data analysis – conceive of allegedly risky futures and distribute their resources accordingly. A particular perspective on data practices highlights how the reliance on data and algorithmic analyses in crime prevention establishes organisational dependencies that lay bare the importance of seemingly mundane activities and devices.

Studying the everyday practices of police departments in regard to data can also help us curb some of the unrealistic claims that are floated by industry, politicians, and police chiefs when it comes to the capacities of data and algorithms. Data are without a doubt powerful, but their effects might actually turn out to be a little different than expected. Notably, a practice perspective foregrounds how particular versions of social reality are enacted with and through data, painting corresponding police interventions not so much in terms of efficiency and effectiveness, but rather in terms of the intricate ways in which knowledge comes into being and is turned into action. Equally so, it helps us curb dystopian visions of technologically mediated police work that leaves human officers – and by extension society – at the discretion of powerful machines that distribute the provision of services based on data and algorithms (Andrejevic Citation2018; McGuire Citation2021). Rather, a focus on practices allows us to understand how data and resulting effects are deeply enmeshed in the mundane and the everyday and that they cannot be separated from the infrastructures and humans that they correspond with.

In summary, this paper has shed light on a so far under-researched aspect in the nexus of policing, data, and crime: the question of how data practices make criminal futures and crime prevention – and potentially shape larger trajectories of police work in the context of ongoing digitisation. As data-driven tools for police work are likely to gain even more traction and impact more aspects of how the police produce and maintain social order, for example through cross-cutting interfaces that connect previously siloed databases, future criminological research can arguably benefit from a stronger focus on how data are produced, how they are standardised, and how they are subjected to ongoing modifications. A theoretical and conceptual approach that revolves around the notion of enactment and data practices can thereby be productively mobilised for research that complements and extends existing work on the relation between crime, data, and policing.

Acknowledgements

Earlier versions of this manuscript have hugely benefited from constructive feedback received on several occasions. I would like to particularly thank Silvan Pollozek and Wouter Van Rossem for their generous engagement. Moreover, I am indebted to two anonymous reviewers and the editors at Policing and Society.

Disclosure statement

No potential conflict of interest was reported by the author(s).

References

  • Ackroyd, S., et al., 1992. New technology and practical police work. Buckingham/Philadelphia: Open University Press.
  • Amicelle, A., Aradau, C., and Jeandesboz, J., 2015. Questioning security devices: performativity, resistance, politics. Security Dialogue, 46 (4), 293–306.
  • Amoore, L., and Piotukh, V., 2016. Algorithmic life: calculative devices in the age of big data. Milton Park/New York: Routledge.
  • Andrejevic, M., 2018. Data collection without limits: automated policing and the politics of framelessness. In: A Završnik, ed. Big Data, crime and social control. Milton Park/New York: Routledge, 93–107.
  • Balogh, D.A., 2016. Near repeat-prediction mit PRECOBS bei der stadtpolizei zürich. Kriminalistik, 5, 335–341.
  • Bates, J., Lin, Y.-W., and Goodale, P., 2016. Data journeys: capturing the socio-material constitution of data objects and flows. Big Data & Society, 3 (2), 1–12.
  • Batini, C., and Scannapieca, M., 2006. Data quality: concepts, methodologies and techniques. Berlin/Heidelberg: Springer.
  • Beattie, R.H., 1960. Criminal statistics in the United States. The Journal of Criminal Law, Criminology, and Police Science, 51 (May-June), 49–65.
  • Beck, C., and McCue, C., 2009. Predictive policing: what can we learn from Wal-Mart and Amazon about fighting crime in a recession? The Police Chief, 76 (11), 18–24.
  • Bennett Moses, L., and Chan, J., 2018. Algorithmic prediction in policing: assumptions, evaluation, and accountability. Policing and Society, 28 (7), 806–822.
  • Biderman, A.D., and Reiss, A.J., 1967. On exploring the “dark figure” of crime. The ANNALS of the American Academy of Political and Social Science, 374 (1), 1–15.
  • Black, D.J., 1970. Production of crime rates. American Sociological Review, 35 (4), 733–748.
  • Bowker, G.C., and Star, S.L., 1999. Sorting things out: classification and Its consequences. Cambridge: MIT Press.
  • Bratton, W. J., and Malinowski, S. W., 2008. Police performance management in practice: taking COMPSTAT to the next level. Policing: A Journal of Policy and Practice, 2 (3), 259–265.
  • Brayne, S., 2021. Predict and surveil: data, discretion, and the future of policing. Oxford: Oxford University Press.
  • Chan, J., et al., 2001. E-Policing: the impact of information technology on police practices. Brisbane: Criminal Justice Commission.
  • Chan, J., et al., 2022. Datafication and the practice of intelligence production. Big Data & Society, 9 (1), 1–13.
  • Christin, A., 2017. Algorithms in practice: comparing web journalism and criminal justice. Big Data & Society, 4 (2), 1–14.
  • Coleman, C., and Moynihan, J., 1996. Understanding crime data: haunted by the dark figure. Buckingham/Philadelphia: Open University Press.
  • Cope, N., 2008. ‘Interpretation for action?’: definitions and potential of crime analysis for policing. In: T Newburn, ed. Handbook of policing. Cullompton/Portland: Willan Publishing, 404–429.
  • Dalton, C., and Thatcher, J. 2014. What Does a Critical Data Studies Look Like, and Why Do We Care? Available at https://www.societyandspace.org/articles/what-does-a-critical-data-studies-look-like-and-why-do-we-care (accessed 30 April 2020).
  • Desrosières, A., 2002. The politics of large numbers: A history of statistical reasoning. Harvard: Harvard University Press.
  • Edwards, P.N., 2010. A vast machine: computer models, climate data, and the politics of global warming. Cambridge: MIT Press.
  • Egbert, S., 2019. Predictive policing and the platformization of police work. Surveillance & Society, 17 (1/2), 83–88.
  • Egbert, S., and Krasmann, S., 2020. Predictive policing: not yet, but soon preemptive? Policing and Society, 30 (8), 905–919.
  • Egbert, S., and Leese, M., 2021. Criminal futures: predictive policing and everyday police work. London/New York: Routledge.
  • Ericson, R.V., and Haggerty, K.D., 1997. Policing the risk society. Oxford: Clarendon Press.
  • Ericson, R.V., and Shearing, C., 1986. The scientification of police work. In: G Böhme, and N Stehr, eds. The knowledge society: the growing impact of scientific knowledge on social relations. Dordrecht: Reidel, 129–159.
  • Farrell, G., 1995. Preventing repeat victimization. Crime and Justice, 19, 469–534.
  • Farrington, D.P., and Burrows, J.N., 1993. Did shoplifting really decrease? The British Journal of Criminology, 33 (1), 57–69.
  • Ferguson, A.G., 2017. The rise of big data policing: surveillance, race, and the future of law enforcement. New York: New York University Press.
  • Gitelman, L., ed., 2013. “Raw data” is an oxymoron. Cambridge: MIT Press.
  • Guerry, A.-M., 1833. Essai sur la statistique morale de la France: précédé d’un rapport à l’Académie des sciences. Paris: Chez Crochard.
  • Haggerty, K.D., 2001. Making crime count. Toronto: University of Toronto Press.
  • Hannah-Moffat, K., 2019. Algorithmic risk governance: Big Data analytics, race and information activism in criminal justice debates. Theoretical Criminology, 23 (4), 453–470.
  • Harper, R.R., 1991. The computer game: detectives, suspects, and technology. The British Journal of Criminology, 31 (3), 292–307.
  • Hope, T., 2013. What do crime statistics tell us? In: C. Hale, K. Hayward, A. Wahidin, and E. Wincup, eds. Criminology. 3rd Edition. Oxford: Oxford University Press, 43–64.
  • Huey, L., Ferguson, L., and Koziarski, J., 2021. The irrationalities of rationality in police data processes. Policing and Society, doi:10.1080/10439463.2021.2007245.
  • Kaufmann, M., 2020. Vocations, visions and vitalities of data analysis. An introduction. Information, Communication & Society, 23 (14), 1981–1995.
  • Kaufmann, M., Egbert, S., and Leese, M., 2019. Predictive policing and the politics of patterns. The British Journal of Criminology, 59 (3), 674–692.
  • Kaufmann, M., and Leese, M., 2021. Information In-formation: algorithmic policing and the life of data. In: V Badalič, and A Završnik, eds. Automating crime prevention, surveillance, and military operations. Cham: Springer, 69–83.
  • Kitchin, R., 2014. The data revolution: Big data, open data, data infrastructures & their consequences. Los Angeles/London/New Delhi/Singapore/Washington DC: Sage.
  • Kitchin, R., 2021. Data lives: How data are made and shape our world. Bristol: Bristol University Press.
  • Law, J., and Urry, J., 2004. Enacting the social. Economy and Society, 33 (3), 390–410.
  • Loftin, C., et al., 2014. The accuracy of supplementary homicide report rates for large U.S. cities. Homicide Studies, 19 (1), 6–27.
  • MacDonald, Z., 2002. Official crime statistics: their use and interpretation. The Economic Journal, 112 (477), F85–F106.
  • Maguire, M., 2012. Criminal statistics and the construction of crime. In: M Maguire, R Morgan, and R Reiner, eds. The Oxford handbook of criminology. Oxford: Oxford University Press, 206–244.
  • Maltz, M.D., 1977. Crime statistics: A historical perspective. Crime & Delinquency, 23 (1), 32–40.
  • Maltz, M.D., 1999. Bridging gaps in police crime data: A discussion paper from the BJS fellows program. U.S. Department of Justice.
  • Manning, P.K., 2008. The technology of policing: crime mapping, information technology, and the rationality of crime control. New York/London: New York University Press.
  • McCabe, S., and Sutcliffe, F., 1978. Defining crime: A study of police decisions. Oxford: Blackwell.
  • McGuire, M.R., 2021. The laughing policebot: automation and the end of policing. Policing and Society, 31 (1), 20–36.
  • Nolan, J.J., Haas, S.M., and Napier, J.S., 2011. Estimating the impact of classification error on the “statistical accuracy” of uniform crime reports. Journal of Quantitative Criminology, 27 (4), 497–519.
  • Okon, G., 2015. Vorhersagen von Straftaten – Vision oder Wirklichkeit? arcAKTUELL, 4, 22–23.
  • Pelizza, A., 2016. Disciplining change, displacing frictions: two structural dimensions of digital circulation across land registry database integration. Tecnoscienza: Italian Journal of Science and Technology Studies, 7 (2), 35–60.
  • Pelizza, A., 2020. Processing alterity, enacting Europe: migrant registration and identification as co-construction of individuals and polities. Science, Technology, & Human Values, 45 (2), 262–288.
  • Perry, W.L., et al., 2013. Predictive policing: the role of crime forecasting in law enforcement operations. Santa Monica: RAND Corporation.
  • Pickering, A., 2001. Practice and posthumanism: social theory and a history of agency. In: T.R. Schatzki, K. Knorr Cetina, and E. von Savigny, eds. The practice turn in contemporary theory. London/New York: Routledge, 172–183.
  • Polvi, N., et al., 1991. The time course of repeat burglary victimization. The British Journal of Criminology, 31 (4), 411–414.
  • Quetelet, L.A.J., 1842. A treatise on man and the development of his faculties. Edinburgh: W. and R. Chambers.
  • Ratcliffe, J., Taylor, R.B., and Fisher, R., 2020. Conflicts and congruencies between predictive policing and the patrol officer’s craft. Policing and Society, 30 (6), 639–655.
  • Ruppert, E., 2011. Population objects: interpassive subjects. Sociology, 45 (2), 218–233.
  • Ruppert, E., 2012. The governmental topologies of database devices. Theory, Culture & Society, 29 (4-5), 116–136.
  • Ruppert, E., 2013. Rethinking empirical social sciences. Dialogues in Human Geography, 3 (3), 268–273.
  • Ruppert, E., Isin, E., and Bigo, D., 2017. Data politics. Big Data & Society, 4 (2), 1–7.
  • Ruppert, E., Law, J., and Savage, M., 2013. Reassembling social science methods: the challenge of digital devices. Theory, Culture & Society, 30 (4), 22–46.
  • Ruppert, E., and Scheel, S., 2021. Data practices. In: E Ruppert, and S Scheel, eds. Data practices: making up a European people. London/New York: Goldsmiths Press/MIT Press, 29–48.
  • Sanders, C.B., and Condon, C., 2017. Crime analysis and cognitive effects: the practice of policing through flows of data. Global Crime, 18 (3), 237–255.
  • Sandhu, A., and Fussey, P., 2021. The ‘uberization of policing’? How police negotiate and operationalise predictive policing technology. Policing and Society, 31 (1), 66–81.
  • Santos, R.B., 2013. Crime analysis with crime mapping. Thousand Oaks/London/New Delhi/Singapore: Sage.
  • Savage, M., 2013. The ‘social life of methods’: a critical introduction. Theory, Culture & Society, 30 (4), 3–21.
  • Schatzki, T.R., 2001. Introduction: practice theory. In: T R Schatzki, K Knorr Cetina, and E von Savigny, eds. The practice turn in contemporary theory. London/New York: Routledge, 10–23.
  • Schatzki, T.R., 2002. The site of the social: a philosophical account of the constitution of social life and change. University Park: Pennsylvania State University Press.
  • Scheel, S., 2020. Biopolitical bordering: enacting populations as intelligible objects of government. European Journal of Social Theory, 23 (4), 571–590.
  • Scheel, S., Ruppert, E., and Ustek-Spilda, F., 2019. Enacting migration through data practices. Environment and Planning D: Society and Space, 37 (4), 579–588.
  • Schweer, T., 2015. ‘Vor dem Täter am Tatort’ – Musterbasierte Tatortvorhersagen am Beispiel des Wohnungseinbruchs [‘At the crime scene before the offender’ – pattern-based crime scene predictions using the example of residential burglary]. Die Kriminalpolizei, 32 (1), 13–16.
  • Schweer, T., 2018. Predictive policing mit PRECOBS [Predictive policing with PRECOBS]. In: Institut für Versicherungswirtschaft der Universität St. Gallen, ed. St. Galler Trendmonitor für Risiko- und Finanzmärkte. Produkt- und Serviceinformationen, 40 (1). St. Gallen: Universität St. Gallen, 12–14.
  • Scott, J.C., 1998. Seeing like a state: how certain schemes to improve the human condition have failed. New Haven/London: Yale University Press.
  • Skogan, W.G., 1974. The validity of official crime statistics: an empirical investigation. Social Science Quarterly, 55 (1), 25–38.
  • Townsley, M., Homel, R., and Chaseling, J., 2003. Infectious burglaries: a test of the near repeat hypothesis. The British Journal of Criminology, 43 (3), 615–633.
  • Varano, S.P., et al., 2009. Constructing crime: neighborhood characteristics and police recording behavior. Journal of Criminal Justice, 37 (6), 553–563.
  • Walsh, W.F., 2001. COMPSTAT: an analysis of an emerging police managerial paradigm. Policing: An International Journal of Police Strategies & Management, 24 (3), 347–362.
  • Wang, R.Y., Ziad, M., and Lee, Y.W., 2002. Data quality. New York/Boston/Dordrecht/London/Moscow: Kluwer Academic Publishers.
  • Ward, T., Durrant, R., and Dixon, L., 2021. The classification of crime: towards pluralism. Aggression and Violent Behavior, 59, 1–6.
  • Weston, C., Bennett-Moses, L., and Sanders, C., 2020. The changing role of the law enforcement analyst: clarifying core competencies for analysts and supervisors through empirical research. Policing and Society, 30 (5), 532–547.
  • Wilson, D., 2018. Algorithmic patrol: the futures of predictive policing. In: A Završnik, ed. Big Data, crime and social control. London/New York: Routledge, 108–127.