Justice by Algorithm: The Limits of AI in Criminal Sentencing

Abstract

Criminal justice systems have traditionally relied heavily on human decision-making, but new technologies are increasingly supplementing the human role in this sector. This paper considers what general limits need to be placed on the use of algorithms in sentencing decisions. It argues that, even once we can build algorithms that equal human decision-making capacities, strict constraints need to be placed on how they are designed and developed. The act of condemnation is a valuable element of criminal sentencing, and using algorithms in sentencing – even in an advisory role – threatens to undermine this value. The paper argues that a principle of “meaningful public control” should be met in all sentencing decisions if they are to retain their condemnatory status. This principle requires that agents who have standing to act on behalf of the wider political community retain moral responsibility for all sentencing decisions. While this principle does not rule out the use of algorithms, it does require limits on how they are constructed.

Introduction

Criminal justice systems have traditionally relied heavily on human decision-making. Juries are asked to decide whether a given defendant is guilty of a crime beyond a reasonable doubt. Judges are tasked with selecting an appropriate sentence for those who are found guilty. And the role of parole boards is to investigate whether incarcerated individuals are sufficiently rehabilitated to re-join society. Because of the discretion that is given to human agents at various stages, the goals of these systems are vulnerable to being subverted as a result of the biases and irrationalities of those who are asked to make choices. One study, for example, found that judges were more likely to make a favorable decision in parole hearings immediately after lunch.Footnote1

One way in which we might reduce the effects of human error is by making use of artificial intelligence to supplement human decision-making. Algorithms are being consulted in a number of jurisdictions to provide judges with recommendations about what type and severity of punishment convicted criminals should receive. Because of the sophisticated calculations that these programs can conduct, as well as their immunity from the extraneous factors that undermine the efficacy of human reasoning, it might be thought that utilizing them in this manner will help criminal punishment better serve its proper goals.Footnote2

One notable program of this sort is Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), used in a number of US jurisdictions. Using answers to 137 questions (which are either provided by defendants directly or inferred from their criminal record), it provides an indication of a risk of recidivism on a 10-point scale. This risk score can then be used to inform sentencing: a higher risk score is supposed to indicate that a longer sentence would be appropriate, other things equal.Footnote3 COMPAS has come in for significant criticism. It has been argued that the software is discriminatory on the basis that it falsely flags black defendants as being at a high risk of reoffending at a greater rate than white defendants.Footnote4 Worries about accuracy have also been raised.Footnote5
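To make the mechanics concrete, here is a minimal sketch, in Python, of how a 10-point risk score might be mapped onto a sentence within statutory bounds. The function, the linear interpolation, and the bounds are invented for illustration; COMPAS's actual scoring model is proprietary, and nothing below describes it.

```python
# Illustrative sketch only: an invented mapping from a 10-point risk
# score to a sentence within statutory bounds. This is NOT how COMPAS
# works; its actual scoring model is a trade secret.

def recommend_sentence(risk_score: int, min_months: int, max_months: int) -> int:
    """Interpolate linearly between the statutory minimum and maximum.

    risk_score: recidivism risk on a 1-10 scale (1 = lowest risk).
    """
    if not 1 <= risk_score <= 10:
        raise ValueError("risk score must be on the 1-10 scale")
    fraction = (risk_score - 1) / 9  # 0.0 at lowest risk, 1.0 at highest
    return round(min_months + fraction * (max_months - min_months))

# A defendant scored 7/10, facing a 12-60 month statutory range:
print(recommend_sentence(7, min_months=12, max_months=60))  # 44 months
```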

Despite concerns about existing algorithms, it might be thought that as systems such as this improve, and fairness and accuracy in their use can be guaranteed,Footnote6 they might have a legitimate role to play in sentencing decisions.Footnote7 This paper examines what exactly this role should be. I will argue that significant limits may need to be placed on how algorithms are developed or used – and this is true even of algorithms more capable than current technology allows. My reason for this is that the sentencing process is an important site through which condemnation can be expressed to those guilty of crimes, and that this practice has value. The movement toward the greater use of algorithms at this stage threatens to undermine this practice.

I begin in the next section by setting out a test case that will help us uncover less contingent problems of using algorithms in sentencing (i.e. problems that do not relate to the current state of technology). The section that follows it explains why condemnation in the sentencing process is valuable. While this argument draws heavily from the literature on “expressive” theories of punishment, it does not presuppose them: one can reject these theories while accepting my main point. In this section, I also explain why using algorithms may undermine condemnation. Then I develop a necessary condition for condemnation to be maintained in sentencing—I call it “meaningful public control.” The final section examines what limits need to be put in place for meaningful public control to be achieved in practice.

A Test Case

In order to focus the question, it will be useful to consider two imaginary judges who have differing practices of sentencing:

Judge Judy relies solely on her own judgement in determining what sentence to hand down. She does her best to consider all the relevant factors that should go into determining the type and severity of criminal punishment and makes a decision on the basis of these.

Judge Joe consults the Program that Really Accurately Calculates Threats or Retribution (ProTRACTOR), a computer algorithm that uses a range of data to come to a recommendation about what sentence is appropriate for each defendant whose information it is provided with. ProTRACTOR performs equally well as Judge Judy in specifying what the appropriate sentence is. Joe always follows the recommendation of ProTRACTOR, and never exercises his own judgement when sentencing.

The question I propose we consider is whether something important is lost when judges behave like Judge Joe rather than like Judge Judy.

Of course, few if any judges will act like Joe. In reality, it is likely that those with access to risk-assessment software will use their judgement to some extent; in many real-world cases where algorithms are being used, judges may be somewhere between Judy and Joe on a scale of how much algorithms are relied on.Footnote8 However, it will be useful to consider an extreme case like the one presented here. For one, the possibility of automation bias, whereby humans who are provided with the recommendations of algorithms place too much uncritical faith in the quality of their outputs, may mean that many judges will look more like Joe in practice than we might otherwise expect.Footnote9 More importantly, by examining what, if anything, is deficient in the situation in which a judge completely surrenders their judgement, we will learn something about the moral limits of the use of sentencing software. Or so I will argue.

Before continuing, it is worth pausing to consider how sophisticated a computer program would need to be in order to genuinely equal the capacities of an even remotely competent human judge in making sentencing recommendations. It might be thought that there is a list of objective factors that determine the appropriate sentence.Footnote10 One such relevant factor, according to some legal theorists, is how different sentences will affect the overall crime rate in society.Footnote11 Only when punishment leads to a sufficient reduction in crime, these “instrumentalist” theorists say, is it permissible.

One way in which punishment might reduce crime is by keeping dangerous criminals away from society until such a time as they are rehabilitated. While it may be impossible to know in advance how much time this will take for any given criminal, the outputs of risk assessment calculations might serve as a useful proxy. The greater the risk score, the longer the sentence that may be needed. (Fine-tuning sentences can also be done later at parole hearings.) I have already mentioned, however, worries about the accuracy of existing risk-assessment software. Even reaching an acceptable capacity in discerning this factor, then, may require technological advances.

Another way in which punishment might reduce crime is by providing a deterrent to would-be criminals. The deterrent effects of different punishments might thus be thought to be another factor that should be taken into account during sentencing. If people view a particular sentence as overly lenient, they may be more willing to commit crimes. If this is the case, ProTRACTOR might need to somehow incorporate the likely deterrent effects of its sentencing recommendations in order to be on a par with Judge Judy. This would most likely necessitate a range of behavioral and sociological data being used as inputs, and would take us further beyond existing software used in sentencing. On the other hand, I am not sure whether this would really be needed. Perhaps individual sentencing decisions do not have much impact on the perceived incentive structures in society. If this is true, maybe the deterrent effect of punishment should be taken into account at the tariff-setting stage (i.e. the stage at which lawmakers decide on the outer bounds of legal sentences for given crimes) rather than the sentencing stage (where judges select a sentence within those bounds for individual defendants).

Other theorists think that reducing crime is not the (only) proper end of criminal punishment. Retributivists argue that it can be appropriate to impose criminal sanctions even if this has no effect on crime levels. This might be because criminals deserve to be punished, for example.Footnote12 While retributivism comes in a number of different forms, most retributivists seem to be committed to a principle of lex talionis which, roughly speaking, suggests that the severity of punishment of a given defendant should be proportionate to the badness of the crime they committed: worse crimes should be met with harsher punishments, other things equal.

A sentencing algorithm that takes sufficient account of retributivist concerns might have to be significantly more sophisticated than any of the risk-based programs that we are familiar with.Footnote13 It is generally assumed that a number of different factors need to be considered when determining the badness of a given crime: whether it was premeditated or not, the overall harm done, the sort of motivation behind it, and so on. If these factors interact in complex ways to determine the overall disvalue of a given criminal act,Footnote14 it would be a significant engineering challenge to construct an algorithm that made the calculations correctly.Footnote15 Some researchers suggest that AI systems deployed in ethically high-stakes situations should be programmed to mimic virtuous agents rather than to follow simple rules, and this form of AI might be needed here to avoid the difficulties noted.Footnote16
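As a toy illustration of this engineering point, compare the two scoring functions sketched below: the first treats the factors as contributing independently, the second lets them modulate one another. The factors, weights, and interactions are entirely invented; the point is only that once factors interact, the space of functions a designer must get right grows quickly.

```python
# Invented example: two ways of scoring the gravity of an offence from
# three factors. The factors and numbers are illustrative, not a proposal.

def gravity_additive(harm: float, premeditated: bool, malicious_motive: bool) -> float:
    """Each factor contributes independently: easy to specify and audit."""
    return harm + (2.0 if premeditated else 0.0) + (1.5 if malicious_motive else 0.0)

def gravity_interacting(harm: float, premeditated: bool, malicious_motive: bool) -> float:
    """Factors modulate one another: premeditation aggravates more when
    the motive is malicious, and motive adds extra weight only for serious harms."""
    score = harm
    if premeditated:
        score *= 1.5 if malicious_motive else 1.2
    if malicious_motive and harm > 5.0:
        score += 2.0
    return score

# Same inputs, different verdicts on relative gravity:
print(gravity_additive(6.0, True, True))     # 9.5
print(gravity_interacting(6.0, True, True))  # 6.0 * 1.5 + 2.0 = 11.0
```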

A complicating issue here is that a number of authors have argued that, in considering what the appropriate punishment is (especially from a retributive perspective), we should take into account the wider social factors that lead to criminality. Social deprivation, for example, might be thought to undermine the retributive case for punishment, or at least point toward the need for a more nuanced application.Footnote17 While a perfect sentencing algorithm would have to be sensitive to this if these authors are correct, I am only concerned here with whether an algorithm could match human capacities. Since existing human judges tend not to adjust sentencing practice in light of these considerations, we can set them aside.

It might be thought that sentencing decisions should also depend on subjective evaluations. Perhaps discerning the appropriate sentence is a fundamentally political decision: it should be the people (acting through elected lawmakers) who should decide how the criminal justice system should operate. This of course happens in many jurisdictions, where sentencing guidelines (such as the Federal Sentencing Guidelines in the US) are provided to judges. There may be a number of reasons for thinking that these sorts of guidelines need not track any set of objective factors precisely, not least of which is that the sorts of factors we considered above may only set outer limits to sentencing, within which a number of different approaches might be permissible.Footnote18

Even if this is the case, though, there is still significant work to be done at the sentencing stage. Guidelines themselves are probably not the sort of thing that can be mechanically applied – they require judgement in answering questions like “Is this a mitigating factor?” and “What is the risk of this individual re-offending?” Moreover, even once these questions are answered, guidelines tend only to set out the maximum and minimum sentences that can be given. Judges are expected to select a sentence within those bounds that reflects the specific circumstances of the case before them.Footnote19 And, even if there is no single correct answer about how different defendants should be punished, this does not give judges carte blanche to impose whatever sentence takes their fancy: they must act for principled reasons, if nothing else to ensure consistency in sentencing.Footnote20 Any algorithm that matched human capacities would still need to carry out complex calculations.

All of this is simply to point out that we may be a long way off being able to construct a sentencing program that at least matches the relevant human capacities (or even knowing how to construct such a program). Given the potential for automation bias, discussed earlier, it may seem like we should avoid using programs until the technology improves, even in a strictly advisory role. I take no firm stand on this point for the purposes of this paper. I will argue that, even if we did have a program that matched human capacities (like the hypothetical ProTRACTOR), there may still be reasons against its use.

Expressive Theories of Punishment

Reducing crime and exacting retribution are not the only possible aims of the criminal justice system, though. Some theorists also think that it should serve an expressive function, condemning those who have been convicted of crimes. As we will see, this may be a function that a criminal justice system in which algorithms play a significant role in sentencing decisions has difficulty carrying out.

In a well-known account of an “expressivist” theory,Footnote21 Joel Feinberg argues that the expressive aspect of punishment plays a role in both defining it and justifying it. He writes:

Punishment is a conventional device for the expression of attitudes of resentment and indignation, and of judgements of disapproval and reprobation, either on the part of the punishing authority himself or of those “in whose name” the punishment is being inflicted.Footnote22

It is the expression of negative reactive attitudes, according to Feinberg, that distinguishes punishment from mere “penalties,” such as parking fines. Penalties such as these appear different from, say, prison sentences given to murderers.Footnote23 This is not only because of the greater severity of the latter, but also because of the stigma that attaches to the act of being imprisoned, which does not similarly attach to fines. Punishment involves a certain sort of attitude on the part of those inflicting it, and a corresponding cost to those punished (in addition to the costs of the act of punishment itself). It involves the condemnation of those punished, and not simply the setting back of their material interests.

As well as being conceptually linked with punishment, its expressive aspect is also thought to be important in justifying it. For Feinberg, condemnation enables us to achieve various valuable goals. These include: the authoritative disavowal of certain actions; upholding the law’s status qua law; and the unambiguous absolution of those who are innocent from blame.Footnote24 Perhaps the most important function of punishment that Feinberg mentions is the symbolic non-acquiescence in certain forms of conduct. When heinous crimes are left unpunished, the law may seem to condone or even approve them. And, says Feinberg, if this is the case, “the law … speaks for all citizens in expressing a wholly inappropriate attitude toward them.”Footnote25 This may give citizens a reason to see crimes in their community punished appropriately: the acquiescence that would be symbolized by non-punishment may make them partially morally responsible for those crimes.Footnote26 But arguably more significant is the effect of condemnation for (potential) victims of crime. Individuals’ self-respect may depend on the recognition by others of their status as an independent reason for action.Footnote27 And unless effective laws are put in place by a political community which involve punishing those who violate individuals’ rights, this recognition may be lacking.Footnote28

Those who think that punishment involves (and should involve) the condemnation of those punished might agree with retributivists that the severity of punishment that is given to an individual should be adjusted in line with the wrongness of the crime they committed. This is because, if condemnation is to be meaningful, it should be tailored to the circumstances of those condemned. Giving the same punishment to someone who has been convicted of stealing a packet of cigarettes and to a racist serial killer might not only treat the former unfairly. It might also undermine the beneficial effects of condemnation, suggesting that the victims of hate crimes are not taken as seriously as they should be.Footnote29 Thus Robert Nozick, in discussing the expressive function of punishment, claims that punishment informs criminals “this is how wrong what you did was,”Footnote30 (and not simply, “what you did was wrong”).

While Feinberg is primarily interested in the role punishment plays in expressing condemnation, we might equally think that the sentencing procedure itself can contribute toward this end. Consider what happens when a judge is tasked with deciding how long a prison sentence is to be given to a defendant found guilty of a crime. The judge in question might give a sentence toward the upper bound permitted by law, and justify this by saying that the defendant acted without any mitigating factors, or with blatant disregard for public safety, or without any sign of remorse. This expresses the view that what the defendant did was wrong and, more specifically, worse than the acts of those guilty of the same crime who are given lighter sentences. Even if judgements are made without an explicit statement of the gravity of the offence, the expression of this might often be implicit—that is, inferred from the sentence decided upon. So, the sentencing process, too, might be an important site of condemnation. Indeed, punishment itself might lack any effective expression without this process having already been carried out.Footnote31

One notable objection that has been raised against the expressive justification for punishment is that punishment is too costly a practice to be justified by the need to condemn criminals. Punishment involves harsh treatment, which it would ordinarily be impermissible to impose on people. If it is to be justified, at the very least, this treatment must be necessary to serve an important goal. Condemnation of criminals, and the valuable consequences of this, may look like such an important goal. But, critics suggest, punishment is not necessary to achieve it: there could be other, less costly, ways of doing so. A public statement condemning those convicted of a crime could be sufficient, for example.Footnote32 If punishment is to be justified, it is claimed, other goals (such as the reduction of crime) need to be invoked.

Even if this objection succeeds, it might still be argued that condemnation through the sentencing procedure can be justified. As noted above, the institution of sentencing ordinarily involves condemnation in the same way that punishment can. And since it does not entail the sort of harsh treatment that punishment itself involves, the obstacles to justifying it will be significantly lower. Of course, the practice of criminal sentencing depends on its being followed by punishment: sentencing someone without the sentence being carried out would be nonsensical. But if punishment can be justified through some independent theory (such as instrumentalism or retributivism), we can still welcome the condemnation that can occur in sentencing processes.Footnote33 The conclusions I reach here can thus be shared by those who reject the expressivist theory of punishment.

If criminal sentencing is to achieve the aim of expressing condemnation, though, there are certain limits on how it can be organized. As we saw above, condemnation is expressed when judges hand down sentences of certain lengths and the reasons for it are apparent (even if not explicitly stated). But for condemnation to really be carried out, at least two things need to occur. First, someone must actually be in a position to do the condemning. And, second, this condemnation must be sufficiently public, in the sense that the act that involves or implies condemnation must be available for others to observe. This second idea may partly underlie the familiar demand that justice must not merely be done, but also seen to be done. A society in which sentences are decided on and handed out in secret would not achieve the valuable outcomes of condemnation that Feinberg notes. Nobody could be sure that their state was taking violations of their rights sufficiently seriously, for example, if they did not know what punishment (if any) their state inflicted on their perpetrators and why.

We are now in a position to outline a prima facie problem with the actions of Judge Joe. Because, unlike Judge Judy, he does not make a decision himself, his actions cannot either involve or imply condemnation when he hands out sentences. When he gives someone a sentence at the highest possible level, he has not considered the reasoning behind it. Whatever sentence he gives out, therefore, does not even imply any negative attitude such as condemnation on his part.Footnote34 Does this mean that the use of algorithms in sentencing decisions is inconsistent with the condemnation of those sentenced? Not necessarily. In the following sections, I argue that the expressive function of sentencing can be maintained even if algorithms are deferred to. For this to be the case, though, certain conditions will need to be met.

Meaningful Public Control

What exactly is it about Judge Joe’s actions that renders his judgements empty of reactive attitudes? I want to suggest that he lacks a certain sort of moral responsibility for the sentences that are handed out. It might be said that he does not control each sentencing decision. And, because control is thought to be a necessary condition on moral responsibility, the latter might appear to be undermined.Footnote35 Of course, in one sense, Judge Joe does retain control over the sentence: there is no coercive law requiring him to follow the recommendations of ProTRACTOR. If he wanted to, he could choose to ignore the outputs, and reason about the case in much the same way as Judge Judy does. But because he has decided to surrender his own judgement in each case that comes along, we can plausibly say that he has no control over the content of each individual judgement. While he appears to be morally responsible for the decision to defer to ProTRACTOR, he is not, I want to say, responsible for each individual sentencing decision after he does so.

This characterization might be challenged. Moral responsibility is sometimes associated with praiseworthiness or blameworthiness. To be morally responsible for an action, on this account, is to be praiseworthy (in the case of good outcomes) or blameworthy (in the case of bad outcomes) for it. And surely, it might be claimed, we could blame Joe for each unjust sentence if he knowingly deferred to an unreliable algorithm. I am inclined to say that this is true, but the example that I presented is one in which the algorithm used is as good as a competent human judge. In this case, I am no longer inclined to say that Joe is responsible for each individual sentencing decision, even in cases where this generally reliable algorithm occasionally recommends an unjust punishment. But, in my experience, people’s intuitions differ about these sorts of examples.

In any case, I do not need to rely on these questionable intuitions to make my argument. This is because I do not mean to suggest that Joe lacks “responsibility as accountability” (as it can be called), but rather “responsibility as attributability.”Footnote36 While the former is related to praise and blame in the way discussed above, being responsible in the latter sense requires that actions can properly be attributed to the agent in question. As Gary Watson explains, someone responsible in this way is “an agent in a strong sense, an author of her conduct, and is in an important sense answerable for what she does.”Footnote37 And it looks like this is not true with respect to the sentencing decisions that Joe makes. If ProTRACTOR involved some racial bias that was not discernible in advance, for example, the biased sentencing decisions would not reflect Joe’s values if he is not racially biased himself. Each sentencing decision is not attributable to Joe; they are not his decisions. And while it would be appropriate to ask Joe to justify his decision to delegate sentencing to the algorithm, it would not make sense to make him answer for each individual sentencing decision that follows.

Because Joe lacks responsibility for each individual sentence (in the sense specified above), any judgement he hands out cannot involve the same condemnation on his part as Judge Judy’s does. Because he does not adjust the sentence based on his own reasoning and values, giving a higher sentence in one case rather than another will not express the opinion that the first crime was worse than the second. Indeed, his sentencing process on the whole cannot be considered to be condemnatory at all.

What is required for the expressive function of sentencing to be maintained, then, even when sentencing decisions are made with the aid of algorithms, is that judges (or some other human agents—more on this below) be in control of each sentencing decision, where “control” is understood in the strict sense of having both the power and the willingness to make their own decisions about sentencing and implement those decisions.

Related issues have been raised in ongoing debates about the ethical use of lethal autonomous weapons systems (LAWS), which would combine sophisticated AI with machinery capable of deadly force. These machines would have the capacity to select and engage targets without direct human input. It has been thought to be important to ensure human control over LAWS is maintained, for example, in order to keep clear lines of accountability in place, to respect the dignity of those killed, or to provide safeguards when malfunctions occur.Footnote38 It is recognized, however, that not just any form of control will do: giving a military commander a veto over the decisions taken by LAWS will be of little use if that person lacks the time, knowledge, or confidence to override decisions that are taken. A more robust form of control is needed.

“Meaningful human control” is the term favored by practitioners to designate whatever thicker type of control is necessary.Footnote39 There is currently no consensus about exactly how to define it, however: the phrase simply serves as a placeholder awaiting further specification. Nonetheless, many of the worries about lack of control stem from the fact that no human agent is morally responsible for the behavior of LAWS.Footnote40 It therefore seems like meaningful human control should be defined in a way that would ensure that those in control have moral responsibility for LAWS.Footnote41 Among other things, this might require that they have both the causal efficacy to affect the behavior of LAWS in significant ways and the mental capacities to effectively use this power.Footnote42

Drawing on this idea, we might think that judgements in the courtroom of Judge Joe will involve condemnation if we are able to ensure that some form of meaningful human control over sentencing decisions is maintained despite the use of algorithms, where control is meaningful in the sense of rendering Joe morally responsible for the sentencing decisions. Two possible arrangements that might achieve this suggest themselves.

First, we might ensure that judges like Joe do not merely rubber-stamp the recommendation given by an algorithm, but rather take it into account in an overall assessment of what sort of sentence is appropriate. Laws, regulations, guidelines, and perhaps even professional norms may all be used to ensure that the judge’s own reasoning is engaged. Many judges may, of course, come to the same conclusions as the algorithm, but in these cases, it would be because, understanding why the algorithm has made its recommendations, they have decided that these are the correct sentences. They retain control over each individual sentencing decision and, consequently, their judgements can in part convey reactive attitudes. Such an arrangement would involve meeting the following principle:

Meaningful Judicial Control: judges must have meaningful control over sentencing decisions.

Achieving this outcome, however, is more difficult than it might appear. For one thing, it may not be possible in principle for most judges to understand the inner workings of sentencing algorithms. If we genuinely want an algorithm that equals or surpasses the capacities of judges, as we saw in the previous section, it may need to be fairly complex. Relatedly, even if algorithms are in principle understandable by those using them, those users may be blocked from knowing how they work. COMPAS, for example, is developed by a private company, and details about how it works are a trade secret. If sentencing is to retain its expressive function, though, judges relying on algorithms must have an idea about why they reach the recommendations they do. At the very least, we will want algorithms to be explicable, in the sense of their inner workings being both transparent to and understandable by judges.Footnote43

Even if the algorithms being used are explicable, though, there is no guarantee that judges will, in fact, exercise sufficient judgement. As noted previously, it is sometimes thought that there is an automation bias on the part of humans when they are presented with the recommendations of algorithms, leading them to trust the recommendations too much. If judges are subject to this when using risk-assessment algorithms, they may not have enough control over individual sentencing decisions to ensure the effective expression of reactive attitudes. While they possess both the knowledge and the power to adjust sentences based on their own reasoning, they will not exercise this power. While meaningful judicial control may well enable condemnation to be carried out, then, we may not be able to actually achieve this demanding condition in practice.

This brings us to a second possible arrangement to be considered, which may be more attainable. In the ongoing discussion about the ethics of LAWS, it has been recognized that meaningful human control does not need to be exercised at the point at which LAWS are used (by a commander in an army, for example). Rather, the relevant sort of control might be exercised across the life cycle of the system, including its development stage.Footnote44 Computer programmers, software developers, and even regulators might be potential loci of responsibility. This is why the appropriate principle is named “meaningful human control” rather than “meaningful commander control.”

In a similar way, we might think that judges need not be the ones who exercise meaningful control over sentencing decisions if condemnation is to be expressed. Those who develop the sentencing software might have the relevant sort of control too, for example. By programming the software in certain sorts of ways – so that it will predictably give higher sentences to some criminals rather than others – they may be viewed as implicitly condemning those who are sentenced. So long as onlookers can understand this – which may not require detailed knowledge of the algorithms themselves – the condemnation will be sufficiently public, in the sense introduced above. If one defendant receives a sentence twice as long as another’s, for instance, onlookers may infer a higher degree of condemnation.

This idea suggests that we can adopt a weaker principle than meaningful judicial control, more in line with the principle currently operative in the LAWS discussions:

Meaningful Human Control: humans must have meaningful control over sentencing decisions.

A problem with the principle of meaningful human control in this context, however, is that it seems to matter which human has control, and thus which human is doing the condemning. An individual act of condemnation by a private individual may not be sufficient for condemnation to have the value it is supposed to have. Why is this the case? Consider again why the condemnation of crimes is valuable. As we saw in the previous section, one reason given by Feinberg is that it reaffirms the status of the victim or victims through a demonstration that their community takes their rights seriously. If a form of condemnation is to achieve this end, though, it must be carried out by a certain type of agent. Many individual people may sympathize with a victim of crime, and condemn those who wronged them. But this may have little effect on the self-respect of the victim unless they think that this attitude is widely shared in society. Perhaps a well-publicized crime would lead to widespread condemnation through the media, and the victim could take this to show that a majority of their co-citizens take their rights seriously. But this will be a rare case: in practice, most victims will not have the opportunity to view a widespread public reaction.

Instead, in the standard case, the recognition of the victim as a valid source of reasons for action can only be undertaken institutionally. That is, it is through its political institutions that a society demonstrates a commitment to upholding the status of the victim. And one key way in which it can do this is by granting an agent the normative standing to speak on its behalf and condemn those who have wronged its members. We are used to this being a judge, but in principle it could be carried out by any agent who can be viewed as acting on behalf of the community: it could equally be the members of a public agency who design an algorithm that sentences criminals based on the crimes they commit. What it cannot be is a private agent who lacks the standing to act on behalf of a political community.Footnote45

While control need not be exercised by a judge, then, it must at least be exercised by an individual who has standing to speak on behalf of the wider political community. I will refer to such agents as “public agents” (further discussion about who is a public agent will be found in the next section). I conclude this section with a statement of a necessary condition on acceptable sentencing which, in my view, is superior to both the unduly narrow principle of meaningful judicial control and the unduly wide principle of meaningful human control:

Meaningful Public Control: public agents must have meaningful control over sentencing decisions.

Where the category of public agents can include judges or other human agents acting on behalf of the political community.Footnote46

Achieving Meaningful Public Control

To understand what meaningful public control would involve in practice, two questions need to be addressed. First, who is a public agent? Second, what is meaningful control?

I introduced the concept of a public agent as a catch-all term for the sort of agent whose condemnation can be viewed as condemnation by the wider community. In theory, such condemnation could alternatively be carried out by a series of individual expressions of condemnation by each and every member of that community in a public forum. We can imagine each citizen of a state having to line up in a courtroom at the conclusion of a trial and express their disapproval of a guilty defendant’s actions, or perhaps agree that the sentence that has been given to them reflects their negative appraisal of the defendant’s actions.

Of course, such a practice would be unduly burdensome. The idea behind representative democracy is that we can achieve greater efficiency by delegating essential political tasks to certain office-holders. This takes different forms with respect to different sorts of office-holders: lawmakers in democracies tend to be elected directly, while bureaucrats are subject to laws and conventions that are supposed to constrain their behavior in ways that reflect the values of the political community as a whole. When it comes to judges, different jurisdictions have different practices. In the US, many are elected, while in other contexts, this is not the case. The choice represents a more or less populist ideal, depending on whether judges are accountable directly to the public or to other political bodies that are in turn elected. In both cases, however, judges are ideally taken to have been delegated tasks by the wider political community.Footnote47

For our purposes, we need to consider what sorts of delegation render the wider public responsible for the actions of delegates: when can the representatives’ actions properly be said to be done in the name of the wider community? There are a number of different views about this. “Externalist” accounts hold that representatives act in the community’s name when their behavior meets certain standards: when they defer to a public point of view via an openness to political guidance and intervention,Footnote48 or operate within the mandate that they have been given,Footnote49 for example. While there are potential shortcomings of the externalist view, which may lead one to require additional “internalist” limits on the sorts of reasons for which delegates can act if their actions are to be truly representative,Footnote50 some sort of externalist limit is probably still a necessary (if not sufficient) condition. In broad terms, what this means is that, for an actor to count as a genuine public agent, their actions must in some sense be in line with the wider values or preferences of the political community.

Whichever form these externalist limits take, there are two reasons why sentencing done with the aid of algorithms might not be viewed as done in the name of the political community in practice. First, even if those who design sentencing algorithms can be considered public agents in the sense I have in mind, it may be that they cannot be considered responsible for the recommendations the algorithms come to, and thus the public at large cannot be responsible for them either. If these algorithms employ forms of machine learning, no human programmer would write the code that determines the recommendations. This would rather be written by the algorithm itself.Footnote51 Responsibility for these sorts of algorithms may be difficult to achieve: it may require developing and implementing various forms of explicable AI – techniques that can provide humans with indications about how machine-learning algorithms operate.
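To give a flavor of what such techniques involve, the following sketch implements one simple post-hoc method, permutation feature importance, in plain Python: it measures how much a model’s predictive accuracy drops when one input feature is scrambled. The model and data here are invented stand-ins; genuine explicable AI for sentencing algorithms would require considerably more than this.

```python
import random

# Minimal sketch of permutation feature importance: how much does a
# model's accuracy drop when one input column is shuffled? The "model"
# here is a stand-in; real sentencing software would be far more complex.

def accuracy(model, rows, labels):
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(rows)

def permutation_importance(model, rows, labels, feature_index, seed=0):
    rng = random.Random(seed)
    shuffled_col = [r[feature_index] for r in rows]
    rng.shuffle(shuffled_col)
    shuffled_rows = [
        r[:feature_index] + (v,) + r[feature_index + 1:]
        for r, v in zip(rows, shuffled_col)
    ]
    return accuracy(model, rows, labels) - accuracy(model, shuffled_rows, labels)

# Toy model: predicts reoffending (1) iff prior_convictions >= 3.
model = lambda row: int(row[0] >= 3)          # row = (prior_convictions, age)
rows = [(0, 25), (4, 30), (5, 22), (1, 40)]
labels = [0, 1, 1, 0]
print(permutation_importance(model, rows, labels, feature_index=0))  # typically > 0
print(permutation_importance(model, rows, labels, feature_index=1))  # 0.0: age is unused
```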

Even if we can view the creators of algorithms as responsible for their outputs, however, there is a second issue here. Note that algorithms may not be developed by those who are employed in the public sector. Recall that COMPAS was developed by a private company. We may wonder whether the people creating it counted as public agents in the relevant sense. If they do not, we cannot achieve meaningful public control merely by ensuring that the developers within this company are responsible for the outputs of the algorithm. This might look akin to a judge not subject to public control passing down sentences based solely on their personal preferences: their actions would lack the right sort of condemnation, since they could not be viewed as acting on behalf of the wider community.

I do not think, however, that meaningful public control requires avoiding involving the private sector in research and development altogether. A public agent was defined by their being effectively controlled by the political community: public agents are characterized by their following the preferences or values of that community. Their status was not defined by the sort of entity they work for. What this means is that, in principle, those working for a private company could count as public agents in the relevant sense if they are under effective control by democratic bodies. Even if power is delegated to them, so long as they work within whatever external limits we think are necessary, their actions can count as being done in the people’s name.

What is needed if such delegation is really to ensure that the actions of the designers are also the responsibility of those who do the delegating? This would appear to necessitate that strict specifications be laid out when the design of an algorithm is contracted out, so that the wider public controls the specifics of the algorithm in some way.Footnote52 One might wonder what the point of such delegation would ultimately be: if those who are doing the delegating know what the algorithm is supposed to look like, why would they contract out the job of designing it rather than produce it in-house in a government body?

Perhaps specifying the exact form that an algorithm should take in advance is not necessary to ensure ultimate responsibility for its outputs. David Miller has argued that individuals can properly be considered responsible for the actions of collectives they form when those collectives are composed of people who “share aims and outlooks in common, and who recognize their like-mindedness, so that when individual members act they do so in light of the support they are receiving from other members of a group”.Footnote53 In a somewhat similar way, it might be suggested, if an algorithm is used that recommends sentences on the basis of widely-held convictions in society, individual members of that society can properly be viewed as responsible for the outcomes in light of those outcomes reflecting society’s values. If an algorithm is designed to sentence certain types of crime harshly, for example, because these crimes are viewed as despicable by members of the society in which it is deployed, members of that society (at least those who share in this view) might be viewed as responsible for the outputs and the sentencing decisions taken on the basis of them.

There are a couple of points worth making about this suggestion. First, Miller is discussing “outcome responsibility,” which is assigned when “a particular agent can be credited or debited with a particular outcome.”Footnote54 He further explains: “There is a presumption that where A is outcome responsible for O, then the gains and losses that fall upon A should stay where they are, whereas gains and losses falling upon P and Q may have to be shifted”.Footnote55 This does not seem to be the sort of responsibility we are looking for in order for decisions to be viewed as the decisions of the wider public; it is possible that the individuals in society are responsible for the outcome of sentencing decisions that result from an algorithm being used (so that, for instance, public money should be used to pay compensation when bad decisions are made), but at the same time the decision cannot be viewed as theirs.

Second, even if outcome responsibility is the right sort of relationship between individuals and decisions, one might wonder how we can ensure this relationship is in place in practice. Private companies might have diverging values from wider society due to their financial incentives.Footnote56 Ensuring value alignment might thus require limiting their discretion through more precise contracts and product specification. The limits on outsourcing here might look like the limits that I suggested above to ensure that delegation transferred responsibility.

So while achieving meaningful public control does not rule out private sector involvement in the development of sentencing algorithms, it may necessitate limiting the discretion that private companies have when they create algorithms. Failing this, development may need to be brought in-house to government agencies if condemnation is to be expressed by a political community through sentencing.

Conclusion

I have argued that, if the expression of reactive attitudes is a permissible and desirable function of sentencing decisions, the principle of meaningful public control must be met. Achieving this requires limits to be placed on the development of sentencing algorithms. These limits primarily relate to their explicability and the nature of private sector discretion in the design process.

In practice, determining whether it is appropriate to use the criminal justice system as an expressive institution may depend on wider facts about the justice (or lack thereof) of the society in which it operates. Tommie Shelby has argued that, when a state is complicit in the crimes that it is punishing (or, indeed, participates in similar forms of wrongdoing), those acting on its behalf may lack the standing to condemn wrongdoing. He consequently suggests that a more acceptable approach by US authorities (who he suspects are deficient in this way) would be simply to use punishment as a method of containing violent crime, without any pretense of taking the moral high ground.Footnote57

One might conclude from this that, since many states cannot engage in expressions of condemnation without hypocrisy, not much would be lost (and perhaps something would be gained) by bringing in the private sector to develop sentencing algorithms. However, I would suggest that there are additional reasons for the principle of meaningful public control in non-ideal circumstances such as ours. For having public agents in control of these decisions may make the contradictions of the system more apparent. Rather than outsourcing the need to make difficult decisions to opaque computer programs or unaccountable companies, we should remember that sunlight is sometimes the best disinfectant.

[I am very grateful to audiences at the Higher Seminar in Philosophy of Law at Uppsala University; the Political Theory Seminar at Stockholm University; and the workshop on “Ethics of AI in the Public Sector” at KTH Royal Institute of Technology for discussions on previous drafts of this paper; as well as to the anonymous reviewers from Criminal Justice Ethics for very helpful comments.]

[Disclosure Statement: No potential conflict of interest was reported by the author(s)].

Notes

1 Danziger, Levav and Avnaim-Pesso, “Extraneous Factors in Judicial Decisions.”

2 Pamela McCorduck suggests that many members of disadvantaged groups may want to take their chances with an impartial computer over a (potentially biased) human judge. See McCorduck, Machines Who Think, 375.

3 Yong, “A Popular Algorithm is No Better at Predicting Crimes Than Random People.”

4 Angwin, Larson, Mattu, and Kirchner, “Machine Bias.” The question of whether algorithms can avoid objectionable forms of discrimination has been addressed in Davis and Douglas, “Learning to Discriminate: The Perfect Proxy Problem in Artificially Intelligent Criminal Sentencing.”

5 Dressel and Farid, “The Accuracy, Fairness, and Limits of Predicting Recidivism.”

6 One worry here is that there is no possible algorithm that can simultaneously meet various intuitively plausible criteria of fairness. See, for example, Chouldechova, “Fair Prediction with Disparate Impact.” I set this issue aside for the purposes of this paper, and assume that a fair algorithm is at least possible to construct. This might be because some of the purported criteria of fairness which cannot be met simultaneously are not, in fact, genuine moral requirements. Cf. Hedden, “On Statistical Criteria of Algorithmic Fairness;” Eva, “Algorithmic Fairness and Base Rate Tracking.”

7 For the purposes of this paper, “sentencing decisions” will be taken to include not only the initial decision on severity and type of sentence given to criminals immediately after conviction, but also similar decisions made while punishment is being carried out (for example in parole hearings).

8 A gap in research about how risk assessment scores are used in the criminal justice system more generally has been noted. See Law Society of England and Wales, “Algorithms in the Criminal Justice System.” 52.

9 Skitka, Mosier and Burdick, “Does Automation Bias Decision-Making?”

10 By “objective,” I mean that these factors do not consist of individuals' evaluations or psychological reactions. The contrast with subjective factors will be made in due course.

11 The utilitarian Jeremy Bentham argues that the reduction of crime is the only legitimate end of criminal punishment. See Bentham, An Introduction to the Principles of Morals and Legislation, 170–203.

12 Moore, Placing Blame.

13 Ryberg, “Risk and Retribution.” Not all retributivists are resistant to risk-based sentencing. See Husak, “Why Legal Philosophers (Including Retributivists) Should be Less Resistant to Risk-Based Sentencing.”

14 Cf. Chiao, “Predicting Proportionality,” 341–3.

15 An algorithm that based its recommendations on previous judicial decisions may provide a useful proxy for these factors. An algorithm of this sort is imagined in ibid. Jesper Ryberg notes a dilemma for retributivists seeking to justify the use of machine-learning algorithms using existing cases as inputs. Either these algorithms will rely on a sample that is too small to give acceptable outcomes, or one that is too large to be easily constructed. See Ryberg, “Sentencing Disparity and Artificial Intelligence.”

16 Abney, “Autonomous Robots and the Future of Just War Theory,” 347; Wallach and Vallor, “Moral Machines.”

17 Duff, Punishment, Communication, and Community, 175–202; Lacey, “Socializing the Subject of Criminal Law;” Ristroph, “Desert, Democracy, and Sentencing Reform.”

18 The retributive element might be thought to have this form. See Morris, The Future of Imprisonment.

19 Up to the 1970s such wide discretion was given to judges in the US criminal justice system for instrumental reasons. See Berman, “Re-Balancing Fitness, Fairness, and Finality for Sentences,” 157–8.

20 This might be based on the idea that desert is to some extent comparative, in the sense that what someone deserves (for example, the appropriate sentence on a retributivist account) might depend on what others have received. On this idea, See Miller, “Comparative and Noncomparative Desert.”

21 For another notable example, see Wringe, An Expressive Theory of Punishment.

22 Feinberg, “The Expressive Function of Punishment,” 400.

23 Ibid., 397–8.

24 Ibid., 404–8. We might add that condemnation might be welcomed by both instrumentalists (because the possibility of condemnation can serve as a useful disincentive) and retributivists (because the stigma that is produced by condemnation may form part of the harsh treatment that retributivists view as intrinsically valuable).

25 Ibid., 406.

26 Ibid.

27 Honneth, The Struggle for Recognition.

28 Ibid., 118–21.

29 For the view that expressivists should support harsher sentencing for hate crimes, see Wellman, “A Defense of Stiffer Penalties for Hate Crimes,” 68.

30 Nozick, Philosophical Explanations, 370.

31 Cf. Shelby, Dark Ghettos, 240–241, where it is argued that it is the conviction (and not the sentencing) stage which involves an expressive element.

32 Boonin, The Problem of Punishment, 176–9.

33 For the view that censure is valuable, but not sufficient to justify punishment by itself, see Narayan, “Appropriate Responses and Preventive Benefits;” von Hirsch, Censure and Sanctions, 6–19.

34 It should be noted that certain theories of punishment that might be labelled “communicative” rather than expressive might also explain why there is a problem with certain uses of algorithms. These theories suggest that punishment should be a reciprocal act that requires some degree of rational engagement from those punished (see, for example, Duff, Punishment, Communication, and Community; Hampton, “The Moral Education Theory of Punishment”). Because certain algorithms may not be explicable to those who are sentenced, rational engagement might be impossible. While this idea warrants further attention, I cannot provide it within this paper.

35 Fischer and Ravizza, Responsibility and Control, 12–4.

36 For a helpful outline of these different forms of responsibility, see Jeppsson, “Accountability, Answerability and Attributability.”

37 Watson, “Two Faces of Responsibility,” 229.

38 Sharkey, “Autonomous Weapons Systems, Killer Robots, and Human Dignity;” Sparrow, “Robots and Respect;” Sparrow, “Killer Robots,” 67; Taylor, “Who is Responsible for Killer Robots?,” 232–3.

39 This concept has emerged as the guiding principle for framing ongoing international negotiations on the regulation of LAWS. See United Nations Office at Geneva, “Report of the 2019 Session of the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems,” 25.

40 See, for example, Sparrow, “Killer Robots.”

41 Santoni de Sia and van den Hoven, “Meaningful Human Control over Autonomous Systems.”

42 Taylor, “Who is Responsible for Killer Robots?,” 234.

43 Explicability is a widely-recognized requirement of AI ethics, for various different sorts of reasons than the one outlined here. See Floridi and Cowls, “A Unified Framework of Five Principles for AI in Society,” 8. The importance of explicability more generally is discussed in Vredenburgh, “The Right to Explanation.” On the sort of explicability we want from algorithms, see Chiao, “Transparency at Sentencing;” Ryberg, “Sentencing and Algorithmic Transparency;” Ryberg and Petersen, “Sentencing and the Conflict between Algorithmic Accuracy and Transparency.”

44 Roff and Moyes, “Meaningful Human Control, Artificial Intelligence and Autonomous Weapons.”

45 Cf. Wellman, Rights Forfeiture and Punishment, 49.

46 The complementary argument that punishment (rather than sentencing) should be undertaken by public agents for expressive reasons is given in Dorfman and Harel, “The Case Against Privatization,” 92–6.

47 Cf. the distinction between direct and indirect delegation in Lawford-Smith, Not in Their Name, 117.

48 Dorfman and Harel, “The Case Against Privatization,” 71–6.

49 Ripstein, Force and Freedom, 190–8.

50 Cordelli, The Privatized State, 159–71.

51 This would be a “bottom-up” algorithm in the terms of Tasioulas, “AI and Robot Ethics,” 337.

52 How strict these limits are will depend on the precise externalist principles we endorse. Dorfman and Harel’s account, for example, which requires that public agents defer to the public point of view, necessitates a community of practice – an institutional structure that allows the public point of view to be articulated and deferred to. See Dorfman and Harel, “The Case Against Privatization,” 82–3. It is unclear how this could genuinely be put in place when private companies are involved.

53 Miller, National Responsibility and Global Justice, 117.

54 Ibid., 87.

55 Ibid.

56 This might be particularly true in the security sector. See Pattison, The Morality of Private War, 84–100.

57 Shelby, Dark Ghettos, 238–48.

Bibliography

  • Abney, Keith. “Autonomous Robots and the Future of Just War Theory.” In Routledge Handbook of Ethics and War, edited by Fritz Allhoff, Nicholas G. Evans and Adam Henschke, 338–351. Abingdon: Routledge, 2013.
  • Angwin, Julia, Jeff Larson, Surya Mattu, and Lauren Kirchner. “Machine Bias.” ProPublica (2016).
  • Bentham, Jeremy. An Introduction to the Principles of Morals and Legislation. New York: Hafner Press, 1948.
  • Berman, Douglas A. “Re-Balancing Fitness, Fairness, and Finality for Sentences.” Wake Forest Journal of Law and Policy 4, no. 1 (2014): 151–177.
  • Boonin, David. The Problem of Punishment. New York: Cambridge University Press, 2008.
  • Chiao, Vincent. “Predicting Proportionality: The Case for Algorithmic Sentencing.” Criminal Justice Ethics 37, no. 3 (2018): 238–261.
  • Chiao, Vincent. “Transparency at Sentencing: Are Human Judges More Transparent Than Algorithms?” In Sentencing and Artificial Intelligence, edited by Jesper Ryberg and Julian V. Roberts, 34–56. Oxford: Oxford University Press, 2022.
  • Chouldechova, Alexandra. “Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Prediction Instruments.” Big Data 5, no. 2 (2017): 153–163.
  • Cordelli, Chiara. The Privatized State. Princeton: Princeton University Press, 2020.
  • Danziger, Shai, Jonathan Levav, and Liora Avnaim-Pesso. “Extraneous Factors in Judicial Decisions.” Proceedings of the National Academy of Sciences 108, no. 17 (2011): 6889–6892.
  • Davis, Benjamin, and Thomas Douglas. “Learning to Discriminate: The Perfect Proxy Problem in Artificially Intelligent Criminal Sentencing.” In Sentencing and Artificial Intelligence, edited by Jesper Ryberg and Julian V. Roberts, 97–121. Oxford: Oxford University Press, 2022.
  • Dorfman, Avihay, and Alon Harel. “The Case Against Privatization.” Philosophy and Public Affairs 41, no. 1 (2013): 67–102.
  • Dressel, Julia, and Hany Farid. “The Accuracy, Fairness, and Limits of Predicting Recidivism.” Science Advances 4, no. 1 (2018): 1–5.
  • Duff, R. A. Punishment, Communication, and Community. Oxford: Oxford University Press, 2001.
  • Eva, Benjamin. “Algorithmic Fairness and Base Rate Tracking.” Philosophy and Public Affairs 50, no. 2 (2022): 239–266.
  • Feinberg, Joel. “The Expressive Function of Punishment.” The Monist 49, no. 3 (1965): 397–423.
  • Fischer, John Martin, and Mark Ravizza. Responsibility and Control: A Theory of Moral Responsibility. Cambridge: Cambridge University Press, 1998.
  • Floridi, Luciano, and Josh Cowls. “A Unified Framework of Five Principles for AI in Society.” Harvard Data Science Review 1, no. 1 (2019): 1–15.
  • Hampton, Jean. “Punishment.” Philosophy and Public Affairs 13, no. 3 (1995): 112–142.
  • Hedden, Brian. “On Statistical Criteria of Algorithmic Fairness.” Philosophy and Public Affairs 49, no. 2 (2021): 209–231.
  • Honneth, Axel. The Struggle for Recognition: The Moral Grammar of Social Conflicts. Translated by Joel Anderson. Cambridge: Polity Press, 1995.
  • Husak, Douglas. “Why Legal Philosophers (Including Retributivists) Should Be Less Resistant to Risk-Based Sentencing.” In Predictive Sentencing: Normative and Empirical Perspectives, edited by Jan W. de Keijser, Julian V. Roberts, and Jesper Ryberg, 33–49. Oxford: Hart, 2019.
  • Jeppsson, Sofia. “Accountability, Answerability and Attributability: On Different Kinds of Moral Responsibility.” In The Oxford Handbook of Moral Responsibility, edited by Dana Kay Nelkin and Derk Pereboom, 73–88. New York: Oxford University Press.
  • Lacey, Nicola. “Socializing the Subject of Criminal Law: Criminal Responsibility and the Purposes of Criminalization.” Marquette Law Review 99, no. 3 (2016): 541–557.
  • Lawford-Smith, Holly. Not in Their Name: Are Citizens Culpable for Their States’ Actions? Oxford: Oxford University Press, 2019.
  • McCorduck, Pamela. Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence. Boca Raton: CRC Press, 2004.
  • Miller, David. “Comparative and Noncomparative Desert.” In Desert and Justice, edited by Serena Olsaretti, 23–44. Oxford: Oxford University Press, 2003.
  • Miller, David. National Responsibility and Global Justice. Oxford: Oxford University Press, 2007.
  • Moore, Michael. Placing Blame: A General Theory of the Criminal Law. Oxford: Oxford University Press, 1997.
  • Morris, Norval. The Future of Imprisonment. Chicago: University of Chicago Press, 1974.
  • Narayan, Uma. “Appropriate Responses and Preventive Benefits: Justifying Censure and Hard Treatment in Legal Punishment.” Oxford Journal of Legal Studies 13, no. 2 (1993): 166–182.
  • Nozick, Robert. Philosophical Explanations. Cambridge, MA: Harvard University Press, 1981.
  • Pattison, James. The Morality of Private War: The Challenge of Private Military and Security Companies. Oxford: Oxford University Press, 2014.
  • Ripstein, Arthur. Force and Freedom: Kant’s Legal and Political Philosophy. Cambridge, MA: Harvard University Press, 2009.
  • Ristroph, Alice. “Desert, Democracy, and Sentencing Reform.” Journal of Criminal Law and Criminology 96, no. 4 (2006): 1293–1352.
  • Roff, Heather M., and Richard Moyes. “Meaningful Human Control, Artificial Intelligence and Autonomous Weapons.” Briefing Paper for Delegates at the Convention on Certain Conventional Weapons (CCW) Meeting of Experts on Lethal Autonomous Weapons Systems (LAWS), 2016. Available online at http://www.article36.org/wp-content/uploads/2016/04/MHC-AI-and-AWS-FINAL.pdf.
  • Ryberg, Jesper. “Risk and Retribution: On the Possibility of Reconciling Considerations of Dangerousness and Desert.” In Predictive Sentencing: Normative and Empirical Perspectives, edited by Jan W. de Keijser, Julian V. Roberts, and Jesper Ryberg, 51–68. Oxford: Hart, 2019.
  • Ryberg, Jesper. “Sentencing and Algorithmic Transparency.” In Sentencing and Artificial Intelligence, edited by Jesper Ryberg and Julian V. Roberts, 13–33. Oxford: Oxford University Press, 2022.
  • Ryberg, Jesper. “Sentencing Disparity and Artificial Intelligence.” Journal of Value Inquiry 57, no. 3 (2023): 447–462.
  • Ryberg, Jesper, and Thomas S. Petersen. “Sentencing and the Conflict Between Algorithmic Accuracy and Transparency.” In Sentencing and Artificial Intelligence, edited by Jesper Ryberg and Julian V. Roberts, 57–73. Oxford: Oxford University Press, 2022.
  • Santoni de Sio, Filippo, and Jeroen van den Hoven. “Meaningful Human Control over Autonomous Systems: A Philosophical Account.” Frontiers in Robotics and AI 5, no. 15 (2018): 1–14.
  • Sharkey, Amanda. “Autonomous Weapons Systems, Killer Robots, and Human Dignity.” Ethics and Information Technology 21, no. 2 (2019): 75–87.
  • Shelby, Tommie. Dark Ghettos: Injustice, Dissent, and Reform. Cambridge, MA: Harvard University Press, 2016.
  • Skitka, Linda J., Kathleen L. Mosier, and Mark Burdick. “Does Automation Bias Decision-Making?” International Journal of Human-Computer Studies 51 (1999): 991–1006.
  • Sparrow, Robert. “Killer Robots.” Journal of Applied Philosophy 24, no. 1 (2007): 62–77.
  • Sparrow, Robert. “Robots and Respect: Assessing the Case Against Autonomous Weapon Systems.” Ethics and International Affairs 30, no. 1 (2016): 93–116.
  • Tasioulas, John. “AI and Robot Ethics.” In Ethics and the Contemporary World, edited by David Edmonds, 335–352. London: Routledge, 2019.
  • Taylor, Isaac. “Who Is Responsible for Killer Robots? Autonomous Weapons, Group Agency, and the Military-Industrial Complex.” Journal of Applied Philosophy 38, no. 2 (2021): 320–334.
  • The Law Society of England and Wales. “Algorithms in the Criminal Justice System.” 2019. Available online at https://www.lawsociety.org.uk/en/topics/research/algorithm-use-in-the-criminal-justice-system-report.
  • United Nations Office at Geneva. “Report of the 2019 Session of the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems.” 25 September 2019. Available online at https://documents.unoda.org/wp-content/uploads/2020/09/CCW_GGE.1_2019_3_E.pdf.
  • von Hirsch, Andrew. Censure and Sanctions. Oxford: Oxford University Press, 1996.
  • Vredenburgh, Kate. “The Right to Explanation.” Journal of Political Philosophy 30, no. 2 (2022): 209–229.
  • Wallach, Wendell, and Shannon Vallor. “Moral Machines: From Value Alignment to Embodied Virtue.” In Ethics of Artificial Intelligence, edited by S. Matthew Liao, 383–412. Oxford: Oxford University Press, 2020.
  • Watson, Gary. “Two Faces of Responsibility.” Philosophical Topics 24, no. 2 (1996): 227–248.
  • Wellman, Christopher Heath. “A Defense of Stiffer Penalties for Hate Crimes.” Hypatia 21, no. 2 (2006): 62–80.
  • Wellman, Christopher Heath. Rights Forfeiture and Punishment. Oxford: Oxford University Press, 2017.
  • Wringe, Bill. An Expressive Theory of Punishment. New York: Palgrave Macmillan, 2016.
  • Yong, Ed. “A Popular Algorithm Is No Better at Predicting Crimes Than Random People.” The Atlantic, January 17, 2018. Available online at https://www.theatlantic.com/technology/archive/2018/01/equivant-compas-algorithm/550646/.