
Sentencing, Artificial Intelligence, and Condemnation: A Reply to Taylor

Abstract

In a recent article in this journal, Isaac Taylor warned against the unconstrained use of algorithms as instruments to determine sentences in criminal cases. More precisely, he argued that it is important that the sentencing process serves a condemnatory function, and that the introduction of sentencing algorithms threatens to undermine this function. In this reply to Taylor, it is argued that even though his considerations are interesting in that they direct attention to the sentencing process and not merely the output of algorithmic sentencing, there are nevertheless reasons to be skeptical of his argument. More precisely, it is argued, first, that it is far from clear that we must accept the basic premise underlying Taylor’s argument, namely, that the sentencing process should serve a condemnatory function, and, second, that even if this premise is true, it does not follow that the condemnatory element of the sentencing process is undermined by introducing algorithms as advisory instruments at sentencing.

The possibility of improving criminal justice systems through the introduction of artificial intelligence is receiving increasing attention from ethicists. This is not surprising. Algorithmic tools have been used for some time now to provide criminal courts with relevant information. A much-discussed example is the use of algorithms to provide risk assessments of offenders. However, attempts have also been made to introduce algorithms even more directly into sentencing decisions. For instance, the first algorithmic systems designed to determine sentences in cases including rape and drug possession have already been put into practice in some countries.Footnote1 Some states have declared their intention to implement “intelligent courts” based on the use of artificial intelligence in judicial decision-making, including sentencing.Footnote2 Thus, given that a state’s power over the lives of citizens is at its height when it comes to the imposition of sentences, it is unsurprising—and indeed desirable—that the introduction of such new technological tools in this sort of decision-making be subject to thorough ethical consideration.Footnote3

In a recent article in this journal, Isaac Taylor presented his thoughts on the limits that should be placed on the use of algorithms at sentencing.Footnote4 What makes Taylor’s views interesting is that they do not concern the type or severity of the sentences determined by a criminal court drawing on algorithmic tools. His considerations thereby deviate from much of the current discussion of algorithmic sentencing.Footnote5 For instance, much attention has been devoted to the fact that algorithms may be biased. It is an obvious problem if a sentencing algorithm reproduces existing patterns of discrimination and leads to the imposition of even more severe sentences on certain groups of citizens.Footnote6 Another issue that has attracted recent attention concerns how one should compare the performance of human judges and that of algorithms in cases where they determine different sentences, or how to decide whether one algorithm is performing better than another if there are deviations between the sentences determined by two competing systems.Footnote7 However, these discussions usually concern the output of the sentencing process; that is, they address the severity of the punishments ultimately imposed on offenders.Footnote8 By contrast, Taylor’s argument focuses on the sentencing process itself. He neatly underscores this by outlining two different examples of judicial decision-making: Judge Judy, who considers all the relevant factors of a criminal case and then determines a sentence based on her own judgement; and Judge Joe, who does not exercise his own judgement but has decided always to follow the recommendations made by a sentencing algorithm. What Taylor suggests is that something valuable has been lost in the case of Judge Joe. Notably, this is so even if Judge Joe and Judge Judy ultimately reach the same decisions; that is, if they end up imposing the very same punishments. But how can this be if the imposed punishments are identical? In short, Taylor’s argument seems to be that it is valuable for the sentencing process itself to serve an expressive function, meaning that it should send a condemnatory message to the offender. If sentencing decisions are surrendered to an algorithm, however, as in the case of Judge Joe, then the sentencing process itself no longer sends this condemnatory message. Therefore, something valuable is lost when sentencing decisions are based on algorithmic recommendations.

Even though Taylor’s argument constitutes an interesting and novel contribution to the discussion of the use of algorithmic tools at sentencing, it nonetheless seems to me that there are several reasons to be skeptical of it, as I argue in the following pages. To buttress this contention, the remainder of the article will proceed as follows. The first section concerns the basic premise underlying Taylor’s argument; namely, that the sentencing process should serve an expressive function. Even though Taylor suggests that this premise is compatible with various theories of punishment, and that it therefore cannot be accused of presupposing a particular view of the justification of punishment, I will argue that the premise gives rise to several challenges when interpreted in the light of traditional penal theories. This is particularly the case when it is considered on the grounds of a retributivist view of punishment. This is why it is not as obvious as Taylor believes that the premise should be accepted. The second section examines the main argument concerning the relation between the assumption that the sentencing practice should serve a condemnatory function and the claim that this function will be undermined when the criminal court moves toward a greater use of algorithms. It argues that it is far from clear why a condemnatory function cannot be maintained even if algorithms are used in the way illustrated in the case of Judge Joe. The third and final section summarizes and concludes. The overarching argument holds that both the main premise of Taylor’s argument and the inferential move from this premise to the conclusion concerning the undesirability of algorithmic sentencing face serious challenges.

1. Should Sentencing Practice Serve a Condemnatory Function?

Taylor’s considerations of the problems associated with the introduction of sentencing algorithms are founded on a basic premise; namely, that the sentencing process should serve an expressive function, that is, that it should convey a condemnatory message to the offender. Taylor believes that significant limits should be placed on the use of algorithms, and that the “reason for this is that the sentencing process is an important site through which condemnation can be expressed to those guilty of crime, and that this practice has value;” what is more, the “movement toward the greater use of algorithms at this stage threatens to undermine this practice.”Footnote9 The precise argument that links the use of algorithms to the lack of proper condemnation of offenders is something we shall revisit in the following section. The question to be considered here is whether we should accept the basic premise concerning the purpose of the sentencing process.

At first glance, it seems an uncontroversial premise. As Taylor rightly points out, there are today many penal theorists inspired by Feinberg’s pioneering notion that a punishment should be seen as carrying “symbolic significance.”Footnote10 He further underlines that the premise does not presuppose acceptance of a particular theory of punishment. On the contrary, he believes that the premise can be accepted independently of whether one subscribes to an “instrumentalist” theory (by which he seems to mean some sort of consequentialist approach) or a retributivist theory. However, even though it may seem an advantage that the premise is not contingent on a particular penal theory—because this means that it cannot simply be brushed aside by anyone who does not subscribe to that theory—this does not imply that the premise is unaffected by penal-theoretical considerations. In fact, as we shall now see, it is precisely when the premise is considered in the light of standard penal theories that it becomes less obvious that it should be accepted in the first place.

Suppose first that the premise is considered within the framework of some sort of consequentialist theory. As Taylor underlines, the condemnation of offenders may have several desirable consequences. For instance, victims of crime may feel that their self-respect depends upon the recognition by others of their status as victims and on the recognition by others that the offender’s actions were wrong.Footnote11 However, if one wishes to sustain the premise that the sentencing process should serve a condemnatory function, then it is not sufficient to point to its possible implications. Rather, what one will first have to show is that a condemnatory sentencing process actually has such consequences. On this point, Taylor offers no empirical evidence. Second, and more importantly, even if the condemnation of offenders does have desirable consequences, it will also have to be shown that these consequences are not already obtained in other ways. For instance, as some theorists have argued, it may be that the conviction itself serves a condemnatory function. Moreover, as Feinberg and many other theorists hold, the punishment itself may serve a condemnatory function. If this is the case, then it is not obvious that condemnation related to the sentencing process is required to achieve the outlined desirable consequences. At the least, it will have to be shown that an extra benefit follows from the condemnatory element of the sentencing process which is not already secured by the other expressive elements associated with the punishment of offenders. To this effect, Taylor has offered no evidence, either theoretical or empirical. Third, it is important to note that even if one accepts that the expressive function of the sentencing process has desirable consequences and, as we shall see shortly, that these consequences will not be realized if algorithms are attributed a significant role at sentencing, it still does not follow that one should not base sentencing on algorithmic tools. The reason is that there could be other valuable effects associated with the use of such instruments.Footnote12 For instance, as some proponents have suggested, the use of sentencing algorithms could have desirable consequences by leading to a more uniform distribution of punishments, by speeding up case processing in court, or by reducing the costs of the courts’ work.Footnote13 Therefore, in sum, if one subscribes to some sort of consequentialist theory, much more needs to be said in order to show that a condemnatory function of the sentencing process has desirable consequences and that these consequences are not already realized through other parts of the sentencing process. And even if this can be shown, there might still be, all things considered, reasons to maintain a sentencing practice in which algorithms play a significant role. Thus, seen from a consequentialist point of view, much more work needs to be done to show that we need to accept the basic premise underlying Taylor’s argument.

Suppose, alternatively, that the premise concerning the condemnatory function of the sentencing process is analyzed on the basis of a retributivist view of punishment. It is a fact that almost all theorists who have been inspired by Feinberg’s initial thoughts belong to the modern retributivist camp. Various versions of expressivist theories have been developed by retributively minded theorists over the past decades.Footnote14 But what happens to Taylor’s basic premise if it is considered in light of the view that offenders deserve appropriate condemnation? As Taylor rightly notes, Feinberg’s focus was not on the expressive function of the sentencing process, but on the fact that punishments serve an expressive function. It is precisely this view of punishment that has been adopted by modern expressivists. However, if a punishment can serve an expressive function by appropriately condemning an offender, then it becomes less clear why we would also have to accept the premise that the sentencing process itself should serve an expressive function. After all, it seems that the imposition of a punishment on an offender would be sufficient to satisfy the retributivist desiderata. For instance, suppose that one accepts the view held by many expressivists, namely, that the severity of a punishment determines the stringency of the condemnation and, further, that an offender deserves the condemnation that is conveyed by, say, a two-year prison term. In that case, all the criminal court would have to do is impose the two years behind bars. In other words, given a retributivist view, offenders can be given what they deserve even if the sentencing process itself does not serve an expressive function. The only way in which Taylor could maintain the view that the sentencing process itself must contain a condemnatory element would be to contend that the expressive element of a punishment is contingent on the sentencing process itself condemning the offender. It is hard, however, to see why we should accept this assumption. Taylor, for his part, does not provide any arguments to this effect. And even if it is taken for granted that the expressive function of punishment is contingent on the work of the criminal court itself being appropriately expressive, this would still not be sufficient to save Taylor’s premise. For instance, if it is correct—as argued by Shelby and others—that the conviction involves condemnation of the offender, then it is still not clear that we must accept condemnation as a necessary element at sentencing.Footnote15 Therefore, on the grounds of a retributivist theory of punishment, it is far from clear why we should accept the basic premise concerning the expressive function of the sentencing process.

In summary, it might at first sight seem an advantage that Taylor’s basic premise does not hinge on a particular view of punishment. Yet, as argued, this is not the same as holding that penal-theoretical views have no implications for whether the premise should be accepted. What we have seen is that if the premise is considered from a consequentialist point of view, it is not clear that it should be accepted; furthermore, it might be the case that, all things considered, there would still be sufficiently strong reason to maintain the use of algorithms against which Taylor warns. Alternatively, if the premise is considered from a retributivist point of view, it is also unclear why we need to accept it. Still, since the premise that the sentencing process should serve a condemnatory function is a sine qua non for Taylor’s argument against the unconstrained use of algorithms at sentencing, it seems fair to hold that his reasoning does not rest on firm ground.

2. Does Algorithmic Sentencing Undermine Condemnation?

Taylor’s considerations of reasons to be skeptical of the introduction of algorithmic tools at sentencing, even when these tools serve only an advisory role, are based on the view that such tools will undermine the condemnatory function that a sentencing process should serve. As we have just seen, there are reasons to question the plausibility of this view. However, suppose arguendo that this premise is accepted. What precisely does this imply regarding the use of sentencing algorithms?

To illuminate Taylor’s reasoning against the use of algorithms, let us return to his example of the two judges. Suppose that Judge Judy and Judge Joe both say to a particular offender: “what you did was wrong, that is, you should not have acted in this way; therefore, you will be given one year in prison as a response to your misdeed.” Suppose, further, that a statement along these lines possesses the linguistic properties to serve as a condemnation under the right conditions, and also that the offender regards this statement as condemnatory. We can also assume that the statement will be interpreted in this way by other parties and the public at large. Let us finally assume, following Taylor, that Judge Judy has reached the decision to impose a one-year prison term by giving it thorough consideration, whereas Judge Joe reached the corresponding decision by simply following the recommendation of a sentencing algorithm. The question now is: Why should we regard the two sentencing processes as significantly different? That is, more precisely, how can Taylor reach the conclusion that the offender has been genuinely condemned in the case of Judge Judy, but not in the case of Judge Joe? As we shall now see, Taylor seems to have several reasons at hand to support this conclusion. In what follows, I argue, however, that it is far from clear that these reasons can withstand scrutiny.

A first condition which Taylor believes must be satisfied in order “for condemnation to really be carried out” is that it must be sufficiently public “in the sense that the act that involves or implies condemnation must be available for others to observe.”Footnote16 As he rightly emphasizes, such a condition would not be satisfied in cases involving secret judicial decision-making. It is much less clear, however, why this requirement should be seen as relevant for the comparison of the sentencing processes in the examples of the two judges. As the example is presented, there is no difference in the accessibility of the sentencing statements in the two cases. Both were presented as the culmination of the sentencing phase, and both were addressed to the offenders. Moreover, both sentencing statements, we can assume, were equally open to other parties and thus to the public at large. It might perhaps be objected that the important difference between the cases of Judge Judy and Judge Joe does not lie in the openness of the statement of the sentences, but in the fact that Judge Joe has reached his decision by the use of an algorithmic instrument which may itself not be fully transparent, and that it is this part of the process that conflicts with an accessibility requirement. However, to contend that a sentencing statement can serve as a genuine condemnation only if there are no non-transparent elements in the work leading up to it seems far too stringent a requirement. Surely, there will also be elements of the process that led to the sentencing statement in the case of Judge Judy which were not fully available to everyone. In most cases, judges do not present each and every consideration that precedes a specific sentencing decision. Sometimes it may not even be clear to judges themselves precisely how they reached their specific judgement (e.g. unconscious mechanisms may inform a decision). Moreover, it is obvious that no one has any insight whatsoever into the neural processes that underlie a specific sentencing decision. Thus, such a requirement would imply that neither the condemnation in the case of Judge Judy nor that in the case of Judge Joe could amount to a genuine condemnation. What this shows, it should be underlined, is clearly not that there are no strong reasons in favor of setting up requirements regarding the transparency of algorithms used at sentencing. There is currently a comprehensive and indeed important discussion of the significance of algorithmic transparency.Footnote17 Rather, the point here is the more modest one that it is hard to see that a requirement of public access suffices to explain why Judge Judy is genuinely condemning the offender, while Judge Joe is not.Footnote18

Let us therefore turn to one of the other explanations Taylor offers for what constitutes the relevant difference between the two sentencing processes. What he holds is that since Judge Joe has simply surrendered his sentencing decisions to the recommendations of the algorithm, and has not himself thoroughly considered the case, his decisions do not contain the proper “negative attitude” toward the offender.Footnote19 Though I must admit that Taylor’s argument here is not absolutely clear to me, one possible interpretation might be that condemnation is a type of speech act that presupposes a particular negative attitude in the condemner, and that this will be missing in cases where a judge has surrendered the sentencing decisions to an algorithm. If this is the idea, then it would seem to explain why the cases of Judge Judy and Judge Joe are different despite the fact that, from an outside perspective, they appear identical. Thus, the question we are left with is this: Does a condemnatory sentencing process presuppose a negative attitude in the judge in order for it to be genuine? In my view, there are several reasons to be skeptical of this requirement.

First, it is far from obvious that proper condemnation presupposes a negative attitude in the judge. It seems obvious that a judge must articulate some sort of reactive attitude toward an offender. However, at least as the cases of Judge Judy and Judge Joe have been depicted by Taylor, there are no differences in how the sentencing statements are formulated. To contend that some sort of negative attitude must also be felt by the individual judge seems a dubious requirement. There is no doubt that there are many cases of (serious) crime that come before the courts in which even experienced judges may feel some sort of personal negative attitude toward the offender, e.g. anger or disgust. However, it is equally obvious that there are many cases where such attitudes are not genuinely felt. It is certainly not the case that a judge experiences or possesses a negative attitude toward each and every offender they are faced with. In frequently occurring cases involving petty theft, minor traffic offences, or similar non-serious crimes, it is probably more accurate to say that judges largely deal with such cases without significant emotional attachment. Moreover, there are certainly many cases in which a judge’s attitude to an offender is one of pity or compassion, even if this is not necessarily what gets articulated when the sentence is presented. Thus, the requirement has the dubious implication that many cases are currently being handled in an undesirable manner even if all affected parties (the offender and the public) believe that the sentences constitute genuine and appropriate cases of condemnation (recall, again, that from an outside perspective there is no detectable difference between the sentences determined by Judge Judy and Judge Joe).

Second, it might perhaps be suggested that all the earlier objection shows is that there are many standard cases—beyond those that involve sentencing algorithms—that do not live up to the requirement that the sentencing process should be genuinely condemnatory. Even if one insists that this is the case, however, it is still not sufficient to show that something valuable has been lost in the case of Judge Joe and that sentences should not be surrendered to algorithmic tools. The reason is simply that even though a judge surrenders the specific determination of sentences to an algorithm, this does not exclude the possibility that the judge has a negative attitude toward the offender. Even if a judge has not thoroughly considered the determination of the sentence, he or she has, at the guilt phase of the trial, become cognizant of the nature of the crime that was committed. Moreover, an algorithm will usually also have to be fed the relevant information about a case in order to provide a sentence recommendation. Thus, there is no reason to believe that the judge’s knowledge is not (at least in some cases) sufficient to generate a genuine negative attitude toward an offender, and thereby to satisfy the purported requirement that genuine condemnation presupposes a negative attitude in the party doing the condemning. Therefore, in short, it is far from clear that we should accept the requirement that a judge’s genuine condemnation presupposes a genuine negative attitude on their part. And even if one insists that this must be so, there may well be cases where judges form sufficiently negative attitudes on the basis of their more general knowledge of a case even if the sentencing decision is left to an algorithm. Thus, the requirement does not provide us with a sufficient reason to regard the condemnation involved in the case of Judge Joe as less genuine than the condemnation involved in the case of Judge Judy. But if reference to judges’ negative attitudes cannot hold water, are there other reasons that are more convincing?

In attempting to explain the differences in how the two judges handle the sentencing process, Taylor offers yet another argument. In fact, if I understand him correctly, this argument constitutes the main reason that something is lost when algorithmic instruments are introduced at sentencing. What he contends is that because Judge Joe “has decided to surrender his own judgement in each case that comes along, we can plausibly say that he has no control over the content of each individual judgement.”Footnote20 More precisely, the idea is that condemnation presupposes that actions can properly be attributed to the agent in question—that he or she is the author of the sentence—which is not the case when Judge Joe surrenders his sentencing decisions to an algorithm. In other words, the kind of control which Taylor believes judges should possess in order to be in a genuinely condemnatory position should be understood “in the strict sense of having both power and willingness to make their own decisions about sentencing and implement those decisions.”Footnote21 The question therefore is, should we accept this argument? That is, should we believe that this type of control constitutes a necessary condition for the expression of condemnation, and that genuine condemnation is therefore absent in the case of Judge Joe but not in the case of Judge Judy? Though again I must admit that Taylor’s answer to this question is not entirely clear to me, it seems that there are several reasons at play.

One possible answer is that the lack of proper control “leaves judge Joe empty of reactive attitude.”Footnote22 If this is understood not merely as a restatement of the view that condemnation presupposes control, but rather as an explanation of why this is the case—namely, that condemnation presupposes a negative attitude in the judge and that this attitude cannot be generated without proper control—then what really matters is the existence of a negative attitude in the judge. In other words, if this is the case, then the control argument collapses into the attitude argument which we have just considered above and which, for several reasons, turned out to be dubious.

Another possible answer is provided when Taylor contends that “because he [Judge Joe] does not adjust the sentence based on his own reasoning and values, giving a higher sentence in one case rather than another will not express the opinion that the first crime was worse than the second. Indeed, his sentence process on the whole cannot be considered to be condemnatory at all.”Footnote23 Even though this argument has some affinity with the argument about negative attitudes, the two are not identical. What Taylor is suggesting here is that condemnation presupposes precisely what is lacking when a judge is not in control; namely, that the sentence is based on the judge’s “own reasoning and values.” This argument, however, is vulnerable to objections similar to those the attitude argument faced. First, it is far from clear why genuine condemnation presupposes that a sentence is based on a judge’s own “reasoning.” There are some countries in which minor crimes are responded to with sentences of a pre-determined magnitude. In some jurisdictions, for instance, shoplifting goods within a certain value range, or committing a minor traffic offence, results in a specific and rigidly codified fine. One of the reasons for removing genuine judicial discretion in such cases is that it eliminates sentencing disparities for non-serious, frequently occurring crimes. Now, should we say that when a judge imposes such a pre-determined fine on a shoplifter or a minor traffic offender, and does so in a way that seems condemnatory—e.g. by stating “you have committed this crime and this was wrong of you; therefore, we give you a set fine of 100 dollars”—the judge is nevertheless not genuinely expressing condemnation, because the sentence is not a result of his own reasoning? This strikes me as hard to believe. More generally, it indicates that there is reason to doubt that a judge’s own reasoning really constitutes a necessary condition for the expression of (genuine) condemnation. The same holds for the contention that genuine condemnation presupposes that a sentence is determined on the grounds of a judge’s “own values.” For instance, it is certainly not always the case that judges agree on the tariffs that lawmakers have determined for specific types of crime (many judges have, for instance, opposed the three-strikes laws that have been enacted in some US states). Should we really say that a judge who metes out sentences in accordance with these laws does not genuinely condemn the offenders, because the sentences do not reflect the judge’s own values? Once again it seems hard to believe that this condition must be met in order for the sentencing process to serve a condemnatory function.

There is, however, also a final reason provided by Taylor for why control in the suggested sense should be seen as a necessary condition for the expression of genuine condemnation. In fact, if I understand Taylor correctly, this reason constitutes his main argument. What he contends is that control “must at least be exercised by an individual who has standing to speak on behalf of wider political community.”Footnote24 This view has some immediate appeal. If control requires that a judge be what Taylor calls a “public agent,” then it becomes clearer why a lack of control undermines a judge’s possibility of properly condemning offenders. After all, it is not just anybody who can express the proper societal condemnation of an offender. This requires an agent who stands in the right representational position to the community, and it is this position that an appointed (or elected) judge usually possesses.Footnote25 However, what is much less clear is why this particular status is undermined if a judge leaves individual sentencing decisions to an algorithmic tool. Why is it that Judge Joe has lost his standing as a public agent by following sentences recommended by the algorithm?

What Taylor suggests is that public agents are characterized by “their following the preferences or values” of the community.Footnote26 But if the point is that individual sentences must represent the preferences of the public, then it is difficult to see why this requirement could not be satisfied by a judge who surrenders sentencing decisions to an algorithm. Of course, it might be held that this sort of representation would be lost if the algorithm prescribed sentences that varied significantly from those that would have been determined by a judge without the use of an algorithm. However, this is a point concerning the output (i.e. the punishment) of the sentencing process, not the process itself. And as we have assumed, the severity of the sentences is the same in the cases of Judge Judy and Judge Joe. Thus, this cannot suffice to explain why Judge Joe has lost his representational standing. Moreover, judges probably often determine sentences that do not precisely reflect the preferences of the public, and it would seem absurd to hold that, as a result, they have lost their standing as judges (since they are just doing the job they were appointed to do). In short, whether individual sentences accord with the preferences of the public does not depend on whether the sentences have been determined via the process exemplified by Judge Judy or that exemplified by Judge Joe. However, Taylor also provides yet another reason why the use of an algorithm undermines a judge’s standing as a public agent. What he contends is that algorithm-based sentencing is not done in the name of the public, because it is not the public at large that is responsible for the recommendations but those who have designed the algorithm (and, if the algorithm is a machine learning model, no one is genuinely responsible, because the recommendation will be reached by the algorithm itself).Footnote27 Yet even if we accept that the judge, and ultimately the public at large, have not determined the sentences when these are reached as the output of an algorithm, it remains unclear why this undermines the condemnatory element in the sentencing process. To see this, consider the following analogy.

Suppose that a president has decided to issue an apology to members of a minority group of citizens who have been victims of prior state-initiated misconduct. Suppose, further, that it is decided to follow up on this apology by providing those who have suffered from the misdeeds with appropriate compensation. Finally, suppose that it is quite complicated to determine the appropriate amount of compensation, but that the best way to do so is to leave the calculation to an algorithm. Now suppose that the president presents the official apology in a way that satisfies all the formal requirements of an apology. Should we then say that the mere fact that the accompanying compensation has been determined by an algorithm (and perhaps even one that has been developed by a private firm) undermines the apology? Should we hold that even though the president, the victims, and the public at large all believe that a genuine apology has been provided, they have all been fooled and that the speech act of apologizing has been undermined by the fact that the compensation was determined by the algorithm? In my view, this is very hard to believe. There may of course be cases where compensation would threaten an apology. For instance, if one is apologizing for a very serious misdeed and subsequently offers a 10-dollar compensation to the victims, then it might well seem as if the apology was not serious or well-intended after all. However, this has to do with the magnitude of the compensation and not with the question of who has made the calculation. In fact, in order to stay as close as possible to the example of Judge Judy and Judge Joe, we can imagine that officials would have arrived at the very same amount of compensation as the one determined by the algorithm if they had engaged in the complicated calculations themselves. In this case, it is in my view very difficult to see why the apology should be undermined. In this light, it is equally hard to understand why the mere fact that the sentences have been determined by an algorithm should undermine the fact that Judge Joe is condemning offenders when he articulates and thereby imposes the algorithm-based sentences on those who come before his court. At the very least, much more needs to be said to underpin the view that there is a link between the genesis of a sentence and the success or genuineness of the speech act of condemning offenders at the sentencing phase of the criminal trial.

In summary, what we have considered in this section is the question of whether Judge Joe’s surrendering his sentencing decisions to an algorithm undermines the condemnation which Taylor regards as a crucial, or at least a valuable, element of the sentencing process. The first of Taylor’s reasons that was considered was that this would deprive the judge of the opportunity to possess the appropriate negative attitude toward offenders. However, as argued, it was difficult to see why such an attitude is a necessary condition for successful condemnation and, even if this condition is necessary, there was no reason to believe that Judge Joe could not possess such an attitude, even when the specific sentences were surrendered to an algorithm. The second reason that was considered was that the condemnatory element of the sentencing process would be undermined if sentences did not reflect the judge’s own preferences and values. However, as argued above, there was no reason to maintain that such preferences and values must be reflected in the sentencing process in order for it to be genuinely condemnatory. The final argument discussed concerned how the condemnatory element of the sentencing process is undermined because Judge Joe does not possess proper control over the sentences. Several reasons why control should be regarded as crucial were considered, but it was argued that none of them sufficed to establish that genuine condemnation cannot be expressed in the absence of this type of control. Therefore, all in all, it seems to me that the inferential move from the premise that the sentencing process should be condemnatory to the conclusion that this function would be undermined when sentencing algorithms are introduced is premature. At the very least, much more needs to be said to establish that there is reason to be concerned when it comes to the advisory use of algorithms at sentencing.

3. Conclusion

Artificial intelligence is currently infiltrating many parts of society’s decision-making processes. There are strong reasons to expect that algorithmic instruments will also come to play a more prominent role in the work of criminal courts in the future. Taylor’s recent attempt to direct attention to the question of the desirability of implementing algorithms at sentencing should therefore be welcomed. What he suggests, as we have seen, is that condemnation constitutes a valuable element of the sentencing process, and that this element may be undermined if sentencing algorithms are implemented along the lines exemplified in the case of Judge Joe. However, what has been argued in this comment is, first, that it is much less obvious than he assumes that we should accept the basic premise that the sentencing process itself should be condemnatory. At least, it was not clear that this view must be accepted when it is analyzed against the background of traditional theories of punishment. Second, it was argued that even if the premise is nevertheless accepted, it is still not clear that an advisory use of algorithms would threaten to undermine the condemnatory function of the sentencing process. Thus, more needs to be said to establish that it is not only the severity of the sentences themselves, but also the quality of the sentencing process, that will be challenged by the increasing involvement of algorithms in criminal justice practice.

Disclosure Statement

No potential conflict of interest was reported by the author(s).

Additional information

Notes on contributors

Jesper Ryberg

Jesper Ryberg is Professor of Ethics and Philosophy of Law at Roskilde University, Denmark.

Notes

1 This is the case in Malaysia. See Khazanah Research Institute, #NetworkedNation: Navigating Challenges, Realizing Opportunities.

2 This has recently been declared by the Chinese State Council. See Shi, “Artificial Intelligence, Algorithms and Sentencing.”

3 For a recent comprehensive discussion of various ethical aspects of algorithmic sentencing, see Ryberg and Roberts, Sentencing and Artificial Intelligence.

4 Taylor, “Justice by Algorithm.”

5 There are, however, some other discussions which do not focus on the sentencing outcome. For instance, much attention has been directed to the significance of algorithmic transparency. See e.g., Chiao, “Transparency at Sentencing” or Ryberg, “Sentencing and Algorithmic Transparency.”

6 See e.g., Lippert-Rasmussen, “Algorithm-Based Sentencing and Discrimination,” and Davies and Douglas, “Learning to Discriminate.”

7 See Ryberg, “Punishment and Artificial Intelligence”; Ryberg, “Criminal Justice and Artificial Intelligence”; or Ryberg, “Artificial Intelligence at Sentencing.”

8 Another output-related challenge is known as the “input problem.” The problem is that it may sometimes be very difficult to feed an algorithm all the relevant factors of a crime and the criminal. If an inadequate picture of a crime is used as input for an algorithm, this may of course affect the output of the algorithm. However, Taylor’s argument is not based on considerations of challenges at the input phase. For a discussion of the input problem, see e.g., Schwarze and Roberts, “Reconciling Artificial and Human Intelligence.”

9 Taylor, “Justice by Algorithm,” 2.

10 Feinberg, “The Expressive Function of Punishment,” 98.

11 Taylor, “Justice by Algorithm,” 6.

12 As we have seen, Taylor holds that something valuable is lost by the introduction of algorithms at sentencing. However, he does not enter into a discussion of how valuable it is that the sentencing process serve a condemnatory role, or of how this should be weighed against the possible benefits of introducing algorithmic sentencing.

13 For a discussion of the use of algorithms as instruments to reduce sentencing disparity, see e.g., Chiao, “Predicting Proportionality,” and Ryberg, “Sentencing Disparity and Artificial Intelligence.” For a discussion of reduced costs, see Bagaric and Hunter, “Enhancing the Integrity of the Sentencing Process.”

14 See, for instance, Duff and Garland, Punishment or Matravers, Punishment and Political Theory.

15 Shelby, Dark Ghettos. An anonymous reviewer of this paper has suggested that censure may come in right at the start, with criminalizing conduct. If this is correct, then it will of course constitute a further reason to doubt Taylor’s contention that the expressive element will be lost if sentencing decisions are surrendered to an algorithm. Thanks to the reviewer for suggesting this possibility.

16 Taylor, “Justice by Algorithm,” 8.

17 For an overview, see e.g., Ryberg, “Sentencing and Algorithmic Transparency.”

18 Another important issue, one that has sometimes been brought up in relation to the discussion of algorithmic transparency, is accountability. One may ask whether there is a risk that the use of technologies such as sentencing algorithms will make it less clear who should be regarded as accountable for sentencing decisions. However, since Taylor’s argument is not based on considerations of accountability, this issue will not be considered any further here.

19 Taylor, “Justice by Algorithm,” 8.

20 Ibid.

21 Ibid., 9.

22 Ibid., 8.

23 Ibid., 9. As we shall see shortly, Taylor also suggests that a judge should express the preferences and values of the community. It is not clear to me that this view and the statement that a sentence should be based on a judge’s own reasoning and values are entirely consistent.

24 Ibid., 12.

25 For a broader discussion of the moral significance of public opinion for punishment, see Ryberg and Roberts, Popular Punishment and “Introduction: Exploring the Normative Significance.”

26 Taylor, “Justice by Algorithm,” 14.

27 Ibid., 13.

Bibliography

  • Bagaric, Mirko, and Dan Hunter. “Enhancing the Integrity of the Sentencing Process Through the Use of Artificial Intelligence.” In Sentencing and Artificial Intelligence, edited by Jesper Ryberg and Julian V. Roberts, 122–144. New York: Oxford University Press, 2022.
  • Chiao, Vincent. “Transparency at Sentencing: Are Human Judges More Transparent Than Algorithms?” In Sentencing and Artificial Intelligence, edited by Jesper Ryberg and Julian V. Roberts, 34–56. New York: Oxford University Press, 2022.
  • Davies, Benjamin, and Thomas Douglas. “Learning to Discriminate: The Perfect Proxy Problem in Artificial Intelligent Sentencing.” In Sentencing and Artificial Intelligence, edited by Jesper Ryberg and Julian V. Roberts, 97–121. New York: Oxford University Press, 2022.
  • Duff, Antony, and David Garland, eds. Punishment. Oxford: Oxford University Press, 1998.
  • Khazanah Research Institute. #NetworkedNation: Navigating Challenges, Realizing Opportunities of Digital Transformation. Kuala Lumpur: Khazanah Research Institute, 2021.
  • Lippert-Rasmussen, Kasper. “Algorithm-Based Sentencing and Discrimination.” In Sentencing and Artificial Intelligence, edited by Jesper Ryberg and Julian V. Roberts, 74–95. New York: Oxford University Press, 2022.
  • Matravers, Matt, ed. Punishment and Political Theory. Oxford: Hart Publishing, 1999.
  • Ryberg, Jesper. “Sentencing and Algorithmic Transparency.” In Sentencing and Artificial Intelligence, edited by Jesper Ryberg and Julian V. Roberts, 13–33. New York: Oxford University Press, 2022.
  • Ryberg, Jesper. “Sentencing Disparity and Artificial Intelligence.” The Journal of Value Inquiry 57, no. 3 (2023): 47–462.
  • Ryberg, Jesper. “Criminal Justice and Artificial Intelligence: How Should We Assess the Performance of Sentencing Algorithms?” Philosophy & Technology 37 (2024): 1–15.
  • Ryberg, Jesper. “Artificial Intelligence at Sentencing: When Do Algorithms Perform Well Enough to Replace Humans?” AI and Ethics (2024). doi:10.1007/s43681-024-00442-5
  • Ryberg, Jesper. “Punishment and Artificial Intelligence.” In The Oxford Handbook of the Philosophy of Punishment, edited by Jesper Ryberg. New York: Oxford University Press, forthcoming (2024).
  • Ryberg, Jesper, and Julian Roberts, eds. Popular Punishment. New York: Oxford University Press, 2014.
  • Ryberg, Jesper, and Julian Roberts. “Introduction: Exploring the Normative Significance of Public Opinion for State Punishment.” In Popular Punishment, edited by Jesper Ryberg and Julian Roberts, 1–13. New York: Oxford University Press, 2014.
  • Ryberg, Jesper, and Julian Roberts, eds. Sentencing and Artificial Intelligence. New York: Oxford University Press, 2022.
  • Schwarze, Matthis, and Julian Roberts. “Reconciling Artificial and Human Intelligence: Supplementing not Supplanting the Sentencing Judge.” In Sentencing and Artificial Intelligence, edited by Jesper Ryberg and Julian V. Roberts, 206–229. New York: Oxford University Press, 2022.
  • Shelby, Tommie. Dark Ghettos: Injustice, Dissent and Reform. Cambridge, MA: Harvard University Press, 2016.
  • Taylor, Isaac. “Justice by Algorithm: The Limits of AI in Criminal Sentencing.” Criminal Justice Ethics 42, no. 3 (2023): 193–213.