A View on the Past: Historical Roots

Towards a Post-Keynesian Welfare Economics: 35 Years Later

ABSTRACT

I revisit the two arguments in my original paper, published in this journal in 1989. Pleasingly (but perhaps suspiciously) I have no reason to challenge them, but the insights that have come from behavioural economics over the intervening 35 years provide important new sources of support for both. The insights with respect to endogenous preferences add weight to the first argument for why collective/social choice is important (i.e., why welfare economics matters). On the second argument, over the specific egalitarian flavour of post-Keynesian welfare proposals, I now fine-tune the suggestion with the aid of behavioural insights so as to focus on the egalitarian character of the rules (and not the outcomes in society).


1. Introduction

My original article addressed two questions.

  • Do we need to do welfare economics?

  • What is distinctive about a post-Keynesian angle on welfare economics?

The issue at stake in the first question is whether there are good reasons for people to want institutions of collective/social choice. Are there important choices about the state of the world that naturally interest us and that can only be made collectively? The growing influence, perhaps even ascendancy, of the New Right in its various forms made the first question timely in 1989. For example, the arguments of Nozick and Hayek had in different ways cast doubt over the need for forms of collective agency much beyond a minimal State. They were influential and continue to be so. My answer to this first question turned on using Hayek against himself. I used the model of Hayek’s spontaneous order as formally developed by Schotter (1981) and my colleague at the time, Robert Sugden (1986). The answer was distinctly post-Keynesian, I suggested, because the rationale for collective action in this model revolved around, in part, the existence (and significance) of uncertainty in decision making.

I return to this question in the next section. I think my original argument was good and still stands, but I shall revisit the question here to add another dimension to my answer. I do so by drawing on what has been one of the most significant developments in economics over those intervening 35 years: the rise of behavioural economics and, in particular for this first question, its evidence on the endogeneity of preferences.

My answer to the second question also seems to me to pass the test of time rather well. Of course, I am parti pris, and to judge by the small number of citations of my original article, my opinion is not widely held! Nevertheless, if the last 35 years have taught me anything about academic publishing, it is that what gets noticed depends in large measure on serendipity (and, of course, connections); and so sometimes one has to expect that good arguments will fall foul of serendipity!

My original answer to the second question had two premises. One was that in conditions of uncertainty, people tend to be influenced by what others do. I offered two reasons for such dependence. The first comes from the decision environment. The old adage that ‘you can fire one journalist but you cannot fire them all’ is a feature of that environment, and it makes hunting in packs the individually safe way of responding to uncertainty. The second comes from individual psychology: people come to judgements of self-esteem through relative comparison with others. The other premise was that the collectively/socially appropriate response to uncertainty is experimentation: i.e., diversity of response within a population is to be valued. I then showed formally, for three different kinds of decision settings, how the degree of experimentation in a population shrinks as inequality grows in that population. The specific post-Keynesian (because it revolved around the existence of uncertainty in decision making) welfare inference that I drew was that we have an egalitarian collective/social interest in being able to influence the extent of inequality in society.¹

I return to the second question in Section 3 of this paper and add to my original answer, again by drawing on the findings of behavioural economics over the ensuing years. I conclude in Section 4 by suggesting that while the broad thrust of my arguments remains the same, in the sense that a post-Keynesian welfare economics is predisposed to egalitarianism, my understanding of what this means has subtly changed. What matters is whether the rules governing economic and social life embody the egalitarianism of liberty, understood as all being free to act subject to the constraint of the no-harm principle. The latter constraint is particularly important and significant in its effects, but is often overlooked.

2. Why We Need Institutions of Collective Agency

My original argument was based on an (evolutionary game theoretic) analysis of how conventions emerge in many common decision problems where coordination with others matters. Many possible conventions could provide this key coordinating function. In this sense, the actual convention that arises through such an evolutionary process in a society will be arbitrary. Its specificity cannot be traced back to the underlying preferences of the population, and yet its specificities will often be important because they determine how the gains from coordination are divided among the population. The distribution of these gains matters to people for a variety of reasons. As a result, people in any society have reasons to be interested in the character of the conventions they use, but, crucially, they have no individual way of deciding on the specificities of those conventions. To turn the selection of a convention into a matter that is consciously guided by the preferences of the population with respect to how the gains from coordination should be distributed, rather than being the product of historical happenstance, a society needs institutions of collective decision making. To re-make the underpinning analytic point entirely prosaically: we are stuck with the QWERTY keyboard when writing in English because it is an equilibrium in what is a coordination game. There are no real incentives for the typical individual to change to some alternative like Dvorak-Dealey. Of course, the keyboard ‘enthusiast’ may eccentrically adopt an alternative, but the only real way to change the keyboard used by almost all people in English-speaking societies would be through a collective decision to move to some other letter arrangement.
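To make the evolutionary logic concrete, here is a minimal simulation sketch (my own illustration, not the formal model of the original paper) of how an arbitrary convention becomes locked in: agents repeatedly best-respond to the keyboard choices they observe around them, and whichever layout gains an early, accidental edge ends up adopted by nearly everyone. All parameter values are illustrative assumptions.

```python
import random

# Minimal sketch of convention lock-in in a pure coordination game.
# Two equally good conventions; an agent's payoff is simply the share
# of the population using the same layout, so the best response is to
# adopt the majority layout. Illustrative only.

def simulate(n_agents=100, rounds=2000, mutation=0.01, seed=1):
    rng = random.Random(seed)
    layouts = ['QWERTY', 'DVORAK']
    choices = [rng.choice(layouts) for _ in range(n_agents)]
    for _ in range(rounds):
        i = rng.randrange(n_agents)
        if rng.random() < mutation:
            # Occasional individual experimentation ('the enthusiast').
            choices[i] = rng.choice(layouts)
        else:
            share_q = sum(c == 'QWERTY' for c in choices) / n_agents
            choices[i] = 'QWERTY' if share_q >= 0.5 else 'DVORAK'
    return sum(c == 'QWERTY' for c in choices) / n_agents

# The final share ends up close to 1 or close to 0 depending on the seed:
# which convention wins is historical accident, not preference.
print(simulate(seed=1), simulate(seed=2))
```

Which equilibrium emerges depends only on the early random draws; that is the sense in which the convention is arbitrary relative to underlying preferences, and why dislodging it requires a coordinated, i.e., collective, move.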

The general point in this argument is that individually taken actions have emergent properties in any society that is moderately complex. These properties are the products, in the famous phrase, ‘of human action but not the execution of any human design’ (Ferguson 1767, p. 187). It is a longstanding and important observation in economics. For example, Adam Smith famously makes it and uses it in exactly the way that I proposed: that is, we have good reason to want to choose the institutions or conventions that determine the character of these emergent properties; and the vehicle for exercising choice in this case is collective. It has to be collective precisely because the properties arise from individual choice but not design. This is, in part, why Smith writes the Wealth of Nations. He wants to persuade his audience (the ‘us’ of his time) to make the collective choice to have economic outcomes largely determined by the institutions of markets and private property. He is, in effect, addressing his fellow citizens with an argument that is designed to influence their political decision making (i.e., their collective choices).

The emergent property of conventions (or, more generally, of any rules) that concerned me in 1989 was the distribution of the benefits from coordination. Many of us have social preferences regarding how such benefits should be distributed and, in so far as we can come to an agreement over which specific rules best satisfy these social preferences, we should be able to act on these preferences by collectively choosing the rules. It might be argued that agreement in such matters is difficult to come by, but liberal-democratic political institutions for making collective choices are often selected for precisely this reason. In fact, my argument in 1989 did not depend on being able to resolve such disputes because, in the second argument, I provided examples of how the egalitarian character of the rules affects experimentation and therefore how well a society is able to embrace uncertainty in terms of ordinary, selfish (not social) preference satisfaction. In this sense, it anticipated the current public policy interest of the OECD (2015) and the IMF in how inequality can slow growth, albeit on the basis of very different mechanisms linking inequality with growth. Nevertheless, I mention now the possibility that the collective choice of rules could also be the vehicle through which social preferences for justice are satisfied, and that unanimity need not be crucial for this to take place, because the new dimension to the argument that I want to advance now turns on the existence of social preferences more generally.

One of the insights of behavioural economics that has accrued since 1989 is, indeed, that we have social, or other-regarding, preferences with respect to the interests of others; and, crucially, that these preferences also vary with the institutional context in which people make what is otherwise exactly the same decision. In other words, social preferences cannot be taken to be exogenous: they are, in part and sometimes, endogenous to institutional choice. This is extremely important. It expands the reasons for wanting to make collective decisions over what are otherwise emergent properties determined by happenstance: i.e., these rules can influence what (social) preferences we have.

The best example of such endogeneity comes from the experiments that reveal the crowding-out of social preferences in market or market-like settings (see Bowles and Polania-Reyes 2012, for a survey). The ‘classic’ experiment of this kind in economics is Gneezy and Rustichini (2000) on Israeli nurseries. The nurseries in their field experiment typically had a problem with late pick-ups, and so they treated a group of them by introducing a fine for late pick-ups. Paradoxically, the late pick-up problem got worse in the treated group of nurseries, and it remained at the higher level even after the fines were removed. Gneezy and Rustichini’s interpretation of this perverse result is that the fine was regarded as a price and so turned lateness into a market transaction: something that could be bought. Whereas turning up late had hitherto been governed by a social norm of good behaviour in what was a social relationship between parents and carers, the fine turned the relationship into something more akin to a market one where these norms did not apply (or applied with less force). The effect of the resulting decline in the intrinsic, social reason for not turning up late happened in their experiment to outweigh the effect of the new material (extrinsic) reason for not turning up late, so as to produce the overall rise in the practice of late pick-ups.

This phenomenon of crowding-out was already well known in the psychology literature by 1989 (e.g., see Deci 1975) and has since been used by Frey (1997) in public policy discussions in economics to make the specific point revealed by the Gneezy and Rustichini (2000) experiment: i.e., that standard policy initiatives which tweak incentives can backfire. However, it had not been used in 1989, as far as I am aware, to make the argument that I am now advancing. It is not just that an incentive-tweaking public policy intervention may backfire because the intrinsic social reasons for taking the action degrade; it is that we can no longer use preference satisfaction as the criterion or standard by which we judge outcomes under different institutional arrangements. This is because we may not have the same (social) preferences under the different rules.

To see this slightly differently, recall the standard Coase-based explanation of the selection of the rules for governing an exchange: should they be the rules of the ‘market’ or those of the ‘firm’? Within this tradition in institutional economics, we choose the governance arrangement for the exchange on the basis of which minimises transaction costs. The ground for this transaction cost standard is that the lower the transaction costs, the better satisfied the preferences of the parties to the exchange will be. This is because, whatever the ‘in principle’ benefits from the exchange, the transaction costs incurred in facilitating it are an unfortunate and unavoidable deduction from those benefits. Thus, minimising the transaction costs is the same as maximising the net benefits from the exchange. If, however, the governance rules change who we are (i.e., the preferences that we have), then there is no exogenous and independent basis for making the comparison between the two governance rules by using the degree of net preference satisfaction. To put this slightly differently and usefully, the transaction cost minimisation standard misses a key ingredient of the choice when preferences are endogenous to the choice of rules: the existential one of who to be.

This does not just mean that the transactions cost standard cannot sensibly be used, it also means that we have a fresh reason to be concerned with the character of the rules that govern our individual choices because another of the potential emergent properties of those rules is what kinds of people we are. In short, it is not just, as I suggested in 1989, that we might care about the distributional characteristics of the rules; we surely should also care about how those rules impact on who we are.

Of course, it is possible to argue that people have (meta) preferences over what preferences they might want to have; and so sidestep this issue and loop us back to the simpler issue of minimising transaction costs (i.e., there are underlying stable meta-preferences). This seems implausible to me.

Equally, I doubt that, in any discussion over the rules and their emergent properties in this respect, we would agree on what types of people we should be encouraging through our collective choice of rules. Instead, I find much more plausible the thought that, in liberal democratic societies, what we value in these circumstances is that people should come to own their preferences. That is, they should have the resources to be able to reflect on what preferences they wish to hold: they should, in this sense, be autonomous agents, genuinely authors of their own actions. Thus, it is not the character of the preferences that matters but rather the process through which people come to have them, and I suspect we might more readily agree about this aspect of the collective discussion over the choice of the rules. I develop this suggestion in the next section when moving to the second question about the specifics of a post-Keynesian welfare economics.

3. A Post-Keynesian Angle on Welfare Economics

There are two broad and major insights that behavioural economics has given us. One is that many of us have social preferences — and this is what I have used above to build upon the argument for collective action institutions and a vibrant welfare economics discussion. The other insight is that we are often not well described by the rational choice model. Our behaviour does not seem to be captured by a model where we act so as to best satisfy our preferences (whether they be selfish, social, spiteful or of any other kind).

Some of the experiments that seem to reveal this last characteristic are those where decoys or extraneous pieces of information appear to influence our behaviour. For example, people are asked to write down their social security number and then bid for a bottle of wine; apparently those with higher social security numbers bid more than those with low ones (see Ariely 2008). Other experiments that are similarly worrying for the rational choice model suggest that we cognitively process risk in ways that differ from the expected utility maximising version of that model. For example, people are asked to consider two gambles: the $-bet offers a large sum of money with a small probability, and the P-bet a relatively small sum with a high probability. Typically, when asked how much they are willing to pay, most people place a higher value on the $-bet. However, when given a straight choice between the two, most people select the P-bet. In other words, from the revealed preference perspective of rational choice theory, people reverse their preferences.
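A hedged numerical illustration may help (the prizes and probabilities below are my own hypothetical choices, not taken from any particular experiment): a classic $-bet/P-bet pair has very similar expected values, and yet stated prices and direct choices can rank them in opposite ways.

```python
# Hypothetical $-bet / P-bet pair with near-identical expected values.
dollar_bet = {"prize": 100.0, "prob": 0.10}  # $-bet: big prize, low probability
p_bet = {"prize": 12.0, "prob": 0.90}        # P-bet: small prize, high probability

def expected_value(bet):
    return bet["prize"] * bet["prob"]

print(expected_value(dollar_bet))  # 10.0
print(expected_value(p_bet))       # 10.8

# The reversal pattern reported in experiments: willingness-to-pay often
# ranks the $-bet higher (respondents anchor on the prize), while direct
# choice favours the P-bet (respondents anchor on the probability). No
# single stable preference ordering rationalises both rankings at once.
```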

One interpretation of this evidence is that people often rely on quick and easy decision heuristics (sometimes called ‘system 1’, following Kahneman 2003) and this can lead them astray. People still have well-behaved underlying preferences, but the so-called ‘system 2’ of deliberative, reflective decision making on the basis of these preferences has not been engaged in these particular circumstances. ‘System 2’ is a scarce (and tiring) resource and is only deployed grudgingly, when it is really important to do so; so this should not be so surprising. This interpretation has spawned a public policy agenda of ‘nudging’: interventions in the choice architecture that play into system 1 heuristics so as to guide decision making towards the outcomes that will best satisfy that person’s preferences. I do not dispute that some examples of behaviour that is anomalous from a rational choice perspective are sensibly interpreted in this way. However, whenever people make decisions over outcomes that are novel in important respects and/or where genuine uncertainty attaches to what the outcomes might be, then I suggest the better interpretation of this evidence is that people do not have any of the relevant preferences to guide them in the manner of the rational choice model (see Hargreaves Heap 2017, for a lengthier argument). They are instead groping towards an action through some conscious process of possible experimentation and reflection through the accumulation of evidence and argument (like that suggested by Stewart’s (2009) theory of decision making by sampling).

The issue that arises is how to judge policy interventions that change the rules in these circumstances. If people cannot be presumed to have relevant preferences or there is too much uncertainty over what the outcomes might be, then the policy interventions cannot be judged using the usual standard of preference satisfaction.

Before I turn directly to this question of how to make public policy judgements in these circumstances, I should point out that I have slipped something more than a behavioural insight into the argument. The standard consequentialist approach of evaluating policy changes by how well their outcomes satisfy people’s preferences fails for two distinct reasons. One is that we sometimes simply do not have preferences over the outcomes. This is the insight of some evidence from experimental economics and psychology. The other is that the outcomes are genuinely uncertain and we cannot sensibly attach a probability distribution to their likely occurrence, not least because we know there could be outcomes that we cannot even imagine ex ante. While the former is a condition of the experience of novelty, and I see no reason to suppose that this has become any more or less likely over time, there is a Smithian/Hayekian argument for supposing that uncertainty in this sense is increasing. The centrality of the division of labour to economic productivity growth is the Smithian part of this; Hayek’s part is that his ‘knowledge problem’ will grow with the extent of the division of labour.

The more men know, the smaller the share of all that knowledge becomes that any one mind can absorb. The more civilized we become, the more relatively ignorant must each individual be of the facts on which the working of civilization depends. The very division of knowledge increases the necessary ignorance of the individual of most of this knowledge. (Hayek 1960 [2011], p. 78)

One possible answer to the substantive question of what evaluative standard to use would be to follow Hayek: drop any precise consequential standard and instead rely on ‘pattern’ (or general) consequential properties of rules. In effect, this is what Hayek does when he defends market rules. He shows how they create mechanisms for addressing the knowledge problem and illustrates this with his famous example of the ramifications that follow from a change in the tin industry. His argument does not pretend to be a precise analysis of what would happen, because this would not be possible, but he is able to show how market rules will encourage broadly appropriate responses to the developments in the tin industry.

An alternative evaluative standard would come from breaking with consequentialism altogether. Instead, one could focus on the characteristics of the rules. Do the rules embody desirable purely procedural characteristics? Some care is required here not to turn this distinction into a matter of semantics. For example, I am going to suggest that a desirable procedural characteristic of any set of rules is that they embody a commitment to individual liberty (in ways about which I will be more precise in a moment). It would be easy to turn this embodiment of the principle of liberty into a ‘consequence’ of the rules and so retain a ‘consequential’ perspective. This semantic twist is, in principle, fine with me: if others are predisposed to make it, we will talk ‘consequentially’ about everything, even if ‘consequence’ is stretched. This semantic move, though, should not obscure that if the process for generating choice (liberty) is part of a consequence, then such a consequence is very different to the normal consequential standard in economics.

To see this point most clearly, consider the typical social welfare function approach to the evaluation of policies. It presumes we can evaluate the outcome of each policy for each person and then judge each policy by some suitably weighted function of these individual outcomes. This approach is widely used, not least by those who are traditionally concerned with equality when judging policy interventions (e.g., see Rawls’s (1971) use of the maximin criterion to choose between different social arrangements, where the relevant features of each are the consequences for different individuals under one arrangement rather than another). Likewise, those who are egalitarian in an equal opportunity (rather than outcome) sense have principally been concerned with how to measure and ensure an equal starting point: i.e., the state of affairs at a moment in time (e.g., see Dworkin 1981; Roemer 1994; and also see Anderson’s (1999) criticism of this kind of egalitarianism, which is the same as mine). Contrast this with the desirability of having rules that embody individual liberty. We cannot predict what the outcomes, in this sense, of such a set of rules might be. And even if we could initially, and they were subject to a social welfare function assessment and adjustment, then whatever desirable outcomes might initially be created would be bound to be undone by the actions of free individuals as unknowable aspects of the future unfold. As a searing illustration of this insight at the time of writing: who could have anticipated COVID or the Russian invasion of Ukraine when assessing the outcomes of some policy intervention in 2019? This is Nozick’s point about using any patterned or consequential standard for judging outcomes, and it should worry all consequentialists. This does not mean that nothing can be said. One can still ask: do the rules embody the desirable characteristic of individual liberty? (And if they do, then whatever eventuates is fine.)
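For concreteness, here is a minimal sketch of the social welfare function approach being contrasted here (the policies and payoff numbers are invented purely for illustration): each policy is reduced to a vector of individual outcomes and ranked by a weighting rule, such as the utilitarian sum or the Rawlsian maximin.

```python
# Toy social welfare function comparison; all numbers are hypothetical.
policy_a = [10, 10, 10]  # individual outcomes under policy A
policy_b = [4, 12, 20]   # individual outcomes under policy B

def utilitarian(outcomes):
    return sum(outcomes)  # judge by the (unweighted) sum of outcomes

def maximin(outcomes):
    return min(outcomes)  # judge by the worst-off individual (Rawls)

for name, swf in [("utilitarian", utilitarian), ("maximin", maximin)]:
    ranking = sorted({"A": policy_a, "B": policy_b}.items(),
                     key=lambda kv: swf(kv[1]), reverse=True)
    print(name, "ranks:", [k for k, _ in ranking])

# utilitarian ranks: ['B', 'A'] (36 > 30); maximin ranks: ['A', 'B'] (10 > 4).
# Both weighting rules presuppose that the outcome vectors are knowable ex
# ante, which is precisely what genuine uncertainty undermines.
```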

Let me return to the main argument by first taking up why individual liberty is a desirable property of the rules; I will then be more specific about what such rules might concretely entail. The simplest argument is that of J.S. Mill (1859 [1989]) in On Liberty. If we value individuality, and we all aspire for this reason to be individuals, and we take seriously the point that we are not born with settled values/preferences over most things, then we must have the freedom to engage in ‘experiments in living’ and time to reflect on and discuss with others what we want to value (or what preferences to have, if we retain the language of preferences and preference satisfaction). In this way, whatever we take to be specific to ourselves in making us individuals will be explicable to ourselves through our own experiences and our reflections upon them, including the tests which they have passed in discussion with others. I use Mill here because of the point of connection that this brings to the earlier argument. I suggested earlier that we would be unlikely to agree over existential questions of ‘who to be’, but we would be more likely to agree that people should live under conditions where they can sensibly think about this decision for themselves and so come to own the decision of ‘who to be’. In other words, we might agree on the desirability of conditions that promote a sense of individual autonomy. This, in turn and following Mill, suggests we should agree to have rules that embody individual liberty: i.e., that enable experiments in living and discussion and deliberation.

What might this mean concretely?

Several things can be deduced. First, since the value attached to liberty comes from that which we attach to being an individual, there can be no reason for this value not to attach equally to all people in a society. The rules therefore have to treat all people in the same way. This is what is meant when Hayek and others refer to a ‘rule of law’. The rules have to be strictly egalitarian in this sense. Nor is this a casual or innocuous practical observation. Many of the egalitarian changes of the post-war period can be understood as an imperative of having a rule of law in this sense. For example, there can be no reason for having laws that treat, either in theory or in practice, people of different races or different genders differently in what are effectively the same circumstances. To do so is to offend against the presumption of the rule of law, and it undermines the principle of liberty.

Second, individuals have, at some relevant point in their lives, to be endowed with the capabilities for reflection and discussion, since they are not born with these skills; and so they will need to be provided with a suitable education and perhaps some minimal or basic income. Again, the principle of equal treatment in this respect follows from the argument above.

Third, Mill was adamant, again out of respect for everyone’s potential individuality, that liberty should not be understood as a ‘free-for-all’: that is, the rules of liberty do not permit people to do whatever takes their fancy at a moment in time. Instead, liberty requires that people should be free to do whatever they like but only in so far as it does not harm other people.

The only freedom which deserves the name, is that of pursuing our own good in our own way, so long as we do not attempt to deprive others of theirs, or impede their efforts to obtain it. (Mill 1859 [1989], p. 16)

The no-harm principle seems largely to have been forgotten by libertarians when discussing what liberty requires. It follows, though, directly from Mill and, with the possible exception of Nietzsche, it is difficult to find any foundational advocate of liberty who does not place some similar constraint on what people can do under the rules of liberty. For example, Kant places a similar constraint for a similar reason when arguing that rational action cannot involve treating other people as means to your own ends; and it is from this line of argument that the Austrians bar acts of what they refer to as coercion in their constitutions of liberty.² That the no-harm principle, or something like it, has largely been forgotten is a great weakness within the contemporary liberal tradition that places uniquely important value on the conditions of individual liberty. This is because several further specificities of the rules of liberty follow from it.

Fourth, in practice, there will be natural disputes over what constitutes a harm and so a liberal society has to have institutions that are capable of determining what should be taken by all as a ‘harm’. We may not always privately agree with the definitions that come from these institutions but unless there is a public, and so shared, standard, there could not in practice be said to be a functioning no-harm principle since, without such ‘agreed’ definitions of what are ‘harms’, the principle of liberty is nothing more than a free-for-all. In short, a commitment to liberty in the full sense of involving the no-harm principle has to involve a commitment to public definitions of what are harms — otherwise the commitment is meaningless and nugatory. In practice, of course, our legislative and judicial institutions are responsible for this activity and much that has moved liberal societies in an egalitarian direction over the postwar period can be traced to discussions leading to legislation and legal innovation over what constitutes ‘harm’. For example, we no longer inflict corporal punishment on people and we no longer regard the practice of gay and lesbian sex as causing a harm to others.

Fifth, in a dynamic economy, the rules of liberty cannot remain static. This is because, in an evolving economy, a constant set of rules will in many of the new and evolving circumstances cause an unnecessary growth in harms to all. Or, to put this slightly differently, the rules of liberty should change, when the opportunity exists, to remove what are unnecessary harms under the old rules. This is a natural entailment of the no-harm part of the principle of individual liberty, and it should endear the principle to orthodox welfare economics because it could easily be construed as a Pareto improvement criterion (although see the next entailment of the no-harm principle for a serious qualification). The climate emergency offers a good illustration of this use of the no-harm principle. The earth’s atmosphere has the capacity to absorb a certain amount of greenhouse gases without changing the composition of the atmosphere in ways that affect climate. Through most of human history, that threshold level of absorption was never reached. It is only relatively recently that the threshold has been exceeded and greenhouse gas concentrations have been increasing. Thus, there was no need for the rules of liberty to apportion rights to use the atmosphere for most of human history. However, to avoid the wholesale harms of the climate emergency, we must now fine-tune the rules of liberty by giving, in effect, property rights in the atmosphere so that its use is no longer ‘free’ and the harm from its use is recognised. Many of the changes in the definition of property rights (e.g., the innovation of limited liability) can be seen in this light.

Sixth, while legislative and judicial institutions can make concrete and public the ‘harms’ that can be anticipated, so that people can take these harms into account when deciding what to do, many actions have consequences that cannot be anticipated at the time the action is taken. This is a simple consequence of the growing complexity of the economy and society that has led, as suggested earlier, to a growth in the knowledge problem identified by Hayek. One of my favourite illustrations of this comes from comparing what was the new Boeing aeroplane of the 1960s, the Boeing 737, with its new plane of the 2000s, the Boeing 787. Most of the 737 was manufactured by Boeing (the fuselage, wings and bits of the engines). With the 787, Boeing only manufactured the tail fin and the wing-to-body fairing. The other bits came from 12 companies in 8 different countries. Most of those involved in the construction of the 737 might reasonably imagine how their work might interact with (and so possibly cause harms to) fellow workers in Boeing. Those involved in the construction of the 787, in contrast, probably did not even know the national identity, let alone the identity of the employer, of their co-workers involved in its production. In short, how any individual worker’s actions might contribute to the plane as an outcome has become, in the course of 40 years, genuinely opaque. It is not that their actions do not have consequences for others; it is just that these consequences become unimaginable and unforeseeable as the division of labour grows.

Let me change the example. Could the retailer who signed a new contract for the supply of shirts with a producer in China in 1982 have reasonably foreseen that this would cause a harm to domestic producers of shirts? It would have been clear that, in so far as this new Chinese contract came at the expense of a contract with a domestic supplier, it might have caused a harm to that domestic supplier, but the appropriate counterfactual may not be easy to identify. For instance, perhaps the retailer’s contract with a Chinese producer enabled price cuts that increased demand for shirts. If this were the case, then the Chinese contract need not have harmed the domestic producer. Suppose, however, there was such a reduction in demand for the domestic producer of shirts. To know whether this contraction in demand actually caused a harm would require that the retailer anticipate what actions the domestic producer took and how they interacted with other local developments. For instance, would the domestic producer’s workers, if initially injured by being made redundant, nevertheless move into equivalent local jobs with another employer? How could the retailer anticipate the answer to this question when it depends on the decisions of many other agents in the economy: not least the location decisions of other firms over whether to set up a plant, for producing widgets perhaps, or some such, in the area of the domestic producer?

The point again is that the retailer’s decision has likely consequences for others, but they are not, practically speaking, imaginable or foreseeable. Liberals have to take seriously the possibility that harms arise from such spillovers from people’s actions because they are committed to the no-harm principle. The fact that the harms cannot be anticipated is not a reason to ignore them. However, precisely because they cannot be anticipated ex ante, the legal route of attaching a price to such harmful spillovers is not an effective vehicle for a no-harm remedy. There has to be some other way of dealing with these harms. Since an ex ante approach is not possible, the vehicle of compensation for the harms will have to be something like a social insurance scheme that effectively compensates the losers ex post: i.e., those who, it turns out, have been harmed by other people’s actions. The reason it has to be a social insurance type of scheme is that one cannot sensibly trace who might have been responsible for inflicting the harm, for the same reason that the harms cannot be anticipated. So, charging the responsible individuals ex post is not a possibility. The charge has to be general: on everyone. It arises because we know such harms could arise, that they cannot be ignored under the no-harm principle and that they cannot be traced to specific individual actions in a complex and dynamic economy. This is life, and liberals have no business engaging in the kind of magical thinking that they rightly condemn in their populist opponents, whereby the possibility of these harms is overlooked.

What might such a scheme look like? One cannot know, by the nature of the problem, the exact harms: who suffers them and by how much. Instead, at best we have to imagine a scheme that will plausibly compensate anyone who experiences a harm of this unanticipable form.

A public health care system does this for harms to health. The most common form of harm is, however, probably job loss, leading either to re-employment at a lower wage because the worker’s skills have been devalued or to lengthy unemployment. The provision of unemployment benefits and re-training possibilities is one way to combat these harms. But there are aspects of possible harm that neither addresses adequately. For example, if the devalued skills were firm-specific rather than generic, there may be no generic re-training possibilities that could compensate for this locally. Alternatively, the job loss may come at a time in life (e.g., near retirement) when re-training is unlikely to yield materially new job opportunities.

To cover these circumstances, something else is required, and the simplest solution is a high marginal tax rate, as this provides an automatic form of compensation for income losses. For instance, suppose you lose £10k per annum: with a 40 per cent marginal tax rate, your disposable income falls by only £6k, so the tax system has, in effect, automatically given you £4k of compensation for the £10k loss. Of course, high marginal rates create a tension with the incentive to work. They can also create a problem with certain forms of income tax progressivity.
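A minimal sketch of this compensation arithmetic, using just the figures from the example above:

```python
# Automatic compensation through the marginal tax rate: a gross income
# loss is partly absorbed because the tax bill falls with it. Figures
# are those from the example in the text.

def disposable_income_fall(gross_loss, marginal_rate):
    """Fall in disposable income after a gross loss at a given marginal rate."""
    return gross_loss * (1 - marginal_rate)

loss = 10_000  # £10k per annum gross loss
rate = 0.40    # 40 per cent marginal tax rate
fall = disposable_income_fall(loss, rate)
print(fall)         # 6000.0: disposable income falls by only £6k
print(loss - fall)  # 4000.0: the implicit £4k compensation
```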

Most income tax systems create progressivity in two ways. One is by having a tax-free allowance and the other is by having a tax schedule with progressively higher rates at higher incomes. The latter is problematic as far as social insurance is concerned because it means that marginal rates rise with income, and as a result the in-built fiscal compensation for income losses that comes through paying less tax is relatively weak at low income levels. In other words, the automatic compensation works better for high income earners than for low ones. For this reason, it may be better to create progressivity through the tax-free allowance. In practice, to create progressivity equivalent to what currently obtains, this would mean boosting the tax-free allowance and thereafter flattening the tax structure. Since the tax-free allowance has often been likened to a basic income for those in work, it is tempting to dispense with this and all other welfare payments (including unemployment benefits and the state pension) and simply introduce a tax-free basic income for all.

In short, the no-harm principle might be satisfied by a public health care system, re-training opportunities and a tax-free basic income plus a flat tax structure.

Two of these three elements are commonly found in most rich countries. The unusual component is a tax-free basic income and a flat tax structure. For this reason, it may be useful to flesh out what it could look like and do some cross-checks. I will use the UK as an illustration. The current average annual wage before tax is around £31k, and £20k is roughly the annual income when paid the minimum wage.

There is currently a tax-free allowance of £12.5k, and Table 1 shows how the combined income tax and national insurance rate (the latter an income tax by another name) rises thereafter from 33.3 to 48.3 per cent in what are largely two steps. The two combine to determine the average tax rate at each income level. Suppose instead that there is a tax-free basic income of £7k and a flat tax of either 45 or 42 per cent. For the sake of comparing the same incomes from work, I assume that those in work take their basic income as a tax-free income allowance of £15.5k (i.e., 0.45 × £15.5k ≈ £7k).³

Table 1. Current and alternative tax rates (source: Martin Lewis Money, https://www.moneysavingexpert.com/tax-calculator/).
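A reader can approximate the comparison in Table 1 with a short calculation. A sketch follows; the ‘current’ schedule below is a stylised rendering of the rates just described (33.3 per cent above the £12.5k allowance, 48.3 per cent above a step that I have assumed sits at £50k), so its outputs are indicative rather than exact reproductions of the table.

```python
# Stylised average-tax-rate comparison: a current UK-like schedule vs
# the basic-income-plus-flat-tax proposals. The 'current' schedule is an
# approximation of the description in the text; the £50k step is my
# assumption. Illustrative only.

def average_rate(income, allowance, bands):
    """bands: ascending list of (upper_threshold, marginal_rate) pairs."""
    tax, floor = 0.0, allowance
    for threshold, rate in bands:
        if income > floor:
            tax += (min(income, threshold) - floor) * rate
            floor = threshold
    return tax / income

current = lambda y: average_rate(y, 12_500, [(50_000, 0.333), (float("inf"), 0.483)])
flat_45 = lambda y: average_rate(y, 15_500, [(float("inf"), 0.45)])
flat_42 = lambda y: average_rate(y, 15_500, [(float("inf"), 0.42)])

for y in [20_000, 31_000, 50_000, 100_000, 200_000]:
    print(f"£{y:>7,}: current {current(y):5.1%}, "
          f"flat 45% {flat_45(y):5.1%}, flat 42% {flat_42(y):5.1%}")
# Below-average incomes pay less under the flat-tax proposals; the picture
# at the top depends on exactly where the current steps fall.
```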

It is perhaps somewhat surprising that both versions of the flat tax are in many respects more progressive than the current tax system. The only clear counter-example is for those earning £200k under the 42 per cent flat tax, where the average tax rate drops by 3.5 percentage points compared with the status quo. Otherwise, those on incomes below the average pay less tax, and those above the average pay more or the same, under the basic income plus flat tax proposal. At first this may seem a little surprising, but the current system is not so progressive because high income earners benefit from lower taxes on their income up to around £50k, and the tax rate on income above this under the flat tax proposals, at 42 or 45 per cent, is not greatly different from the 43 or 48 per cent that the higher income groups contribute under the current regime.

The fact that the below average income groups pay less while the above average income groups pay more tax also means that the total tax take is unlikely to be very different under one or other version of the basic income plus flat tax proposal. The calibration of the basic income substitute for the array of benefits and the personal tax free allowance has been determined to maintain roughly constant levels of government expenditure. Thus, the basic income plus flat tax proposal should be broadly neutral with respect to the public finances.

The proposed marginal tax rate, at 42–45 per cent, is higher for low income earners than the current rate, but it is not materially different for current high income earners. Should this be a worry? Few now argue, as far as I can tell, that a 43 or 48 per cent marginal rate for those earning over £50k seriously discourages work among the better-off. Since any discouragement should increase as income rises (if the marginal benefit of more income relative to that of more leisure decreases as income increases, as might be supposed from conventional consumer theory if there are diminishing marginal returns from consuming more goods), any discouragement to work from a 43 per cent tax rate will be even smaller at lower income levels. Hence, if 43/48 per cent is not obviously a problem for the better-off now, there does not seem to be a reason for supposing it would be a problem for those earning less than £50k.

In one sense, the basic income substitute for the array of benefits and the personal income tax allowance looks like a casual act of simplification. It is a simplification with well-known advantages (e.g., see Van Parijs 2017). But it is not entirely casual, because it also appeals directly to liberty’s requirement of equal treatment. Of course, a set of rules can apply to all and yet be sensitive to individual differences, with the result that the application of the same rule to all has different effects on differently placed individuals. Nevertheless, when everyone receives the same basic income and is taxed above this level at the same rate of x per cent, the equal treatment is rendered transparent without the need for further argument or agreement. There is no need for a range of further agreements over why and how differently placed individuals should be treated differently, which becomes necessary under a more individually nuanced system of rules.

One final cross-check comes from comparing a basic income of £7k per annum with what the current benefits regime delivers. It matches the basic state pension (and at the moment this is what most pensioners receive). For a family of three, it comes close to matching the cap placed on household receipt of universal credit and other welfare payments, and it is more generous than the cap for a family of four. The current unemployment benefit (the ‘job seeker’s allowance’) is typically not paid for a full year, but if it were, it would be £4k per annum. The ‘personal independence payment’ for those who are disabled can be anything from £1.2k to £8k per annum. Thus, the basic income would not be markedly more or less generous on average. The only clear case where it would be notably less generous is for disabled pensioners, where the combination of current benefits could be close to double the basic income.

4. Conclusion

The nub of my argument 35 years ago was that uncertainty in decision making both created a reason for engaging in welfare economics and a reason for believing that welfare debate and discussion should be biased towards egalitarian collective choices. I endorse those earlier arguments and add to them here principally by acknowledging that uncertainty permeates decision making not just because we cannot know in any probabilistic way what the outcomes of many of our decisions will be, but also because our decisions involve to some degree who we will be. The problem of uncertainty, in other words, attaches both to outcomes and to how we will value them.⁴

As a result of the new twists to my argument, I have argued that we will have to rely more explicitly on procedural criteria (rather than consequential ones) when evaluating our possible collective choices: that is, ‘do the chosen rules, under which we make decisions, embody desirable procedural characteristics?’ rather than ‘do we like the specific outcomes that the rules generate?’. In such a discussion, I have sketched an argument for the desirability of the procedural criterion of advancing individual liberty in the manner of J.S. Mill. Rules should embody the principle of individual freedom; however, liberty should not be understood as a state in which people can do whatever they like. Most arguments for liberty circumscribe this definition by constraining what people can do with something like J.S. Mill’s no-harm principle. With this understanding of what liberty means, I have developed what it might mean for the kinds of rules that we should endorse. Two characteristics emerge. First, the rules have to adjust over time. We should not expect the rules to be static, because in an evolving economy such stasis will be a source of growth in what are unnecessary harms for all. Second, the rules should be egalitarian in three specific ways. One is the rule-of-law egalitarianism whereby the rules treat people equally. Another is the equal initial provision of the requisite capabilities for experimentation, reflection and discussion. Finally, the rules should contain provision for compensating people ex post when they suffer a harm that could not have been anticipated (due to uncertainty) by the people who made the decisions that contributed to the harm. To round out this last argument, I have given a fanciful illustration of how a basic income plus a flat tax might contribute to such a compensation mechanism.

In short, to change the angle of my argument, one way of summarising the thrust of it today is to see it as a dispute with those who see a tension between equality and liberty. When egalitarianism is understood in the ways I have suggested, then liberty and equality walk hand-in-hand.

Disclosure Statement

No potential conflict of interest was reported by the author(s).

Notes

1 You will perhaps see why I am inclined, in retrospect, to commend my early argument in this respect: it anticipated in part the subsequent growth of public policy interest in the influence of inequality on economic outcomes, and the mechanisms responsible for this have since become, as it were, ‘bread and butter’ insights in behavioural economics, though they were not in 1989.

2 For Kant this flows from the presumption that everyone has the capacity for reasoning (they are rational) and thus any rule of reason must be universalisable. This is one version of Kant’s famous maxim; the other is that we should not treat others as means to our ends.

3 Since I have previously invoked Mill to explain why liberty would/should be valued by all, I should note that the ‘tax-free allowance plus flat tax’ proposal I have developed is also that of Mill (see Mill 1848, Book V, Chapter II).

4 In truth, this twist is more Hayekian than it is post-Keynesian as Hayek clearly believes that our objectives are often protean, uncertain and malleable.

References

  • Anderson, E. 1999. ‘What is the Point of Equality?’ Ethics 109: 287–337.
  • Ariely, D. 2008. Predictably Irrational. New York: Harper Collins.
  • Bowles, S., and S. Polania-Reyes. 2012. ‘Economic Incentives and Social Preferences: Substitutes or Complements?’ Journal of Economic Literature 50 (2): 368–425.
  • Deci, E. L. 1975. Intrinsic Motivation. New York: Plenum Press.
  • Dworkin, R. 1981. ‘What Is Equality? II. Equality of Resources.’ Philosophy and Public Affairs 10: 283–345.
  • Ferguson, A. 1767. An Essay on the History of Civil Society. London.
  • Frey, B. 1997. Not Just for the Money: An Economic Theory of Personal Motivation. Cheltenham: Edward Elgar.
  • Gneezy, U., and A. Rustichini. 2000. ‘A Fine is a Price.’ Journal of Legal Studies 29 (1): 1–17.
  • Hargreaves Heap, S. 2017. ‘Behavioural Public Policy: The Constitutional Approach.’ Behavioural Public Policy 1: 252–265.
  • Hayek, F. A. 1960 [2011]. The Constitution of Liberty, Vol. 17 of The Collected Works of F. A. Hayek. Edited by R. Hamowy. Chicago, IL: University of Chicago Press.
  • Kahneman, D. 2003. ‘Maps of Bounded Rationality.’ American Economic Review 93 (5): 1449–1475.
  • Mill, J. S. 1848. Principles of Political Economy. London: John W. Parker.
  • Mill, J. S. 1859 [1989]. On Liberty and Other Writings. Edited by S. Collini. Cambridge: Cambridge University Press.
  • OECD. 2015. In it Together: Why Less Inequality Benefits All. Paris: OECD.
  • Rawls, J. 1971. A Theory of Justice. Cambridge, MA: Harvard University Press.
  • Roemer, J. 1994. Egalitarian Perspectives. Cambridge: Cambridge University Press.
  • Schotter, A. 1981. The Economic Theory of Social Institutions. Cambridge: Cambridge University Press.
  • Stewart, N. 2009. ‘Decision by Sampling: The Role of the Decision Environment in Risky Choice.’ The Quarterly Journal of Experimental Psychology 62 (6): 1041–1062.
  • Sugden, R. 1986. The Economics of Rights, Co-Operation and Welfare. Oxford: Basil Blackwell.
  • Van Parijs, P. 2017. Basic Income: A Radical Proposal for a Free Society and a Sane Economy. Cambridge, MA: Harvard University Press.