Research Article

On the scientist’s moral luck and wholeheartedness

Pages S12-S24 | Received 28 Feb 2020, Accepted 29 Jul 2020, Published online: 13 Aug 2020

ABSTRACT

Moral luck is real, but living with this knowledge is difficult, in particular because attending to the radical uncertainty of the future implications of one’s work may have a significant impact on research directions or publication plans. The scientist, the engineer, and the technological innovator, as individuals, may experience increased anxiety or come to regret an action performed in their professional capacity. I argue that a recipe for avoiding such ethical doom is a necessary part of the first-person attitude enabling serene research activity in full acceptance of the unpredictable twists of publicly assigned responsibility. By pursuing Bernard Williams’s analogy between the analytic approach of rational ethics and the moral doctrine of religious ethics, I argue in favor of judgment based on trying wholeheartedly as the scientist’s objectively desirable, albeit externally unmeasurable, way of conduct.

Objective and subjective dimensions of moral luck

The concept of moral luck was introduced by Bernard Williams (Citation1976, Citation1981) and critically reevaluated shortly thereafter by Thomas Nagel (Citation1979). Bearing a mark of Kantian ethics, the debate between Williams and Nagel focused on the feasibility and theoretical availability of an analytic account of luck. Williams divided the notion into three types: luck with regard to the result of one’s action (resultant luck), to its course (incidental luck), or to the internal qualities of the acting agent (constitutive luck). In the following years, moral luck, particularly its resultant variety, became a subject of significant controversy and debate in moral philosophy, decision theory, and psychology. Arguments in the literature cover a wide spectrum: some deny the very existence of moral luck or problematize its relevance for moral judgment (Richards Citation1986; Statman Citation1993; Rosebury Citation1995; Wolf Citation2001; Kneer and Machery Citation2019); among those who accept it (e.g. Hanna Citation2014), some claim that the evidence of moral luck in empirical data produces a set of full-fledged paradoxes that stand as fundamental obstacles to the construction of ethical theory (Williams Citation1993; Mele Citation1999).

Even if a broad and systematic study of resultant moral luck in the ethics of science and technology is still missing, it has been observed and analyzed in this area, e.g. by Horner (Citation2010). In technology ethics, luck takes the form of a consequentialist judgment made under radical uncertainty: the agents of change (the scientist, the engineer, the technological innovator) cannot predict the contexts and effects of the future use of their discovery or innovation, which lie resolutely beyond their control or intent, and yet the results of such use may alter moral judgment on their action. If the results are negative, then, unsurprisingly, they will produce a negative impact on the individual whose work enabled them. The person in question may even live to see a reversal of their moral fortune, with Williams’s first-person ‘agent-regret’ as a likely consequence. In other cases, the results may occur after the person’s lifetime: luck may, and probably will, provoke a rewriting of scientific or technological history.

Judgments based on sheer luck may be outrageously unjust (Sand Citation2019). My goal is neither to evaluate moral luck as a normative regime, i.e. to decide whether such judgments should be objectively suppressed or dismissed, nor to draw moral consequences from the intrusion of luck into the ‘parental’ dimension of technological normativity and the mechanism at work behind it, whether causal or symbolic (Grinbaum and Groves Citation2013). Instead, I shall focus on the agent’s conduct, so to say, in the first person. To analyze responsibility in the sphere of science and technology, it is helpful to categorize it as legal or moral, and as individual or collective. Setting legal aspects aside, I shall be concerned not with the collective responsibility that involves an entire scientific community, but with the moral responsibility of individual human agents. The latter includes their own moral judgment on themselves, denoted here as the ‘first-person’ attitude, as well as external evaluations of their individual responsibility.

Typically, the scientist knows that moral luck exists. If she is aware of the existence of moral luck in general, then she should also be aware that luck may influence a future judgment on her own activity and on herself as a person. This knowledge can have an effect – a natural one – on her professional conduct. For example, the scientist may become so deeply concerned with her future reputation that she will shift her research direction or alter publication plans (for a well-known case see Fouchier et al. Citation2012). Recent literature contains numerous testimonies that such forward-looking projections exercise a significant impact on the content and methods of research, e.g.

[…] there was this idea of the scene being set ten years in the future when certain things had happened, which obviously haven’t happened yet, but potentially could have based on research that was going on in Bristol. […] it was good because it got the researchers to actually think beyond their little project of ‘I have got to do these chores after I go and use the microscope and measure these cells’ … to actually thinking wow, holy crap yeah … actually if this goes towards it could have big implications. In this case they went for a therapy where they design these molecules with an adaptive biological mechanism that could release drugs at certain times and control someone's mood […] Actually controlling someone's mood, that's really dangerous […] In the play the scientists said that depression is a problem, we want to solve it but through the act, we actually realise that … yeah it is a problem, but you can't just manipulate someone’s mood … That's manipulating someone's emotions … It was just through that dialogue that you saw a lot of people opening up and realising things they potentially hadn't realised before. (Pansera and Owen Citation2019)

To be sure, an influence of this sort on one’s research or publication trajectory is not causally bound to occur. Concerns about the future are not the only driver of scientific careers, and reputation is not the only force behind the scientific vocation (the search for truth, curiosity, ambition, and professional duty count as well, among others). When this influence occurs, it is often experienced as something ‘good’, as in the example above. But such a positive first-person judgment crucially depends on the scientist retaining control of her research and its implications; it may be deeply unsettling for her to realize that future effects, and hence her responsibility, lie beyond her own control and will be painted in black or white depending on sheer luck, as in the Bristol example quoted above: the manipulation of someone’s mood may turn out to be either beneficial or troublesome and thence lead to retroactive moral judgment on the scientist who enabled it.

As society becomes increasingly aware, if not weary, of the effects of technological progress on the environment and on human values, the future of the scientific profession becomes less straightforward than it may have seemed a few decades ago (despite early warnings, e.g. Florman Citation1976). Individual researchers are more often driven toward a sense of anxiety, if not isolation, by the visible reticence of the public to endorse all scientific developments in such domains as synthetic biology, nanotechnology, or artificial intelligence. I argue that such feelings can be overcome in daily scientific conduct. This will not, however, solve the problem of moral luck in normative ethics (I do not believe in the availability of theoretical solutions). My only intention is to demonstrate a way of scientific conduct that may transcend luck from the first-person point of view. In other words, I would like to insist on the moral relevance of the relationship between daily conduct and the first-person judgment at a future time, as explicated by Kant in The End of All Things:

 … for the practical aims of every human being judging himself (though not for being warranted to judge others) – a preponderant ground for it: for as far as he is acquainted with himself, reason leaves him no other prospect for eternity than that which his conscience opens up for him at the end of this life on the basis of the course of his life as he has led it up to then. (Kant Citation1794, 8:329, my emphasis)

The ethical doom of moral luck

At the core of the problem of the scientist’s responsibility in today’s world lies the paradox of uncertainty. In a work that includes the infamous dictum ‘Science does not think’, Heidegger understood science to be technology-oriented, i.e. as a set of prescriptions whose goal is to optimize a set of performance indicators, thus equating it with a procedural mode of operation (Heidegger Citation1954/Citation1968). One does not have to agree with his position in order to see that, to the extent that technology represents such a procedural regime of making, it is not necessarily accompanied by a reflection on the non-technical (e.g. legal, moral, societal, psychological) meaning of what is being made. This inability to imagine the consequences and the scope of one’s technological action, much discussed in philosophy after World War II – in other words, the inability of scientists, taken collectively, to look beyond the metaphorical heavy doors of the laboratory – constitutes a political failure. Transposed to the individual level, it becomes a moral failure: a version of what Arendt called ‘thoughtlessness’ (Arendt Citation1958). And yet, even if this failure seems to bestow imperative status on the need to comprehend the consequences of technological action upon the world, those consequences cannot be imagined, at least neither completely nor veridically. In Arendt’s own words, in a world driven by technology, ‘uncertainty rather than frailty becomes the decisive character of human affairs’.

Moral luck is an utmost manifestation of this paradox. The scientist appears ethically doomed by virtue of the following syllogism: (a) in technology ethics, uncertain future events are bound to occur; (b) when they occur, they retroactively alter the judgment on one’s conduct as a scientist; hence (c) the scientist as an individual cannot avoid being labeled as ‘thoughtless’ in case the consequences of her work turn out to be harmful. Obviously, the unforeseen and unforeseeable consequences may also happen to be beneficial, making the scientist ‘lucky’ in the positive sense of the word.

A rational way of addressing this problem was proposed in the form of a collective imperative: ‘researchers and research institutions’ should study all possible impacts of their work ‘for present and future generations’ (European Commission Citation2008). It is clear from Arendt’s characterization of the human condition that this prescription is bound to fail. If uncertainty is a paramount trait of the technological world, it cannot be lifted through a better intellectual effort, whether individual or collective; to believe otherwise would imply that factual knowledge about future impacts can be established simply by following the scientific method. In the words of US researchers during the Covid-19 pandemic, ‘the challenging contexts of uncertainty’ prevent science from mandating technological solutions that would have desirable outcomes (Kahn et al. Citation2020).

An alternative path is ‘living with uncertainty’ (Grinbaum and Dupuy Citation2004; Morin Citation2020): it is a daunting task that involves accommodating moral luck in one’s professional conduct. More generally, ‘living with uncertainty’ means that the reality of a future catastrophe, whether global or personal, is inscribed in one’s life, pushing the agent to continuously work toward the goal of avoiding this catastrophe. The difficulty is that it is logically necessary but psychologically untenable to attend continuously to future luck and to moral judgment in general:

Stressing the moral dimension of each engineering task might well serve to diminish the engineer's effectiveness and blunt his enthusiasm. […] An engineer [cannot] do his best work if he is excessively apprehensive and anxious about the ethical value of his every move. (Florman Citation1976)

Like van den Hoven, Lokhorst, and van de Poel many years later (van den Hoven, Lokhorst, and van de Poel Citation2012), Florman emphasizes that engineers may spend some time, but not all their time, thinking about the implications of their work. But there is a further dimension to this uncertainty. Suppose the engineer can foresee a certain risk; then ‘of course, should [the engineer] happen to discover that some aspect of his work will subject the public to a previously unsuspected danger, he will be honor bound to speak up’ (Florman Citation1976). Speaking up against the malevolent use of one’s own inventions may indeed trigger a mechanism that will prevent such use through timely and appropriate change in legislation or by raising public awareness. But it will not erase the influence of luck: even if the engineer speaks up about what he had imagined as possible malevolent use, this would not render him immune to moral luck, because the consequences may turn out to be different and have a retroactive effect on moral judgment. A recent example is provided by the decision of the designers of OpenAI’s GPT-2, a machine learning system for natural language processing, not to release it in open source. The concern was that, had they done so, their technology would have been used ‘to generate deceptive, biased, or abusive language at scale’ (Radford et al. Citation2019). The authors of GPT-2, who acted out of precaution and expected praise for taking such an ethical stance, were in fact criticized for putting academic research at a disadvantage and for seeking to broaden media coverage (Vincent Citation2019). Within months, their work was replicated by a group of students who did not share their ethical concerns (Gokaslan et al. Citation2019). When the authors of GPT-2 finally released their full model several months later, they had to justify themselves, for they felt ridiculed rather than supported by their own professional community and by society at large (Solaiman, Clark, and Brundage Citation2019). Moral luck turned out quite unexpectedly for those who believed they were showing exemplary ethical conduct. When the same group announced the next version of the software, called GPT-3 (Brown et al. Citation2020), ethical issues were mentioned as a concern, but there was no moratorium on publication despite a significantly increased potential for misuse.

For the engineer, posing as a prophet of doom is also a professional risk in its own right: it may invite competition from less ethically minded colleagues or ‘blunt the enthusiasm’ of the entire society for a particular type of technological innovation. But if the engineer gives up and brings this kind of imaginative work to a halt, then he will run the risk of becoming thoughtless, hence deeply immoral, in Arendt’s sense.

Is there a middle way? I do not have a theoretical answer. Moral luck and unpredictable future judgment do indeed occur in science and technology. The idea of making all individuals ‘moral’ runs contrary to empirical findings regarding the division of ethical labor in the scientific community (Politi and Grinbaum Citation2020). It may also turn out to be counterproductive for meeting other ideals of scientific inquiry and for respecting its other values, particularly effectiveness and productivity. In the next section, I will attempt to sketch a particular way of conduct that may alleviate the ethical doom of the scientist and the engineer without entirely removing it. At the risk of repeating myself, a way of conduct is an attitude that applies at the individual level; this is the level of individual values and decisions, where professional conduct cannot be systematically disentangled from the first-person attitude.

To use Florman’s expression, thoughtlessness is what one hopes the engineer will avoid, for the scientist is ‘naturally’ a clever person. Laypeople know that scientists are experts in obscure matters that they themselves do not master. They form a natural expectation that the scientist will be both clever and wise, implying in particular that she will have considered the non-technical implications of her work and shielded society from possible evil consequences. If and when a failure occurs, as at Chernobyl or Fukushima, the loss of trust in the scientist is so dramatic that trust can hardly be restored. Simondon compares this embarrassment with a similar feeling encountered by religious individuals in the past:

A failure of the technical gesture – a rocket that falls near its base or escapes control – creates a collective effect as embarrassing as when, in the case of the Romans, sacred chicken did not want to eat or a sacrificial bull fled the altar carrying away in a terrible wound the axe of the sacrificer. The launches of rockets, the launches of satellites play the same role as the lectisternia and the hecatombs: they, as modern collective sacrifices, answer to the existence of a tension, of a collectively felt anxiety. (Simondon Citation1961, 119)

Embarrassment undermines trust in technology experts in much the same fashion as it destroys trust in public religion. In ancient societies, this could shift ritual practice toward new gods and new cults. Simondon emphasizes the collective aspect: scientists, who are always aware of professional competition and of the need for public approval, feel collective anxiety, possibly enacting major shifts in the relationship between science and society. But the same phenomenon may also shift individual careers. In what follows, I shall address the concerns of the individual for whom it may eventually become unbearable either to live in such a society or to belong to such a professional community.

Transcending moral luck

When Bernard Williams wrote about moral luck in ethics, he kept in mind a comparison with the doctrine of final judgment and predestination in religious thinking, from Stoicism (with its special role for moral indifference) to Christianity (with its retrospective Last Judgment). Like Williams, I shall speak two languages, that of myth and that of rational ethical theory, trying to express in the latter the ethical thinking that was initially established in the former. Like Williams, I submit that ‘we must reject any model of personal practical thought according to which all my projects, purposes, and needs should be made, discursively and at once, considerations for me’ (Williams Citation1993, 200). The demand for total transparency in morality is based on a misunderstanding of rationality. The truthfulness of propositional reasoning does not suffice on its own to build the moral edifice; trust and unreflective commitment are required as well. Even the ultimate rational ethical theory of the twentieth century, John Rawls’s theory of justice, leaves obscure the origin of the consensual preference for primary goods (Rawls Citation1971). In spite of all attempts to derive it from other principles (Rawls Citation2001), the common agreement on the selection of primary goods remains opaque, an enigma for the rational theorist. This demonstrates the methodological interest of mythologically inspired moral arguments akin to those that motivated Williams: unlike discursive analytic considerations, they possess the capacity to address both the transparent and the opaque components of ethics.

Without calling the ancient principle of adiaphora by name, Williams shows the limits of individual moral indifference. He strongly emphasizes that luck is an efficient factor in Christian ethics and transposes this lesson to analytic theory:

The idea that one's whole life can in some such way be rendered immune to luck has perhaps rarely prevailed since (it did not prevail, for instance, in mainstream Christianity), but its place has been taken by the still powerfully influential idea that there is one basic form of value, moral value, which is immune to luck and – in the crucial term of the idea’s most rigorous exponent – ‘unconditioned’. (Williams Citation1981, 20)

Williams marks two important points here. First, he says that it is evident from history that the fundamental idea of Stoic ethics – that one’s moral standing can be made immune to luck through indifference – has not met with significant success in moral thinking. This is perhaps due to its counterintuitive character, for mainstream religious ethics could not historically be dissociated from the very human expressions of emotion and affect. On the layperson’s view, moral fortune is indeed often influenced by events that seem random yet manage to make this person strongly emotional, so that no claim to indifference can pragmatically override such impact.

Second, Christian ethics has developed a doctrine of moral value which, unlike individual life, remains immune to luck. The central strength of this account comes from its claim to purity: only pure value is unconditioned by life events. For Williams, this is the one and only ideal of morality that transcends luck, as it lies ‘beyond any empirical determination’ (Williams Citation1993, 195). In other words, the source of all axiology is not to be found in the world but belongs strictly within the limits of the doctrine itself. In the language of myth, ‘only a divine song will never be judged wrong’ (Catullus 64, 322): human conduct, subject to uncertainty and obviously lacking divine character, is not immune to luck. This concept of pure, divine, or unconditioned moral value may seem to be entirely dependent on its religious foundation; however, it was successfully implemented in rational ethical theory by Kant: ‘Much of the most interesting work in moral philosophy has been of basically Kantian inspiration’ (Williams Citation1981, 1).

As I said at the beginning, the debate between Williams and Nagel was itself of basically Kantian inspiration. Nagel (Citation1979) only hinted at what was later to become, in the work of other authors, a major, non-Kantian (even anti-Kantian) line of critique: Williams’s recourse to psychology, identified with the use of the notion of ‘agent-regret’. Yet the standpoint taken by Williams does not have to be limited to regret: it invites other first-person attitudes as well. I now turn to myth to discover one such attitude and to try to understand it in the language of rational ethics.

Christian ethics asks whether human conduct should be influenced by a future judgment caused by an event that is uncertain from the standpoint of the individual, who bears an obligation to determine, during his lifetime, whether such influence should enter into his moral considerations. A variety of theological and moral conclusions have been drawn from this debate, e.g. Christian ethics condemns the belief that everything is permitted, including debauchery, since final judgment cannot be dependent on earthly conduct; mainstream religious doctrine also condemns the reverse attitude, i.e. that a person who wishes to be saved must spend their entire life in penitence and self-flagellation. Neither of these conducts presents any guarantee that the ultimate judgment will be what the person during his lifetime may wish it to be.

I am concerned here with a different kind of conclusion. It has been asked in the Judeo-Christian tradition whether one ought to strive to know the future and whether such a desire is, in and of itself, good or bad. In the Apocalypse of Esdras the quest for knowledge, presented as an existential condition of the wise narrator, is rebuked by God in a single ‘you cannot’: ‘I labor to comprehend the way of the most High, and to seek out part of his judgment. And he said unto me, Thou canst not. And I said, Wherefore, Lord? whereunto was I born then?’ (2 Esdras 5:35). As the last question demonstrates, the impossibility of knowing provokes real grief and incomprehension, because the narrator must also speak to his fellow humans about things that he cannot know: ‘How may I then speak of these things?’ (2 Esdras 5:39). Similar questions tormented other prophets in the Biblical corpus, most famously Jonah (Jonah 3:9). Nowadays, they are the daily bread of the scientist and the innovator, who cannot predict the consequences of bringing technologies or new scientific knowledge into the world, yet whose ‘fellow humans’ require that they speak as experts about the future impact of their work. If the consequences turn out to be less positive than the forecast, then moral luck will fall on them: like Jonah, to whom such a course of events ‘seemed very wrong’, the scientist and the innovator, too, may ‘become very angry’ (Jonah 4:1). A very angry scientist would certainly not be able to pursue her vocation with equanimity. Yet the only answer available in mythological narrative to anyone who wishes to overcome this condition by obtaining the missing knowledge seems to be ‘you cannot’.

Lessing, who was aware of the possibility of this moral outcome, drew from it a conclusion that brings us closer to analytic theorizing in rational ethics: ‘Why can one not await a coming life as patiently as one awaits a coming day? […] Supposing that there was an art of divining the future, it would be better if we did not learn it’ (Lessing Citation1777). Lessing marks two important points. First, a surprising conclusion in the conditional: if one were to consider an imaginary scenario in which we would know the future, then not knowing it ‘would be better’. The choice is made in an impossible world, yet it is clearly a moral choice. It cannot, however, be straightforwardly transposed to science, for it runs against the basic desire to know that is characteristic of the scientist. The second point helps to restore some interest in Lessing’s position: the renunciation of knowing does not have to be assimilated in its unbearable wholeness, but only as an outcome of ‘patiently await[ing]’ the consequences of one’s action. The central element is a reference to the virtue of patience, intended to balance the ambition to know. If the condition of the scientist is to be constantly torn between ‘either redoubled activity or despair’ (Arendt Citation1958, 293), then patience is arguably the only virtue able to sustain the professional and social stability of the scientist’s first-person way of conduct by preventing her from ‘becom[ing] very angry’.

Williams also appeals to virtue ethics in another form, without referring to patience. He shifts his analysis from the implacable consequentialist perspective to a moral judgment that is focused more on trying than on succeeding. The latter involves ‘a kind of trying that lies beyond the level at which the capacity to try can itself be a matter of luck’ (Williams Citation1993, 195). This centrality of trying places Williams’s basis of morality within the domain of individual virtues relevant to scientific conduct, e.g. trying (with Lessing: patiently trying). It is a departure from the previous focus on the consequences of one’s action, which remain subject to moral luck. Yet in virtue ethics, too, moral conflicts do exist, in the form of mutually incompatible virtues. In this perspective, the virtue of trying may compensate for the well-known moral dangers of another indisputable virtue, success (Dupuy Citation2010).

Trying wholeheartedly

It remains to be seen whether trying (perhaps only of a particular kind) can alleviate the burden of moral luck and save the scientist from ethical doom. Williams himself, as quoted above, limits the scope of trying to the kind enabled by some force beyond luck. I submit that this scope of ‘good’ trying can be specified even further: its chief characteristic should be wholeheartedness.

Wholeheartedness is a complex notion that appears in mythologically informed religious ethics as well as in rational ethical theory. Ramchal (Moshe Luzzato), the author of Mesilat Yesharim, an influential Jewish ethical treatise written in the eighteenth century, defines wholeheartedness (shelemut ha-lev) as purity of motive. It is opposed to two other kinds of conduct: one that is ‘like wavering between two sides’ and one ‘like doing out of habitual role’ (Luzzato Citation1738/Citation2007). The purity of motive introduced by Ramchal should not be confused with the purity of morality discussed by Williams. For the latter, purity stands in strict opposition to empirical phenomena. For the former, purity is a strongly empirical prescription that consists, essentially, in completely attending to one’s action and never performing it automatically or out of habit. An action executed wholeheartedly is the opposite of an action done habitually. When the agent’s entire being is devoted to performing an action, when she is filled by it to the very limits of the self, then her conduct corresponds to the idea of wholeheartedness.

To add a different ethical account of the same concept, consider the so-called ‘third attitude’ prescribed by John Dewey: ‘There is no greater enemy of effective thinking than divided interest. […] A genuine enthusiasm is an attitude that operates as an intellectual force. When a person is absorbed, the subject carries him on’ (Dewey Citation1933, 30). Dewey’s concern was not with transcending moral luck but rather with attaining effective thinking. Nevertheless, he marks a remarkable opposition between the idea of wholeheartedness, with its total absorption of the individual, and various contrary ways of conduct similar to those described by Ramchal.

Trying, if done wholeheartedly, is a way of conduct that is able to transcend moral luck at the individual level. It is perhaps the only path that leads to living with the kind of radical moral uncertainty and anxiety caused by attending to luck. For these reasons, trying wholeheartedly is a value that science as an institution needs to put forward and hold in esteem on a par with, if not higher than, effectiveness and productivity. This value has, of course, appeared under other names in many accounts of the scientist’s individual ethic. Nobel-prize-winning physiologist Ivan Pavlov spoke of ‘consistency’ and ‘passion’ (Pavlov Citation1935); the poet W.H. Auden remarked that ‘Nature rewards perilous leaps’ (Auden Citation1947). Passionate and consistent leaps sometimes turn out to be perilous, but their being leaps, not steps, alters ethical judgment, including consequentialist judgment, by shifting focus to the virtues of the individual actor. I submit that an attitude that enables this shift away from consequentialism should explicitly involve two qualities: (a) trying, for what should be evaluated in individual conduct is not only its success, and (b) wholeheartedness, for trying should require the total absorption of the acting agent as a person.

To be sure, success, including professional success, is still important in life, at the very least for economic reasons; but ethically speaking, it need not be dominant. If moral self-interrogation – ‘the question I have become for myself’ (Augustine of Hippo Citation400, X:33) – is focused on success, then a sudden reversal of luck will likely destroy the individual entirely. But if she values trying over succeeding, then she will not be so blocked by luck: like the Stoic indifferents, success may be re-classified as something relatively marginal for ethics. This reappraisal of the scientist’s individual morality has a chance to help remove anxiety and enable serene professional conduct by focusing her first-person attitude primarily on trying.

Wholeheartedness is necessary if trying alone, with or without success, is to achieve moral serenity. A fundamental reason for this resides, once again, in the purity of motive. Purity confers exclusivity on the researcher’s conduct: trying fills her entire person, and there is no room left for unrelated considerations. Wholeheartedness itself, like any virtue, is not an object of consideration and does not become a subject of reflection: ‘A courageous person does not typically choose acts as being courageous, and it is a notorious truth that a modest person does not act under the title of modesty’ (Williams Citation1993, 10). In particular, then, the purity of motive leaves no room for the anxiety of luck.

At the same time, in spite of this seemingly unreflective attitude, wholeheartedness does not make one cognitively or rationally blind to moral luck. Luck remains present in transparent, analytical reasoning, but now one has also taken into account the opaque component of ethics. By filling the person with enthusiasm, wholeheartedness not only saturates but exceeds the individual’s cognitive capacity as a moral agent.

The reason for the opacity of wholeheartedness with regard to rational choice lies in a disturbing quality of this virtue: it cannot be prescribed. Kant insists that ‘it is a contradiction to command not only that someone should do something but that he should do it with liking’ (Kant Citation1794, 8:338). Like Kant’s first-person ‘liking’, purity of motive cannot be attained via willful decision: in a situation of moral choice, the agent’s free will is helpless to guide him toward wholehearted conduct. In normative ethics it would be meaningless to try to enter wholeheartedness into an analytic moral calculation. This non-normative opacity is arguably essential for the power of wholeheartedness to dispel the ethical doom of moral luck. Such a virtue is never present in the individual as a consequence of deliberate choice and, to use Williams’s term, cannot be made ‘a consideration for me’, but it is still relevant to ethical judgment. Williams says of Gauguin that he ‘simply preferred to live another life’ and ‘perhaps from that preference, his best paintings came’ (Williams Citation1981, 23, my emphasis). The link between ‘simply’ and ‘best’, explicit in this sentence, is at the same time a link between purity of motive and future judgment. Wholeheartedness partakes in a simplicity that no critical consideration can attain. By doing so, it shifts focus from the calculation of consequences to the assessment of personal virtues, and from the transparent to the opaque component of ethics.

The virtue of trying wholeheartedly can also be found in the Greek tradition. In Plato, for instance, it is directly related to the professional quest for knowledge:

It is by means of the examination of each of these objects, comparing one with another – names and definitions, visions and sense-perceptions, – proving them by kindly proofs and employing questionings and answerings that are void of envy – it is by such means, and hardly so, that there bursts out the light of intelligence and reason regarding each object in the mind of him who uses every effort of which mankind is capable. (Plato, Letter VII 344b, my emphasis)

If one fully attends to the Greek original, then one perceives a certain imprecision in the standard English translation (Plato Citation1966): the original is better rendered as ‘then suddenly appears a streak of light: one understands and conceives of the object of study, provided that one has made their effort at full stretch, as much as it is possible for man’. This ‘full stretch’ in Plato’s words corresponds to the purity of motive, the entirety of effort, and the engulfing enthusiasm captured by the notion of wholeheartedness in the Jewish tradition as well as in Dewey’s pragmatism.

To be sure, the wholeness of effort is no moral panacea. There is a wholeheartedness of simpletons, who are full of thoughtlessness. However, the scientist, who is presumed to be trained in critical thinking, rarely falls into this category. But wholeheartedness may be seen as problematic for a different reason, one which also involves the scientist. In one legend, the prophet Jeremiah succeeds in building a perfect human-like golem. It is able to speak; as it talks to Jeremiah, the golem warns him about the confusion he had brought unto the world: ‘When a man meets another man in the street, he will not know whether you made them or God made them’ (Atlan Citation2010). According to the legend, Jeremiah is taken by surprise and even asks the golem what he should now do. One interpretation of his apparent thoughtlessness (‘how come Jeremiah had not thought about this problem?’) involves precisely the wholeness of his effort to make a golem: however supremely wise and intelligent a man he was, he was so filled with enthusiasm that he had not taken the time to consider the consequences. All his mental resources were devoted to trying (note that Jeremiah is not a Faustian character knowingly denying any significance of moral concerns). In the legend, Jeremiah wakes up to the undesired consequences and unmakes the golem. The text does not contain any reference to his psychological state: it appears that Jeremiah felt no Williamsian ‘agent-regret’ or, at the very least, the legend does not demonstrate that such regret had any bearing on his capacity to conduct creative work. Still, Jeremiah’s ‘science’ is paralyzed neither by its consequences nor by a lack of focus on his ‘research’. While scientists of the Jeremiah type are necessary (Politi and Grinbaum Citation2020), they can hardly sustain their activity unless they perform their research like Jeremiah, i.e. wholeheartedly.

This example also provides a putative answer regarding another important gap between wholeheartedness and analytic moral calculation: the sincerity of trying cannot be measured objectively. The wholeness of effort remains opaque to the external observer. Again, this does not leave it outside the field of ethics, for it still serves a purpose that is central from the first-person (Jeremiah’s own) perspective: it enables serene conduct. This conduct may lead to either fortunate or unfortunate consequences, which may be foreseeable or occur at random: moral luck in the consequentialist perspective has not left the stage. What wholeheartedness achieves, though, is that it enables the researcher’s professional and personal advancement.

To conclude, the attitude of trying wholeheartedly should not be used to deny the societal and political responsibility of the scientist, the engineer, and the technological innovator. If the consequences of their action turn out to be harmful in the future, then they will bear a responsibility assigned by the public – and rightly so. Moral luck remains a reality in the practice of law, ethics, and politics of science and technology. It should be noted, however, that not all scientists are involved in ethical thinking in equal measure. Empirically, only a fraction engages in it at the collective level, and the division of ethical labor need not imply its universal appeal for each individual researcher. At the same time, individual scientists who have deliberately dismissed the necessity and the importance of ethical thinking are ‘the only category not to be found in the empirical data’ (Politi and Grinbaum Citation2020, my emphasis). It follows, then, that there exist scientists who pursue their professional work without being paralyzed by moral luck, even if they never claim to be blind to ethical consequences. The question addressed in this essay is how this could be possible. My answer is that at the individual level, i.e. in terms of personal morality, scientists who have fully stretched themselves need not be paralyzed by the knowledge of future ethical doom. While their wholeheartedness will not solve the normative problem of ethical judgment, in daily conduct the entirety of their effort may still let them transcend moral luck.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Notes on contributor

Alexei Grinbaum, Ph.D., HDR, is a physicist and philosopher at LARSIM, the Philosophy of Science Group at CEA-Saclay near Paris. His main interest is in the foundations of quantum theory. He also writes on the ethical and social aspects of emerging technologies, including nanotechnology, synthetic biology, robotics and artificial intelligence. He was coordinator for France of the ‘European Observatory of Nanotechnologies’ and partner in the project ‘Responsible Research and Innovation in Practice’. Grinbaum is a member of the French national ethics committee for digital technologies and AI as well as of the French ethics commission for research in information technology (Cerna). His books include ‘Mécanique des étreintes’ (2014) and ‘Les robots et le mal’ (2019).

Additional information

Funding

This work was supported by Horizon 2020: [grant number 709637].

Bibliography

  • Arendt, H. 1958. The Human Condition. Chicago: University of Chicago Press.
  • Atlan, H. 2010. The Sparks of Randomness, Vol. 1. Stanford: Stanford University Press.
  • Auden, W. H. 1947. The Age of Anxiety. Princeton: Princeton University Press, 2011.
  • Augustine of Hippo. 400/2014. Confessions. English translation by C. J.-B. Hammond (Loeb Classical Library 26). Cambridge, MA: Harvard University Press.
  • Brown, T., B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, et al. 2020. “Language Models are Few-Shot Learners.” arXiv:2005.14165.
  • Dewey, J. 1933. How We Think: A Restatement of the Relation of Reflective Thinking to the Educative Process. Boston: Heath & Co.
  • Dupuy, J.-P. 2010. “The Narratology of Lay Ethics.” NanoEthics 4: 153–170. doi:10.1007/s11569-010-0097-4.
  • European Commission. 2008. Recommendation on ‘A code of conduct for responsible nanosciences and nanotechnologies research’, C(2008) 424, Brussels.
  • Florman, S. 1976. The Existential Pleasures of Engineering. New York: St. Martin’s Press.
  • Fouchier, R. A. M., A. Garcia-Sastre, Y. Kawaoka, W. S. Barclay, N. M. Bouvier, I. H. Brown, I. Capua, et al. 2012. “Pause on Avian Flu Transmission Research.” Science 335: 400–401. doi:10.1126/science.1219412.
  • Gokaslan, A., V. Cohen, E. Pavlick, and S. Tellex. 2019. “OpenGPT-2: We Replicated GPT-2 Because You Can Too.” Blog of 22 August 2019. https://blog.usejournal.com/opengpt-2-we-replicated-gpt-2-because-you-can-too-45e34e6d36dc.
  • Grinbaum, A., and J.-P. Dupuy. 2004. “Living With Uncertainty: Toward a Normative Assessment of Nanotechnology.” Techné 8 (2): 4–25.
  • Grinbaum, A., and C. Groves. 2013. “What is ‘Responsible’ About Responsible Innovation? Understanding the Ethical Issues.” In Responsible Innovation, edited by R. Owen and J. Bessant, 119–142. Chichester, UK: John Wiley & Sons, Ltd.
  • Hanna, N. 2014. “Moral Luck Defended.” Noûs 48: 683–698. doi:10.1111/j.1468-0068.2012.00869.x.
  • Heidegger, M. 1954/1968. What is Called Thinking? New York: Harper & Row.
  • Horner, D. S. 2010. “Moral Luck and Computer Ethics: Gauguin in Cyberspace.” Ethics and Information Technology 12: 299–312. doi:10.1007/s10676-010-9248-0.
  • Kahn, J., and Johns Hopkins Project on Ethics and Governance of Digital Contact Tracing Technologies. 2020. Digital Contact Tracing for Pandemic Response: Ethics and Governance Guidance. Baltimore: Johns Hopkins University Press.
  • Kant, I. 1794. “Das Ende aller Dinge.” Berlinische Monatsschrift 23: 495–522. English translation by A. W. Wood: “The End of All Things.” In I. Kant, Religion and Rational Theology, 217–232. Cambridge: Cambridge University Press, 1996.
  • Kneer, M., and E. Machery. 2019. “No Luck for Moral Luck.” Cognition 182: 331–348. doi:10.1016/j.cognition.2018.09.003.
  • Lessing, G. E. 1777. Womit sich die geoffenbarte Religion am meisten weiß, macht mir sie gerade am verdächtigsten. Cited in: J. Taubes, Occidental Eschatology. Stanford: Stanford University Press, 2009, p. 131.
  • Luzzato, M. C. 1738/2007. The Complete Mesillat Yesharim. Cleveland: Ofeq Institute.
  • Mele, A. 1999. “Ultimate Responsibility and Dumb Luck.” Social Philosophy and Policy 16: 274–293. doi:10.1017/S0265052500002478.
  • Morin, E. 2020. “Nous devons vivre avec l'incertitude.” Le Journal du CNRS, 6 April 2020. https://lejournal.cnrs.fr/articles/edgar-morin-nous-devons-vivre-avec-lincertitude.
  • Nagel, T. 1979. “Moral Luck.” In Mortal Questions, 24–38. Cambridge: Cambridge University Press.
  • Pansera, M., and R. Owen. 2019. Report from National Case Study: United Kingdom. Deliverable 5.1 of the Horizon 2020 project ‘Responsible Research and Innovation in Practice’ (RRI-Practice). https://www.rri-practice.eu/wp-content/uploads/2019/06/RRI-Practice_National_Case_Study_Report_UNITED-KINGDOM.pdf.
  • Pavlov, I. P. 1935. “Letter to Youth.” In Complete Works, Vol. 1, 22–23. Moscow: Izdatelstvo Akademii Nauk, 1951.
  • Plato. 1966. Plato in Twelve Volumes, Vol. 7. Translated by R. G. Bury. Cambridge, MA: Harvard University Press.
  • Politi, V., and A. Grinbaum. 2020. “The Distribution of Ethical Labor in the Scientific Community.” Journal of Responsible Innovation. doi:10.1080/23299460.2020.1724357.
  • Radford, A., J. Wu, D. Amodei, D. Amodei, J. Clark, M. Brundage, and I. Sutskever. 2019. “Better Language Models and Their Implications.” OpenAI blog, February 14, 2019. https://openai.com/blog/better-language-models.
  • Rawls, J. 1971. A Theory of Justice. Cambridge, MA: Harvard University Press.
  • Rawls, J. 2001. Justice as Fairness: A Restatement. Cambridge, MA: Harvard University Press.
  • Richards, N. 1986. “Luck and Desert.” Mind 95: 198–209. doi:10.1093/mind/XCV.378.198.
  • Rosebury, B. 1995. “Moral Responsibility and ‘Moral Luck’.” The Philosophical Review 104: 499–524. doi:10.2307/2185815.
  • Sand, M. 2019. “Did Alexander Fleming Deserve the Nobel Prize?” Science and Engineering Ethics 26 (2): 899–919. doi:10.1007/s11948-019-00149-5.
  • Simondon, G. 1961. “Psychosociologie de la technicité.” In Sur la technique, 25–129. Paris: PUF.
  • Solaiman, I., J. Clark, and M. Brundage. 2019. “GPT-2: 1.5B Release.” OpenAI blog, November 5, 2019. https://openai.com/blog/gpt-2-1-5b-release/.
  • Statman, D. 1993. Moral Luck. Albany: State University of New York Press.
  • van den Hoven, J., G.-J. Lokhorst, and I. van de Poel. 2012. “Engineering and the Problem of Moral Overload.” Science and Engineering Ethics 18 (1): 143–155. doi:10.1007/s11948-011-9277-z.
  • Vincent, J. 2019. “AI Researchers Debate the Ethics of Sharing Potentially Harmful Programs.” The Verge, February 21, 2019. https://www.theverge.com/2019/2/21/18234500/ai-ethics-debate-researchers-harmful-programs-openai.
  • Williams, B. 1976. “Moral Luck.” Aristotelian Society Supplementary Volume 50: 115–152. doi:10.1093/aristoteliansupp/50.1.115.
  • Williams, B. 1981. Moral Luck. Cambridge: Cambridge University Press.
  • Williams, B. 1993. Ethics and the Limits of Philosophy. London: Fontana Press.
  • Wolf, S. 2001. “The Moral of Moral Luck.” Philosophical Exchange 31 (1): 1–19.
