Editorial

The Risks of Subconscious Biases in Drug-Discovery Decision Making

Pages 771-774 | Published online: 06 Jun 2011

It has long been understood by psychologists that people are subject to unconscious biases in their decision making, particularly when the stakes are high and they are faced with complex, uncertain data [1]. This is the situation presented in pharmaceutical R&D, so it is unsurprising that there is evidence of similar effects in the drug-discovery process [2]; after all, scientists are people too! Biases can lead to poor decisions and, hence, to reduced productivity and efficiency, missed opportunities and expensive late-stage failures.

Psychologists describe two cognitive systems that people use when making decisions: System 1, gut instinct, has evolved to provide rapid decisions based on complex information, particularly when a quick response is important (an example of this is the fight or flight response); System 2 provides a more rational decision based on conscious analysis of the available information [3]. It has been shown experimentally that System 1 forms an initial decision when presented with information, which is then modified by System 2 when time permits. However, the correction provided by System 2 is often imperfect, leading to a residual degree of irrationality even after further consideration.

One example of this is the so-called ‘anchoring’ effect: when asked to provide an estimate, people start with an implicit reference point (the anchor) and then modify this to come up with their final decision. In an early demonstration of this effect, Tversky and Kahneman asked participants in their study to estimate the percentage of African nations that are members of the United Nations. Before the participants made their estimate, a ‘wheel of fortune’ was spun to determine a number between 0 and 100, and they were asked to indicate whether the true percentage was greater or less than the number indicated by the wheel. This initial random number had a remarkable effect on the estimates given: the median guess for participants who received ten as a reference value was 25%, while the median for those who received an initial number of 65 from the wheel was 45% [4].

In this article we will briefly describe four important psychological biases in decision making that could have an adverse impact on the drug-discovery process. We will then illustrate the risks they pose at different stages of the pharmaceutical pipeline with practical examples. Finally, we will draw some conclusions and discuss approaches for mitigating these risks.

Four biases

In this section, we will give a brief review of four important biases that have been shown to affect decision making – so-called ‘cognitive’ biases. A more detailed discussion of each of these can be found elsewhere [2] and in the references cited.

Confirmation bias

Confirmation bias describes the tendency for people to look for evidence to support their hypotheses rather than actively seeking evidence that would contradict them. This can lead to premature closure of potential routes of investigation and an insufficiently wide search for solutions. Confirmation bias can lead people to cling to an idea for too long in the face of mounting contradictory evidence.

Poor calibration

People have a tendency towards overconfidence in their ability to estimate, known as calibration bias. To illustrate this, try the following experiment: ask each member of a group to estimate some quantity, for example, the length of the River Thames in kilometers. Instead of a single number, ask them to provide a range of values, as wide as they like, in which they are 90% confident the true value lies. If human confidence were perfectly calibrated, 90% of the ranges provided would contain the correct value. When we tried this as an informal experiment with experienced scientists, only 20% of the estimated ranges contained the correct value!
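The arithmetic of this check is straightforward to reproduce. The sketch below uses invented interval estimates (the River Thames is roughly 346 km long) and computes the fraction of 90%-confidence ranges that actually contain the true value; a well-calibrated group would score close to 0.9.

```python
def coverage(intervals, true_value):
    """Fraction of confidence ranges that actually contain the true value.
    Perfect 90% calibration would give ~0.9; overconfident groups score far less."""
    hits = sum(1 for low, high in intervals if low <= true_value <= high)
    return hits / len(intervals)

# Invented estimates of the Thames' length in km (true value ~346 km)
estimates = [(250, 400), (300, 330), (100, 200), (320, 360), (150, 300),
             (340, 350), (200, 500), (310, 340), (260, 290), (330, 420)]
print(coverage(estimates, 346))  # → 0.5, well below the 90% target
```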

Availability bias

Recent or vivid events tend to affect our decisions disproportionately more than evidence gained over the long term. Newsworthy events have a large impact on our assessment of probabilities. For example, when asked, people rated the chance of death by homicide as approximately 1.6 times greater than that of death from stomach cancer [5]. In fact, in the USA, dying of stomach cancer is around five times more likely than being murdered; clearly, homicide is simply more newsworthy.

This bias is also referred to as ‘neglect of the prior’. The ‘prior’ is the probability of an event in the absence of any new evidence, based on past experience.

Excess focus on certainty

It is natural to pay more attention to avoiding very negative outcomes than moderately negative ones. However, even allowing for the scale of the outcome, we behave as if low probabilities were higher than they really are, betting excessively on long odds (the study of this behavior is known as ‘prospect theory’ [6]). This explains why the same people both buy lottery tickets and take out insurance policies against low-probability events, appearing simultaneously risk seeking and risk averse. People do not maximize (probability × gain) in the way that, logically, they should. Thus, where there are multiple risk factors, people tend to focus excessively on the high-impact risks, even when more could be gained from mitigating higher-probability events of somewhat lower impact.
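A quick expected-value calculation illustrates the inconsistency. The probabilities, prizes and costs below are invented for illustration: in purely monetary terms, both the lottery ticket and the insurance policy have negative expected value, yet the same person will often take both rather than maximizing (probability × gain).

```python
def expected_value(p_win, prize, cost):
    """Rational monetary value of a gamble: probability times gain, minus the stake."""
    return p_win * prize - cost

# Invented illustrative numbers
lottery = expected_value(1e-7, 5_000_000, 2.0)     # ~1-in-10-million jackpot, $2 ticket
insurance = expected_value(1e-3, 100_000, 150.0)   # 0.1% loss event, $150 premium

print(lottery)    # → -1.5
print(insurance)  # → -50.0
```

Both bets lose money on average, so accepting the first is risk seeking and the second risk averse; prospect theory attributes this to overweighting of small probabilities rather than to a consistent attitude to risk.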

Examples of the dangers for drug discovery

In this section we will provide examples of some of the risks posed by these biases in different phases of drug discovery, illustrate these and suggest potential approaches to mitigate them.

Hit-to-lead

In hit-to-lead, the objective should be to explore as wide a range of chemistry as necessary in order to identify at least one high-quality chemical series to progress into lead optimization. One risk here is posed by confirmation bias: when a ‘quick win’ appears possible, it is tempting to progress this hypothesis disproportionately, at the expense of exploring alternative options in parallel. This can lead to missing better alternatives that a wider search would have identified, or to losing competitive advantage through the time taken to resolve issues encountered downstream. This is particularly important because the uncertainty in our assessment of the ‘best’ compounds is likely to be high, due to experimental variability or predictive error in the data on which we base these decisions.

The impact of confirmation bias is illustrated by a project whose goal was to identify an orally bioavailable compound for a target in the central nervous system (discussed in more detail in Segall et al. [7]). Early pharmacokinetic data showed that compounds in one chemical class could have either good oral bioavailability or good penetration into the central nervous system. This suggested to the team that similar compounds might exhibit both desirable properties simultaneously. Therefore, the project returned repeatedly to the same chemistry in the expectation that an optimal compound could be found. Only after progressing almost 200 compounds to detailed in vitro studies and approximately 50 compounds to in vivo pharmacokinetic studies was an alternative strategy pursued in earnest that ultimately led to improved in vivo disposition.

Lead optimization

In lead optimization, a common tendency is to pursue the optimization of a single, high-priority parameter without paying sufficient regard to the effect that this optimization may be having on properties that may also be important to the ultimate success of a project. This unidimensional optimization is tempting because it can be difficult to simultaneously juggle the many, often conflicting, requirements for a successful drug candidate, particularly when the available data has significant uncertainty. However, this is an example of an excess focus on certainty; in practice, multidimensional optimization is often necessary to quickly focus on chemistry with a high overall chance of downstream success.

This challenge may be overcome by using an approach that can help to simultaneously assess all of the available property data when prioritizing compounds, such as the probabilistic scoring approach implemented in the StarDrop™ software [101]. Using this, a project team can define a profile of properties that represent the goals for an ideal compound. For each property, both the desired outcome and the importance of that criterion can be defined, to ensure that the overall score reflects the acceptable trade-offs between different properties. A score is then calculated for each compound, reflecting the likelihood of the compound successfully achieving the overall profile and explicitly taking into account the uncertainty in the underlying data. This helps to provide an objective basis to select compounds with the best chance of exhibiting the required balance of properties.
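We do not describe StarDrop's actual algorithm here, but the general idea of probabilistic multi-parameter scoring can be sketched as follows. Assuming each measured or predicted property carries a normally distributed uncertainty, we can compute the probability that a compound truly meets each criterion and combine these, weighted by importance, into a single score. All property names, thresholds, weights and data below are invented for illustration.

```python
import math

def prob_pass(value, threshold, uncertainty, higher_is_better=True):
    """Probability that the true value meets the threshold, assuming a
    normally distributed error with the given standard deviation."""
    z = (value - threshold) / uncertainty
    p = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # normal CDF at z
    return p if higher_is_better else 1.0 - p

def score(compound, profile):
    """Overall score: product of per-property pass probabilities, each
    softened towards 1 by an importance weight in [0, 1] (0 = ignore)."""
    s = 1.0
    for name, (threshold, higher_is_better, importance) in profile.items():
        value, uncertainty = compound[name]
        p = prob_pass(value, threshold, uncertainty, higher_is_better)
        s *= 1.0 - importance * (1.0 - p)
    return s

# Hypothetical profile: want solubility logS > 1.0 (essential) and
# hERG pIC50 < 5.0 (important but negotiable)
profile = {
    "solubility_logS": (1.0, True, 1.0),
    "hERG_pIC50": (5.0, False, 0.8),
}
compound = {"solubility_logS": (1.4, 0.5), "hERG_pIC50": (4.6, 0.7)}
print(round(score(compound, profile), 2))  # → 0.61
```

Because each probability reflects the uncertainty in the underlying datum, two compounds with the same measured values but different assay error can legitimately receive different scores, which discourages overconfident ranking on noisy data.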

An excessive focus on a small number of factors can have a further impact on screening strategy; namely, that the transferability of findings between closely related compounds may not be sufficiently trusted. In our discussions with both large and small companies about their approach to lead optimization, it has emerged that smaller companies are generally more willing to sample sparsely within the mix of compounds, tests and screens used. Through basic assumptions of chemical similarity, and a ‘Swiss cheese’ approach to allocating lead-series compounds to screens, some of the lower-probability risks can be addressed, whereas a campaign of testing all compounds on a smaller range of assays would give less information.

Late-discovery screening & candidate selection

A key objective for candidate selection is to avoid compounds with a significant risk of toxicity. However, choosing the best screening strategy for a particular toxicity is a challenge, due to the need to balance the cost and reliability of different screening options against the prevalence of the underlying toxicity, the cost of downstream failure and the value of a successful drug. In cardiotoxicity assessment, for example, it is common to combine a high-throughput in vitro predictive method, providing an inexpensive initial risk assessment, with an in vivo screen to confirm that a candidate is suitable for formal good laboratory practice (GLP) toxicity testing. Yet even for such a simple cascade, there are five ways in which these screens could be combined: a double filter, in which only compounds that ‘pass’ both screens are progressed; a ‘sentinel’ approach, in which only compounds that ‘fail’ both screens are rejected; in vitro only; in vivo only; and no screens at all before GLP work is commenced (typical for low-probability risks, such as neurotoxicity). One risk here is that poor calibration makes it difficult to balance the risks of action and inaction based on the results of an individual screen: there is a tendency to trust the results too much and to overreact to faint signs of toxicity from methods with low reliability. Both the in vitro-only and double-filter approaches can reject good compounds and miss opportunities to find a good drug. Availability bias also comes into play here, as recent findings of toxicity tend to have greater influence than long-term evidence.
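The trade-offs between these cascade designs can be explored with a simple Monte Carlo sketch. The sensitivities, specificities and prevalence below are invented for illustration, not measured assay performance; the qualitative pattern is that a double filter rejects many more good compounds (missed opportunities), while a sentinel approach lets more toxic compounds through to late, expensive failure.

```python
import random

def simulate(strategy, prevalence, n=100_000, seed=1,
             sens_vitro=0.7, spec_vitro=0.8, sens_vivo=0.9, spec_vivo=0.9):
    """Monte Carlo of a two-screen cascade; all rates are illustrative assumptions.
    Returns (fraction of toxic compounds progressed, fraction of clean rejected)."""
    rng = random.Random(seed)
    late_failures = missed_good = 0
    for _ in range(n):
        toxic = rng.random() < prevalence
        fail_vitro = rng.random() < (sens_vitro if toxic else 1 - spec_vitro)
        fail_vivo = rng.random() < (sens_vivo if toxic else 1 - spec_vivo)
        if strategy == "double_filter":      # progress only if both screens pass
            progress = not fail_vitro and not fail_vivo
        else:                                # "sentinel": reject only if both fail
            progress = not (fail_vitro and fail_vivo)
        if progress and toxic:
            late_failures += 1
        if not progress and not toxic:
            missed_good += 1
    return late_failures / n, missed_good / n

late_df, missed_df = simulate("double_filter", prevalence=0.05)
late_sn, missed_sn = simulate("sentinel", prevalence=0.05)
print(f"double filter: {late_df:.4f} late failures, {missed_df:.3f} missed good")
print(f"sentinel:      {late_sn:.4f} late failures, {missed_sn:.3f} missed good")
```

With these assumed rates, the double filter rejects roughly a quarter of the clean compounds, while the sentinel design rejects only a couple of percent at the cost of around ten times as many toxic compounds progressing; which trade-off is preferable depends on the prevalence of the toxicity and the relative costs of late failure and lost opportunity.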

One approach to improving decision making in challenging situations such as these is to provide the opportunity to explore different options in a safe, simulated environment, sometimes described as a ‘microworld’. These can provide rapid feedback on the impact of different strategies, something that is difficult to achieve in the real world of drug discovery, where the impact of a strategic decision may not be felt for months, or even years, making it difficult for an individual to learn from their mistakes. An interactive simulation that explores the trade-off between accuracy and cost, in the light of different underlying prevalence in a two-screen cascade, can be found elsewhere [102].

Conclusion

In this article, we have described four different psychological biases that pose risks to good decision making in drug discovery. We have illustrated these with three examples from different stages in the drug-discovery pipeline. These examples are, of course, far from exhaustive but hopefully indicate some of the effects that these insidious biases can have.

How can we overcome these psychological effects? Other fields in which high-stakes decisions must be made despite limited, uncertain information, such as evidence-based medicine, have addressed analogous challenges through a combination of feedback from outcomes, experiential training and software tools that help to guide decisions. Similar approaches, notably interactive visualization of potential outcomes, can be applied to drug discovery to guide good decisions based on complex and uncertain data. This is a low-cost way to improve productivity, efficiency and the opportunity for success.

Financial & competing interests disclosure

Matthew Segall is CEO of Optibrium Ltd, which develops and sells the StarDrop™ software platform. Andrew Chadwick is Principal Consultant for Life Sciences at Tessella plc. The authors have no other relevant affiliations or financial involvement with any organization or entity with a financial interest in or financial conflict with the subject matter or materials discussed in the manuscript apart from those disclosed.

No writing assistance was utilized in the production of this manuscript.


Bibliography

  1. Hammond JS, Keeney RL, Raiffa H. The hidden traps in decision making. Harvard Business Review 76(5), 118–126 (2006).
  2. Chadwick AT, Segall MD. Overcoming psychological barriers to good discovery decisions. Drug Discov. Today 15(13–14), 561–569 (2010).
  3. Sanfey AG, Chang LJ. Of two minds when making a decision. Scientific American, 3 June 2008.
  4. Tversky A, Kahneman D. Judgment under uncertainty: heuristics and biases. Science 185(4157), 1124–1131 (1974).
  5. Lichtenstein S, Slovic P, Fischhoff B, Layman M, Combs B. Judged frequency of lethal events. J. Exp. Psychol. Hum. Learn. 4(6), 551–578 (1978).
  6. Hardman D. Judgment and Decision Making. BPS Blackwell, Oxford, UK (2009).
  7. Segall MD, Beresford AP, Gola JM, Hawksley D, Tarbit MH. Focus on success: using in silico optimization to achieve an optimal balance of properties. Expert Opin. Drug Metab. Toxicol. 2(2), 325–337 (2006).

Websites
