
Probabilistic thinking and health risks: An editorial

Pages 1-11 | Received 01 Dec 2012, Accepted 01 Dec 2012, Published online: 14 Apr 2013

Abstract

This special issue is the third in a four-part series, Health Care Through the ‘Lens of Risk', published in 2012 and 2013, whose issues focus on risk categorisation, valuing, expecting and time-framing respectively. The present editorial introduces the issue of probabilistic thinking about health in relation to an interview-based article and five substantial research articles, with further articles to appear subsequently in an annex to the next issue of Health, Risk & Society.

Introduction

This editorial introduces the third of a series of four special issues, Health Care Through the ‘Lens of Risk', which will be published in Health, Risk & Society in 2012 and 2013. The series focuses on health risk-thinking, taking as its starting point the Royal Society's Risk report (1992, p. 2, our numbering) definition of risk as:

the probability (3) that a particular adverse (2) event (1) occurs during a stated period of time (4a), or results from a particular challenge (4b).

The report framed this definition as the grounding for quantitative risk assessment, portrayed as ‘a powerful tool for investigation and reduction of risk'. The four special issues will each offer interpretivist critiques of one of the numbered elements flagged up in the Royal Society report definition, with events recast as categories, adversity as negative value, probabilities as uncertain expectations and time periods as time-frames (Heyman, Shaw, Alaszewski, & Titterton, 2010, p. 21). Although the four identified elements of risk-thinking can be considered separately, their interdependence should not be overlooked. Unless at least one of a set of constructed outcome categories is valued negatively, a risk will not be identified. Probability estimation will be affected by the location of category boundaries, for instance if a broader or narrower definition of mental disorder is specified, and by time-framing, for example if three- or five-year post-intervention survival is considered.

Probability plays a key role in risk-thinking, as the distinctive feature which differentiates the ‘lens of risk' from other ways of considering what might happen such as fatalism or belief in divine will (Zinn, 2008). Its importance is expressed in the sentence following the Royal Society definition:

As a probability in the sense of statistical theory risk obeys all the formal laws of combining probabilities.

The uncharacteristic clumsiness of the follow-on statement in a generally fluent document perhaps indicates unarticulated unease about the assertion that probabilities can be appropriately analysed through rigorous mathematics. Hájek has offered a sceptical counterpoint:

Once upon a time I was an undergraduate majoring in mathematics and statistics. I attended many lectures on probability theory, and my lecturers taught me many nice theorems involving probability … One day I approached one of them after a lecture and asked him … ‘What is probability?' He looked at me like I needed medication, and he told me to go to the philosophy department. (2008, p. 91, quoted author's emphasis)

Ironically, an aura of precision has been wrapped round a concept which is used to encode its opposite. Probability is only applied to outcomes which cannot be accurately foretold at the individual level, for instance in relation to survival after major surgery or the possible presence of a condition flagged up through screening. Furthermore, application of the mathematics of statistics requires the assumption to be made that uncertainty can be recast as randomness. It will be argued below that the metaphor of randomness offers no more than the projection of uncertainty.

Historians assert that the concept of probability was invented in seventeenth-century Western Europe (Bernstein, 1996; Hacking, 1975) and offered a crucial contribution to the development of the science and technology which have transformed human societies and the natural world globally, for better and for worse. Probabilistic thinking provides a means of bringing empirical observational methods to bear on forecasting real-life outcomes which are too complex to be precisely predicted, and is particularly applicable to inexact ‘low sciences' such as medicine (Hacking, 1975, p. 46). Social scientists who wish to understand the role of probabilistic thinking in health care are confronted by a voluminous, indigestible and, to most of us, largely abstruse body of historical debate which has attempted to answer Hájek's question. It is tempting to avoid this issue as too difficult for non-mathematicians, and to assume that social scientists, along with the public, ‘are weak in probabilistic thinking' (Douglas, 1992, p. 57). In relation to the four-ingredient definition of risk offered above, social scientists might consider probabilities, perhaps along with categorisation and time-framing, to belong to the natural sciences, in contrast to valuing, which can only be done by those whom a particular risk may affect (Heyman, Alaszewski, & Brown, 2012). However, such a position would take for granted the epistemological status of scientific probabilistic reasoning, and put ‘humans', that is non-scientists, in an inferior position in which they ‘appear to fail miserably when it comes to rational decision-making' about risks (Breakwell, 2007, p. 79). Although the shortcomings of individuals in probabilistic reasoning have been well documented, accepting its objectivity uncritically legitimates a wide status gap between official science and the beliefs of ‘lay' people, and, as Wynne (1996) argued, gives too much credit to the former and too little to the latter.

Such claims to superiority would be justified if the key assumptions underpinning the mathematics of probability, particularly randomness and independence of events, held true unproblematically. However, as sketched out below, these presuppositions themselves provide no more than heuristics, simplifications which usefully offer partial glimpses of the future, but only at the price of deliberately accepting distortions which generate systematic errors. As also briefly illustrated below, patients sometimes appreciate the limitations inherent in probabilistic reasoning itself, and their responses to its inherent shortcomings matter clinically. Furthermore, well-documented rules of thumb used by ‘people', the de-cultured representatives of human nature constructed in much mainstream psychology (Valsiner, 2012), for example viewing a more mentally available risk as also more probable, stand at one remove from the official probability heuristic. Such rules of thumb can therefore be viewed as heuristics about heuristics. This analytical move adds a further layer of complexity to social scientific consideration of health risk decision-making, but also slightly rescues ‘humans' from the charge of incompetence vis-à-vis science, since the crudity of lay thought processes merely adds another layer to the low forms of science which are forced to rely on probability assessment, faute de mieux. A deconstruction of probabilistic thinking is sketched out below.

A brief outline of key features of probabilistic thinking

The starting point for analysing the attributes of probabilistic thinking should not be the concept itself, but rather the problem which it is designed to deal with. If it is accepted that probability was invented in seventeenth-century Western Europe, then the question arises as to what, if anything, it began at least partly to replace. One way to locate probabilistic thinking in a wider historical and cultural framework is to consider it as a particular variant of thinking contingently (Heyman, 2012). Contingency is perceived whenever an observer considers that one of two or more alternative outcomes might happen, or, retrospectively, might have happened. Since, in nature, unique events merely take place, the view that alternatives could possibly occur can only originate in the mind. Furthermore, for social scientists who concern themselves with explaining organised social action, the answer to the question ‘What might happen?' must be ‘Absolutely anything!', including events which scientists consider impossible. If the Aztecs did carry out human sacrifices in order to appease the Sun God (Meyer, Sherman, & Deeds, 2003), the ostensible rationale for doing so was to deal with a societally mobilising contingency, that of the sun deciding not to rise unless appeased. As such examples illustrate, social groups organise themselves around contingencies which concern them, but do not necessarily attempt to manage what might happen probabilistically. Gross and Shuval (2008, p. 555) concluded from a study of Ultra-Orthodox Jews that their resistance to prenatal screening relates to its underpinning cosmological presupposition that the Universe is ultimately shaped by chance rather than God's will (see also Green's discussion of accidents in her interview published in this special issue). This analysis assumes that the capacity to imagine contingencies is a universal characteristic of ‘humans', whereas both generic understandings of contingency and selections of concerns from the infinity of possibility are culturally mediated.

Probabilistic thinking can be brought to bear on contingencies in various ways, all involving quantification of varying degrees of precision. Personal statements of the form ‘I am 90 per cent sure that I will do X tomorrow' can only be tested directly via introspection, and cannot be falsified, although a history of non-delivery may invoke scepticism. With respect to rare or unique large-scale disasters such as the melt-down of nuclear plants or human-caused catastrophic global warming, observation cannot be utilised in the estimation of probabilities, which must rely on the inevitably conjectural process of modelling. (However, the frequency of nuclear accidents, which modelling purported to show were vanishingly rare, is beginning to take them into the zone where inductive probability estimation becomes feasible; see Note 1.) The complexity of health problems such as cancer or pregnancy complications precludes modelling in individual cases. However, in compensation, the frequency of most adverse health events allows their probabilities to be inductively estimated, both absolutely and in relation to identified potential risk factors.

The rise of probabilistic thinking as a way of understanding and attempting to manage contingencies is associated with the development of science in Western Europe. Probability can be deconstructed as the projection of uncertain expectations onto the world via the metaphor of randomness. This formulation draws upon the philosophical Bayesian approach which views probabilities as expressions of degrees of knowledge limitations (Heyman, Henriksen, & Maughan, 1998; Suppes, 1994; see Note 2). It can best be demonstrated through examples which expose the non-literal status of randomness as an attribute of non-quantum events. Pregnant women who accept non-invasive but inaccurate screening tests for Down's syndrome are given ‘their' probability, for example one chance in 100 of their baby having a chromosomal anomaly. But the chromosomal status of the baby was determined at conception, and could be ascertained with close to certainty (see Austin et al. (2013) in this special issue) by means of a more accurate but also more invasive and risky diagnostic test. Although screening information is presented in terms of chance, the event in question has already happened, and the outcome in question is therefore determined. If personalised medicine had fulfilled the promises made for it, it would have allowed doctors to predict which patients would not benefit from a drug treatment by taking into account genetic markers. Unfortunately, failure to date leaves patients in the arms of a metaphorical chance which offers merely a place-holder for the ignorance remaining when known correlates are allowed for, as expressed more accurately in statistical error terms. Completely accurate prediction would have totally banished probability, and therefore risk, from this domain. Even a reduction in the drug reaction error term would apparently diminish the role of chance. As the above examples illustrate, knowledge improvements per se appear to diminish randomness, and this illusion is underpinned by the culturally sanctioned projection of uncertainty onto events.
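
To make the projective status of such figures concrete, here is a minimal sketch, in Python, of how a screening result combines with an age-related prior to yield ‘their' probability. The test characteristics used are illustrative assumptions, not figures from the studies cited.

```python
# Minimal sketch: a screening result updates an age-related prior into
# 'their' probability via Bayes' theorem. The test characteristics below
# are illustrative assumptions, not figures from the editorial.

def posterior_given_positive(prior, sensitivity, false_positive_rate):
    """P(condition | positive screen) for a binary screening test."""
    true_pos = prior * sensitivity
    false_pos = (1 - prior) * false_positive_rate
    return true_pos / (true_pos + false_pos)

prior = 1 / 150             # an age-related prior of the kind cited
sensitivity = 0.85          # assumed detection rate
false_positive_rate = 0.05  # assumed screen-positive rate among unaffected

print(posterior_given_positive(prior, sensitivity, false_positive_rate))
# ~0.10, roughly 'one chance in 10' -- yet the baby's chromosomal status
# was fixed at conception; the number describes the observer's knowledge,
# not a property of the event.
```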

Probabilistic thinking based on inductive observation of historic frequencies can be dubbed ‘the probability heuristic' to strip away its pretensions. It entails the operation of a whole set of usually unarticulated presuppositions: that members of a constructed event category can be considered equivalent (for example people with Down's syndrome), allowing them to be sensibly counted; that all those to whom an identified and categorised risk factor applies ‘carry' the observed aggregate rate of adverse event occurrence; that variations not predicted by selected risk factors result from randomness; and that the past provides a good guide to the future. For example, future life expectancies can only be estimated by observing the ages at which individuals died in the recent past. Actuarial calculations require some form of extrapolation, itself a leap of faith which can only be tested in retrospect. Calculation must be based on extrapolative assumptions of some sort, perhaps that the average age of death will stay constant, or that the rate of increase observed in most developed societies will continue at the same pace, as illustrated in the sketch below. Individuals can play the system, for example by controlling ‘their' probability of dying before a particular age, although not, unfortunately, their age of death, by moving to an area with lower life expectancy so as to obtain a cheaper annuity!
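
The extrapolation point can be made concrete with a minimal sketch; the cohort figures below are invented for illustration, and the two forecasts correspond to the two assumptions just mentioned.

```python
# Minimal sketch of the 'probability heuristic': inductive estimation from
# historic frequencies plus an extrapolation assumption. The cohort
# figures are invented for illustration.

mean_age_at_death = {2000: 77.0, 2005: 78.1, 2010: 79.2}  # assumed data

# Assumption A: the past average simply carries forward.
constant_forecast = mean_age_at_death[2010]

# Assumption B: the observed rate of increase continues at the same pace.
per_year_gain = (mean_age_at_death[2010] - mean_age_at_death[2000]) / 10
trend_forecast = mean_age_at_death[2010] + per_year_gain * 10  # for 2020

print(constant_forecast, trend_forecast)  # 79.2 vs 81.4
# Both are leaps of faith testable only in retrospect; neither is 'the'
# probability-grounded answer.
```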

Heuristic acceptance of the ecological fallacy, that an aggregate property of a constructed category, rates of occurrence, can be applied to individual members, is built into probabilistic inductive reasoning, and thereby into a vast body of medical knowledge (Rose, 1981). Hunt (2003, p. 176) has noted a resulting ‘constant moving back and forth between individualizing and totalizing logics' in risk analysis. Recognition of the prognostic usefulness of turning a blind eye to the ecological fallacy should not preclude acknowledgement of the irredeemable limitations which its acceptance entails. In relation to the social science of health risk, a particularly interesting issue opened up by this analysis is that of how service-users and professionals themselves variably understand and utilise inductive probabilistic reasoning, and navigate its shortcomings. The limited amount of relevant research which has been undertaken suggests that service-users may recognise multiple probabilities arising from differences in selection of risk factors, and control information in order to manage ‘their' probability of experiencing an adverse event such as Huntington's disease (Leontini, 2010); or may view membership of a higher risk category as itself a physical health problem (Heyman et al., 2006). The following quotation (Heyman & Henriksen, 1998, p. 183) illustrates the active management of ‘personal' probabilities.

And if I had the AFP [serum screening] test, that would let me know when it was a high risk or a low risk. And if it were a low risk, well, I could practically rule out having a Down's baby anyway. So that was brilliant anyway because that give me peace of mind. (Pregnant woman, 37, who had ruled out prenatal diagnostic testing and pregnancy termination)

In this unusual and instructive case, the respondent had ruled out diagnostic testing for Down's syndrome and pregnancy termination, but nevertheless opted for serum screening. In effect, she decided not to leave ‘her' probability at the level associated with maternal age (1:150 at 16 weeks pregnant). The screening test would either reduce or increase ‘her' probability, but she could not know in advance which of these prognostic outcomes would occur. The negative result lowered her chance of experiencing the adverse outcome in question to effectively zero, but a positive probability estimation, that is, one above the threshold at which diagnostic testing was recommended, might have left her to worry for the remainder of her pregnancy. As the quotation also illustrates, probabilities mark out a level of chance with respect to a particular outcome above which a contingency should be treated as a cause of concern, and below which a risk is considered not to ‘exist'.
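
A minimal sketch, assuming illustrative test characteristics, of how one already-determined event attracts several mutually coherent probabilities depending on the information held:

```python
# Minimal sketch of 'multiple probabilities of the same single event':
# before screening, the woman 'carries' the age-related prior; afterwards,
# one of two different posteriors. Test characteristics are assumed.

prior = 1 / 150     # age-related probability at 16 weeks
sensitivity = 0.85  # assumed detection rate
fpr = 0.05          # assumed false positive rate

p_pos = prior * sensitivity + (1 - prior) * fpr
p_neg = 1 - p_pos

posterior_pos = prior * sensitivity / p_pos        # ~0.10
posterior_neg = prior * (1 - sensitivity) / p_neg  # ~0.001

# The same baby, whose status was fixed at conception, attracts three
# different probabilities depending on the information held...
print(prior, posterior_pos, posterior_neg)

# ...yet coherence is preserved: the posteriors average back to the prior.
assert abs(p_pos * posterior_pos + p_neg * posterior_neg - prior) < 1e-12
```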

Risk managers' prognostic task can be more accurately characterised as uncertainty appraisal than as risk assessment (Aven & Guikema, 2011). However, the framing of professional practice within the broader culture of science, and the patient expectations which this framing encourages, work against such a tentative construal. A cultural affinity for the ‘“calculability” of consequences' (Weber, 1978, p. 351) thus lays the foundations for the reification of risks and the tacit collective ‘deletion' of uncertainty (Law, 1995). Official representations of probabilities often fail to acknowledge limitations of the research base from which they are derived, such as the privileging of positive over non-significant findings in the publication process (Rakow, Vincent, Bull, & Harvey, 2005). Further complexities arise in relation to the many options available to professionals for the communication of probabilities to service-users (Bowling & Ebrahim, 2001), for instance as percentages, decimals to a base of one, or illustrative frequencies, and in terms of the chance of an adverse event happening or not happening.
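
As a small illustration of the communication options just listed, the following sketch renders one probability in several formally equivalent framings; the wording is ours, for illustration only.

```python
# Minimal sketch of the many available framings of a single probability
# (formats mentioned in the editorial; the phrasing is illustrative).

p = 1 / 150

framings = {
    "percentage": f"{p:.2%} chance of the event",
    "decimal (base one)": f"{p:.4f}",
    "illustrative frequency": f"about 1 in {round(1 / p)} similar cases",
    "complement framing": f"{1 - p:.2%} chance of the event NOT happening",
}
for style, text in framings.items():
    print(f"{style:>24}: {text}")
# Formally equivalent, yet each framing may invite a different response.
```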

Probability and moral judgement

Douglas (1990) argued that the development of probabilistic thinking was linked to a major change in social relations and institutions, with risk replacing sin as a key organising principle. She proposed that risk and sin perform similar functions, facilitating the allocation of responsibility for past events, especially those such as disasters that threaten the social order; and that both offer means for predicting and managing future threats, providing guides for proper social action (Douglas, 1992). The shift from sin-based institutions to those framed by risk can be seen in the decline of the authority of religious experts who draw on sacred texts, and the corresponding rise of professional experts who use scientific knowledge to measure risks. This shift is associated with a move from social relations underpinned by ascribed social status legitimated by divine authority to their grounding in contracts (Maine, 1861). There appears to have been a major change in the role ascribed to morality. Experts in sin-based institutions explicitly make value judgements about past and future actions, but those who operate in social systems grounded in risk appear to avoid overt moralising. Doctors are not expected to judge the moral character of their patients, but to minimise illness and prevent premature death (Parsons, 1991, p. 289). The claim to expertise of professionals is grounded in their technical knowledge of the probabilities of different outcomes and their ability to apply such knowledge to particular cases. The communication of such knowledge forms the basis of informed consent. On this view of the division of labour between the public and experts, the former calibrate the personal value of outcomes, and set their own levels of risk tolerance, but draw on the latter's probability assessments.

However, the present division of technical and moral work is not as clear-cut as it might appear. Closer scrutiny of actual practice, particularly at points of tension, indicates that there is a moral dimension to the use of probability. Probabilistic thinking contributes to the concealment of the moral dimension of social interactions, and plays an important part in the process through which moral problems are reframed as technical issues of risk management. There are situations in which professionals not only make judgements about probabilities, but also specify desirable outcomes, disregarding any preferences which patients or clients may have. Such situations arise when there is clear societal pressure for certain outcomes and/or patients or clients are judged not to be capable of or willing to make a ‘rational' and acceptable decision. In the present special issue, Heyman et al. (2013) describe the ways in which staff in forensic units are required to prevent the patients they discharge from harming members of the public, backgrounding other risks such as the likelihood of such patients experiencing a poor quality of life in the ‘community'. Stanley (2013) notes the pressure on social workers to prevent parents from harming their children, and argues that social workers deal with their anxieties by building up evidence that minimises uncertainty by confirming that, unless they take action, a child will be put at risk of being seriously harmed. In implementing this precautionary process, professionals downplay the harm that may result from a child's removal from their family. Scamell and Alaszewski (2012) described a similar situation with respect to childbirth. They argue that, as midwives have so much at stake both professionally and personally if adverse events do occur, they focus on dreaded negative outcomes, effectively disregarding both the risks associated with medical interventions and the high overall probability of births being normal.

Even when patients are apparently treated as rational, autonomous decision-makers, it is still possible to identify a moral dimension to the ways in which professionals use probability estimates. For example, Williams, Alderson, and Farsides (2002) concluded that experts involved in prenatal screening felt that parents had the right to information about the probabilities of foetal genetic abnormalities, because such knowledge enabled them to make informed choices, but not to information about the probable gender of a foetus if such information was to be used for gender selection. Similarly, Hallowell (1999), in a study of genetic counselling for hereditary breast/ovarian cancer, found that counselling was not neutral but prescriptive. Counsellors indicated to women that they had a responsibility to manage their risk in particular ways, thereby guiding them to take a ‘responsible' course of action, for example by ignoring risks which the counsellor considered to have too low a probability of occurring to be worth mobilising against.

The articles in this special issue

The five articles included in this special issue are all concerned with the nature of probabilistic thinking in health and social care. The articles focus at least as much on official as on ‘lay' framings of uncertain expectations as probabilities. Austin et al. (2013) distinguish five distinct medical decision-making contexts in which a test or other form of investigation is used to assess probabilities: with symptomatic and with asymptomatic patients for a present condition; for a future health problem; for a supposed risk factor such as high cholesterol; and for a variety of conditions (‘shotgun testing'), for example commercial blood tests for a variety of purported genetic risk markers. Confusion between these contexts can lead to systematic errors in probabilistic inference which the article charts. For example, screening for risk factors will be ‘oversold' if they are conflated with the health problem of concern, and the often high probability that the former will occur without the latter is discounted. Similarly, interpretations of shotgun screening results will mislead if the increase in the likelihood of false alarms resulting from multiple testing is ignored, as sketched below. Such confusions should not be considered merely failures of expertise, because they can be driven by societal processes, and by commercial and professional interests.
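
A minimal sketch of the multiple-testing point, assuming an illustrative per-test false positive rate:

```python
# Minimal sketch of why 'shotgun testing' inflates false alarms: across
# many independent tests, the chance of at least one false positive grows
# quickly. The 5% per-test false positive rate is an assumed illustration.

fpr = 0.05  # assumed false positive rate of each individual test

for k in (1, 10, 20, 50):
    p_any_false_alarm = 1 - (1 - fpr) ** k
    print(f"{k:>3} tests: P(at least one false positive) = {p_any_false_alarm:.2f}")
# 1 test: 0.05; 10 tests: 0.40; 20 tests: 0.64; 50 tests: 0.92 --
# ignoring this inflation is the misleading interpretation the article charts.
```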

The following articles raise questions about probabilistic thinking with respect to serious offending against the person, in relation to child protection (Kearney, 2013; Stanley, 2013) and the discharge of patients from forensic mental health services (Heyman et al., 2013). In each case, the authors aim to pose questions about official probabilistic thinking itself, standing on its head the notion challenged by Douglas (1992) that members of the public are ‘weak' on probability. Kearney applies to child protection practice the widely cited work of Kahneman, Slovic, and Tversky (1982) on the systematic cognitive errors which ‘people' make when reasoning about uncertainty. His starting point is the question of why non-accidental child injury or death is seen as highly probable despite its rarity, a phenomenon found particularly in the UK. Kearney argues that the findings of cognitive behavioural psychology, such as the power of the availability heuristic and the hindsight effect, are rarely applied to those who set child protection policy. He estimates, tellingly, that an ‘eye-watering' 40,000 recommendations have been produced as a result of inquiries into non-accidental child deaths in England since 2003. Following media and political publicity concerning a single, horrific child murder, that of Baby ‘P', there were over 50,000 extra child protection referrals (Munro, 2010, cited in Kearney, 2013). But Kearney estimates that the probability of a child identified by English child protection services being killed or non-accidentally injured is 0.025%, and concludes that few social workers will ever experience such an event directly or indirectly during their working life. Although their improbability makes these events almost impossible to predict, they are treated as ‘real' risks which are entirely avoidable.
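
Kearney's rarity argument can be illustrated with a little arithmetic; the career caseload figure below is an assumption for illustration only.

```python
# Minimal sketch of Kearney's rarity point: at a 0.025% probability per
# identified child, most careers pass without direct experience of such
# an event. The career caseload figure is an assumed illustration.

p = 0.00025            # probability cited by Kearney (0.025%)
career_caseload = 500  # assumed number of children over a working life

p_ever = 1 - (1 - p) ** career_caseload
print(f"P(at least one such event in a career) = {p_ever:.3f}")  # ~0.118
```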

The articles by Stanley (2013) and Heyman et al. (2013) are concerned with actual processes of probability assessment in child protection and forensic mental health services respectively. Stanley depicts a circular, self-reinforcing form of probabilistic reasoning in which the accumulation of records about a particular family is used as an indicator in itself that a child is at increased risk of being harmed. Heyman et al. analyse the obscuring effect of prevention efforts on probabilistic inference, discussed in relation to the discharge of offenders from secure forensic mental health services. The ‘inductive prevention paradox' arises, they argue, whenever measures are taken to avoid or reduce the chance of an adverse event occurring, thereby cutting off the supply of observational evidence needed for risk assessment. The article focuses on the inevitably flawed strategies through which service providers attempt to see beyond the inductive prevention paradox, patients seek to influence assessments of their riskiness, and staff, in turn, try to discount such self-presentational manoeuvres.
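
A minimal simulation sketch of the inductive prevention paradox as we read it; all rates and the selection rule are invented for illustration.

```python
# Minimal sketch of the 'inductive prevention paradox': when prevention
# (continued detention) screens out those assessed as high risk, the
# observational record needed to test the assessment never accumulates.
# All rates below are invented for illustration.

import random
random.seed(0)

patients = [{"assessed_high_risk": random.random() < 0.3} for _ in range(10_000)]
discharged = [p for p in patients if not p["assessed_high_risk"]]

# Reoffending is only ever observed among the discharged...
for p in discharged:
    p["reoffended"] = random.random() < 0.05  # assumed base rate

observed_rate = sum(p["reoffended"] for p in discharged) / len(discharged)
print(f"Observed reoffending rate among the discharged: {observed_rate:.3f}")
# ...so the counterfactual rate among those detained stays unobservable,
# and the risk assessment can never be inductively validated.
```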

The final research article (Young et al., 2013) is concerned with the impact of media coverage on public perceptions of health risks. The authors confirm the findings of other studies that comparable conditions which receive greater media attention are assessed as more serious, more representative of disease, and also, perhaps surprisingly, less probable than those which are given less coverage. Their study contributes to this body of knowledge by comparing perceptions of population and personal probabilities of developing an infection. The authors conclude that, ceteris paribus, a stronger correlation between these probabilities exists for diseases with lower media coverage than for those with higher coverage; and that this difference is reduced but not eliminated by the provision of further information. Thus, media exposure opens up a gap between perceptions of collective and personal probabilities, associated with ‘unrealistic pessimism', that is, a common belief that one is more likely to become infected than an average member of the population.

In relation to probabilistic thinking, these findings can be linked to the constant alternation between totalising and individualising logics (Hunt, 2003) referred to above. This alternation arises from the crude, albeit useful, heuristic epistemological status of probabilistic inductive inference from observed frequencies. Probabilities of this form, central to epidemiology, can only be quantified as aggregated rates for constructed categories, leaving unanswered the question of their applicability to individual category members. In turn, the non-specificity of such probabilities allows societal processes, particularly the unequal distribution of social power, to influence perceptions of even this apparently most factual element of risk-thinking (see the interview with Peter Taylor-Gooby (Heyman & Brown, 2013) in this special issue).

Conclusion

This special issue directs attention towards the place of probabilistic thinking in the ‘lens of risk'. Induction from observed frequencies allows uncertainty about individual outcomes to be reduced, but only at the price of provisionally accepting the ecological fallacy. It must be assumed that all members of a category ‘carry' the rate of occurrence observed across it, and that intra-category variations, such as which individuals do or do not develop a disease, result from random processes. The projective status of this presumed randomness can be easily demonstrated through consideration of examples where further information reduces or eliminates such unpredictability. The projection of uncertainty onto randomness generates superficially bizarre consequences such as multiple probabilities of the same single event and the possibility of controlling ‘personal' probabilities through information management.

The research articles included in the special issue address a range of features of probabilistic thinking in health contexts, including non-obvious informational differences between types of screening, the distortions which can arise when probabilities are assessed in sensitive areas such as child protection and forensic mental health care, and the impact of the media on relationships between perceptions of personal and population probabilities.

Notes

1.  Perrow (1984) pointed out in relation to the Three Mile Island accident that even a single occurrence of an event which experts predicted to be highly improbable alters collective perceptions of its probability of reoccurrence. Such thinking can be readily explained as use of the availability heuristic (Tversky & Kahneman, 1973), because prior probabilities cannot be validly inferred from the occurrence of single events.

2.  An important implication of a philosophical Bayesian approach is that the distinction between probability and uncertainty is not epistemologically grounded in the difference between the randomness of some phenomena as against ignorance about others (Winkler, 1996), the position taken by Hacking (1975, p. 13), who distinguishes ‘aleatory' from ‘epistemic' probabilities. Nor can it be accounted for in terms of probabilities, but not uncertainties, being quantifiable through the availability of a history of observations, a currently popular idea derived from Knight (1921). An alternative pragmatic rendition of the everyday uncertainty/probability distinction would bring in the attitude of the observer. Uncertainty language justifies delay in decision-making, e.g. in relation to global warming, whilst the language of probability contains the implicit imperative to decide one way or the other (Heyman et al., 2010, pp. 87–89).

References

  • Austin, L. C. 2013. The structure of medical choices: Uncertainty, probabilities and risk in five decision situations. Health, Risk & Society, 11: ▪–▪.
  • Aven, T., & Guikema, S. 2011. Whose uncertainty assessments (probability distributions) does a risk assessment report: The analysts' or the experts'? Reliability Engineering and System Safety, 96: 1257–1262.
  • Bernstein, P. L. 1996. Against the gods: The remarkable story of risk. New York: John Wiley & Sons.
  • Bowling, A., & Ebrahim, S. 2001. Measuring patients' preferences for treatment and perceptions of risk. Quality and Safety in Healthcare, 10: i2–i8.
  • Breakwell, G. M. 2007. The psychology of risk. Cambridge: Cambridge University Press.
  • Douglas, M. 1992. Risk and blame: Essays in cultural theory. London: Routledge.
  • Douglas, M. 1990. Risk as a forensic resource. Dædalus: Journal of the American Academy of Arts and Sciences, 119: 1–16.
  • Gross, S. E., & Shuval, J. T. 2008. On knowing and believing: Prenatal genetic screening and resistance to ‘risk-medicine'. Health, Risk & Society, 10: 549–564.
  • Hacking, I. 1975. The emergence of probability: A philosophical study of early ideas about probability, induction and statistical inference. Cambridge: Cambridge University Press.
  • Hallowell, N. 1999. Advising on the management of genetic risk: Offering choice or prescribing action? Health, Risk & Society, 1: 267–280.
  • Hájek, A. 2008. A philosopher's guide to probability. In Uncertainty and risk: Multidisciplinary perspectives, edited by G. Bammer & M. Smithson, ▪–▪. London: Earthscan.
  • Heyman, B. 2012. Risk and culture. In Oxford handbook of cultural psychology, edited by J. Valsiner, ▪–▪. Oxford: Oxford University Press.
  • Heyman, B., Alaszewski, A., & Brown, P. 2012. Values and health risks: An editorial. Health, Risk & Society, 14: 399–408.
  • Heyman, B., & Brown, P. 2013. Perspectives on ‘the lens of risk' interview series: Interviews with Judy Green and Peter Taylor-Gooby. Health, Risk & Society, 11: ▪–▪.
  • Heyman, B. 2006. On being at higher risk: A qualitative study of prenatal screening for chromosomal anomalies. Social Science & Medicine, 62: 2360–2372.
  • Heyman, B. 2013. Assessing the probability of patients' reoffending after discharge from low to medium secure forensic mental health services: An inductive prevention paradox. Health, Risk & Society, 11: ▪–▪.
  • Heyman, B., Shaw, M., Alaszewski, A., & Titterton, M. 2010. Risk, safety and clinical practice: Health care through the lens of risk. Oxford: Oxford University Press. Introduction retrieved December 6, 2012, from http://eprints.hud.ac.uk/6392/
  • Heyman, B., Henriksen, M., & Maughan, K. 1998. Probabilities and health risks: A qualitative approach. Social Science & Medicine, 9: 1295–1306.
  • Hunt, A. 2003. Risk and moralization in everyday life. In Risk and morality, edited by R. V. Ericson & A. Doyle, ▪–▪. Toronto: University of Toronto Press.
  • Kahneman, D., Slovic, P., & Tversky, A. (Eds.). 1982. Judgment under uncertainty: Heuristics and biases. Cambridge: Cambridge University Press.
  • Kearney, J. 2013. Perceptions of non-accidental child deaths as preventable events: The impact of probability heuristics and biases on child protection work. Health, Risk & Society, 11: ▪–▪.
  • Knight, F. 1921. Risk, uncertainty and profit. Boston: Houghton Mifflin.
  • Law, J. 1995. Organisation and semiotics: Technology, agency and representation. In Accountability, power and ethos, edited by J. Mouritsen & R. Munro, ▪–▪. London: Chapman and Hall.
  • Leontini, R. 2010. Genetic risk and reproductive decisions: Meta and counter narratives. Health, Risk & Society, 12: 7–20.
  • Luhmann, N. 1993. Risk: A sociological theory. New Brunswick: Aldine Transaction.
  • Maine, H. S. 1861. The ancient law: Its connection with the early history of society, and its relation to modern ideas. London: John Murray.
  • Meyer, M. C., Sherman, W. L., & Deeds, S. M. 2003. The course of Mexican history (7th ed.). New York: Oxford University Press.
  • Munro, E. 2010. The Munro review of child protection: Part one: A systems analysis. London: Department for Education.
  • Parsons, T. 1991. The social system. London: Routledge.
  • Perrow, C. 1984. Normal accidents: Living with high risk technologies. New York: Basic Books.
  • Rakow, T., Vincent, C., Bull, K., & Harvey, N. 2005. Assessing the likelihood of an important clinical outcome: New insights from a comparison of clinical and actuarial judgment. Medical Decision Making, 25: 262–282.
  • Rose, G. 1981. Strategy of prevention: Lessons from cardiovascular disease. British Medical Journal, 282: 1847–1851.
  • Scamell, M., & Alaszewski, A. 2012. Fateful moments and the categorisation of risk: Midwifery practice and the ever-narrowing window of normality during childbirth. Health, Risk & Society, 14: 207–221.
  • Stanley, T. 2013. ‘Our tariff will rise': Risk, probabilities and child protection. Health, Risk & Society, 11: ▪–▪.
  • Suppes, P. 1994. Qualitative theory of subjective probability. In Subjective probability, edited by G. Wright & P. Ayton, ▪–▪. Chichester: John Wiley.
  • The Royal Society. 1992. Risk. London: The Royal Society.
  • Tversky, A., & Kahneman, D. 1973. Availability: A heuristic for judging frequency and probability. Cognitive Psychology, 5: 207–232.
  • Valsiner, J. 2012. Introduction: Culture in psychology: A renewed encounter of inquisitive minds. In Oxford handbook of cultural psychology, edited by J. Valsiner, ▪–▪. Oxford: Oxford University Press.
  • Weber, M. 1978. The development of bureaucracy and its relation to law. In Weber: Selections in translation, edited by W. Runciman, 341–356. Cambridge: Cambridge University Press.
  • Williams, C., Alderson, P., & Farsides, B. 2002. ‘Drawing the line' in prenatal screening and testing: Health practitioners' discussions. Health, Risk & Society, 4: 61–75.
  • Winkler, R. L. 1996. Uncertainty in probabilistic risk assessment. Reliability Engineering and System Safety, 54: 127–132.
  • Wynne, B. 1996. May the sheep safely graze? A reflexive view of the expert–lay knowledge divide. In Risk, environment & modernity: Towards a new ecology, edited by S. Lash, B. Szerszynski, & B. Wynne, ▪–▪. London: Sage.
  • Young, M. E. 2013. The influence of popular media on perceptions of personal and population risk in possible disease outbreaks. Health, Risk & Society, 11: ▪–▪.
  • Zinn, J. 2008. Heading into the unknown: Everyday strategies for managing risk and uncertainty. Health, Risk & Society, 10: 439–450.
