
RRI as the inheritor of deliberative democracy and the precautionary principle

Pages 38-64 | Received 13 Feb 2017, Accepted 24 Apr 2017, Published online: 05 Jun 2017

ABSTRACT

Responsible Research and Innovation (RRI) can be considered as the potential encounter between participatory technological assessment (PTA) and ethical assessment. Responsibility has the power to establish a link between the two domains. Nevertheless, broadening ethics to a more political understanding of responsibility, or simply to more inclusive participation, raises many problems that require solutions. There is a need to go beyond participation towards deliberation. The Meeting of Minds Project (2005–2006; expert and citizen conferences in 9 European countries, with 2 European conventions in Brussels), presented as a deliberation on the future of brain sciences in the European Union and one of the most ambitious PTA experiments, might be improved by this distinction. Deliberation is not only a condition for RRI, as some analysts suggest when defending care ethics and anticipatory governance, but also the basis of a promising theory of democracy: the theory of deliberative democracy (TDD). Mainly focused on the questions of inclusiveness, fair cooperation and rational decision-making, TDD could take advantage of the precautionary principle (PP), one possible understanding of responsibility, in order to address controversies relative to the future. The PP has the potential to structure the meta-deliberation of minds at the intersection of ethics, politics, sciences and technologies.

1. Introduction

Observers of developments in the European Union (EU) research financing system have witnessed the arrival of the Horizon 2020 Research Program, which includes a new concept: that of Responsible Research and Innovation (RRI). In several documents, including tenders that specify the concept of RRI (e.g. European Commission Citation2013, 16), the EC presents RRI as based on five (or six) pillars:

  1. participation (and commitment) of stakeholders,

  2. scientific education (or literacy, science literacy),

  3. gender equality in the research process and content,

  4. openness to scientific knowledge (data and results) and

  5. governance (ethics).Footnote1

Given that governance and ethics are not the same thing, a sixth pillar, which teases apart the two elements of pillar 5, should be added (Gianni Citation2016). Moreover, as a transversal research topic, RRI has also reconfigured the Science and Society research program into Science with Society.

The potential encounter between participatory technological assessment (PTA) on the one hand and ethical assessment on the other adds a new dimension to international research policy trends. Very often, these two types of assessment are supported and executed differently. Moreover, they concern different epistemic communities and disciplines. PTA is mainly handled by the social and political sciences, as is the case for Sciences, Technologies and Society programs, while ethics involves different types of applied ethics.Footnote2 This is the case with neuroethics, which is relevant to neurotechnology issues. The emergence of responsibility, and of RRI, has the potential to establish a link between applied ethics in general (and neuroethics in particular) and PTA, since responsibility is a key concept for both. Nevertheless, opening ethics to a more political understanding of responsibility, or simply to more inclusive participation (RRI pillar 1), involves finding solutions for the many practical and theoretical problems raised. The same is true for citizen and/or stakeholder participation, in order to determine how, and why, such participation should take place. These practical and theoretical challenges are relevant for the many emerging and controversial technologies that pave the innovation race with expectations and discourses, but even more so for technologies with great and uncertain impacts on the area under evaluation: the brain.

Although the notion of fair cooperation between equal participants is a welcome initiative, it is a minimal requirement and does not provide the means to investigate the substance of research projects from an ethical point of view (RRI pillar 6). A framework must therefore be constructed to deal with both the political and ethical dimensions. Participation alone is not sufficient. As shall be seen below, deliberation provides a promising concept for both ethics and politics. The theory of deliberative democracy (TDD) is one of the most prominent concepts available for the political dimension and may well provide a solution for governance (RRI pillar 5). With regard to the ethical aspect, deliberation is a prerequisite in cases of conflict, disagreement or uncertainty, as, for example, when moral intuitions are shaken by new problems, typically surrounding emerging or controversial technologies, or when ongoing research brings uncertainty with it. This is typically the case with emerging neurotechnologies, considering not only the technologies themselves but also the discourse around their seemingly endless potentialities (Grunwald Citation2016). In other cases, where social or legal conventions and rules are in place, there is no need for ethical discussion.

Apart from the above, the moral and political aspects of assessment should be developed in conjunction with the techno-scientific aspect (RRI pillars 2 and 4), which often tends to be well documented and robust. This leads to a situation where prevention prevails: we know the phenomenon at stake very precisely and can establish probabilities. In other cases, such as many neurotechnological experiments, the uncertainties are so strong that probabilities cannot be ascertained. This leads to a state of precaution. Indeed, as John Donoghue, a leading international researcher in the field of neuroengineering, noted during the OECD’s Neurotechnology and Society: Strengthening Responsible Innovation in Brain Science workshop (Garden Citation2017),

We still don’t understand the principles about how the human brain works – there is no major effort that pulls together the data and knowledge in order to address basic questions about, e.g. cognition, memory, thought and speech.

As we shall see, the precautionary principle (PP), as a principle of action, allows proportionate measures to be taken in order to prevent significant and/or irreversible damage from occurring. Because precaution is anticipatory, it does not allow uncertainty on the scientific side of assessment to be used as an excuse when there are serious presumptions of significant and/or irreversible damage. Deliberation and the PP are both proactive, and they undoubtedly provide an entirely relevant match between the moral-political and scientific assessments of different disciplines. Both have great potential to build the future of a strong RRI, even if the latter is their inheritor.

From a historical and terminological point of view in the EU, the PP is used as a term for responsibility in specific cases. This has been the case with genetically modified organisms (GMOs) and Bisphenol A, and it may be useful for some neurotechnology experiments, as the Donoghue quotation above suggests. The PP increases the incorporation of RRI into science and technology in cases of significant uncertainty and brings with it the different possibilities of political and ethical assessment. Because RRI and the PP take the future into account, they may well be used as a resource to broaden and improve the theory of deliberative democracy, which for the moment is mainly focused on the problems of inclusiveness, fair cooperation and rational decision-making. Within the EU, the PP brings rationality to controversies surrounding the possible and the impossible, and all the modalities in between (i.e. certain, plausible, likely, according to different sorts of probabilities).

This framework shall firstly be used to show that ethics in research means different things and that the encounter between ethics and RRI leads to a number of reconfigurations of the problems. Secondly, we shall analyze how deliberation, as a condition of RRI, is considered in the relevant body of work on RRI, the important role it plays there and how it is defined. Thirdly, we shall point out a certain number of limitations in current research, including the fact that TDD is quasi-absent from it. Fourthly, we shall identify a number of strengths provided by the PP as a tool for anticipatory governance, focusing particularly on its capacity to document different types of scientific uncertainty and the need for action in spite of these uncertainties. This framework, matching ethics, politics and the sciences, could help PTA and RRI in general, and neurotechnologies in particular.

2. Ethics in research compared with RRI

Ethics has long been a concern in research, well before RRI. We will consider different understandings of research ethics and compare them with RRI.

2.1. Different scopes for ethics in research

Ethics in research includes at least three different considerations:

  1. the integrity of the researchers involved;

  2. ethical review (ER) (ethical compliance) regarding the protection of humans and animals involved in the project and potential impact on the environment and

  3. ethics as one of the six pillars of responsible innovation and research.

ER processes and RRI are partly convergent and partly divergent (Pellé and Reber Citation2016). Within these three conceptions, the scope of ethics, the entities concerned (subjects and objects of the action) and the conceptual tools used to handle and assess the problems are not the same. Indeed, research ethics covers very different types of problems: the conduct of researchers in their work, the publication of their results and the precautions to be taken to protect those involved in experiments. These precautions are also relevant for the results produced when they include the invention of new processes or technologies that may fundamentally alter the environments and societies in which they will be transformed into innovations. Different ethical approaches should therefore be applied depending on the research activity considered, as research activity becomes ever more complex and collective. We shall, for example, discuss integrity in research when considering the individual conduct of researchers in their investigations and the publication of their results. We shall apply other concepts, from bioethics and its lists of principles, to appraise collective projects in biomedical research in order to protect persons and patients, whether ill or healthy, vulnerable or not.

The scope of this research may be further expanded to focus on good ethical practices relating to the use of neurotechnologies (the focus of this Special Issue). Indeed, as highlighted by this collection of papers, such an important field of research goes beyond the narrow scope of researcher integrity and the protection of people in general and patients in particular. For instance, most of the rationale contained in the report drafted by the French National Consultative Ethics Committee for Health and Life Sciences concerns the risk of undue influence caused by competitiveness and the race for performance (CCNE Citation2014). Looking at just three different aspects – ethics, research integrity (RI), ER – of funded projects and RRI, significant differences emerge.

Consider first the recent concern to ensure research integrity, which requires more explicit and rigorous forms of institutionalization through integrity codes in research. In Europe, this has resulted in the publication of The European Code of Conduct for Research Integrity (European Science Foundation Citation2011), co-edited by the European Science Foundation (78 research institutions from 30 countries) and the All European Academies (53 academies from 40 countries). Researchers must therefore comply not only with rules specific to their practices (epistemic norms; Pellé and Reber Citation2016) but also with ethical norms (e.g. integrity). They are thus entrusted with other kinds of responsibilities. This might seem obvious in work on RRI, but in reality tends not to be the case, a fact which should equally be borne in mind.

2.2. Research ethics exposed to stakeholder deliberation

The three forms of research ethics are distinct from RRI. Even if most RRI analysts never return to ER in order to integrate it into their proposals, we do not see how it is possible to avoid complying with the requirements of ERs, where ethics is understood as a form of compliance. As the following discussion will illustrate, RRI opens up new ways to integrate and explore ER.

Firstly, the RRI compass is significantly more open than ER and RI to what lies both upstream and downstream of research. By way of example, the drafters of the European Code of Conduct for Research Integrity chose not to include the ‘social context’, preferring to stick to what is understood by ‘ethics’ (European Science Foundation Citation2011, 10). This distinction, expressed in different ways, has been a crucial issue, debated on at least two occasions (conferences held in Madrid in 2008 and in Amsterdam in 2009). First of all, the Code recognizes that ‘[…] scientists operate in a value-bound context. […] All refer to the ethicalFootnote3 and social context in which science proceeds’. Secondly, without establishing a ‘perfect watershed’, the Code distinguishes between ‘problems related to science and society, emphasizing the socio-ethical contextFootnote4 of research, and problems related to scientific integrity, emphasizing standards when conducting research’. Finally, it states: ‘This document will not deal with this wider ethical contextFootnote5 of science, but focus on the second category, the responsible conduct of research.’

A certain amount of hesitation, if not confusion, between the ethical and the social can be observed in the text. Likewise, the opposition between conduct and context may seem puzzling: no practice is performed outside of a context. It would be more relevant to recognize similar or different practices in different contexts. Finally, responsibility in the Code concerns only the conduct of research, whereas it should concern the broader ethical and social contexts invoked in its two wordings, ‘socio-ethical context’ and ‘ethical context’.

For its part, RRI embraces the social context. Moreover, and for the sake of precision, at least three contexts can be distinguished: one that is research-related (RI); a second that takes the persons, animals and plants involved in research into account (ER); and a third, called the ‘ethical (or socio-ethical) context’ in the Code, which resembles the social context but also includes scientific policy.

A second difference is that ERs do not cover scientific knowledge (the RRI pillars relating to science literacy (2) and openness (4)). These reviews are carried out after the scientific assessment, which the EC terms the ‘technical’ side. However, a number of questions contained in the Ethical Review Questionnaire make it impossible to respect this limitation. This is the case when animals are involved: one should be able to say why the use of animals in the research project cannot be avoided and to show the benefits other animals and human beings can derive from such use.

A third difference is the recurrent use of the word ‘ethics’ in ER, whereas ethics is only one of the RRI keys (RRI pillar 6). Moreover, the concerns are not the same: ER pays specific attention to the persons, animals and plants that research must protect as best it can, whereas RRI discusses issues more broadly and from different points of view. If the two are brought more closely together, there is a risk that ethics in research-related tasks may become destabilized or decentered. Indeed, in such a case ER would be exposed to what the Code on research integrity calls the ‘ethical context’. The disparity, indeed the oppositions, between research in moral philosophy and applied ethics on the one hand, and in political sociology or even the sociology of science and technology on the other, illustrate these two different orientations.

To summarize, the most groundbreaking or innovative RRI pillar compared to ER is participation (RRI pillar 1). In fact, it is not only one of the most innovative pillars compared to ER, but also one of the most destabilizing. One could argue that ethical requirements in research have overlapped with those of PTA. Indeed, for more than 30 years now, many countries, including, for example, Australia, Denmark, France, Germany, India, the Netherlands, New Zealand and Switzerland, have included so-called ordinary citizens from very diverse backgrounds in assessment exercises (Dryzek et al. Citation2009). RRI formulations by the EC sometimes use the term ‘stakeholders’ and at other times ‘citizens’. These publics are not the same, and neither are their responsibilities. The majority of PTA exercises appeal chiefly to ordinary citizens. RRI thus responds to this interest by including not only the public directly affected by the research in question and ordinary citizens, but also all other stakeholders.

Thus, concern for more inclusive stakeholder participation is not new, particularly in the case of technology assessment (Hennen Citation1999; Joss and Brownlea Citation1999; Klüver Citation2000; Callon, Lascoumes, and Barthes Citation2001; Decker and Ladikas Citation2004; Reber Citation2005b, Citation2006b, Citation2010, Citation2011a; Grunwald Citation2016). There already exists a list of possible mechanisms, rules and criteria for organizing and assessing the participation of lay people and experts in these fields (Reber Citation2005a). However, one important question deserves attention: the manner in which participation is conceived of within understandings, if not theories, of responsibility. The inclusion of responsibility in participation should go beyond the minimal meaning of responsiveness (Pellé and Reber Citation2015), especially for issues related to neurotechnologies. Responsibility as virtue, capacity or authority should be considered in selecting the most relevant participants, rather than only ‘ingenuous’ people (‘candide’ in French, the term used, for instance, for the first French citizen conference; Reber Citation2011a), or ‘lay people’, to use a close term from the Anglo-Saxon world. It should be noted that the requirement that participants be neutral and know almost nothing about an issue is much more substantial than the classical socio-demographic characteristics (gender, profession, origin and so on) that guarantee panel diversity.

As we shall see, a number of authors have proposed RRI dimensions that are more abstract than the six EC pillars. Deliberation is one of the conditions they define as necessary for the emergence of responsibility in research and innovation. However, the ‘collective’ processes (Mitcham Citation2003) of ‘inclusion’ of stakeholders or the processes of ‘dialogue, engagement or debate’ (Owen, Bessant, and Heintz Citation2013; Owen et al. Citation2013; Stilgoe, Owen, and Macnaghten Citation2013) are rather vaguely described, as shall be seen in the next section. The concrete processes through which inclusion, participation, dialogue and/or deliberation are put in place are never thought of or developed as such. Moreover, we shall see that authors do not always clearly distinguish between the various notions, and notably between participation and deliberation (Pellé and Reber Citation2016). The step from participation to deliberation is already a step towards responsibility.

3. What kind of deliberation as one of the four dimensions of RRI?

In this section, we discuss the limitations of the seminal work edited by Richard Owen, John Bessant, and Maggy Heintz, Responsible Innovation: Managing the Responsible Emergence of Science and Innovation in Society (Citation2013). The authors of this book are prominent in the field of RRI, not only in the EU but also in the United States (US). In their work, they tend to refer mostly to responsible innovation only, although their examples frequently take responsible research into account (e.g. the case of Francis Crick and James Watson working at the Cavendish Laboratory at the University of Cambridge, 45). Thus, the distinction between research and innovation is not made.Footnote6

3.1. RRI general framing

To return to their four dimensions of RRI and to better analyze their understanding of deliberation, a more in-depth analysis of the second chapter of their book, ‘A Framework for Responsible Innovation’, is called for. It is written by Richard Owen, Jack Stilgoe, Phil Macnaghten, Mike Gorman, Erik Fisher and Dave Guston (Owen et al. Citation2013). This chapter summarizes RRI under four ‘dimensions of Responsible Innovation’ (Citation2013, 38) and is more frequently cited in RRI research work than the six EC RRI pillars.

Although the book is a collection of numerous perspectives by other authors – the ambition being to offer a framework for RRI, or RI – the six authors of this chapter have tried to take into account the various perspectives presented in this important work. A careful reading of Chapter 2 shows one element that has not been sufficiently underlined: the ‘commitment to be [… .] deliberative’ should be interpreted from a more grounded perspective. This perspective is prospective in that it emphasizes the dimensions of care and responsiveness (36, 44). Indeed, the authors follow Adam, Groves and Grinbaum (Groves Citation2006; Adam and Groves Citation2011; Grinbaum and Groves Citation2013), who themselves borrowed briefly from the much-cited philosopher Hans Jonas (1903–1993; Citation1984).

Care ethics is presented as the philosophical anchor of the entire Owen et al. book, despite the diversity of its contributors. One reason for the importance of care ethics in the book is perhaps that the only professional philosophers involved are Groves and Grinbaum, and they defend this ethical theory (strictly speaking, Grinbaum is a physicist and more an epistemologist than a moral philosopher). In their text, the question of care seems to cover and ground RRI. As shall be shown at the end of this section, the emphasis and prominent role attributed to care ethics is problematic. Indeed, all the authors of Chapter 2 seem to agree that ‘the predominant reciprocal and consequentialist view of responsibility is inadequate’ (Owen et al. Citation2013, 36). Immediately afterwards they add that the new conceptualization of responsibility should be future-oriented. With this in mind, they propose a very simple RRI definition:

Responsible innovation is a collective commitment of care for the future through responsive stewardship of science and innovation in the present. (Owen et al. Citation2013, 36)

3.2. The four dimensions of RRI

Before discussing their conception of deliberation, the three other dimensions of RRI they propose (Owen et al. Citation2013, 38) should be considered:

Anticipation could be summarized as ‘the entry point for reflection on the other purposes, promises, and possible impacts of innovation’.

In terms of modalities, it is to some extent counterfactual to ask the question, ‘what if … ’ or ‘what else might it do?’ Anticipation is closer to ‘plausibility’ than predictability.

Reflectivity has a very particular and narrow meaning. It describes ‘underlying purposes, motivations and potential impacts’, focusing on ‘what is known (…) and what is not known.’ Interestingly, for ‘what is known’ they write: ‘areas of regulation, ethical review, or other forms of governance’, and for the unknown: ‘associated uncertainties, risks, areas of ignorance, assumptions, questions, and dilemmas’.

Deliberation comes after these two first dimensions, and is followed by the last one: responsiveness.

Responsiveness like deliberation means different things. ‘Iterative’ responsiveness is an ‘inclusive’, ‘open process of adaptive learning, with dynamic capability’. It is presented as a ‘collective process of reflexivity’ to ‘set direction’ and ‘influence’ innovation trajectories. They write in favor of ‘effective mechanisms of participatory and anticipatory governance’. They add as a footnote that this anticipatory governance is a ‘broad-based capacity extended through society’ to manage ‘emerging knowledge-based technologies while such management is still possible’.

In this sense, they reduce responsiveness to what might be understood as anticipatory governance. This recalls the proverb ‘to govern is to anticipate’.

Finally, Owen et al. (Citation2013) believe that these dimensions achieve two goals:

  1. They collectively build a ‘reflexiveFootnote7 capital’, ‘in an iterative, inclusive and deliberative way’, and

  2. This capital is coupled ‘with decisions’ about the ‘goals of innovation’, the ‘uncertain and unpredictable’ modulation of its trajectory, ‘that is, how we can collectively respond’.

Within this general framework, what do they understand deliberation to be? At first glance, deliberation is concerned with the problem of the inclusivenessFootnote8 of ‘diverse stakeholders’. It encompasses many different elements and entails the ‘introduction of a broad range of perspectives to reframe issues’, the need to ‘authentically embody diverse sources of social knowledge, values, and meanings’Footnote9 and the identification of ‘areas of potential contestation’.

In order to manage this openness, they propose a ‘collective deliberation’ which is very vaguely formulated as: ‘processes of dialogue, engagement, and debate’. They only mention ‘listening to wider perspectives’.

3.3. Discussion of deliberation as one RRI condition

Tribute should be paid to this first attempt to clarify RRI and render it operational. Even though it is not an attempt to interpret the six pillars of the EC description of RRI, the authors’ work can nonetheless be discussed and commented on from that perspective with some limitations being underlined. The first two remarks concern the four dimensions.

3.3.1. Redundancy

Some definitions of the different dimensions seem somewhat redundant. The need for inclusiveness is contained in deliberation and responsiveness. In the same way, the need to listen to wider perspectives is included in anticipation, reflectivity and deliberation. Responsiveness reintroduces reflexivity in its definition. Risks and uncertainty in reflectivity are also included in anticipation. Thus, their definitions of the four dimensions could perhaps be made more precise and distinguishable from each other.

3.3.2. Ranking of the dimensions

It might also be useful to change the ranking of the four dimensions. What the authors call deliberation could be the starting point, continuing from there with the need to open the discussion as broadly as possible, and following that to be responsive and to learn, attempting to be more reflective and finally finding other narratives. Incidentally, these narratives are more general attitudes than communicative capacities (distinguished from interpretation, argumentation or reconstruction; see Ferry Citation1991, Citation2004).

3.3.3. Deliberation as a first step towards discussion

Deliberation and responsiveness are seen as more political concerns, mainly linked to participation, while anticipation and reflectivity are seen as more cognitive. The latter constitute a first premise, a step into the epistemological and normative discussion. This discussion concerns the different responsibilities at work. In the same way, it may concern different orders of reflectivity (scientific, moral and political) and constraints, but also the content of deliberation.

3.3.4. Short and heterogeneous reflectivity

Reflectivity is the condition that goes furthest in showing how the various stakeholders should deliberate, where deliberation is understood as more than participation, discussion or dialogue. However, reflectivity points in very different directions.

Firstly, the ‘things we know’ are presented as ‘forms of governance’. For this reason they should be included in their definition of responsiveness. Indeed, this is the section where governance is discussed.

Secondly, elements situated at different levels of knowledge are grouped together with the unknown, which is slightly problematic. For instance, ignorance is not the same as a dilemma. Dilemmas contain different positions (often moral), where the difficulty is knowing how to choose between them; they are not pure and broad ignorance. Incidentally, real ethical dilemmas are rare. When moral philosophers and psychologists have to construct dilemmas for their experiments, they make up curious, if not cruel, thought experiments, such as the famous trolley cases, in which a trolley runs towards workers on a line and participants must choose between pushing a heavy man onto the track to stop it, thereby saving the workers, or letting the trolley hit them. Most so-called conflicts of moral values are solvable; they are better described as tensions between values. For instance, most democratic countries accommodate both solidarity and individual freedom. More often, the so-called dilemmas are merely oppositions where no discussion has taken place to identify the lines of opposition or to reinterpret the situation so as to reach agreement (Reber Citation2011b). Finally, we cannot ask ‘questions’ about that which we do not know.

Their notion of assumptions adds a most promising element to the field. We shall return to this when discussing the relevance of the PP for RRI and what might be considered as ‘second order reflexivity’, contained in descriptions of situations or normative propositions. Indeed, reflectivity concerns descriptions or practical judgements (first order), rather than the way both are built (second order).

3.3.5. Responsibility as a whole reduced to responsiveness

The authors’ conception of responsibility seems slightly more problematic. For them, ‘responsibleFootnote10 innovation’ (used in sub-title 2.4 of their chapter) is the main problem to be addressed by the four dimensions (Owen et al. Citation2013). However, at the same time, the selected definition of responsibility as responsiveness reduces it to only one of its meanings, when at least ten possible understandings of responsibility exist (Pellé and Reber Citation2015, Citation2016; Reber Citation2016a): cause/consequence, blameworthiness/praiseworthiness, liability, accountability, task, role, authority, capacity, obligation and virtue.

3.3.6. Responsiveness as capacity?

On closer examination, we find that responsiveness is transformed, or at least associated with capacity, or to use the authors’ term: ‘capability’. Capacity and responsiveness are two different understandings of responsibility.

A number of authors, such as the Nobel Prize-winning economist Amartya Sen and the neo-Aristotelian philosopher Martha Nussbaum, have used the term ‘capability’ extensively. Their contributions to the field are significant and would have been a welcome addition to the chapter.

In other parts of the book analyzed here, more is learnt about their understanding of competencies. Indeed, they refer explicitly to Webler, Kastenholz, and Renn (Citation1995), and to Renn, Webler, and Wiedemann (Citation1995), Fairness and Competence in Citizen Participation. This title probably explains the participative scope of all these authors (Lee and Petts Citation2013, 146, 163).

3.3.7. Responsiveness as care?

While responsiveness has been changed into ‘capacity’, these are just two of the possible understandings of responsibility, and other definitions of responsibility can also be found in this text. The grounds they give for the four RRI dimensions are not only responsiveness but also, and mainly, care. There seems to be a conceptual hesitation regarding responsibility which might be made more explicit, for the sake of the discussion and to better understand what they consider to be the core of RRI among the possible meanings of moral responsibility.

3.3.8. Care without Aristotle?

Following the work of Hans Jonas (Citation1984), the authors prioritize care ethics. Indeed, they cite René von Schomberg (Citation2013) who ‘argues that we cannot aspire to the abstract ideals of the Aristotelian “good life” however contested this may be, and takes a more pragmatic view’ (Owen et al. Citation2013, 37).

Thus, it appears difficult to see how it might be possible to depart from Aristotle, one of the fathers of virtue ethics, while at the same time favoring a virtue ethics theory under the name of care.

3.3.9. Return of consequentialism and deontologism

Based on Groves’ interpretation of Jonas, they argue that the predominant consequentialist view of responsibility is inadequate (Owen et al. Citation2013, 36), and adopt René von Schomberg’s claims (see above) to define his pragmatic view:

that (is) at least in a European context, the ‘right impacts’ […] constitutionally enshrined in the European Treaty, such as a competitive social market economy, sustainable development, and quality of life. (Owen et al. Citation2013, 37)

Keeping the different moral theories in mind, the mention of impacts takes these authors close to consequentialist moral theory. Furthermore, the mention of the European Treaty and law could be interpreted as a moral deontologist perspective.

3.3.10. Consequentialism considered

In their text consequentialism seems to be confined to localism and reciprocity. However, this theory has never claimed to be valid only for intersubjective, reciprocal or local frameworks, even if debates exist within consequentialist moral theory to determine whether the consequences to take into account are reasonable or real, whether they are short or long term, how to compare different periods of evaluation, and whether they are founded on a deterministic or probabilistic basis, or without any probabilities, as shall be seen with the PP.

3.3.11. A smaller or bigger world?

Their interpretation of Jonas certainly gives rise to an interesting problem. They believe that the main ‘reciprocal’ (Owen et al. Citation2013, 36) view of responsibility is inadequate, because the new ‘conceptualization’ of Jonas goes beyond the ‘ethics of neighborhood’. But at the same time they also believe that our world has become ‘far smaller (and) interdependent’, which might be seen as contradictory.

However, these observations are not related to proximity or distance, but rather to the technologically mediated interactions at work.

To summarize and analytically express Owen et al.’s (Citation2013) four RRI dimensions: anticipation is focused on the exploration of other paths, other ‘narratives’ rather than promissory ones; reflexivity is focused on what is known and not known; deliberation is a broad debate; while responsiveness sets the tempo for the process of collective reflexivity, as learning with a dynamic capability.

Their framing is thus a collective process of reflexivity. It might be very useful for tackling the challenges of neurotechnologies, where collective and anticipatory assessment is required.

They have mainly based deliberation on participation, inclusiveness and dialogue. The next section will focus on how to move from inclusive discussion to a more precise, political and cognitive conception of deliberation.

4. From public dialogue to a democratic theory of deliberation

The concluding words of Owen et al. in their chapter, A Framework for Responsible Innovation, are directed towards democracy. Indeed, their RRI framework ‘might open up new possibilities for science and innovation and support their democratic,Footnote11 and responsible, emergence in society’ (Owen et al. Citation2013, 46). In the same vein, when they commit themselves normatively and attempt to justify the building of their four RRI dimensions, writing that ‘dialogue is the right thing to do for reasons of democracy, equity and justice’ (Owen et al. Citation2013, 38), they are referring to the fifth chapter, written by Sykes and Macnaghten (Citation2013).

From a normative perspective, democracy seems to be an essential component in support of their proposal, but which democracy, and which theory of democracy? The Theory of Deliberative Democracy (TDD) is not mentioned in the book. Indeed, in their index, deliberation (10 tokens) is connected with dialogue.

The most important and explicit source of deliberation (Owen et al. Citation2013, 146), presented as ‘analytic-deliberative approaches’, might have been linked to the TDD. However, this did not happen. Instead they refer to Ortwin Renn, Webler, and Wiedermann (Citation1995), Stern and Fineberg (Citation1996), and Webler, Kastenholz, and Renn (Citation1995), limiting themselves to the question of the participation of ordinary citizens together with their level of information regarding technological risk, or what Wynne (Citation1991, Citation1992, Citation1993) and Hartz-Karp (Citation2007)Footnote12 call ‘co-intelligence’Footnote13 to ‘include varied viewpoints’ (232). Owen et al. do not go further than the notion of ‘public dialogue’ (i.e. with Stahl, Eden, and Jirotka Citation2013, 211).

These proposals constitute an essential first step for RRI. There now needs to be a broader conversation that addresses the problem of diversity. However, we do not know how to discuss, communicate on or justify diversity. Neither do we know how to manage the debate surrounding how to shift from plurality (diversity) to normative pluralism. As has been developed elsewhere (Reber Citation2006a, Citation2016a; Pellé and Reber Citation2016), these problems must be tackled in order to achieve high-quality results given the importance of the issues at stake, and to avoid the risk of causing major and/or irreversible damage, to use the terms adopted by the PP.

4.1. Deliberation in political theory

The political theory of deliberative democracy (TDD),Footnote14 or more generally the important role of deliberation in politics, has become an integral part of contemporary political philosophy. The deliberative turn taken by democratic theory from as early as 1990 has gone from strength to strength. As eloquently stated by Dryzek (Citation2010, 4), for example,

This turn put communication and reflection at the centre of democracy […]. (It) is not just about the making of decisions through the aggregation of preferences. Instead, it is also about the process of judgment and preference formation and transformation within informed, respectful, and competent dialogue.

Deliberative democracy is not only a discussion within philosophy circles. It has been ascendant in practice too, as shown by the former President of the US, Barack Obama, in his book entitled The Audacity of Hope (Citation2006, 92). Neither is this a US exception; even the Chinese Communist Party appears to be open to deliberative experiments.Footnote15

Obama’s quote on ‘deliberative democracy’ offers an interesting first definition:

What the framework of our constitution can do is to organize the way in which we argue about the future Footnote16. All of its elaborate machinery – its separation of powers and checks and balances and federalist principles and Bill of Rights – are designed to force us into a conversation, a ‘deliberative democracy’ in which all citizens are required to engage in a process of testing their ideas against an external reality, persuading others of their point of view, and building shifting alliances of consent.

The terms in italics help to describe democratic deliberation. Some terms, such as conversation, are similar though looser in meaning. Some indicate the specificities of deliberation, such as the attempt to persuade and attain consent, while some are new in the general TDD debate, such as arguing about the future, as we will argue with the PP.

Obama’s words seem to suggest that the US Constitution is grounded in the need for deliberation. The presence of the word ‘federalist’ in his statement provides an inspiration for the building of Europe or any other system based on policy, such as the science and innovation system.

Despite interpretative quarrels, the TDD could be provisionally described in political theory in the following way:

The notion of deliberative democracy is rooted in the intuitive ideal of a democratic association in which the justification of the terms and conditions of association proceeds through public argumentation and reasoning among equal citizens. Citizens in such an order share a commitment to the resolution of problems of collective choice through public reasoning, and regard their basic institutions as legitimate insofar as they establish the framework for free public deliberation. (Cohen Citation1977, 72)

The following is a less institutional and much quoted definition:Footnote17

Deliberation is debate and discussion aimed at producing reasonable, well-informed opinions in which participants are willing to revise preferences in light of discussion, new information, and claims made by fellow participants. Although consensus need not be the ultimate aim of deliberation, and participants are expected to pursue their interests, an overarching interest in the legitimacy of outcomes (understood as justification to all affected) ideally characterizes deliberation. (Chambers Citation2003, 309)

Despite some differences, deliberative theorists stress the same ideal, that decision-making should be preceded by a process where citizensFootnote18 are involved in the exchange of reasons or arguments that potentially leads to the transformation of their preferences (Cooke Citation2000; Dryzek Citation2000; Andersen and Hansen Citation2007; Fishkin Citation2009; Lindell Citation2011). According to this democratic ideal, decisions should be based on discussions among equal citizens, or their representatives, and the reasons or arguments put forward should be weighed according to their merits (Smith and Wales Citation2000; Andersen and Hansen Citation2007; Grönlund, Setälä, and Herne Citation2010). It is expected that deliberation will also filter participants’ values (Elster Citation1998). In this way, democratic deliberation encourages respect and, hopefully, mutual understanding (Smith and Wales Citation2000, 53–54). Claims relating to pure and narrow self-interest become difficult to defend in a deliberative context (Mansbridge et al. Citation2010).

This theory is in opposition to conceptions of democracy that stress bargaining, the aggregation of preferences or more inclusive participation (participatory democracy; see Cunningham Citation2002).

This last point is important for those working in the field of Sciences and/with society. Participatory democracy is often confused with deliberative democracy, and is only concerned with the need to be more inclusive or, at least, to allow the (actually or potentially) affected people to participate in decision-making. This theory – and a lot of RRI understandings – does not define why or how to participate. TDD requires an open debate and is more specific about the way participants should behave (i.e. following an exchange of reasons or arguments). Thus the TDD proposes a more ambitious conception of citizens (or other actors, individuals or institutions) and their interactions, and of the political community. The more precise and ambitious requisites of TDD are very appropriate for neurotechnology issues, given the special intensity and place of the brain in human representations: it is the specific seat of individual deliberation. Moreover, these technologies are tinged with many uncertainties that will require the employment of the PP framework – as we will see – to redirect the TDD in the right direction: uncertain futures.

A number of virtues, including normative ones, can be recognized in this theory. Its defenders believe that political representatives – or the principal stakeholders in RRI – have the capacity to justify and perhaps present arguments for their decisions. They expect citizens (or participants) to be able to justify their choices, rather than stick to their frequently vague preferences. Justifications are expected from both sides, decision-makers (or stakeholders) and the general public. TDD believes that citizens have the capacity to search for and collectively formulate the common good within public deliberations that link the common good, justification and legitimacy together, and respect the autonomy of citizens.

4.2. Deliberation in empirical research

Despite debate and disagreement in political philosophy, interpretations of the TDD have been proposed for empirical research. This research has a lot to teach RRI. Those who have written on the subject include Luskin, Fishkin, and Jowell (Citation2002), Neblo (Citation2007), Bächtiger et al. (Citation2010), Luskin et al. (Citation2014), Deville (Citation2015) and Gerber et al. (Citation2016), and the Swiss political scientist Jürg Steiner and his colleagues (Steiner et al. Citation2004; Steiner Citation2012).Footnote19 Their 2004 book was a ‘first attempt to put Habermas into the lab’. Interestingly, Habermas was astonishedFootnote20 to see that real empirical work could be produced as a ‘result’ of his work in the philosophy of social sciences. To summarize Steiner et al.’s approach, a workable list of the traits of deliberative democracy theory might include the following:Footnote21

  1. Arguments should be expressed in terms of the ‘public good’.Footnote22 If somebody wants to defend his/her interests, he/she should be able to show their compatibility with, and contribution to, the public good.

  2. Participants should express their views truthfully and sincerely.

  3. They should listen to others’ arguments and treat them with respect.

  4. Parties should defend their claims and logical justifications, through an exchange of information and good reasons.Footnote23

  5. Participants should follow the strength of the better argument, which is not given a priori but must be sought out in common deliberation.

  6. Everybody participates on an equal level, without constraints in an open political process.

Using these dimensions of TDD, they have produced a Discourse Quality Index (DQI) which contains the following elements: participation, level of justification for demands, content of justification for demands, respect towards groups to be helped (empathy), respect towards the demands of others, respect towards counterarguments, and constructive politics. The DQI allows them to assess the quality of deliberation in the formal arenas of different national parliaments.

Of course, TDD is not above criticism (Reber Citation2011a, Citation2011b, Citation2016a). The deliberation in question is more complex now than previously, as scientific controversies need to be considered alongside problems and the variety of possible solutions in terms of ethical evaluations (ethical pluralism of values and/or theories) that are subject to political constraints. This is the case in situations where there is strong suspicion of serious or irreversible damage and where ethical and political decisions have to be taken in a context of uncertainty. Various forms of commitment and expectation can be added with regard to the future and the pessimistic or optimistic expectations and risks surrounding innovation. This is the case with the potential benefits and risks of neurotechnologies. Emerging neurotechnologies help the understanding of the human brain and offer great promise in diagnosis and therapy for neurological disorders. While some neurotechnologies provide opportunities to enhance our general well-being, if they are used for cognitive enhancement or to alter consciousness they raise a profound set of ethical, legal, social, cultural, and even health issues.

The relationships between the sciences, scientific practices and ethics, the interweaving of facts and values, the quarrels that exist between coexisting disciplines, the competing approaches of epistemic pluralism – in both inter- and intra-disciplinary forms – and argumentative coexistence in interdisciplinary contexts should thus be considered. These challenging problemsFootnote24 cannot all be dealt with here, but a new version of the theory of deliberative democracy can be proposed, focusing on its specificity as a future genre (Aristotle Citation1926; Reber Citation2016a), and based on arguments used to defend plausibility. The moral philosophy of ethical theories is applied in this context as a form of casuistry (Jonsen and Toulmin Citation1998) involving probabilities, and is not limited to case studies within the framework of applied ethics. The PP covers and, to a certain extent, intertwines the questions of scientific, ethical and political evaluation central to this work. Moreover, the PP provides a significant meta-principle for the EU – or other countries – for responsibility, and thus for RRI.

5. The need for the PP

Uncertainty has found a new role at a time of, and in the context of, major advances in science and technology. The field of neurotechnologies does not escape this stipulation; on the contrary, the brain is at the core of identity, deliberation and the process of responsibility. Even if the capacity of science and technology to know, simulate, model and predict has not been called into question, their capacity to raise new questions as they progress is somewhat paradoxical (Berthelot Citation2007). This is the case with the two sides of neuroethics: the one that aims at better understanding the way our brains perform moral appraisal, and the one that assesses brain experiments. The element of uncertainty may be accepted as part of a type of contract that accepts the best along with the worst; at the very least, a level of frustration with regard to knowledge – which we would prefer to be more certain – must be accepted. Thus, the future is tinged with uncertainty. However, it would be unwise to resort to a form of ‘quietism’, accepting this uncertain and risk-laden technological civilization and renouncing the possibility of acquiring knowledge and, more seriously, of action. The power of innovative processes and knowledge is such that passivity – or the status quo – has ceased to be acceptable, particularly in cases where technologies have the potential to cause serious or irreversible damage. This raises two problems:

  1. The first concerns the recognition of uncertainties in different phases of knowledge, above and beyond simple technological capabilities.

  2. The second problem concerns the decision to act in the presence of uncertainty, for example, in order to guarantee a high level of security, one of the cardinal principles of the EU.

5.1. Definitions of the PP

In the EU,Footnote25 the PP has become a political and ethical meta-norm, intended to provide a framework for certain political decisions, relative to scientific and technological choices. It should be applied as a matter of course in certain specific circumstances, notably in cases involving high levels of uncertainty due to the limitations of scientific expertise in risk assessment. The PP may well provide an appropriate framework for conducting in-depth PTA. It also holds considerable promise for RRI.

While the principle has rapidly become one of the main references used in handling certain public scientific controversies, including the backlash against genetically modified foods in the EU (Reber Citation2011a), it is subject to interpretative debate. For instance, the importance, newness and instability associated with this principle have created a need for greater precision, as in the case of a report requested by a former French Prime MinisterFootnote26 and the Communication drafted by the European Commission on the Precautionary Principle (COM 2000). The PP is subject to a high level of interpretative volatility, but the margin for interpretation within broad principles in inter-institutional negotiations is not new. What has been useful for assessing controversial technologies in agriculture (i.e. GMOs), or nanotechnologies, could be appropriate for neurotechnologies. Indeed, despite the proliferation of studies, fundamental questions remain about the mechanisms through which devices impact and interact with the brain, the long-term consequences, and the effects of varying the multiple parameters associated with non-invasive neuromodulation (e.g. coil geometry and placement, pulse timing, number and spacing of sessions; Garden Citation2017). The use of devices in combination with other therapies, such as drugs, psychotherapy or behavioral tasks, is also a growing area of interest.

Various formulations have been used, such as the German version, Vorsorgeprinzip,Footnote27 which appeared in the late 1960s, and several texts in international law from the mid-1980s, which refer specifically to this principle. International recognition increased considerably with its inclusion in principle 15 of the 1992 Rio Declaration, which stipulates:

In order to protect the environment, the precautionary approach shall be widely applied by States according to their capabilities. Where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation.

A few months earlier, in February 1992, the Maastricht Treaty introduced the PP as one of the foundational elements of EU policy in the environmental domain.

5.2. Detailed use of the PP

COM 2000 provides a detailed description of uncertainties, giving the lie to the many philosophical works which question the PP or cast doubt on Jonas’ ‘impossible futurology’. However, we should not give in to the urge to decide too quickly, without sufficient reflection on decisions taken in a situation of uncertainty. The PP is made up of several different elements and principles; it is therefore a meta-principle, framing a deliberation. In this section, these components will be carefully considered, noting from the outset that they relate essentially to scientific evaluation. Note that this long list of details involved in the evaluation phases invalidates the affirmations made by certain philosophers (Hunyadi Citation2004; Sunstein Citation2005; Gardiner Citation2006), along with other critics, who claim that the PP lacks method and instead encourages epistemic abstinence, or a return to anti-scientific obscurantism. On the contrary, it makes a plea for more scientific inquiry, matched with ethical and political responsibilities. It is thus entirely appropriate for RRI and deserves great attention, which is the reason for this very detailed presentation.

In the English version of the COM 2000, sub-section 5 is entitled The Constituent Parts of the Precautionary Principle. The text begins by identifying ‘two distinct aspects’ (COM 2000, 12):

1) The ‘political decision to act or not to act as such, which is linked to the factors triggering recourse to the precautionary principle’.

2) In the affirmative, ‘how to act, i.e. the measures resulting from application of the precautionary principle’.

Each aspect will be considered below.

5.2.1. Description of uncertainties

The first aspect which describes the factors triggering recourse to the PP includes three main components:

  • 1.1. identification of potentially negative effects

  • 1.2. scientific evaluation, or

  • 1.3. scientific uncertainty.Footnote28

1.1. The first component is a prerequisite for use of the PP. The precise formulation is interesting:

To understand the effects [the potentially negative effects of a phenomenon] more thoroughly it is necessary to conduct a scientific examination. The decision to conduct this examination without awaiting additional information is bound up with a less theoretical and more concrete perception of the risk. (13)

The decision to conduct a scientific evaluation of a phenomenon is therefore based on a perception qualified as concrete.Footnote29

1.2. The second factor, scientific evaluation, is more rhapsodic in nature. This is supported by Annex III of the document, entitled The Four Components of Risk Assessment. These same elements are indicated in the final paragraph of the relevant section (28):

  • 1.2.1. hazard identification

  • 1.2.2. hazard characterization

  • 1.2.3. appraisal of exposure, or

  • 1.2.4. risk characterization

1.3. The third factor concerns scientific uncertainty. The discussionFootnote30 centers on ‘five characteristics of the scientific method’,Footnote31 which are conventional:

  • 1.3.1. the variable chosen

  • 1.3.2. the measurements made

  • 1.3.3. the samples drawn

  • 1.3.4. the models used, or

  • 1.3.5. the causal relationship employed

COM 2000 states that, ‘Scientific uncertainty may also arise from a controversy on existing data or lack of some relevant data.’ Furthermore, it ‘may relate to qualitative or quantitative elements of the analysis’ (13–14). This final, highly detailed presentation is not, however, universally accepted: some scientists prefer 1.3’, ‘a more abstract and generalized approach’ in which all uncertainties are separated into three categories:

1.3’.1 Bias, 1.3’.2 Randomness and 1.3’.3 True variability; other ‘experts’ (1.3’’) use categories of uncertainty ‘in terms of estimation of confidence interval of the probability of occurrence and of the severity of the hazard’s impact’.

Faced with the complexity of this issue, the EC requested four reports (as written in the COM 2000), intended to ‘give a comprehensive description of scientific uncertainty’. Finally, one report was produced instead of the four initially planned (Stirling, Renn, and Klinke Citation1999). The report adds nothing particularly different to this part of the evaluation. However, the researchers were subject to pressure from members of the EC, which is an extremely rare occurrence: they were summoned during the drafting of the report to deal with an issue of opposition between the PP and risk/benefit assessment (RBA), which certain lobbyists wanted the Commission to adopt. RBA was later proposed as a preferable option in the academic literature, as in the work of the American legal scholar Cass Sunstein (Citation2005).

At this stage the position of the COM 2000 may be seen as a means of deciding between approaches involving a variety of different forms of uncertainty, if nothing better is available. However, the COM 2000 notes that ‘Risk evaluators accommodate these uncertainty factors by incorporating prudential Footnote32 aspects [not to be confused with the PP] such as:

  • 1.4.1.Footnote33 relying on animal models to establish potential effects in man

  • 1.4.2. using body weight ranges to make inter-species comparisons

  • 1.4.3. adopting a safety factor in evaluating an acceptable daily intake to account for intra- and inter-species variabilityFootnote34

  • 1.4.4. not adopting an acceptable daily intake for substances recognized as genotoxic or carcinogenic

  • 1.4.5. adopting the As Low As Reasonably Achievable (ALARA) level as a basis for certain toxic contaminants’

The terms in italics indicate that we are now dealing with safety measures. It is no longer enough to simply understand the phenomena in question. These might be considered to fall under aspect (2) of the PP, that is, the measures to take.

COM 2000 thus acknowledges that situations exist in which,

the scientific data are not sufficient Footnote35 to allow one to apply these prudential aspects in practice, i.e. in cases in which extrapolations cannot be made because of the absence of parameter modelling and where cause–effect relationships are suspected but have not been demonstrated. It is in situations like these that decision makers face the dilemma of having to act or not to act.

The summary of this sub-section adds:

Recourse to the precautionary principle presupposes identification of potentially negative effects resulting from a phenomenon, product or procedure,

but also, more importantly,

a scientific evaluation of the risk which because of the insufficiency of the data, their inconclusive or imprecise nature, makes it impossible to determine with sufficient certainty the risk in question.

The Communication presents the advantage of clearly highlighting and specifying well-known uncertainties in scientific practices. Thus far, the PP cannot be distinguished from prevention.

5.2.2. Various types of action

We may then decide to act. This decision is ‘eminently political’, according to COM 2000, and is ‘a function of the risk level that is “acceptable” to the society on which the risk is imposed’. We may add that it is a question of responsibility, and note that, from this perspective, PTA and, moreover, RRI can be extremely helpful and represent two of the best available tools. But they need to be adjusted to the PP.

If the decision-maker decides to act, he or she has access to ‘a whole range of actions’: judicial review, the decision to fund a research program, or even the decision to inform the public about the possible adverse effects of a product or procedure. The fear expressed by certain legal philosophers, such as Sunstein (Citation2005), based on the damaging effects of certain laws which cause more harm than good, may lead to a false assumption that these possible actions are purely judicial. This panel of possible administrative and political tools is very relevant for neurotechnology too.

The second part of the Communication provides guidelines for the very implementation of this principle, providing an interpretation that is somewhat different from that shown above. The text notes that scientific evaluation should include,

a description of the hypotheses Footnote36 used to compensate for the lack of scientific or statistical data.

Furthermore,

an assessment of the potential consequences of inaction should be considered […]; the decision to wait or not to wait for new scientific data before considering possible measures should be taken by the decision makers with a maximum of transparency. (16)

There are thus two distinct moments at which the decision to wait or not to wait for new data must be made. This question has already been raised in relation to the first factor triggering the use of the PP. The COM 2000 notes that the absence of scientific proof should not be used to justify inaction. It also strongly recommends that due account be taken of the views of credible minority factions, and that all interested parties should be involved in the procedure from the earliest possible stage. This point is already a step toward the recommendations by Stirling et al. (Citation1999), relating to early involvement in assessment. This is also promoted by RRI, particularly with regard to inclusion of stakeholders (Pillar 1) and a concern for care and the future (Owen et al. Citation2013). Similarly, it sets out ‘general principles of application’ for all risk management measures, and not exclusively for the PP. These principles of application include proportionality, non-discrimination, consistency, examination of the benefits and costs of action or lack of action, and examination of scientific developments.

Taking this list of principles into account, it is clear that the PP may enable us to avoid limiting the number of available options for managing a risk which has not been fully evaluated. Similarly, the PP considers the negative effects, which only emerge some time after exposure, in cases where causality is difficult to prove or validate scientifically. Finally, it should be noted that the principle of non-discrimination enters into competition with one final point: the (non-systematic) reversal of the burden of proof, ‘shifting responsibility for producing scientific evidence’ (20). The authors of the Communication consider that this is one way for legislators to apply the PP, notably for ‘substances deemed “a priori” hazardous or which are potentially hazardous at a certain level of absorption’.

Dissymmetry is inevitable: it is inherent to our different roles and responsibilities toward one another, given the possible consequences of the complex technological actions in question. The conception of common but differentiated responsibility, used as a negotiation tool by the Conferences of the Parties to limit climate warming, might be adopted for the PP and RRI (Reber Citation2016b).

In short, regarding the PP as set out in the COM 2000, the natural, medical and engineering sciences are solicited far more frequently than the humanities and social sciences. On the side of the latter there is, to a lesser extent, only a fuzzy notion of the social acceptability of a risk and of the high levels of safety that may be required. From this perspective, decision-makers are rather isolated. There is a major imbalance between the detailed treatment of uncertainties in scientific terms and the lack of detail on the more normative, decisional aspects.

Is it possible to create assessment tools for both the political and ethical spheres in order to avoid what might be seen as a weak form of Weberian decisionism (Habermas Citation1970; Reber Citation2013, Citation2016a)? With its different understandings of moral responsibility and a complete conception of deliberative democracy oriented to the future, RRI may well help to consolidate the normative side of the PP. Conversely, the PP may help RRI to be coherent with some of its pillars (1, 2, 4, 5 and 6).

6. Conclusion: deliberation based on a meeting of minds

Meeting of Minds is the title of one of the most ambitious participatory technological socio-political experiments, which brought together 126 citizens from across the EU, despite their different languages (note 37). Its focus was precisely the future of brain technologies (Reber Citation2011a; Pellé and Reber Citation2016), and what is possible and desirable. This deliberation was about the brain, itself the organ of deliberation. Such deliberation can be conducted individually, as merely an ethical question for one single mind (RRI Pillar 5), or collectively, among the brains of those involved in the experiment (RRI Pillar 6). Indeed, many analysts want to expand the sphere of expertise to different stakeholders (RRI Pillar 1) under a relevant institutionally designed apparatus (RRI Pillar 5), summoning pluralist expertise (Pillars 2 and 3), to allow minds to meet each other. With RRI, such socio-political experiments might take place in research centers, becoming long-term projects rather than ‘one-off’ events.

Research ethics (here, in brain science, neuroethics) means different things, and RRI may lead to some reconfiguration of the problems by expanding the scope of ethics. The above analysis shows deliberation to be a condition for RRI and reintroduces the debate on TDD, which is absent from this research insofar as deliberation is treated merely as dialogue. TDD holds that citizens have the capacity to search for and collectively formulate the common good within public deliberations that link common good, justification and legitimacy, and that respect the autonomy of citizens. Deliberation is a prerequisite not only in cases of conflict but also in the presence of uncertainties. In some cases, particularly in innovative and groundbreaking research, scientific uncertainties are so high that probabilities cannot be constructed. Here there is a shift from prevention to precaution. As a principle of action, the PP is in a position to require proportionate measures. It should be seriously considered in research on anticipatory governance, hermeneutical approaches (Grunwald Citation2016) and other sources of inspiration (Barben et al. Citation2008; Guston Citation2010).

These scientific and technological uncertainties should not inhibit relevant action from being taken when there are serious presumptions of major and/or irreversible damage. The PP may be brought closer to Owen et al.’s (Citation2013) notion of assumptions in the reflexivity dimension (a collective process of reflexivity), in order to reach a second-order reflectivity on descriptions or practical judgements. The relationships between the sciences, ethics and politics, the interweaving of facts and values, the quarrels between coexisting scientific disciplines, the competing approaches of epistemic pluralism – in both inter- and intra-disciplinary forms – and argumentative coexistence in interdisciplinary contexts cannot all be dealt with here. Instead, a new version of the theory of deliberative democracy has been proposed, focusing on its specificity as a genre concerned with the future and based on arguments used to defend plausibility. The consideration for the future contained in both RRI and the PP might be used to broaden and improve the theory of deliberative democracy. The PP introduces rationality into debate about the possible and the impossible, all the modalities between them, and key elements in the meta-deliberation of minds at the intersection of ethics, politics, technology and science in an uncertain world. The PP provides a significant meta-principle for the EU – or other jurisdictions – for responsibility and, thus, for RRI.

The PP and TDD appeared before RRI; the latter is thus potentially their inheritor. As the reconstruction offered here shows, the three share the same concerns, even if RRI analysts have not made these connections. To be coherent and to achieve high-quality results in implementing RRI, analysts need to think through and resolve the questions that the PP and TDD address. Both have great potential to build the future of a strong RRI, their historical inheritor.

Acknowledgements

I would like to thank the OECD and Arizona State University for inviting me to the international workshop on Neurotechnologies and Society: Strengthening Responsible Research and Innovation in Brain Sciences, hosted at the National Academies of Sciences, Engineering, and Medicine, and especially Herman Garden and David Winickoff. I would also like to thank Philippe Galiay, Head of Sector RTD-B7.3, ‘Mainstreaming Responsible Research and Innovation in Horizon 2020 and the European Research Area’, for having proposed that I attend this event. I am grateful to the members of the European project Governance for Responsible Innovation, especially Sophie Pellé and Robert Gianni, to John Pearson, and to my colleague and language editor Chantal Barry. Finally, I want to express my gratitude to Diana Bowman for her careful reading and editing, and her enthusiasm.

Disclosure statement

No potential conflict of interest was reported by the author.

Notes on contributor

Bernard Reber, HDR, PhD, has been a Research Director at the National Centre for Scientific Research (CNRS) since 2011 and at the Political Research Centre of Sciences-Po since 2014. He is a philosopher (moral and political philosophy, HDR, Sorbonne University) and a political scientist (PhD, School for Advanced Studies in Social Sciences (EHESS), Centre for Sociological and Political Studies, Aron Centre, Paris).

Notes

1 Italics are used to summarize these pillars.

2 I set aside the problem of expertise in ethics and the fact that most frequently ethical committees are partly interdisciplinary in their composition. This is the case for national or dedicated committees and for ethical review of research projects.

3 My italics.

4 Italics used in the Code.

5 Italics used in the Code.

6 For some distinctions, see Pellé and Reber (Citation2016).

7 The shift between ‘reflective’ and ‘reflexive’ allows us to understand both these terms to have the same meaning.

8 My italics.

9 They refer here, among others, to the works of Andrew Stirling (Citation2005, Citation2008).

10 My italics.

11 My italics.

12 Despite the fact that the article cited was published in the Journal of Public Deliberation, its definition of deliberation is not very specific and remains equivalent to dialogue:

Deliberation. The second tenet is the opportunity for these disparate people to engage in egalitarian discourse on a public issue, taking into account multiple views and comprehensive, balanced information. The hope is that through respectful dialogue, people will creatively problem solve and find common ground that reflects the common good. (Citation2007, 3)

13 In the French debate on participation, those in charge of the Commission nationale du débat public (National commission for public debate) refer to an ‘exercise in collective intelligence’.

14 For main works on this topic in English, see Bohman and Rehg (Citation1997), Dryzek (Citation2010) and Parkinson and Mansbridge (Citation2012). For philosophical works in French, see Girard (Citation2010), Le Goff (Citation2009) and Reber (Citation2011b).

15 Presentation of Stanford University Professor James Fishkin during the Congrès de l’Association française de sciences politiques (Congress of the French Political Sciences Association), workshop, The future of deliberative democracy, Aix-en-Provence, 22–24 June 2015.

16 My italics.

17 For example, Mansbridge et al. (Citation2006, 7) and Mackie (Citation2006, 298–299).

18 I leave aside the question of the number of participants that has limitations regarding the expected quality of deliberation. It is easy to see that too broad a participation can negatively affect the quality of deliberation.

19 For a critical presentation, see Reber (Citation2016a).

20 Discussion with Jürg Steiner.

21 My summary.

22 Cohen uses the term ‘common good’ (Citation1997, 420s). Rawls uses the term ‘public reason’.

23 On this point, Habermas will go further with his belief in their universality.

24 The reference here is to Reber (Citation2016a).

25 The principle is also recognized at international level, as seen in the Rio Declaration on Environment and Development, adopted on 10 June 1992, and the Convention on Access to Information, Public Participation in Decision-making and Access to Justice in Environmental Matters of 25 June 1998, commonly known as the Aarhus Convention.

26 Requested by former Prime Minister Lionel Jospin and produced by the researchers Philippe Kourilsky and Geneviève Viney: the former a biologist and the latter a legal specialist (Citation2000).

27 The German term implies foresight, and not simply precaution. The history of the German contribution to international recognition of the precautionary principle owes a good deal to this semantic freedom, exploited by ecological movements and by the Green Party, which was in power at the time.

28 Numbering added for the purposes of clarity.

29 This implies a difference between theoretical perception and concrete perception.

30 Section 5.1.3 in the text (COM 2000).

31 By using the term ‘method’ in the singular, the text implies that this method is the same for any discipline. However, the characteristics described later suggest the existence of several different methods.

32 My addition and my italics.

33 Numbered in this way for reasons of clarity.

34 It should be noted that the magnitude of this factor depends on the degree of uncertainty of the available data.

35 My italics.

36 My italics, used to highlight significant elements, both in response to unfounded criticisms of the principle and for the purposes of analysis.

References

  • Adam, B., and C. Groves. 2011. “Futures Tended: Care and Future-Oriented Responsibility.” Bulletin of Science, Technology & Society 31: 17–27. doi: 10.1177/0270467610391237
  • Andersen, V., and K. Hansen. 2007. “How Deliberation Makes Better Citizens: The Danish Deliberative Poll on the Euro.” European Journal of Political Research 46: 531–556. doi: 10.1111/j.1475-6765.2007.00699.x
  • Aristotle. 1926. Art of Rhetoric. Translated by J. H. Freese. Cambridge, MA: Harvard University Press.
  • Bächtiger, A., S. Niemeyer, M. Neblo, M. R. Steenbergen, and J. Steiner. 2010. “Disentangling Diversity in Deliberative Democracy: Competing Theories, Their Blind Spots and Complementarities.” Journal of Political Philosophy 18: 32–63. doi: 10.1111/j.1467-9760.2009.00342.x
  • Barben, D., E. Fisher, C. Selin, and D. H. Guston. 2008. “Anticipatory Governance of Nanotechnology: Foresight, Engagement, and Integration.” In The Handbook of Science and Technology Studies, 3rd ed., edited by E. J. Hackett, O. Amsterdamska, M. Lynch, and J. Wajcman, 979–1000. Cambridge: MIT Press.
  • Berthelot, J.-M. 2007. L’emprise du vrai. Connaissance scientifique et modernité. Paris: Presses Universitaires de France.
  • Bohman, W., and W. Rehg. 1997. Deliberative Democracy. Essays on Reason and Politics. Cambridge: MIT Press.
  • Callon, M., P. Lascoumes, and Y. Barthes. 2001. Agir dans un monde incertain. Essai sur la démocratie technique [ Acting in an Uncertain World. An Essay on Technical Democracy]. Translated by G. Burchell. Paris: Seuil.
  • Chambers, S. 2003. “Deliberative Democracy Theory.” Annual Review of Political Science 6: 307–326. doi: 10.1146/annurev.polisci.6.121901.085538
  • Cohen, J. 1997. “Deliberation and Democratic Legitimacy.” In Deliberative Democracy. Essays on Reason and Politics, edited by W. Bohman and W. Rehg, 67–91. Cambridge: MIT Press.
  • Cooke, M. 2000. “Five Arguments for Deliberative Democracy.” Political Studies 48: 947–969. doi: 10.1111/1467-9248.00289
  • Cunningham, F. 2002. Theories of Democracy. A Critical Introduction. London: Routledge.
  • Decker, M., and M. Ladikas. 2004. Bridges Between Science, Society and Policy; Technology Assessment – Methods and Impacts. Berlin: Springer.
  • Deville, M. 2015. “ Débat politique: quelle(s) rationalité (s)? Différenciation méthodologique entre le contenu et le relationnel. Débat sur l’extension des droits politiques des étrangers dans le cadre de l’Assemblée Constitutante de Genève et de délibérations citoyennes expérimentales.” PhD, Université de Genève.
  • Dryzek, J. S. 2000. Deliberative Democracy and Beyond: Liberals, Critics, Contestations. Oxford: Oxford University Press.
  • Dryzek, J. S. 2010. Foundations and Frontiers of Deliberative Governance. Oxford: Oxford University Press.
  • Dryzek, J. S., R. E. Goodin, A. Tucker, and B. Reber. 2009. “Promethean Elites Encounter Precautionary Publics: The Case of GM Food.” Science, Technology, & Human Values 34: 263–288. doi: 10.1177/0162243907310297
  • Elster, J. 1998. “Introduction.” In Deliberative Democracy, edited by J. Elster, 1–18. Cambridge: Cambridge University Press.
  • European Commission, Call for Tender, No. RTD-B6-PP-00964-2013. 2013. “ Study on Monitoring the Evolution and Benefits of Responsible Research and Innovation.” Brussels.
  • European Science Foundation. 2011. European Code of Conduct for Research Integrity. Strasbourg. http://www.esf.org/fileadmin/Public_documents/Publications/Code_Conduct_ResearchIntegrity.pdf.
  • Ferry, J.-M. 1991. Les puissances de l’expérience. 2 vols. Paris: Cerf.
  • Ferry, J.-M. 2004. Les grammaires de l’intelligence. Paris: Cerf.
  • Fishkin, J. S. 2009. When the People Speak: Deliberative Democracy and Public Consultation. Oxford: Oxford University Press.
  • Garden, H., OECD Report. 2017. Neurotechnology and Society. Strengthening Responsible Innovation and Research in Brain Science.
  • Gardiner, S. 2006. “A Core Precautionary Principle.” Journal of Political Philosophy 14 (1): 33–60. doi: 10.1111/j.1467-9760.2006.00237.x
  • Gerber, M., A. Bächtiger, S. Shikano, S. Reber, and S. Rohr. 2016. “Deliberative Abilities and Influence in a Transnational Deliberative Poll (Europolis).” British Journal of Political Science 37: 1–26. doi: 10.1017/S0007123416000144
  • Gianni, R. 2016. Responsibility and Freedom: The Ethical Realm of RRI. London: ISTE.
  • Girard, C. 2010. L’idéal délibératif à l’épreuve des démocraties représentatives de masse. Autonomie, bien commun et légitimité dans les théories contemporaines de la démocratie. Université de Paris 1.
  • Grinbaum, A., and C. Groves. 2013. “What Is ‘Responsible’ about Responsible Innovation? Understanding the Ethical Issues.” In Managing the Responsible Emergence of Science and Innovation in Society, edited by R. Owen, J. Bessant, and M. Heintz, 119–142. New York: Wiley.
  • Grönlund, K., M. Setälä, and K. Herne. 2010. “Deliberation and Civic Virtue: Lessons from a Citizen Deliberation Experiment.” European Political Science Review 2: 95–117.
  • Groves, C. 2006. “Technological Futures and Non-reciprocal Responsibility.” The International Journal of Humanities 4 (2): 57–61.
  • Grunwald, A. 2016. The Hermeneutical Side of Responsible Research and Innovation (RRI). Concepts, Cases, and Orientations. London: ISTE.
  • Guston, D. 2010. “The Anticipatory Governance of Emerging Technologies.” Special Issue Nanomaterials Ethics. Journal of the Korean Vacuum Society 19 (6): 432–441. doi: 10.5757/JKVS.2010.19.6.432
  • Habermas, J. 1970. “Technology and Science as ‘Ideology’.” In Toward a Rational Society, (1968) translated by J. Shapiro, 81–122. Boston: Beacon Press.
  • Hartz-Karp, J. 2007. “How and Why Deliberative Democracy Enables Co-intelligence and Brings Wisdom to Governance.” Journal of Public Deliberation 3 (1): 1–9.
  • Hennen, L. 1999. “ Participatory Technology Assessment: A Response to Technical Modernity?” Edited by S. Joss, Special issue on public participation in science and technology. Science and Public Policy 26 (5).
  • Hunyadi, M. 2004. Les usages de la precaution. Foreword J.-P. Dupuy. Paris: Seuil.
  • Jonas, H. 1984. The Imperative of Responsibility: In Search of Ethics for the Technological Age (1979). Translated by Hans Jonas and David Herr. Chicago: Chicago University Press.
  • Jonsen, A. R., and S. Toulmin. 1988. The Abuse of Casuistry: A History of Moral Reasoning. Berkeley: University of California Press.
  • Joss, S., and A. Brownlea. 1999. “Considering the Concept of Procedural Justice for Public Policy – and Decision-Making in Science and Technology Dossier.” Special Issue on Public Participation in Science and Technology. Science and Public Policy 26 (5): 321–330. doi: 10.3152/147154399781782347
  • Klüver, L. 2000. “ Project Management. A Matter of Ethics and Robust Decision.” In EUROPTA, Participatory Methods in Technology Assessment and Technology Decision-Making, 87–111. http://www.tekno.dk/pdf/projekter/europta_Report.pdf.
  • Kourilsky, P., and G. Viney. 2000. Le principe de precaution. Paris: La Documentation Française.
  • Lee, R. G., and J. Petts. 2013. “Adaptive Governance for Responsible Innovation.” In Managing the Responsible Emergence of Science and Innovation in Society, edited by R. Owen, J. Bessant, and M. Heintz, 143–164. New York: Wiley.
  • Le Goff, A. 2009. Démocratie délibérative et démocratie de contestation. Repenser l’engagement civique entre républicanisme et théorie critique. Université de Paris Ouest-Nanterre.
  • Lindell, M. 2011. “ Same but Different – Similarities and Differences in the Implementation of Deliberative Mini-publics.” ECPR General Conference, Section “Democratic Innovations in Europe – a Comparative Perspective”, August 24–27.
  • Luskin, R. C., J. S. Fishkin, and R. Jowell. 2002. “Considered Opinions: Deliberative Polling in Britain.” British Journal of Political Science 32: 455–487. doi: 10.1017/S0007123402000194
  • Luskin, R. C., I. O’Flynn, J. S. Fishkin, and D. Russel. 2014. “Deliberating Across Deep Divides.” Political Studies 62: 116–135. doi: 10.1111/j.1467-9248.2012.01005.x
  • Mackie, G. 2006. “Does Democratic Deliberation Change Minds?” Politics, Philosophy & Economics 5: 279–303. doi: 10.1177/1470594X06068301
  • Mansbridge, J., J. Bohman, S. Chambers, D. Estlund, A. Föllesdal, A. Fung, C. Lafont, B. Manin, and J. L. Martí. 2010. “The Place of Self-interest and the Role of Power in Deliberative Democracy.” The Journal of Political Philosophy 18: 64–100. doi: 10.1111/j.1467-9760.2009.00344.x
  • Mansbridge, J., J. Hartz-Karp, M. Amengual, and J. Gastil. 2006. “Norms of Deliberation: An Inductive Study.” Journal of Public Deliberation 2: 1–47.
  • Mitcham, C. 2003. “Co-responsibility for Research Integrity.” Science and Engineering Ethics 9 (2): 273–290. doi: 10.1007/s11948-003-0014-0
  • National Consultative Ethics Committee for Health and Life Sciences (CCNE). 2014. Avis N° 122, Recours aux techniques biomédicales en vue de “neuro-amélioration” chez la personne non malade: enjeux éthique, 12 février. http://www.ccne-ethique.fr/sites/default/files/publications/ccne.avis_ndeg122.pdf.
  • Neblo, M. A. 2007. “ Change for the Better? Linking the Mechanisms of Deliberative Opinion Change to Normative Theory.” Accessed December 12, 2016. https://scholar.google.com/citations?view_op=view_citation&hl=en&user=XJxA_KIAAAAJ&citation_for_view=XJxA_KIAAAAJ:zYLM7Y9cAGgC.
  • Obama, B. 2006. The Audacity of Hope: Thoughts on Reclaiming the American Dream. New York: Crown.
  • Owen, R., J. Bessant, and M. Heintz. 2013. Responsible Innovation: Managing the Responsible Emergence of Science and Innovation in Society. New York: Wiley.
  • Owen, R., J. Stilgoe, P. Macnaghten, M. Gorman, E. Fisher, and D. Guston. 2013. “A Framework for Responsible Innovation.” In Managing the Responsible Emergence of Science and Innovation in Society. Edited by R. Owen, J. Bessant, and M. Heintz, 27–50. New York: Wiley.
  • Parkinson, J., and J. Mansbridge. 2012. Deliberative Systems. Deliberative Democracy at the Large Scale. Cambridge: Cambridge University Press.
  • Pellé, S., and B. Reber. 2015. “Responsible Innovation in the Light of Moral Responsibility.” Special issue on responsible innovation in the private sector. Journal on Chain and Network Science 15 (2): 107–117. doi: 10.3920/JCNS2014.x017
  • Pellé, S., and B. Reber. 2016. From Ethical Review to Responsible Research and Innovation, 314–319. London: ISTE.
  • Reber, B. 2005a. “Technologies et débat démocratique en Europe. De la participation à l’évaluation pluraliste.” Revue Française de Science Politique 55 (5–6): 811–833. doi: 10.3917/rfsp.555.0811
  • Reber, B. 2005b. “Public Evaluation and New Rules for the ‘Human Park’.” In Making Things Public. Atmospheres of Democracy, edited by B. Latour and P. Weibel, 314–319. Cambridge: MIT.
  • Reber, B. 2006a. “Pluralisme moral: les valeurs, les croyances et les théories morales.” Archives de philosophie du droit 49: 21–46.
  • Reber, B. 2006b. “Technology Assessment as Policy Analysis: From Expert Advice to Participatory Approaches.” In Handbook of Public Policy Analysis. Theory, Politics and Methods. Public Administration and Public Policy Series, edited by F. Fischer, G. Miller, and M. Sidney, Vol. 125, 493–512. New York: Rutgers University.
  • Reber, B. 2010. “La Valutazione partecipata della technologia. Una promessa troppo difficile da mantenere?” In Innovazioni in corso. Il dibattito sulle nanotecnologie fra diritto, etica e società, edited by S. Arnaldi and A. Lorenzet, 325–349. Roma: Il Mulino.
  • Reber, B. 2011a. La démocratie génétiquement modifiée. Sociologies éthiques de l’évaluation des technologies controversies. Collection Bioéthique critique. Québec: Presses de l’Université de Laval.
  • Reber, B. 2011b. “ Argumenter et délibérer entre éthique et politique.” In Reber B. (dir.), Vertus et limites de la démocratie deliberative. Archives de Philosophie 4, avril-juin: 289–303.
  • Reber, B. 2013. “ Governance Between Precaution and Pluralism.” Special Issue, Interpreting Environmental Responsibility, International Social Science Journal 211/212 March–June: 75-87. Oxford: Wiley/Blackwell.
  • Reber, B. 2016a. Precautionary Principle, Pluralism, Deliberation. Science and Ethics. London: ISTE. In French. La délibération des meilleurs des mondes. Entre précaution et pluralisme. London: ISTE, 2017.
  • Reber, B. 2016b. “ Sens des responsabilités dans la gouvernance climatique.” In Reber B. (dir.), Ethique et gouvernance du climat. Revue de Métaphysique et de Morale, Vol. 1, 103–117. Paris: Presses Universitaires de France.
  • Renn, O., T. Webler, and P. Wiedermann. 1995. Fairness and Competence in Citizen Participation: Evaluating Models for Environmental Discourse. Boston, MA: Kluwer Academic Press.
  • Smith, G., and C. Wales. 2000. “Citizens’ Juries and Deliberative Democracy.” Political Studies 48: 51–63. doi: 10.1111/1467-9248.00250
  • Stahl, B. C., G. Eden, and M. Jirotka. 2013. “Responsible Research and Innovation in Information and Communication Technology: Identifying and Engaging with the Ethical Implications.” In Managing the Responsible Emergence of Science and Innovation in Society, edited by R. Owen, J. Bessant, and M. Heintz, 199–218. New York: Wiley.
  • Steiner, J. 2012. The Foundations of Deliberative Democracy. Empirical Research and Normative Implications. Cambridge: Cambridge University Press.
  • Steiner, J., A. Bächtiger, M. Spörndli, and M. R. Steenbergen. 2004. Deliberative Politics in Action. Analysing Parliamentary Discourse. Cambridge: Cambridge University Press.
  • Stern, P. C., and H. Fineberg, ed. 1996. Understanding Risk: Informing Decisions in a Democratic Society. Washington, DC: National Academy Press.
  • Stilgoe, J., R. Owen, and P. Macnaghten. 2013. “Developing a Framework for Responsible Innovation.” Research Policy 42: 1568–1580. doi: 10.1016/j.respol.2013.05.008
  • Stirling, A. 2005. “Opening Up or Closing Down? Analysis, Participation and Power in the Social Appraisal of Technology.” In Science and Citizens: Globalization and the Challenge of Engagement (Claiming Citizenship), edited by M. Leach, I. Scoones, and B. Wynne, 218–231. London: Zed.
  • Stirling, A. 2008. “‘Opening Up’ and ‘Closing Down’: Power Participation, and Pluralism in the Social Appraisal of Technology.” Science Technology and Human Values 33: 262–294. doi: 10.1177/0162243907311265
  • Stirling, A., O. Renn, and A. Klinke. 1999. On Science and Precaution in the Management of Technological Risk. Sevilla: European Commission – JRC Institute Prospective Technological Studies.
  • Sunstein, C. R. 2005. Laws of Fear. Beyond the Precautionary Principle. Cambridge, MA: Cambridge University Press.
  • Sykes, K., and P. Macnaghten. 2013. “Responsible Innovation – Opening Up Dialogue and Debate.” In Managing the Responsible Emergence of Science and Innovation in Society. Edited by R. Owen, J. Bessant, and M. Heintz, 85–107. New York: Wiley.
  • von Schomberg, R. 2013. “Vision of Responsible Research and Innovation.” In Managing the Responsible Emergence of Science and Innovation in Society, edited by R. Owen, J. Bessant, and M. Heintz, 51–74. New York: Wiley.
  • Webler, T., H. Kastenholz, and O. Renn. 1995. “Public Participation in Impact Assessment: A Social Learning Perspective.” Environmental Impact Assessment Review 15 (5): 443–464. doi: 10.1016/0195-9255(95)00043-E
  • Wynne, B. 1991. “Knowledges in Context.” Science, Technology & Human Values 15 (1): 111–121. doi: 10.1177/016224399101600108
  • Wynne, B. 1992. “Public Understanding of Science Research: New Horizons or Hall of Mirrors?” Public Understanding of Science 1 (1): 37–43. doi: 10.1088/0963-6625/1/1/008
  • Wynne, B. 1993. “Public Uptake of Science: A Case for Institutional Reflexivity.” Public Understanding of Science 2 (4): 321–337. doi: 10.1088/0963-6625/2/4/003
