Discussion Paper

Responsible innovation as a critique of technology assessment

Harro van Lente, Tsjalling Swierstra & Pierre-Benoit Joly
Pages 254-261 | Received 20 Dec 2016, Accepted 01 May 2017, Published online: 16 May 2017

ABSTRACT

The notion of ‘responsible innovation’ has become fashionable amongst policy-makers and knowledge institutes. In the new Horizon 2020 calls of the EU, ‘responsible research and innovation’ (RRI) figures prominently as a condition and an aim in itself. The rise of RRI shows considerable overlap with the aims, philosophies and practices of Technology Assessment (TA). The overlap, though, is not perfect, and this raises questions about how RRI relates to TA. While it is plausible to interpret the relationship as RRI being a sequel to TA ambitions, we explore an alternative interpretation: RRI as a critique of TA.

Histories of TA and RRI

Technology Assessment (TA) has a history of about five decades. It is a set of philosophies, practices and approaches that have evolved over time. The initial attempts, exemplified by the launch of the US Office of Technology Assessment in 1972, were to make statements about the future performance of technologies in order to assess their impact on society. The term ‘impact’ invokes the metaphor of an external object that unsettles a more or less stable substrate. The idea was that it was possible and useful to map the consequences of a future technology along all kinds of relevant dimensions, such as employment, industrial structure, health or economic competition. In essence, it was a cost–benefit approach, supported by probabilistic methods such as risk calculations and decision theory. The approach also assumed that experts (and experts only) could make assessments of impact.

On various accounts these assumptions have come to be challenged by subsequent TA practitioners (Joss and Bellucci 2002; Schot and Rip 1997). The idea that technological developments can be more or less predicted by extrapolation or other means was increasingly recognized as over-simplistic: the notion of predictability does not align with a deeper understanding of the non-linear and indeterminate processes of research and innovation. Meanwhile, others have questioned the legitimacy of experts’ knowledge for deciding what is at stake and how this should be weighed against other values.

Later versions of TA stressed the importance of other sources of knowledge and sought to include stakeholders in order to accommodate their perspectives and values. This counteracts the notion that experts have privileged access to good judgement and sound policies. The basic idea is that the participation of stakeholders warrants a more democratic, and thus better, policy and/or innovation process. The risk here is identifying with the particular interests of the stakeholders, rendering TA into politics by other means.

So-called Constructive TA (CTA) also embraces stakeholder inclusion, but conceptualizes the innovation trajectory as a series of decisions which can be improved by including more factors and actors. Its perspective is to optimize the design of new technologies with a view to both technical and societal considerations. The merit is not so much a more democratic process, but an improved process of co-construction: ‘a better technology in a better society’ (Schot and Rip 1997).

Responsible research and innovation (RRI), in contrast, does not yet have a history of strands, approaches, institutional forms and experiences (Rip 2014; Zwart, Landeweerd, and Van Rooij 2014). It is thus more difficult to flesh out its philosophies and assumptions. Yet its career through the realms of national and European policy is impressive and already provides some material to reflect on. Clearly, it is an umbrella term which connects different interests and viewpoints; there are different streams (Owen, Bessant, and Heintz 2013; Von Schomberg 2011). The emergence of RRI as a new flag has often been combined with a stress on ‘Grand Challenges’: societal directions that appear to be desirable, such as healthy aging or sustainability. One stream stresses the ethical aspect of new technologies: they do not merely produce new risks and benefits, they alter the symbolic or moral order as well. Such changes are less tangible, yet profound, and require additional attention and reflection. The idea is that reflection on research and innovation should incorporate normative ideals.

Table 1 summarizes this very sketchy characterization. It lists salient differences between various generations of TA and RRI. The last column points to the possible perversities that any approach will have. A focus on expert decision (TA, old) brings the risk of marginalizing other viewpoints. The ideal of stakeholder inclusion (TA, new) is vulnerable to aligning too closely with stakeholders’ interests. The notion of co-productive design (CTA) may favour the agenda of technologists. The perversity of RRI may be its role in securing the fate of innovations: as long as they can be marked as ‘responsible’, their uptake in firms, economic sectors and society at large can be expected to be successful.

Table 1. Assumptions and perversities of TA and RRI.

This sketch reinforces the question about the relationship between RRI and TA. Amongst TA practitioners it is tempting to see RRI as a next step of TA, or even as the same ambition in other terms. For example, the PACITA Project Manifesto, issued during the second Parliaments and Civil Society in Technology Assessment conference (Berlin, February 2015), voices this interpretation explicitly:

Responsible Research and Innovation has shaped the last year’s policy discourse in Europe related to the societal role of research and innovation. It has given key concepts in TA, such as participation, forward-thinking, reflexivity and policy action, greater focus. TA can and should be a key carrier of the concept and play a light-house role in RRI.

In this discussion paper we do not intend to discuss the veracity of this interpretation. Instead, we suggest using the emergence of RRI as an opportunity to reflect on the ambitions of TA and its possible blind spots. In this sense, we propose the thought experiment of considering RRI as a critique of TA. We pursue this thought on two accounts: RRI as a re-appreciation of ethical deliberation, and RRI as highlighting the ambiguous consultation of stakeholders. While both of these matters have been addressed by TA discourse and practices, we contend that RRI, in its inchoate and aspirational forms, represents an opportunity to revisit and reappraise these discourses and practices.

Ambiguities and ethics

Following this thought experiment, we can argue that what makes RRI stand out as genuinely different from previous forms of TA is that RRI takes a broader set of values seriously and, partly as a result, focuses more on morally ambiguous situations and moral controversies.

The various forms of TA are part of a larger accountability system. The rise of TA accompanies the growing societal awareness that science and technology bring essential gifts, but often have a darker side too, in the form of consequences that are evaluated as undesirable. In this context, TA’s effort to map a technology’s impacts plays an essential role in cost–benefit deliberations, and in societies’ attempts to hold certain parties accountable for these undesirable consequences.

Now, the category ‘undesirable consequence’ is not only extremely vague and encompassing; it is also bound to be contested. As such, it is hard to base legislation or policy on the concept. Yet, in practice, we see that the broad category is applied in much more specific ways. One of us (Swierstra 2015; Swierstra and te Molder 2012) has identified three features that help to transform the vague and unruly concept of undesirability into a category sufficiently objective or ‘hard’ to be taken seriously by technology and policy actors. A first important marker of such objectivity is quantifiability: if one wants to allocate responsibility for an undesirable consequence, it helps if one can attach a numerical measure to it.

A second marker is that the undesirability is relatively uncontroversial. The less consensus there is that an impact is in fact undesirable, the harder it is to create legitimacy for policy or legislative measures. In a pluralist, liberal society the only hard form of undesirability is constituted by clear instances of harm. We may disagree on what constitutes the good life, but most of us can easily agree that physical harm and illness are highly undesirable. In that sense, physical harm is often fairly ‘objective’, and by extension so is harm inflicted on the environment, as this harm will sooner or later reflect back on our own well-being in a negative way. In a liberal market economy, what is deemed desirable is to a large extent outsourced to the ‘objective’ forces of the free market, where individual consumers establish what is de facto desirable by translating their individual preferences into consumer choices. The limited role left to the state is then essentially to avoid undesirable consequences. This is why Karl Popper, in The Open Society and Its Enemies, selected the minimization of harm, rather than the maximization of the much more controversial and intangible happiness (or ‘utility’), as the sole legitimate goal of democratic policy-making: ‘Those who suffer can judge for themselves, and the others can hardly deny that they would not like to change places’ (159).

And finally, there should be a clear causal link between the technology, an identifiable agent and the undesirable (harmful) consequence. This stress on a clear causal link is ultimately grounded in an instrumentalist conception of technologies as passive, docile tools. If wielded wisely, and if not malfunctioning, they are supposed to bring about desirable goods and values. In reality, however, these two conditions are not always met: sometimes people misuse the available tools, and sometimes flaws in the design or manufacture of the tools cause undesirable consequences. In both cases it is clear where to allocate liability: either with those misusing the technology or with those who failed to produce a reliable tool.

These markers of objective or ‘hard’ impacts of technology crystallized in the context of an accountability regime that puts its trust in (preferably scientifically generated) numbers, in a thin morality that revolves around clear and non-controversial instances of harm, and that looks for clear causal connections between an identifiable agent and an act so that it can hold someone unequivocally accountable. (There is a good rule of thumb for assessing whether a value is hard or not: it is hard if actors do not refer to it as an ‘ethical’ or ‘moral’ value. In most discourses, labels like ‘ethical’ and ‘moral’ are primarily used to communicate that the concerns so labelled are soft, and therefore private and irrational rather than subject matter for public reason.)

It is this feature, being considered a subject matter for public reason, that sets ‘hard’ impacts apart from ‘soft’ impacts. Soft impacts are considered too fuzzy and subjective to merit the serious attention of technology and policy actors. They are seen to belong to the private sphere, as matters of individual choice and personal discretion. For this same reason, they tend to receive very little attention in the various forms of TA. Our contention is that RRI – as a bureaucratic entity – and, more generally, responsible innovation (RI) – as a scholarly aspiration (Grinbaum and Groves 2013) – differ importantly from previous forms of TA in that they give more attention to these presumably soft impacts.

So, we argue that TA, even in its participatory and anticipatory forms, was almost exclusively directed at ‘hard’ impacts, whereas RI and RRI are broader in the sense that they also give room to ‘soft’ impacts. Why did TA ignore soft impacts? One possible explanation would be that TA practitioners were not aware of what we have here labelled soft impacts. That explanation, however, is untenable: from Plato onwards, people have worried about what we now label soft impacts: consequences for social harmony and order, for our identity, for the good life, for honour, for aesthetics, for gender relations, for mental growth, for authenticity and so forth. A more plausible explanation lies with the intended recipients of TA reports: they were not interested in soft impacts, which they regarded as irrelevant to decisions on technology and policy development. And the main reason, we hypothesize, is that technology and policy actors deal badly with moral ambiguity. In response, TA has tended to avoid moral ambiguity in its reports.

While TA tends to avoid moral ambiguity, it does focus on moral ambivalence: the situation in which actors have to choose between two or more incompatible values. TA has paid attention to such ambivalence, for example by pointing to conflicts between economic and environmental values, or by articulating trade-offs between health risks and efficiency standards. These are moral values, and as such TA does not shun explicating moral conflicts. However, TA typically restricts itself to conflicts between well-established, public values that are linked to the no-harm principle. Conflicts involving values that are less well entrenched in the spheres of technology and policy actors, like friendship or peace of mind, are conspicuously absent. Thus, TA does deal with moral ambivalence, but only in the case of conflicts between ‘hard’ values like health, safety, environment, economy and so forth.

What TA typically does not do, however, is pay elaborate attention to moral ambiguity, that is, a situation in which we are uncertain about moral values: because we do not know whether they are moral in the sense of ‘ideally collectively binding’ (as opposed to politics, which deals with decisions that are de facto collectively binding), because we do not know how to translate general values into concrete actions and choices, or because we do not know which moral values apply to the situation. In this sense, questions regarding our health, safety and similar values lack moral ambiguity, whereas questions about the importance of ‘naturalness’, about whether modern life is becoming too accelerated, or about what real friendship is are extremely ambiguous.

As far as ethics is concerned, RI and RRI are more encompassing than TA in two respects: they also discuss less well-entrenched values when addressing moral ambivalence, and they give more attention to moral ambiguity. How to explain that policy actors are growing increasingly open to ‘wide’ moral ambivalence, and to moral ambiguity? It is impossible to answer this question comprehensively in the limited context of this essay, but one suggestion stands out: the shift from a reactive type of ethics focused on harm avoidance, which leaves the determination of what is desirable to the market, to a more pro-active, aspirational type of ethics, as a response to the 2008 economic crisis. The EU has to legitimize its funding of scientific and technological research with public money, and to do so it had to formulate positive goals – rather than simply making a profit, which is the main driver for private firms. These goals found expression in the form of so-called Grand Societal Challenges. It is this aspirational ethic that potentially creates more space for wide moral ambivalence, and for moral ambiguity.

Stakeholder participation and orientation

A second way in which RRI can be seen as a critique of TA relates to the role of stakeholders. Like most current TA approaches, papers on RRI generally refer to stakeholder engagement in research activities and/or in innovation processes as one of the key dimensions of RRI. For example, Von Schomberg (2013) points out the need for TA and foresight in RRI, together with multi-stakeholder involvement and deliberative mechanisms; Stilgoe, Owen, and Macnaghten (2013) identify inclusion as one of the four dimensions of RI, together with anticipation, reflexivity and responsiveness.

Yet, the role of stakeholders as sources of knowledge and of legitimation appears to follow a different story. We allow ourselves a look back at stakeholder involvement in TA practices to investigate this possibility: while the mantra of participation in RRI has the same origins as in TA, we can draw useful insights from the historical experience of TA. The original model of TA, institutionalized in the US through the creation of the Office of Technology Assessment in 1972, did not include an emphasis on public or stakeholder participation. It was generally assumed that experts’ knowledge, dedicated to the study of intended and unintended impacts of new technologies, would be sufficient to provide political representatives with useful tools to govern science and technology. When parliamentary TA was institutionalized in different European countries, some countries introduced what has been called the ‘participative turn’ of TA. Denmark and the Netherlands have been at the forefront and have implemented public participation with some important differences. The Danish Board of Technology (DBT) is well known for having invented the ‘consensus conference’, the model of public participation that has enjoyed the most widespread international diffusion. Consensus conferences are designed to mediate the relation between scientific expertise and public policies. One of their key characteristics is that this mediation is public. Hence, drawing on ideas formulated and developed by Rawls and Habermas, public participation is designed as a way to conduct a dialogue on science and technology in the public sphere, in order to elicit the public will. The Netherlands Office of Technology Assessment (NOTA – which later became the Rathenau Institute) experimented with models of public participation that had a slightly different inspiration, namely deliberative forums constituted as ‘mini-publics’, which are not organized around a panel of lay people (Schot and Rip 1997). They are designed to spur interactions between different stakeholders, including researchers and professionals involved in the subject area. The basic belief is that this hybrid deliberation may improve the production of knowledge and provide relevant insights into useful and societally relevant orientations of research and innovation processes.

The distinction between the Danish (public debate) and Dutch (co-production) examples is important from an analytical point of view. According to Callon, Lascoumes, and Barthe (2009), the co-production model rests on the active work of concerned groups who problematize and challenge the production of knowledge and develop collaborations with laboratories to explore solutions to their problems. In such a model, the identity (and the objectives) of the concerned groups may evolve in the process of collaboration – patients’ associations are the emblematic example here. Research objectives and societal needs are co-constructed during the process of interaction.

Both models of stakeholder involvement have their strengths and weaknesses. At first sight, one could argue that the co-production model of public participation offers more advantages and fewer risks than the public debate model. One could also argue that interactions between stakeholders and researchers at bench level (so to speak) are more productive and better able to contribute to the reflexivity of researchers. However, this appraisal may be challenged on different grounds. First, the scale and the scope of such stakeholder interactions are intrinsically limited. At the scale of the project or of the laboratory, major issues related to power relations and to the regulation of research activities may be overlooked or even dismissed. While opening up potentially includes discussions of the broader frame within which research is designed (the future social worlds related to broader interactions between science, technology and society, etc.), such discussions may be limited insofar as they are considered irrelevant at this scale. This has two implications: first, an inability to raise important questions, and second, an overestimation of the potential of reflexivity to transform the behaviour and norms of researchers.

Second, stakeholder engagement raises the issue of representation. The devices used to facilitate such engagement do not necessarily aim at including actors who represent ‘the whole society’ but are limited to those who have a more immediate stake in the issues discussed. However, the question of whether the invited actors have the same interests and concerns as those who are not present is a difficult one. The criterion of diversity among the groups included is interesting but of limited practical utility, especially for new and emerging technologies, because their publics are not yet constituted (to refer to Dewey). This may limit the legitimacy of these mini-group exercises when public funds or public decisions are involved.

The third limitation of stakeholder involvement is closely related to the previous one. Stakeholder dialogue may work in an idealized flat world free from strong power asymmetries, a world of distributed governance. In such a vision, the roles of the State and government bodies are limited: to foster interactions and learning processes, to empower actors, and to create conditions that favour innovation (whatever direction it takes). This conception of the polity, closely related to neo-liberalism, is currently challenged in many respects. There are many reasons to ‘bring the State back in’, including the need to address Grand Challenges, which appear as a cornerstone of current research and innovation policies. As Grand Challenges are one of the anchor points of RRI (the responsibility of research and innovation is to contribute to solutions of major problems), stakeholder engagement should not be viewed as a substitute for government regulation but as a key piece in the government of innovation.

Therefore, these limitations of stakeholder involvement should not be viewed as a definitive rejection of TA, but rather as a critique that becomes apparent in the frame of RRI. Moreover, this critique provides new tasks for RRI as well. First, the ‘public debate’ and ‘co-production’ models are complementary and instrumental for dealing with issues at various scales and for articulating local dynamics with public policy in the making. Second, it is important that such exercises have the ability to produce knowledge on the web of power relations and strategies within which they take place. And third, these exercises should be designed in a way that allows them to deal seriously with the directionality of innovation, and not under the implicit assumption that technological innovation is a good as such.

To conclude

In this discussion paper, we have elaborated on the possible interpretation of RRI as a critique of TA. Based on short histories of the two, we followed two core issues in RRI that, with hindsight, seem to have been problematic in TA: the role of normativity and the role of stakeholders. In this line of reasoning, TA can be said to neglect moral ambiguity and to downplay the desired direction of innovation. Our interpretation resonates with the diagnosis by Daimer, Hufnagl, and Warnke (2012) of the evolution of innovation policies over the last decades. While market failure has been a general rationale for policy-making since the 1970s, and system failure was added as a second rationale, they note the rise of ‘orientational failure’ as an additional rationale for policy interventions. In this sense, RRI, with its aspirational logic of Grand Challenges, is a response to the orientational failure of TA, and could be interpreted as an urge to include normative concerns about the societal goals of innovation.

Disclosure statement

No potential conflict of interest was reported by the authors.

Notes on contributors

Harro van Lente is full professor of Science and Technology Studies at the Faculty of Arts and Social Sciences, Maastricht University. He is one of the founding fathers of the Sociology of Expectations, which studies the dynamics of expectations in the development of technology. Van Lente has published widely on technology dynamics, innovation policy and philosophy of technology.

Tsjalling Swierstra is full professor of Philosophy at the Faculty of Arts and Social Sciences, Maastricht University. He has published on the ethics and politics of new and emerging science and technology (NEST). The philosophical problematic here is how to make anticipatory knowledge regarding ‘technomoral change’ available to technologists, policy-makers and the larger public.

Pierre-Benoit Joly, economist and sociologist, is Director of Research at the National Institute of Agronomic Research (INRA) in France. His research activities are focused on the governance of collective risks, socio-technical controversies, the use of scientific advice in public decision-making and the forms of public participation in scientific activities. He lectures at Sciences Po Paris.

References

  • Callon, M., P. Lascoumes, and Y. Barthe. 2009. Acting in an Uncertain World: An Essay on Technical Democracy. Cambridge, MA: MIT Press.
  • Daimer, S., M. Hufnagl, and P. Warnke. 2012. “Challenge-Oriented Policy-Making and Innovation Systems Theory: Reconsidering Systemic Instruments.” In Innovation System Revisited – Experiences from 40 Years of Fraunhofer ISI Research, edited by Fraunhofer ISI, 217–234. Stuttgart: Fraunhofer Verlag.
  • Grinbaum, A., and C. Groves. 2013. “What is ‘Responsible’ About Responsible Innovation? Understanding the Ethical Issues.” In Responsible Innovation: Managing the Responsible Emergence of Science and Innovation in Society, edited by R. Owen, J. Bessant, and M. Heintz, 119–142. Chichester: John Wiley & Sons.
  • Joss, S., and S. Bellucci, eds. 2002. Participatory Technology Assessment: European Perspectives. London: Centre for the Study of Democracy.
  • Owen, R., J. Bessant, and M. Heintz, eds. 2013. Responsible Innovation. Managing the Responsible Emergence of Science and Innovation in Society. Chichester: John Wiley & Sons.
  • Rip, A. 2014. “The Past and Future of RRI.” Life Sciences, Society and Policy 10 (17). doi:10.1186/s40504-014-0017-4.
  • Schot, J., and A. Rip. 1997. “The Past and Future of Constructive Technology Assessment.” Technological Forecasting & Social Change 54: 251–268. doi:10.1016/S0040-1625(96)00180-1.
  • Stilgoe, J., R. Owen, and P. Macnaghten. 2013. “Developing a Framework for Responsible Innovation.” Research Policy 42 (9): 1568–1580. doi:10.1016/j.respol.2013.05.008.
  • Swierstra, T. 2015. “Identifying the Normative Challenges Posed by Technology’s ‘Soft’ Impacts.” Etikk i praksis. Nordic Journal of Applied Ethics 9 (1): 5–20. doi:10.5324/eip.v9i1.1838.
  • Swierstra, T., and H. te Molder. 2012. “Risk and Soft Impacts.” In Handbook of Risk Theory: Epistemology, Decision Theory, Ethics, and Social Implications of Risk, edited by S. Roeser, R. Hillerbrand, P. Sandin, and M. Peterson, 1050–1066. Dordrecht: Springer Academic.
  • Von Schomberg, R., ed. 2011. Towards Responsible Research and Innovation in the Information and Communication Technologies and Security Technologies Fields. Luxembourg: Publications Office of the European Union.
  • Von Schomberg, R. 2013. “A Vision of Responsible Research and Innovation.” In Responsible Innovation: Managing the Responsible Emergence of Science and Innovation in Society, edited by R. Owen, J. Bessant, and M. Heintz, 51–74. Chichester: Wiley.
  • Zwart, H., L. Landeweerd, and A. van Rooij. 2014. “Adapt or Perish? Assessing the Recent Shift in the European Research Funding Arena from ‘ELSA’ to ‘RRI’.” Life Sciences, Society and Policy 10 (11). doi:10.1186/s40504-014-0011-x.
