
The moral psychology of value sensitive design: the methodological issues of moral intuitions for responsible innovation

Pages 186-200 | Received 29 Apr 2017, Accepted 14 Mar 2018, Published online: 02 Apr 2018

ABSTRACT

This paper argues that although moral intuitions are insufficient for making judgments on new technological innovations, they maintain great utility for informing responsible innovation. To do this, this paper employs the Value Sensitive Design (VSD) methodology as an illustrative example of how stakeholder values can be better distilled to inform responsible innovation. Further, it is argued that moral intuitions are necessary for determining the stakeholder values required for the design of responsible technologies. This argument is supported by the claim that the moral intuitions of stakeholders allow designers to conceptualize stakeholder values and incorporate them into the early phases of design. It is concluded that design-for-values (DFV) frameworks like the VSD methodology can remain potent if developers adopt heuristic tools to diminish the influence of cognitive biases, thus strengthening the reliability of moral intuitions.

It has long been a contention in the sociology of science that technological artifacts are embedded with values, whether those values are explicitly designed into them or not (Magnani 2013; Pinch and Bijker 1987; van Wynsberghe and Robbins 2014; Winner 2003). As such, technological artifacts become the subject of ethical discourses as the values embedded in them become of political and social import. The issues associated with values in the design of technologies become exacerbated when we consider the transformative nature of emerging technologies (King, Whitaker, and Jones 2011; Lucivero, Swierstra, and Boenink 2011; Roache 2008; Timmermans, Zhao, and van den Hoven 2011). This paper shows how many of the design judgments made regarding emerging technologies are the result not of ethical deliberation but of moral intuition, and that this moral intuition is insufficient to responsibly inform research and innovation.

Since the 1970s, the study of moral psychology has illuminated not only the ways in which individuals think about and use their faculty of moral intuitions but also the extent to which those intuitions map onto the real-world usage of technology. Additionally, empirical work in psychology has shown that a number of cognitive biases may in fact influence individuals’ reasoning processes (e.g. Caviola et al. 2014; Tversky and Kahneman 1974). This influence, and the resultant biased reasoning in moral intuition, is particularly evident when the subject of intuition is unique, convoluted or laden with ideology, as technological innovations are (Caviola et al. 2014; Cosmides and Tooby 1992; Kahan et al. 2013).

It is the purpose of this paper to show how the moral values that are fundamental to design-for-values (DFV)[1] approaches are subject to cognitive biases and that heuristic tools need to be adopted in order to de-bias or otherwise strengthen the credibility of these values. To this end, I draw upon the Value Sensitive Design (VSD) approach, amongst the existent DFV approaches, to illustrate this thesis. VSD is chosen over other design methodologies such as participatory design, universal design and inclusive design because it mandates both that designers account for the explicit values of stakeholders and that they trace how those values map onto the existent ethical literature. Showing how this philosophically informed framework may be contested thus has foundational implications for all DFV approaches that hinge on the incorporation of stakeholder values in technological design. Given that the VSD approach requires designers to investigate the needs and values of stakeholders, the moral intuition of those stakeholders becomes of particular importance. Hence, given arguments against the favourability of moral intuitions towards novel technologies, this paper concludes by claiming that DFV frameworks can, in fact, remain protected from these pitfalls if designers approach their investigations with a toolkit of simple heuristics that can reduce the influence of bias on moral intuition.

To the best of my knowledge, this paper is the first to: (1) evaluate the merits of a DFV approach from the perspective of moral psychological theory, and (2) restructure such an approach so as to make it more adaptable to empirical evidence stemming from moral psychology. Prior literature on VSD has focused on methodology (Cummings 2006; Friedman et al. 2013; Friedman and Kahn 2002; van den Hoven and Weckert 2008; van den Hoven, Lokhorst, and van de Poel 2012), on applications to current technological innovations (Briggs and Thomas 2015; Correljé et al. 2015; van den Hoven 2007) and on applications to novel technologies (Dechesne, Warnier, and van den Hoven 2013; Friedman 1997; Friedman and Kahn 2000; Timmermans, Zhao, and van den Hoven 2011; van den Hoven 2014; van Wynsberghe 2013, 2016). Although these studies provide useful information, they do not take into account the reliability of all of the constituent parts of the VSD approach, in this case the importance of moral intuition in conceptual investigations. This paper’s application of moral intuition and cognitive biases to converging technologies and VSD is comparatively unique. It is the intent of this paper to help spur continued discussion of some of the issues regarding the design of emerging and converging technologies, such as safety, privacy and autonomy, among others (for a more formalized list of values see, for example, Friedman et al. 2015; Shilton 2013; Timmermans, Zhao, and van den Hoven 2011).

To successfully tackle these arguments, this article is organized into the following sections: the first section gives a more in-depth account of (1) how moral intuitions are commonly understood in ethics and moral psychology, and (2) the strength of moral intuitions when applied to novel technologies, in particular nano-bio-info-cogno (NBIC) technologies. The second section lays out the methodological framework of the VSD approach so that the reader can better understand the position that values, and consequently moral intuitions, have in the theory. The third section draws upon the conclusions of the first to argue that the VSD approach needs to adopt heuristic tools to strengthen its conceptual investigations in determining stakeholder values. The final section sketches the broader theoretical implications of these conclusions as well as potentially fruitful research streams.

1. Moral intuitiveness and cognitive biases

Moral concepts such as deontology (i.e. duty ethics), utility ethics (e.g. utilitarianism, consequentialism) and virtue ethics certainly have a place in conversations about the way we should act as it concerns emerging technologies. However, this paper’s primary focus is the application and utility of moral intuitions, fraught as they are. To this end, I devote the rest of this section to discussing current conceptions of moral intuitiveness as well as some of the barriers that moral judgments inevitably encounter. Although much literature has been devoted to the meta-ethical issues surrounding moral values, imagination and intuition, this paper draws primarily on the earlier work of J. Haidt, W. E. J. Klein and L. Caviola et al., given that they provide the most salient analyses of moral intuitionism in relation to technology and its associated limits.

Although the early history of moral psychology was dominated by rationalist theories of morality, recent decades have seen a shift, given new empirical findings and advances in psychology (Haidt 2001). This change has led to the adoption and exploration of what is now known as intuitionism. Moral intuition is far removed from the theoretical underpinnings of rationalist theory, given that intuitionism is argued to be a process of cognition rather than a rationalization in search of moral truths that then inform moral judgments. Intuitionists argue that the rationalist conception mischaracterizes what actually occurs in the brain: individuals first intuit moral judgments, and only on an ex post facto basis does the agent rationalize the decision (Haidt 2001; Shweder and Haidt 1993). For the most part, agents make automatic value judgments, typically unaware of the cognitive processes that produced them. It is only after being pressed for reasons that agents attempt to give an argumentative rationale for why they arrived at such a judgment, sometimes unconvincingly (Haidt 2001). Haidt (2001) constructs a narrative of safe, consensual incest that invokes a visceral sense of ‘wrongness’, yet agents presented with the story are hard pressed to give reasons why they find it morally abhorrent. Likewise, Klein (2016) proposes a variation of the well-known trolley problem that also elicits a feeling of ‘wrongness’. Both examples illustrate how moral intuition works: individuals are presented with cases; they are, in turn, asked to make a value judgment; once pressed for reasons for that judgment, they find difficulty in rationalizing the inclination, thus emphasizing a lack of a priori reasoning (Klein 2016).

Nonetheless, whether or not we take moral intuitionism as a faithful account of how human moral judgments are formed, issues still arise that pose practical concerns. These problems typically manifest in how we apply our moral judgments as products of intuition, primarily because intuition has been shown to be highly susceptible to a number of cognitive biases (Brink 2014; Cushman, Young, and Hauser 2006; Greene 2014; Nichols and Knobe 2007; Waldmann and Dieterich 2007; Woodward and Allman 2007). Such biases can affect the ways in which we value technologies, approach them and interact with them.

In fact, Caviola et al. (2014) explain the potential effects of cognitive biases on the value-perception of cognitive enhancement (CE) technologies. They argue that the polarized nature of the debates surrounding the ethical issues of CE can be explained by the influence of biases on human reasoning, which disposes individuals’ moral intuition to intuit in particular ways. They conclude not only that cognitive biases have the potential to lead people to make irrational value judgments about CE but that they are more likely to do so in a negative capacity. Likewise, these biases are not exclusive to debates regarding CE but may be just as pervasive in discussions of other transformative technologies.

Klein (2016) takes a different approach to our application of moral intuitions to novel technologies. He argues that, given our evolutionary history and the rate of technological innovation, humans have failed to acquire moral intuitions capable of sufficiently making value judgments about technology. Because the increasing complexity of these technologies makes their causal chains ambiguous, he argues that we naturally tend to miss or ignore the ethical issues that emerge with them. As a consequence, Klein proposes that moral intuitions, because of their innate failures, must be buttressed with ‘culture substitutes’ that can make complex and novel technologies easier to intuit. Because we are good at making moral intuitions regarding the behavior of other humans, one way to substitute intuition is by employing what he calls a ‘what if it was human’ method. This process involves conceptualizing a novel technology as if its uniqueness were embodied by a person. Speculating about how that person would behave, both publicly and privately, can assist in intuiting the moral value of the technology.
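To make this substitute concrete, the sketch below renders the ‘what if it was human’ method as a small elicitation routine: each behaviour of an artifact is rephrased as a question about a person acting the same way. This is an illustrative reading only, since Klein (2016) presents the method informally; the class and function names are my own, not his.

```python
# Illustrative sketch of Klein's 'what if it was human' culture substitute.
# Klein (2016) describes the method informally, not as an algorithm; the
# names and structure here are hypothetical.

from dataclasses import dataclass

@dataclass
class Technology:
    name: str
    behaviours: list[str]  # what the artifact does, stated as plain verb phrases

def personify(tech: Technology) -> list[str]:
    """Rephrase each behaviour as a question about a person behaving the same
    way, the kind of judgment our intuitions are argued to handle well."""
    return [
        f"Would you trust a person who {b}, both in public and in private?"
        for b in tech.behaviours
    ]

# Hypothetical example: a home voice assistant.
assistant = Technology(
    name="voice assistant",
    behaviours=[
        "listens to every conversation in the home",
        "reports what it hears to its employer",
    ],
)
for question in personify(assistant):
    print(question)
```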

Regardless, the importance of moral intuition as an essential way of understanding human judgment has become a dominant area of discussion in moral psychology. The shift away from the traditional humanist view of humans as rational animals towards one on which most of our moral judgments are products of the unconscious has significant implications for how we view emerging and converging technologies, particularly given the susceptibility of moral intuitions to biases (Klein 2016).

Taking this into account, it is this paper’s contention that although DFV approaches, particularly VSD, claim to be predicated on a foundation of universal moral values, upon closer inspection most of the values they incorporate are intuition-based. Moreover, I contend that objectivity in the determination of values is not necessary; a functionalist, pragmatic approach is thus adopted for the purposes of this paper. The intersubjectivity of moral intuitions, the acknowledgment of this intersubjectivity in DFV approaches and the strengthening of the epistemic status of moral intuitions through the heuristic tools discussed in this paper provide at least one means by which we can more reflexively engage in responsible innovation (RI).

As a consequence, any design framework or methodology that works as a function of stakeholder values must take into account the inherent susceptibility of moral intuitions to cognitive biases. The following section introduces the VSD design methodology. As noted earlier, this paper aims to bolster the applicability of the VSD approach to the responsible innovation of transformative technologies. Because the discussion of values in this paper is not restricted to VSD (that is, the values and the related issues are not VSD-exclusive), the implications drawn here extend beyond VSD.

2. Value sensitive design (VSD)

Value Sensitive Design is an approach to the design of technology that seeks to account for human values during the design phases (Friedman and Kahn 2002). Originating in the domain of Human–Computer Interaction (HCI), VSD has since been developed as a proposed approach to the responsible innovation of many different technologies, such as identity technologies (Briggs and Thomas 2015), energy technologies (Correljé et al. 2015), information and communication technology (ICT) (Dechesne, Warnier, and van den Hoven 2013; Friedman 1997; Friedman et al. 2013; Huldtgren 2014; van den Hoven 2007) and nanotechnology (Timmermans, Zhao, and van den Hoven 2011; van den Hoven 2014).

VSD is one of a host of ‘design-for-values’ approaches to RI that seek to link value integration with RI (see Boenink 2013; Doorn et al. 2014; Fisher et al. 2015; Micheletti and Benetti 2016). These methodologies arose in response to some of the foundational issues debated within RI discourses, primarily those that result from the social construction and co-production of technological artifacts (Foley, Bernstein, and Wiek 2016; see also Pinch and Bijker 1987). First, the infrastructural embeddedness of a technology over time makes modification difficult: by the time the negative impacts of a technology emerge, it may already have enrolled economic and political capital that makes changes challenging (Collingridge 1980; Star 1999). As such, the governance of technological artifacts tends to be ex post facto, that is, retrospective in nature and usually following the ubiquitous production of technologies (Kaiser et al. 2009; Rip 2009; Rip and Van Amerom 2009). Likewise, there is a seeming gap between the enterprise of technological development and societal requirements (Guston and Sarewitz 2002; Sarewitz and Pielke 2007).

DFV frameworks such as VSD were developed in an attempt to address these challenges. They attempt to balance often divergent approaches to the amelioration of these challenges, primarily risk-based and precautionary approaches to design (Alvial-Palavicino 2016; Brey 2012; Brown 2009; Guston 2014; Nordmann 2014; te Kulve and Rip 2011). In doing so, they generally aim to address these issues via design by making foundational the dimensions of anticipation (future values and issues), reflexivity (biases, assumptions, intentions), inclusion (of various stakeholders, including the designers themselves) and responsiveness (recursively operationalizing the former three practices) (Owen et al. 2013; Owen, Macnaghten, and Stilgoe 2012). Each framework, however, operationalizes these dimensions in its own way. This paper employs the VSD methodology because its formalization of these dimensions is overtly explicit, making the evaluation of its value-inclusions simpler to demonstrate and its broader implications less convoluted.

VSD is defined as ‘a theoretically grounded approach to the design of technology that accounts for human values in a principled and comprehensive manner throughout the design process’ (Friedman and Kahn 2002, 1). VSD is grounded on the foundational premise that technologies embody values (they are value-laden) and provides a framework and methodology for assessing the current design of technologies while simultaneously integrating a proactive approach to guide the development of technologies both at the early stages of design and throughout the design process.

All in all, VSD is a methodology designed to be capable of integrating the values of stakeholders during the early design stages so as to guide the technology in a more predictable way, while still allowing the flexibility to account for emerging changes in values and impacts. Individuals and groups that interact directly with the technology in question are considered direct stakeholders, while those in the peripheries are considered indirect. The VSD approach requires that designers account for both direct and indirect stakeholder values in the design phases, the latter of which are typically side-lined during conventional design processes (Friedman and Kahn 2002; Taebi et al. 2014). This is done not only via consultation with the existing literature on what stakeholders value but through stakeholders’ direct enrollment. This means that diverse publics and differing epistemic perspectives are deliberately leveraged in order to more richly enhance the legitimacy and salience of design (cf. Cash et al. 2003; Chilvers 2008; Delgado, Kjølberg, and Wickson 2011).

Additionally, one of the primary concerns of VSD is that issues arise with the application of a technology as a result of the ethical values that society holds in relation to that application (Timmermans, Zhao, and van den Hoven 2011). In acknowledging this conflict, VSD aims to account for the relevant societal values and address potential conflicts during the design stages (Capurro et al. 2015; Friedman and Kahn 2002; Taebi et al. 2014). This means not only designing in the values determined to be most pertinent but also designing out any unwanted values. It also involves an awareness that designers can implicitly embed values given their relatively centralized positions in the design process, for in every case the designers themselves form part of the relevant stakeholder community.

A step-by-step methodology for implementing the VSD framework has already been explicated by Friedman, Kahn, and Borning (2008). As such, I have chosen to forgo rephrasing it, given that the intention of this paper is neither to advocate for the VSD framework over other DFV methodologies nor to provide a full account of the feasibility or applicability of VSD to technology design. Instead, this paper uses the VSD methodology, particularly its emphasis on stakeholder values, to provide a more general discussion of stakeholder value integration in technological design methodologies.

3. The moral psychology of conceptual investigations

What are values? Where do values come from? Which values are socio-culturally unique and which universal? Which values can be integrated into the design of technological innovations? How do we balance apparently conflicting values such as autonomy and security? Should moral values always be given precedence over values that are non-moral? These are some of the questions that the VSD approach traditionally seeks to address in technological design, specifically through the procedures outlined under conceptual investigations, one of the three investigations that compose the tripartite methodology of VSD (Friedman and Kahn 2002).

Conceptual investigations usually involve designers determining how both direct and indirect stakeholders might be affected by the technological innovation they aim to design (for a clear example of how this is carried out, see Friedman, Kahn, and Borning 2008). Initial conceptual investigations may take the form of drawing upon the relevant literature on technologies that are similar or otherwise related to the technology that designers seek to develop. In the case of a nanopharmaceutical drug like a Doctor in a Cell, values such as privacy (medical data collected by the device), informed consent, safety and efficacy can be conceptualized as posing ethical and societal issues (Casci 2004; Lucivero 2012; Timmermans, Zhao, and van den Hoven 2011). The literature that analyses these issues, whether per se or in the context of some other technological innovation, can then be leveraged to better understand what the impacts of those issues may be in relation to nanopharmacy. The next step would be to use the initial results of the conceptual analysis to begin the technical work of designing the system. However, in order to manage this successfully and holistically, the epistemic status of the investigated values needs to be clearly demarcated, as do the means by which that status can be buttressed by drawing upon heuristic tools.
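As an illustration of the kind of output such an initial conceptual investigation produces, the sketch below tabulates the nanopharmaceutical example as stakeholders mapped to implicated values and supporting literature. VSD prescribes no particular data format, so the structure, field names and specific pairings of issues to sources here are hypothetical and serve only to make the stakeholder-value mapping explicit.

```python
# A minimal, hypothetical record of a VSD conceptual investigation for the
# 'Doctor in a Cell' example. VSD itself prescribes no data format; this
# only shows the kind of mapping such an investigation yields.

from dataclasses import dataclass, field

@dataclass
class ValueFinding:
    value: str          # e.g. 'privacy'
    issue: str          # why the value is implicated by the design
    sources: list[str]  # literature identified during conceptual analysis

@dataclass
class Stakeholder:
    name: str
    direct: bool        # True if they interact with the artifact itself
    findings: list[ValueFinding] = field(default_factory=list)

patient = Stakeholder("patient", direct=True, findings=[
    ValueFinding("privacy", "medical data collected by the device",
                 ["Timmermans, Zhao, and van den Hoven 2011"]),
    ValueFinding("informed consent", "autonomous in-body operation",
                 ["Lucivero 2012"]),
    ValueFinding("safety", "efficacy and side effects of in-body dosing",
                 ["Casci 2004"]),
])
insurer = Stakeholder("health insurer", direct=False, findings=[
    ValueFinding("privacy", "secondary use of collected medical data",
                 ["Timmermans, Zhao, and van den Hoven 2011"]),
])
```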

3.1. The epistemology of conceptual analyses

The epistemic status of the values gathered by designers during conceptual investigations seems, in light of the theories proposed by moral psychology, dubious.[2] If it is assumed that value judgments are in fact the result of a cognitive process and that reasoning is an ex post facto activity, then the susceptibility of cognition to various cognitive biases makes the resulting value judgments less credible. Yet value judgments are a critical part of the VSD methodology. How can we reconcile this apparent lack of value-grounding in the VSD approach?

van Wynsberghe’s (2013) response to this realization was to conceptualize the origin of value in the VSD framework. She attempted to add a level of normativity to the ethical valuations that are essential to the conceptual investigations of VSD: her application of VSD to the design of care robots was argued to be grounded in the existent values of care. Two separate arguments can be made regarding this manoeuvre: (1) the VSD methodology is meant to be augmented in ways that best integrate it with current practices and activities, and thus her augmentation was simply a spirited instantiation of the VSD philosophy; and (2) the VSD methodology is grounded in normative ethics by taking as its foundation ‘moral value such as freedom, equality, trust, autonomy or privacy justice [that] is facilitated or constrained by technology’ (Friedman 1997; van den Hoven 2013, 137). Hence, van Wynsberghe’s adoption of values specific to care could be viewed simply as a field-specific interpretation of the grounding values that VSD already holds as foundational, making her move nothing more than a context-sensitive focusing of existing values.

The spirit of this approach, however, opens up what could ultimately prove to be a detrimental stance against adopting a VSD methodology. Rather than designers having to manipulate the VSD approach for every particular technological innovation that they attempt to develop responsibly, I propose that the solution to the susceptibility of moral judgments to cognitive biases lies not in the search for a normative moral foundation but in adding a new methodological tool to the stage of conceptual analysis. Cognitive biases influence our moral intuitions, particularly in relation to controversial technologies (Caviola et al. 2014). Thus, to ascertain the moral intuitions of individuals more authentically, and in turn gain a better grasp of the intersubjective values that they hold regarding a particular technology, it is useful practice for designers, during their conceptual investigations, to employ certain psychological heuristic practices that reduce the influence of cognitive biases toward technology (see Bostrom and Ord 2006; Larrick 2004; Savulescu 2007).

Hence, the goal of grounding VSD in objective or universal values is not only dubious, given what I have discussed regarding intuitionism, but unnecessary. The current need for a DFV approach like VSD for transformative technologies creates a time-sensitive imperative to instantiate a design methodology before the technologies are developed, many of which are already in development. As such, a pragmatic impetus exists regarding the existent DFV approaches, namely: the realization of the intersubjectivity of moral intuitions, the lack of a need for value objectivity and the need to de-bias stakeholder value intuitions. Heuristic tools can help designers with the latter.

3.2. A heuristic toolkit

Given that conceptual investigations in VSD require an analysis of the available ethical literature to better understand the moral and non-moral values of the stakeholders at play, a good starting point for designers who seek to apply a VSD methodology is to acknowledge the theoretical underpinnings of the moral epistemology of the values investigated. This means that developers understand how moral judgments are made and that cognitive biases affect those judgments. In light of this, conceptual analysis should account not only for the ethical literature at play but also for the psychological literature and relevant scientific evidence that can be leveraged to better justify which values are included in the design as well as how trade-offs between values are balanced (Caviola et al. 2014). Likewise, remedial measures must be put into play in order to better judge which values are most authentic and to create impartial evaluations through employing simple heuristic tests.

One such heuristic test is Bostrom and Ord’s (2006) Double Reversal Test, which aims at reducing status quo bias in judgments regarding technological innovation. They describe the effectiveness of the Double Reversal Test in its applicability to cognitive enhancement technologies:

The Double Reversal Test works by combining two possible perceptions of the status quo. On the one hand, the status quo can be thought of as defined by the current (average) value of the parameter in question. To preserve this status quo, we intervene to offset the decrease in cognitive ability that would result from exposure to the hazardous chemical. On the other hand, the status quo can also be thought of as the default state of affairs that results if we do not intervene. To preserve this status quo, we abstain from reversing the original cognitive enhancement when the damaging effects of the poisoning are about to wear off. By contrasting these two perceptions of the status quo, we can pin down the influence that status quo bias exerts on our intuitions about the expected benefit of modifying the parameter in our actual situation. (Bostrom and Ord 2006, 673)

Hence, its purpose, as Bostrom and Ord state, is to determine exactly how status quo bias influences intuition. In doing so, a better understanding of exactly how and why individuals argue for specific values can be reached. Designers who aim to apply the VSD methodology as thoroughly as possible need to approach their conceptual investigations with the additional activity of de-biasing their moral valuations, because transformative technologies, given their controversial nature, are more likely to elicit moral intuitions influenced by biases (Caviola et al. 2014).
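Read procedurally, the test isolates the bias by eliciting intuitions about the same parameter change under the two framings of the status quo and comparing the answers. The following sketch is my own schematic rendering of that comparison, not a formalism given by Bostrom and Ord; the function names and the numeric ‘approval scores’ are assumptions for illustration.

```python
# Schematic rendering of the comparison at the heart of Bostrom and Ord's
# (2006) Double Reversal Test. The names and the 0-10 'approval scores'
# are hypothetical; only the structure of the comparison matters.

def double_reversal_probe(elicit):
    """`elicit(framing)` returns a respondent's intuitive approval (0-10)
    of the same intervention under a given status-quo framing."""
    # Framing A: the status quo is the current parameter value, and the
    # intervention merely offsets a decrease (e.g. restores cognition
    # after exposure to a hazardous chemical).
    approve_offset = elicit("intervene to preserve the current level")
    # Framing B: the status quo is the default if we do not act, and
    # preserving it means leaving the earlier enhancement in place.
    approve_keep = elicit("abstain from reversing the enhancement")
    # If the two answers diverge, status quo bias, rather than the merits
    # of the parameter change itself, is doing the work.
    return approve_offset - approve_keep

# Hypothetical respondent: approves of offsetting the harm (8) but not of
# keeping an equivalent enhancement (3); the gap of 5 signals the bias.
responses = {"intervene to preserve the current level": 8,
             "abstain from reversing the enhancement": 3}
print(double_reversal_probe(responses.get))  # -> 5
```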

As such, what is required of designers is a holistic account of how to responsibly innovate through a DFV methodology. The VSD framework provides a sound basis from which to start; however, fundamental characteristics of the method that may, from the outset, be susceptible to criticism need to be addressed before designers can confidently adopt the approach more widely. Work is already being done on the status of moral intuitions in making judgments about technology (e.g. Klein 2016), as it is on the role that cognitive biases play when making intuitive judgments about transformative technologies (e.g. Caviola et al. 2014; Oliveira 2009; Partridge et al. 2011). The VSD approach needs to begin by taking this literature into account and integrating it into the basic methodology rather than relying on designers to change the method in an ad hoc fashion for every potential application (although a basic element of change will always be necessary given the diverse range of applications). Addressing the contentiousness of moral intuitions not only strengthens VSD and other stakeholder-centered approaches by reinforcing their moral grounding but also helps designers to better understand the authentic values of stakeholders beyond the veil of bias interference.

There are a host of potential de-biasing methods that can be employed by designers, each of which may be more applicable to some applications than others. Because DFV approaches are principled and formulaic in their procedures, it is beyond this paper’s scope to determine which tools are best suited to which application. As such, future research, as described in the following section, should explore which methods are most effective for value alignment in particular developmental streams.

4. Implications and further research

DFV approaches, particularly those that centralize the position of stakeholders in the design of new technologies, face an epistemological gap in determining the values that stakeholders express; that is, the moral values that are of critical import to VSD (and values in general for DFV approaches) are subject to cognitive biases.

These novel technologies, such as nanotechnology, biotechnology and artificial intelligence, have been predicted to have, at the very least, major economic and societal impacts. As this paper noted at the outset, technology is inherently value-laden, and VSD takes this as its founding precept. Designing without values is impossible; whether or not the values designed in are deliberate is a matter of particular importance. As such, there is a responsibility to direct the development of these transformative innovations towards futures desired by stakeholders, viz. by embedding pertinent stakeholder values and designing out those that run contrary to them.

Rather than offer a transformative or novel design methodology that seeks to encompass all of the values and issues that exist or may emerge, this paper has opted to offer a critique of current DFV methodologies as they pertain to the enrollment of stakeholder values, as well as one of a number of potential ways to strengthen said methodologies. The use of heuristics to achieve a greater degree of authenticity regarding stakeholder values is but a simple, ad hoc, functional step that can be taken. Further research should look at the viability of theories of moral imagination that may be useful in bolstering the value-based investigations that DFV methodologies employ (Boenink, Swierstra, and Stemerding 2010; Lucivero, Swierstra, and Boenink 2011; Mahoney and Litz 2000). Doing so may be fruitful both prior to and during the employment of DFV approaches. Because VSD aims to be self-reflexive and recursively improving, much like the transformative technologies that can be directed through its use, its continual improvement, and even the restructuring of its foundations through new research projects, feeds into its modus operandi.

Additionally, further research into DFV approaches should look into their potential to anticipate not only emerging future values but also potential governance needs. The enrollment of stakeholders and the resultant value-integration may lead to novel and emerging governance structures and institutions. The potential for DFV approaches to anticipate applicable governance mechanisms may be particularly salient given that designers and other enrolled actors occupy a privileged position, and therefore bear a responsibility, to inform policy makers of possible governance needs. This position is due to the intimate relationship between stakeholders (including designers) and the technology itself: this close encounter during the design and development of the technology allows for a more situated understanding of the technology and its affective nature, making such actors better suited to offer policy recommendations.

Finally, the application of de-biasing heuristics does face particular constraints and contentions. The primary one concerns their capacity to be self-applied, that is, for designers and direct stakeholders who participate directly in the development and design of technologies to employ these tools on themselves. This is particularly important for designers, as they are at least as prone to cognitive bias as other stakeholders, and thus the self-applicability of these tools is essential. It could be argued against this paper’s thesis that there is a lack of symmetry in the operationalization of heuristics, meaning that tools applied to certain stakeholders, such as the public, may not be applied as effectively to other stakeholders, such as designers and special interest groups. This contention is methodological in nature and requires a reformulation of the principles of DFV approaches in order to account for a symmetrical distribution and application of heuristic tools. Future research could therefore explore how designers and developers can self-apply de-biasing tools to ensure that biased values are not uncritically designed into technologies. The de-biasing, and in some cases designing out, of biased values could play a critical part here, and because this is a principle of many DFV approaches, most explicitly VSD, tools to ensure its success are of methodological importance.

5. Conclusion

Although existing design-for-values (DFV) approaches do not all function in the same way and may each require further refinement in order to continue to make effective contributions to responsible innovation, such approaches are at present available and in use for value integration in technology (Doorn et al. 2014; Fisher et al. 2015). Accordingly, it is appropriate to employ them to help identify and address issues (ethical, cultural, social, etc.) pertaining to the design of transformative technologies, given that such technologies are already being heavily funded, their development is underway and their convergence is presently being experienced (Boenink 2009; NSTC, COT, and NSET 2018; Roco 2005, 2011). In light of the pragmatic imperative that now exists, both designers and ethicists can play important roles in determining what interventions can be taken in the development of these technologies to help direct them in a way that is credibly aligned with the values of diverse stakeholders. The Value Sensitive Design (VSD) approach is one such methodology that aims to incorporate the values of stakeholders during the early design phases.

Given the potential for values to be based on intuitions that are in turn subject to cognitive biases, however, I have argued that de-biasing heuristics or similar tools are needed as an initial ad hoc step. This addition, in the case of VSD, preserves the tripartite investigations of the approach while adding a critical tool to conceptual investigations. By adding heuristic tools to the conceptual analysis of values, the VSD methodology is strengthened against doubts about the epistemic status of moral judgments produced by moral intuitions; this in turn holds implications for other value-integration approaches. Although this paper has not shown that the employment of heuristics creates epistemic certainty regarding moral judgments, it does suggest that, in light of the pragmatic urgency as well as the current understanding of the origin of moral judgments, practices that aim to de-bias moral values may serve as critical and functional ad hoc measures contributing to the potential effectiveness of DFV and integrative approaches aimed at responsible innovation.

Notes on contributor

Steven Umbrello is the Managing Director of the Institute for Ethics and Emerging Technologies and a researcher at the Global Catastrophic Risk Institute with research interests in explorative nanophilosophy, the design psychology of emerging technologies, and the general philosophy of science and technology.

Disclosure statement

No potential conflict of interest was reported by the author.

Notes

1 DFV is a categorical term designating various design methodologies that broadly aim to incorporate stakeholder values into the design process in order to direct designs towards desired futures. ‘Design for values’ was coined by Jeroen van den Hoven, Pieter E. Vermaas and Ibo van de Poel in the edited collection Handbook of Ethics, Values, and Technological Design (2015).

2 Although this paper focuses exclusively on the effect of biases on conceptual investigations, one can reasonably extend the effect of biases to other aspects of VSD, such as the empirical investigations. However, given that the paper’s aim is not to bolster VSD per se but to show the limits of intuitionism for informing RI, employing conceptual investigations alone as an example is sufficient.

References

  • Correljé, A., E. Cuppen, M. Dignum, U. Pesch, and B. Taebi. 2015. “Responsible Innovation in Energy Projects: Values in the Design of Technologies, Institutions and Stakeholder Interactions.” In Responsible Innovation 2, edited by B.-J. Koops, I. Oosterlaken, H. Romijn, T. Swierstra, and J. van den Hoven, 183–200. Springer International Publishing. https://link.springer.com/chapter/10.1007%2F978-3-319-17308-5_10.
  • Alvial-Palavicino, C. 2016. “The Future as Practice. A Framework to Understand Anticipation in Science and Technology.” TECNOSCIENZA: Italian Journal of Science & Technology Studies 6 (2): 135–172. http://www.tecnoscienza.net/index.php/tsj/article/view/239.
  • Boenink, M. 2009. “Tensions and Opportunities in Convergence: Shifting Concepts of Disease in Emerging Molecular Medicine.” NanoEthics 3 (3): 243–255. doi: 10.1007/s11569-009-0078-7
  • Boenink, M. 2013. “The Multiple Practices of Doing ‘Ethics in the Laboratory’: A Mid-level Perspective.” In Ethics on the Laboratory Floor, edited by S. van der Burg, and T. Swierstra, 57–78. London: Palgrave Macmillan.
  • Boenink, M., T. Swierstra, and D. Stemerding. 2010. “Anticipating the Interaction between Technology and Morality: A Scenario Study of Experimenting with Humans in Bionanotechnology.” Studies in Ethics, Law, and Technology 4 (June 2016): 1. doi:10.2202/1941-6008.1098.
  • Bostrom, N., and T. Ord. 2006. “The Reversal Test: Eliminating Status Quo Bias in Applied Ethics.” Ethics 116 (4): 656–679. http://www.ncbi.nlm.nih.gov/pubmed/17039628. doi: 10.1086/505233
  • Brey, P. A. E. 2012. “Anticipatory Ethics for Emerging Technologies.” NanoEthics 6 (1): 1–13. doi: 10.1007/s11569-012-0141-7
  • Briggs, P., and L. Thomas. 2015. “An Inclusive, Value Sensitive Design Perspective on Future Identity Technologies.” ACM Transactions on Computer-Human Interaction 22 (5): 1–28. doi: 10.1145/2778972
  • Brink, D. O. 2014. “Principles and Intuitions in Ethics: Historical and Contemporary Perspectives.” Ethics 124 (4): 665–694. doi: 10.1086/675878
  • Brown, S. 2009. “The New Deficit Model.” Nature Nanotechnology 4 (10): 609–611. doi: 10.1038/nnano.2009.278
  • Capurro, G., H. Longstaff, P. Hanney, and D. M. Secko. 2015. “Responsible Innovation: An Approach for Extracting Public Values Concerning Advanced Biofuels.” Journal of Responsible Innovation 2 (3): 246–265. doi: 10.1080/23299460.2015.1091252
  • Casci, T. 2004. “Doctor in a Cell.” Nature Reviews Genetics 5 (6): 406.
  • Cash, D. W., W. C. Clark, F. Alcock, N. M. Dickson, N. Eckley, D. H. Guston, … R. B. Mitchell. 2003. “Knowledge Systems for Sustainable Development.” Proceedings of the National Academy of Sciences 100 (14): 8086–8091. doi: 10.1073/pnas.1231332100
  • Caviola, L., A. Mannino, J. Savulescu, and N. Faulmuller. 2014. “Cognitive Biases Can Affect Moral Intuitions about Cognitive Enhancement.” Frontiers in Systems Neuroscience 8 (October): 1–5.
  • Chilvers, J. 2008. “Deliberating Competence: Theoretical and Practitioner Perspectives on Effective Participatory Appraisal Practice.” Science, Technology, & Human Values 33 (2): 155–185. doi: 10.1177/0162243907307594
  • Collingridge, D. 1980. The Social Control of Technology. Frances Pinter. https://books.google.ca/books/about/The_social_control_of_technology.html?id=2q_uAAAAMAAJ&redir_esc=y.
  • Cosmides, L., and J. Tooby. 1992. “Cognitive Adaptations for Social Exchange.” In The Adapted Mind: Evolutionary Psychology and the Generation of Culture, edited by J. Barkow, L. Cosmides, and J. Tooby, 163–228. New York: Oxford University Press. http://www.cep.ucsb.edu/papers/Cogadapt.pdf.
  • Cummings, M. L. 2006. “Integrating Ethics in Design Through the Value-sensitive Design Approach.” Science and Engineering Ethics 12 (4): 701–715. doi: 10.1007/s11948-006-0065-0
  • Cushman, F., L. Young, and M. Hauser. 2006. “The Role of Conscious Reasoning and Intuition in Moral Judgment: Testing Three Principles of Harm.” Psychological Science 17 (12): 1082–1089. doi: 10.1111/j.1467-9280.2006.01834.x
  • Dechesne, F., M. Warnier, and J. van den Hoven. 2013. “Ethical Requirements for Reconfigurable Sensor Technology: A Challenge for Value Sensitive Design.” Ethics and Information Technology 15 (3): 173–181. doi: 10.1007/s10676-013-9326-1
  • Delgado, A., K. L. Kjølberg, and F. Wickson. 2011. “Public Engagement Coming of age: From Theory to Practice in STS Encounters with Nanotechnology.” Public Understanding of Science 20 (6): 826–845. doi: 10.1177/0963662510363054
  • Doorn, N., D. Schuurbiers, I. Van de Poel, and M. E. Gorman. 2014. Early Engagement and New Technologies: Opening up the Laboratory (Vol. 16). Springer. https://www.springer.com/us/book/9789400778436?wt_mc=ThirdParty.SpringerLink.3.EPR653.About_eBook#otherversion=9789400778443.
  • Fisher, E., M. O’Rourke, R. Evans, E. B. Kennedy, M. E. Gorman, and T. P. Seager. 2015. “Mapping the Integrative Field: Taking Stock of Socio-technical Collaborations.” Journal of Responsible Innovation 2 (1): 39–61. doi: 10.1080/23299460.2014.1001671
  • Foley, R. W., M. J. Bernstein, and A. Wiek. 2016. “Towards an Alignment of Activities, Aspirations and Stakeholders for Responsible Innovation.” Journal of Responsible Innovation 3 (3): 209–232. doi: 10.1080/23299460.2016.1257380
  • Friedman, B. 1997. Human Values and the Design of Computer Technology. Edited by B. Friedman. CSLI Publications. https://web.stanford.edu/group/cslipublications/cslipublications/site/1575860805.shtml#.
  • Friedman, B., D. G. Hendry, A. Huldtgren, C. Jonker, J. Van den Hoven, and A. Van Wynsberghe. 2015. “Charting the Next Decade for Value Sensitive Design.” Aarhus Series on Human Centered Computing 1 (1): 4. doi:10.7146/aahcc.v1i1.21619.
  • Friedman, B., and P. H. Kahn Jr. 2000. “New Directions: A Value-sensitive Design Approach to Augmented Reality.” In Proceedings of DARE 2000 on Designing Augmented Reality Environments, 163–164. New York, NY: ACM. doi:10.1145/354666.354694.
  • Friedman, B., and P. H. Kahn Jr. 2002. Value Sensitive Design: Theory and Methods. University of Washington Technical Report, December, 1–8.
  • Friedman, B., P. H. Kahn Jr., and A. Borning. 2008. “Value Sensitive Design and Information Systems.” In The Handbook of Information and Computer Ethics, edited by K. E. Himma and H. T. Tavani. Hoboken, NJ: John Wiley & Sons. doi:10.1002/9780470281819.ch4.
  • Friedman, B., P. H. Kahn Jr., A. Borning, and A. Huldtgren. 2013. “Value Sensitive Design and Information Systems.” In Early Engagement and New Technologies: Opening up the Laboratory, edited by N. Doorn, D. Schuurbiers, I. van de Poel, and M. E. Gorman, 55–95. Dordrecht: Springer. doi:10.1007/978-94-007-7844-3_4.
  • Greene, J. D. 2014. “Beyond Point-and-shoot Morality: Why Cognitive (Neuro)Science Matters for Ethics.” Ethics 124 (4): 695–726. doi: 10.1086/675875
  • Guston, D. H. 2014. “Understanding ‘anticipatory governance’.” Social Studies of Science 44 (2): 218–242. doi:10.1177/0306312713508669.
  • Guston, D. H., and D. Sarewitz. 2002. “Real-time Technology Assessment.” Technology in Society 24 (1–2): 93–109. doi: 10.1016/S0160-791X(01)00047-1
  • Haidt, J. 2001. “The Emotional Dog and Its Rational Tail: A Social Intuitionist Approach to Moral Judgment.” Psychological Review 108 (4): 814–834. doi: 10.1037/0033-295X.108.4.814
  • Huldtgren, A. 2014. “Design for Values in ICT.” In Handbook of Ethics, Values, and Technological Design: Sources, Theory, Values and Application Domains, edited by J. van den Hoven, P. E. Vermaas, and I. van de Poel, 1–24. Dordrecht: Springer. doi:10.1007/978-94-007-6994-6_35-1.
  • Kahan, D. M., E. Peters, E. C. Dawson, and P. Slovic. 2013. “Motivated Numeracy and Enlightened Self-government.” Yale Law School, Public Law Working Paper 307: 54–86.
  • Kaiser, M., M. Kurath, S. Maasen, and C. Rehmann-Sutter. 2009. Governing Future Technologies: Nanotechnology and the Rise of an Assessment Regime (Vol. 27). Springer Science & Business Media.
  • King, M., M. Whitaker, and G. Jones. 2011. “Speculative Ethics: Valid Enterprise or Tragic Cul-De-Sac?” In Bioethics in the 21st Century, edited by A. Rudnick, 139–158. InTech. doi:10.5772/19684.
  • Klein, W. E. J. 2016. “Problems with Moral Intuitions Regarding Technologies.” IEEE Potentials 35 (5): 40–42. doi: 10.1109/MPOT.2016.2569742
  • Larrick, R. P. 2004. “Debiasing.” In Blackwell Handbook of Judgment and Decision Making, 316–338. Blackwell Publishing Ltd. doi:10.1002/9780470752937.ch16.
  • Lucivero, F. 2012. Too Good to be True? Appraising Expectations for Ethical Technology Assessment. Edited by P. A. E. Brey, M. Boenink, and T. E. Swierstra. Enschede: Universiteit Twente. doi:10.3990/1.9789036533898.
  • Lucivero, F., T. Swierstra, and M. Boenink. 2011. “Assessing Expectations: Towards a Toolbox for an Ethics of Emerging Technologies.” NanoEthics 5 (2): 129–141. doi: 10.1007/s11569-011-0119-x
  • Magnani, L. 2013. “Abducing Personal Data, Destroying Privacy: Diagnosing Profiles Through Artefactual Mediators.” In Privacy Due Process and the Computational Turn: The Philosophy of Law Meets the Philosophy of Technology, edited by M. Hildebrandt, and K. de Vries, 67–90. Routledge. doi:10.4324/9780203427644.
  • Mahoney, J. T., and R. Litz. 2000. “Moral Imagination and Management Decision Making.” Academy of Management Review 25 (1): 256–259.
  • Micheletti, C., and F. Benetti. 2016. Safe-by-design Nanotechnology for Safer Cultural Heritage Restoration. Accessed December 15, 2017. http://atlasofscience.org/safe-by-design-nanotechnology-for-safer-cultural-heritage-restoration/.
  • Nichols, S., and J. Knobe. 2007. “Moral Responsibility and Determinism: The Cognitive Science of Folk Intuitions.” Noûs 41 (4): 663–685. doi:10.1111/j.1468-0068.2007.00666.x.
  • Nordmann, A. 2014. “Responsible Innovation, the Art and Craft of Anticipation.” Journal of Responsible Innovation 1 (1): 87–98. doi: 10.1080/23299460.2014.882064
  • NSTC, COT, & NSET. 2018. The National Nanotechnology Initiative—Supplement to the President’s 2018 Budget. https://www.nano.gov/sites/default/files/NNI-FY18-Budget-Supplement.pdf.
  • Oliveira, J. R. 2009. “Much Ado about Cognitive Enhancement.” Nature 457: 532. doi:10.1038/457532b.
  • Owen, R., P. Macnaghten, and J. Stilgoe. 2012. “Responsible Research and Innovation: From Science in Society to Science for Society, with Society.” Science and Public Policy 39 (6): 751–760. doi: 10.1093/scipol/scs093
  • Owen, R., J. Stilgoe, P. Macnaghten, M. Gorman, E. Fisher, and D. Guston. 2013. “A Framework for Responsible Innovation.” In Responsible Innovation: Managing the Responsible Emergence of Science and Innovation in Society, edited by R. Owen, J. Bessant, and M. Heintz. Chichester: Wiley.
  • Partridge, B., J. Lucke, J. Finnoff, and W. Hall. 2011. “Begging Important Questions about Cognitive Enhancement, Again.” The American Journal of Bioethics 11 (1): 14–15. doi: 10.1080/15265161.2010.534536
  • Pinch, T., and W. E. Bijker. 1987. “The Social Construction of Facts and Artifacts.” In The Social Construction of Technological Systems: New Directions in the Sociology and History of Technology, edited by W. E. Bijker, T. P. Hughes, and T. Pinch, 405. MIT Press. https://books.google.ca/books?id=B_Tas3u48f8C&printsec=frontcover&dq=The+Social+Construction+of+Technological+Systems&hl=en&sa=X&ved=0ahUKEwjvtbqrkfHXAhUMMd8KHdXnDGEQ6AEIKDAA#v=onepage&q=The Social Construction of Technological Systems&f=false.
  • Rip, A. 2009. “Technology as Prospective Ontology.” Synthese 168 (3): 405–422. doi: 10.1007/s11229-008-9449-9
  • Rip, A., and M. Van Amerom. 2009. “Emerging De Facto Agendas Surrounding Nanotechnology: Two Cases Full of Contingencies, Lock-outs, and Lock-ins.” In Governing Future Technologies, edited by M. Kaiser, M. Kurath, S. Maasen, and C. Rehmann-Sutter, 131–155. Dordrecht: Springer.
  • Roache, R. 2008. “Ethics, Speculation, and Values.” NanoEthics 2 (3): 317–327. doi: 10.1007/s11569-008-0050-y
  • Roco, M. C. 2005. “The Emergence and Policy Implications of Converging New Technologies Integrated from the Nanoscale.” Journal of Nanoparticle Research 7 (2): 129–143. doi: 10.1007/s11051-005-3733-0
  • Roco, M. C. 2011. “The Long View of Nanotechnology Development: The National Nanotechnology Initiative at 10 Years.” In Nanotechnology Research Directions for Societal Needs in 2020: Retrospective and Outlook, 1–28. Dordrecht: Springer. doi:10.1007/978-94-007-1168-6_1.
  • Sarewitz, D., and R. A. Pielke. 2007. “The Neglected Heart of Science Policy: Reconciling Supply of and Demand for Science.” Environmental Science & Policy 10 (1): 5–16. doi: 10.1016/j.envsci.2006.10.001
  • Savulescu, J. 2007. “Genetic Interventions and the Ethics of Enhancement of Human Beings.” In The Oxford Handbook of Bioethics, edited by B. Steinbock. Oxford University Press. doi:10.1093/oxfordhb/9780199562411.003.0023.
  • Shilton, K. 2013. “Values Levers: Building Ethics into Design.” Science, Technology & Human Values 38 (3): 374–397. doi: 10.1177/0162243912436985
  • Shweder, R. A., and J. Haidt. 1993. “The Future of Moral Psychology: Truth, Intuition, and the Pluralist Way.” Psychological Science 4 (6): 360–365. doi: 10.1111/j.1467-9280.1993.tb00582.x
  • Star, S. L. 1999. “The Ethnography of Infrastructure.” American Behavioral Scientist 43 (3): 377–391. doi: 10.1177/00027649921955326
  • Taebi, B., A. Correljé, E. Cuppen, M. Dignum, and U. Pesch. 2014. “Responsible Innovation as an Endorsement of Public Values: The Need for Interdisciplinary Research.” Journal of Responsible Innovation 1 (1): 118–124. doi: 10.1080/23299460.2014.882072
  • te Kulve, H., and A. Rip. 2011. “Constructing Productive Engagement: Pre-engagement Tools for Emerging Technologies.” Science and Engineering Ethics 17 (4): 699–714. doi: 10.1007/s11948-011-9304-0
  • Timmermans, J., Y. Zhao, and J. van den Hoven. 2011. “Ethics and Nanopharmacy: Value Sensitive Design of New Drugs.” NanoEthics 5 (3): 269–283. doi: 10.1007/s11569-011-0135-x
  • Tversky, A., and D. Kahneman. 1974. “Judgment under Uncertainty: Heuristics and Biases.” Science 185 (4157): 1124–1131. doi: 10.1126/science.185.4157.1124
  • van den Hoven, J. 2007. “ICT and Value Sensitive Design.” In The Information Society: Innovation, Legitimacy, Ethics and Democracy in Honor of Professor Jacques Berleur S.J.: Proceedings of the Conference “Information Society: Governance, Ethics and Social Consequences”, University of Namur, Belgium 22–23 May 20, edited by P. Goujon, S. Lavelle, P. Duquenoy, K. Kimppa, and V. Laurent, 67–72. Boston, MA: Springer. doi:10.1007/978-0-387-72381-5_8.
  • van den Hoven, J. 2013. “Architecture and Value-sensitive Design.” In Ethics, Design and Planning of the Built Environment, edited by C. Basta and S. Moroni, 224. Springer Science & Business Media. https://books.google.ca/books?id=VVM_AAAAQBAJ&dq=moral+value+such+as+freedom,+equality,+trust,+autonomy+or+privacy+justice+%5Bthat%5D+is+facilitated+or+constrained+by+technology&source=gbs_navlinks_s.
  • van den Hoven, J. 2014. “Nanotechnology and Privacy: The Instructive Case of RFID.” In Ethics and Emerging Technologies, edited by R. L. Sandler, 285–299. London: Palgrave Macmillan. doi:10.1057/9781137349088_19.
  • Van den Hoven, J., G. J. Lokhorst, and I. Van de Poel. 2012. “Engineering and the Problem of Moral Overload.” Science and Engineering Ethics 18 (1): 143–155. doi: 10.1007/s11948-011-9277-z
  • van den Hoven, J., and J. Weckert. 2008. Information Technology and Moral Philosophy. Edited by J. van den Hoven, and J. Weckert. Cambridge University Press. http://www.cambridge.org/catalogue/catalogue.asp?isbn=9780521855495.
  • van Wynsberghe, A. 2013. “Designing Robots for Care: Care Centered Value-sensitive Design.” Science and Engineering Ethics 19 (2): 407–433. doi: 10.1007/s11948-011-9343-6
  • Van Wynsberghe, A. 2016. “Service Robots, Care Ethics, and Design.” Ethics and Information Technology 18 (4): 311–321. doi: 10.1007/s10676-016-9409-x
  • van Wynsberghe, A., and S. Robbins. 2014. “Ethicist as Designer: A Pragmatic Approach to Ethics in the Lab.” Science and Engineering Ethics 20 (4): 947–961. doi: 10.1007/s11948-013-9498-4
  • Waldmann, M. R., and J. H. Dieterich. 2007. “Throwing a Bomb on a Person Versus Throwing a Person on a Bomb: Intervention Myopia in Moral Intuitions.” Psychological Science 18 (3): 247–253. doi: 10.1111/j.1467-9280.2007.01884.x
  • Winner, L. 2003. “Do Artifacts Have Politics?” Technology and the Future 109 (1): 148–164. doi:10.2307/20024652.
  • Woodward, J., and J. Allman. 2007. “Moral Intuition: Its Neural Substrates and Normative Significance.” Journal of Physiology-Paris 101 (4): 179–202. doi: 10.1016/j.jphysparis.2007.12.003
