
Inclusive deliberation and action in emerging RRI practices: the case of neuroimaging in security management

Pages 26-49 | Received 25 Nov 2014, Accepted 30 Dec 2015, Published online: 09 Feb 2016

ABSTRACT

What does it mean to facilitate inclusive deliberation, a core aspect of Responsible Research and Innovation, at a very early stage in a controversial field such as security? We brought neuroscientists and security professionals together in a step-by-step fashion – interviews, focus groups and dialogue – to construct imaginaries of neuroimaging applications. Neuroscientists and security professionals related differently to neuroimaging investigation. Some of what the security professionals seek in neuroscience might better be provided by social psychology. However, neuroimaging can be imagined to aid professionalization through a theory-informed security management practice. Post-hoc reflection interviews were performed to identify impacts, such as reflexivity of the stakeholders, the formation of new relationships and actions. Our use of social psychology as a low-technology alternative brought into sharper focus where potential responsible security sector uses of neuroimaging lie. However, without on-going facilitation of interactions, this imaginary is likely to dissolve.

1. Introduction

Knowing that science and technology are interwoven in society in intricate ways, the question is how to deal with scientific knowledge and emerging technologies in a responsible manner. ‘Responsible Research and Innovation’ (RRI) is a term increasingly used to describe the application of scientific knowledge and technology to major global and societal issues, while avoiding controversies of the past, such as those surrounding genetically modified organisms. RRI aims to stimulate research and innovation activities to take ethical and societal considerations into account from the early phases onwards, and to facilitate societally desirable and ethically acceptable outcomes through an exemplary process (Koops 2015; Owen, Macnaghten, and Stilgoe 2012). Such a process is characterised by inclusive deliberation with wider stakeholders with respect to the purposes, processes and products of research and innovation, and by a willingness of stakeholders to act and adapt according to new insights (Owen, Bessant, and Heintz 2013). This entails conversations not only on techno-scientific or macroeconomic aspects of science and technology, but also on their ethical and societal aspects.

In this paper, we explore what it means to facilitate such inclusive deliberation at a very early stage, for the application of neuroimaging in the field of private security in the Dutch context. The development of neuroimaging applications for the field of security is in a very early phase, such that the technology's emergence is still flexible and open to negotiation. The application of neuroimaging in the field of private security brings into view normative challenges with respect to initiating inclusive deliberation at a very early stage. Inclusive deliberation involves bringing together different stakeholders who may not normally interact with each other. The process of reflection on the purposes of neuroimaging in the field of private security could at the same time provide the foundation for such a practice to emerge. This is particularly problematic for applications within a contentious paradigm. Practices within security are often seen as at odds with democracy. One particular example is the association between security technologies – especially surveillance – and ‘social sorting’ (Lyon 2003). In surveillance, large amounts of personal and group data are obtained and used to classify people and populations into categories. Worth and risk are assigned to these categories, meaning, for example, that access is granted or denied based on the category one is placed into. This can affect life chances. Social sorting thus impacts not only personal privacy, but also social justice.

We describe how we brought together neuroscientists and security professionals in a step-by-step fashion in order to explore potential future responsible security sector uses of neuroscience research, in particular of neuroimaging technologies. To enhance the responsible nature of the potential application, we focused on the phenomenon of the technological fix and on societal desirability. A technological fix (Kiran 2012) is at odds with responsible technology development. To lessen its temptation, neuroimaging was contrasted with social psychology during the inclusive deliberation process. Social psychology offers a more situated approach than neuroscience, and therefore seems to match the practice of security management professionals more closely. In this way, an imaginary of neuroimaging in security might be shaped that offers added value compared with the low-technology alternatives provided by social psychology methodologies. Furthermore, societal aspects of neuroimaging applications within the security paradigm were introduced into the conversations by discussing ethical acceptability and societal desirability.

Before describing these results in detail, we will elaborate upon neuroimaging and security management, explain why we decided to initiate inclusive deliberation between neuroscientists and security professionals in the first place, expound inclusive deliberation in RRI, present our study questions and discuss our methodology.

1.1. Neuroimaging

The possibility of examining the functioning of the living brain has played a pivotal role in the boom of neuroscience in recent decades and has widened the scope of cognition research. From domains of basic and clinical research, new fields of application have emerged, such as neurolaw (Belcher and Sinnott-Armstrong 2010), neuromarketing (Lee, Broderick, and Chamberlain 2007) and neuroeducation (Ansari, De Smedt, and Grabner 2012). As neuroimaging diffuses into new fields, it becomes an excellent candidate for an upstream inclusive deliberation process. Besides opportunities to increase understanding in various domains, the application of novel neuroscience technologies warrants scrutiny because of the relation between our perceptions of self, human identity and the brain, which gives rise to questions about the biological underpinnings of who we are (Farah and Wolpe 2004; Pickersgill, Cunningham-Burley, and Martin 2011). Recently, Dutch neuroscientific research has been gravitating towards issues relating to criminality and security. From the new millennium onwards, publications appeared in which inventories were made of potential contributions of neurobiological and neuropsychological research and insights to criminological research or to policies of the Dutch Ministry of Security and Justice. In 2010, this culminated in funds being made available for neurobiological research into issues of security and criminality. However, many of the current investigations target justice and security activities arranged by state agents and their ‘traditional’ partners, such as detention centres, the police, parole officers or the army.

1.2. Security management

Security generally refers to an absence of danger to individuals or institutions and the condition of being protected against danger or loss (Peissl 2010). Security professionals are entrusted with security activities in a great variety of (semi-)public places we find ourselves in on a daily basis, as they deal with the safety of organizations, people and data in private settings as well as public spaces. Private security is a growing industry internationally (Zedner 2007), and the number of companies active in the security industry in the Netherlands is soaring (Van Melik and Van Weesep 2006). Security management is characterized by a mentality of prevention rather than retribution. It has an economic rationale of pre-empting, minimizing and displacing loss, in which ‘loss’ can mean damage to material goods, theft of digital data, bodily harm or the disruption of business processes (Shearing and Johnston 2005, 32; Williams 2005). Security management is not a well-defined field, and it sometimes overlaps with more common organizational activities. For example, security professionals also consider ‘employment screening’ part of their practice, which concerns the assessment of applicants or employees for their suitability for a position.

In communicating with security professionals, we found that the field is currently investing in the professionalization of its practice. A new, and better educated, generation of professionals is expected to work with more sophisticated methods and to incorporate more sophisticated knowledge in their practice. In this context, neuroscientific knowledge and neuroimaging techniques have triggered their interest, and Dutch security professionals are seeking access to neuroscientists in order to apply their knowledge of the brain and their techniques.

1.3. Initiating inclusive deliberation

This study is part of a research project called Neurosciences in Dialogue, which revolves around the responsible development and societal embedding of emerging neuroimaging technologies in various domains of application in the Netherlands. We try to open up the research and development process from the outset to various stakeholders, such as practitioners at grassroots levels and the public, each bringing with them their different perspectives and value systems. However, we are not institutionalised in a neuroscientific or practitioners' practice; we try to achieve these aims as external facilitators of inclusive deliberation processes. Shortly after the start of this project, we were contacted by members of the Dutch and Flemish division of ASIS International (originally the American Society for Industrial Security), the largest international organization for security management professionals. They were keen to explore the use of neuroimaging technologies within their domain. We felt uneasy, as the application of security technologies is far from undisputed (Levi and Wall 2004; Lodge 2007; Webb 2007). Provision of security is central to the legitimacy of the nation-state; however, the state also devolves security activities and responsibilities to non-state agents, such as private security professionals. Considering the Grand Challenge of Secure Societies of the European Commission (EC 2014), the triggered interest of security professionals in neuroscience, and the potentially problematic issues arising from such application, neuroscience-based technologies for security management can be seen as in need of RRI as an approach. This is especially true for security delivered by non-state actors, given the relative lack of transparency in those sectors. We therefore saw a role for ourselves in exploring potential responsible applications of neuroimaging in this field.

1.4. RRI and inclusive deliberation

RRI can be considered both an ideal and an approach (Koops 2015). The ideal relates to (1) addressing societal challenges, (2) incorporating social and ethical values from the earliest stages of technology development onwards and (3) exemplary innovation processes that are inclusive and promote self-learning through anticipation, reflection, deliberation and responsiveness (Koops 2015; Owen, Macnaghten, and Stilgoe 2012). The approach relates to how we can innovate responsibly and the tools we have at our disposal. Reflecting upon normative aspects of research, technology development and products at an early stage has both a pragmatic and a substantive element. Normative choices are made in early phases that often remain undeliberated (Macnaghten, Kearnes, and Wynne 2005). Furthermore, there are still possibilities for adjusting the innovation trajectory, as interests have not yet become vested.

Deliberation in an early phase cannot concern actual or probable applications, as concrete applications are lacking and uncertainty about probable applications is high. Instead, research and technological development are informed by ‘imaginaries’ held by actor groups of the technologically conceived future (Borup et al. 2006; Hedgecoe and Martin 2003; Kearnes et al. 2006; Rose 2001). Nordmann (2010) considers imaginaries as a wish to be fulfilled or a potential to be realised in the future, but with the wish or potential contained in the now. They consist of implicit assumptions, values and visions of an actor group, and combine elements of fantasy and evidence (Macnaghten, Kearnes, and Wynne 2005). They are materially powerful, as they shape relationships and are enacted in everyday practices.

1.5. Study questions

In this study, we explored (a) how the two main actor groups imagine applications and (b) whether there is a danger of a technological fix and, if so, how it can be prevented during multi-stakeholder deliberation. The shared imaginary would not be responsible if it reflected an under-critical stance towards neuroimaging as a technology, which would pose the risk of design instrumentalism (Kiran 2012). Technologies in search of applications are to be prevented, as these are less likely to be situated and practice-bound. Attention should thus be paid to the seductive lure of technological fixes to security problems that are not necessarily high-technological in nature. Furthermore, an imaginary shared by the two main actor groups should also be informed by societal aspects, besides techno-scientific aspects and benefits for the profession. We therefore explored (c) whether a shared imaginary could be shaped through deliberation between these two unfamiliar main actor groups while staying critical of the added value of neuroimaging and discussing aspects of ethical acceptability and societal desirability. As the two main actor groups do not usually interact, they will probably lack common ground. Finally, we asked (d) how the impacts of the deliberation process can be evaluated for RRI in terms of reflexive thinking, relationships and action. Ideally, RRI increases the reflexivity of the actors through mutual learning. Consequently, actors should be willing to act on these new insights. Furthermore, as imaginaries also shape relationships, it seems prudent to investigate whether new relationships are being formed between these two actor groups, who are largely unfamiliar to each other.

2. Methodology

RRI starts with the identification of relevant stakeholders, with the aim of bringing these different stakeholder groups together in a step-by-step process. During this mapping of the stakeholders, it soon became apparent that the two main stakeholder groups, the neuroscientists and the private security professionals, had no existing interaction. As these groups were unfamiliar with each other, there were no existing settings in which they could get to know each other and talk. We therefore decided to first introduce these two groups to each other, before also including other societal stakeholder groups. Including a larger variety of stakeholder groups at the same time would make it harder to ensure the quality of the conversations, as the facilitator would have to contend with many variables simultaneously. We chose to introduce aspects of ethical acceptability and societal desirability in a somewhat controlled fashion, through the inclusion of an applied ethicist and via exercises in the meetings, such as the value game. The step-by-step process of bringing together security professionals and neuroscientists described here should therefore be seen as an event in a series, in which the diversity of stakeholder groups incrementally rises. We interviewed neuroscientists, conducted focus groups with security professionals, organised a multi-stakeholder dialogue session, and performed reflection interviews with dialogue participants by telephone.

2.1. Interviews with scientists

In 2011–2012, a total of 20 semi-structured interviews were conducted with scientists employing neuroimaging technologies for research encompassing concepts relevant to the domain of justice and security (see also de Jong et al. 2015). Interviews allow for an in-depth conversation, while their structure permits the acquisition of data on pre-established topics. The scientists were selected because of their research on concepts relevant to the domain of justice and security, and their use of neuroimaging technologies. Analytical themes related to the scientists' perspective on the potential application of neuroimaging for security issues specifically (n = 15) and to shared imaginaries for neuroimaging research (n = 20). Participants were sent summaries of their interviews to verify the accuracy of documented responses.

2.2. Focus groups with security professionals

We conducted three focus groups with security professionals (see Table 1 for the design). Focus groups are high in external validity, as they can mirror conversations in daily life (Hollander 2004). Maximum variation sampling was used to account for the different needs that can arise from different sub-disciplines (Patton 1990) (see Table 2). In addition, three feedback interviews with key informants were held to reflect on the results obtained and to check whether these were also recognized in the wider field of security management. A summary was sent for member check.

Table 1. Designs of the focus groups and the dialogue.

Table 2. Focus group participants and feedback interviewees by area(s) of expertise.

2.3. Dialogue

The dialogue session (2014) was designed on the basis of the imaginaries unpacked during the focus groups (see Table 1 for the design). The imaginaries revolved around (1) recognizing intentions and predicting behaviour and (2) subsequently influencing behaviour. Underlying these imaginaries is a distinction between ‘normal’ and ‘deviant’ behaviour. Defining what constitutes deviance is not straightforward (Clinard and Meier 2015). For example, one can hold statistical conceptions of deviance, in which abnormal behaviours are those performed by a statistical minority of people. In sociology, normative and reactivist perspectives on deviance are dominant (Pontell 2007). The former refers to deviance as the formal violation of one or more existing norms or rules; the latter refers to deviance as judgements by relevant social groups of actual behaviours.

Neuroimaging could be applied directly or indirectly in these imaginaries: either by producing insights relevant to the recognition of intentions, the prediction of behaviour and the changing of behaviours, or by developing derivative tools, based on the insights provided by neuroimaging research, that are more amenable to application in operational settings. The actors (n = 13) that participated were four scientists (three neuroscientists and one social psychologist experienced in neuroimaging), six security professionals, two intermediary (technology-developing) actors also familiar with neuroscience or social psychology research, and one applied ethicist (see Table 3).

Table 3. Participants of the dialogue by expertise.

The security professionals had previously participated in one of the focus groups. Scientists were selected for their experience in doing research with non-clinical subjects and an interest in matters of justice and security. Two of the scientists had been interviewed, and one was a colleague of a previously interviewed scientist. The other neuroscientist was newly identified. The applied ethicist was selected for familiarity with (private) security technologies.

During the interviews with the scientists, as well as through analysing the focus groups, it became clear that viable alternatives could lie in the field of social psychology, which deals with the nature and causes of individual behaviour, perceptions and feelings in social situations. Therefore, we included a social psychologist in the dialogue. In our design, we incorporated prompting questions concerning this alternative. The dialogue setting enabled deliberation on desirable applications of neuroimaging (direct or indirect) in security management, while at the same time assessing the value they added in comparison with alternatives.

All interview, focus group and dialogue transcripts were coded and thematically analysed by the first author using qualitative data analysis software MAXQDA 11. The coding and analysis process was discussed with the second author.

For privacy reasons, all participants are referred to with male pronouns.

2.4. Reflection interviews

Seven months after the dialogue, eight semi-structured telephone interviews were conducted with dialogue participants: five security professionals and all three neuroscientists. The 20-minute interviews focused on significant changes after the dialogue with respect to personal actions and the possible emergence of new relations with other stakeholders, and on the perception of the event itself. A written reaction was obtained from the intermediary technology-developing party. The notes and the written reaction were thematically analysed by hand.

3. Imaginaries of the main actor groups

3.1. Imaginaries of scientists

In exploring the views of the interviewed Dutch scientists on possible applications of neuroimaging in security, we found that their imaginaries related to basic and clinical research: scientists were trying to gain insight into brain mechanisms. For example, delinquents were seen as an interesting group, as they can particularly lack skills such as emotion regulation and empathy. The scientists were interested in finding ways to diagnose, treat or prevent clinically relevant phenomena such as antisocial disorders or post-traumatic stress disorder. Scientists with therapeutic relationships with delinquents, in particular, held a perspective of ‘care’ for those who are troubled and deserving of clinical intervention. They formulated new treatment options based on the research insights, rather than options for prevention, such as early screening. A perspective of care dominated over potential applications more in line with the preventive mentality within security management, as illustrated by these quotes:

[I-1]: An important task for me is to clarify ( … ) what the neurobiological causes are, mainly to achieve more effective treatments. I would not propose to hook up all naughty six year old boys to [a neuroimaging] device and pick out all potential little psychopaths beforehand.

[I-2]: You could make a scan to determine how electrodes can easily be inserted, to stop a person's suffering. But that is again a treatment option ( … ) I cannot endorse removing all the gates at the airport and doing brain scans instead, so that the officer can see if someone is a danger on the basis of brain activity. I would not like to see this happen.

Applications within private security were marginalized, both in number (care-related applications were mentioned by all scientists, applications in private security by 30% of the scientists) and in appreciation (only 5% had positive things to say about applications in private security, whereas all scientists said something positive about care-related applications). Applications in private security mostly met with scrutiny and disbelief. Some interviewees connected this to a negative media hype in 2009, in which two neuroeconomists appeared in the media proclaiming that in a few years the screening of applicants would be done by scanning them. This was deemed ‘idiotic’ and ‘ridiculous’ by two interviewees who spontaneously brought up the topic.

One interviewee voiced another criticism of the application of neuroimaging in operational settings such as security management. According to this interviewee, operational settings will benefit not so much from insight into intentions as from insight into behaviours that may indicate that adverse behaviour will follow. Social psychology, unlike neuroscience, is a field that deals with exactly this.

[I-3]: Especially when it comes to counter-terrorism and security, there is lots of cackling, lots of expectations being raised. Doubtful technologies are recommended [by scientists] ( … ) How can we actually recognize a potential terrorist? ( … ) The obvious answer is: their behaviour. ( … ) Social psychologists are only interested in behaviour that signals that someone is going to attack, and not in the intent, not in their brain. ( … ) Social psychologists have a situational approach to norm-violating behaviour. Whereas neuroscientists reason from the brain, maturation of the brain and hormones. But what you see is that social psychologists draw the short straw [because neuroscientists get the money].

This quote also indicates a view that present neuroscientific research in justice and security is being done because of a push by funders. In some other interviews, scientists confirmed that they had entered this line of research because they follow the money. Given this demand among funders, a significant proportion of the scientists interviewed – though not all – seem to try to reconcile it with their own imaginary, which is mostly focused on basic research and clinical applications, to a lesser degree on justice, and tries to steer clear of private security. This reticence of the scientists therefore seems to relate to their preference for caring for patients rather than pinpointing those who pose a security threat. It is further enhanced by a belief among several neuroscientists that they would be expected to produce reductionist tools. Many of the interviewed scientists were quite concerned about reductionism among societal stakeholders, and sometimes also about other scientists capitalizing on it. They sensed inflated expectations, such as the possibility of individual judgements based on neuroimaging data, and of neuroimaging technology replacing existing (qualitative) methods, such as questionnaires.

[I-4]: Similarly for risk assessment tools ( … ) which are now used on an individual basis for deciding which persons in involuntary forensic care get furlough. And we know that this is impossible. But they do it anyway, because 1) policy officials say that they should and 2) some [other] scientists have sold it like it's the Egg of Columbus.

In their view, neuroimaging could at best improve current practices in the form of corroborative evidence, not substitute for them. The scientists' concerns about reductionism illustrate their capacity to see that technology is not neutral, although they are mainly alert to this issue when analysing actual applications rather than at the inception stage.

3.2. Imaginaries of security professionals

Security professionals imagined several ways in which neuroimaging could contribute to their profession. Often, these related to pre- or in-employment screening, lie detection or truth finding, and crisis management. They thought that candidates for certain positions could possibly be screened for characteristics such as leadership skills and integrity, or, in the case of primary school teachers, paedophilia. Lie and truth detection was perceived as a useful tool to assess job applicants or employees, but also more generally to investigate criminal acts within organizations. Stress-resistance and fearfulness were seen as particularly relevant for crisis management, also for security professionals themselves, when composing a crisis team charged with making decisions under pressure. Furthermore, the enhancement of abilities was imagined as a possible application. Neuroimaging was also perceived as a relevant instrument to measure the reliability of assessment tools that investigate attitudes towards and knowledge of organizational procedures and policies. However, despite this abundance of hypothetical options put forward in all three focus groups, when asked to prioritize, each group's main interest was in the broader and more abstract theme of predicting behaviour or recognizing intent, and influencing behaviours. For the options mentioned above, neuroimaging was imagined as useful, but as lacking the urgency to explore it further.

[FG2-P2]: How I see it, in employment screening, we are already pretty good at that. And I see [neuroimaging] as a technology that is going to trigger a kind of paradigm shift. And then I do not think about employment screening, but immediately think a few steps further than that.

Because of uncertainties in assessing relevant deviant behaviours, they are also unsure whether their current tools to prevent adverse behaviours are efficient or effective. There is uncertainty over whether they are applying their methods and tools to the right persons to begin with. They hoped that direct or indirect application of neuroimaging could fill this gap.

[FG1-P1]: If you can prevent things, by being able to identify people who have a screw loose. If you can then steer people into another direction or help them in some way that can prevent a lot of agony.

Predicting behaviour was imagined through assessing mental states, or through increased knowledge on the recognition of precursor behaviours (behaviours that are not in themselves hazardous, but are indicators of subsequent adverse behaviours). Application was mainly imagined as access and exit control of the spaces they secure, especially the more sensitive spaces, such as airports.

[FG3-P2]: We feel, the most suitable application is controlling access.

( … ) [FG3-P1]: It can also be an exit, by the way.

( … ) [Facilitator]: And in what type of situations would you feel this is desirable?

[FG3-P1]: Well, yeah, airports. Someone gets in with something and you pick him out of the line.

When security professionals use the word ‘predict’, for example, they do not see this in the light of putting a ‘dangerous’ label on someone, nor do they expect it to lead to the automatic removal of this person from the space in question. Rather, they see it as a trigger for further investigation.

Neuroscientific research employing neuroimaging modalities was also imagined to aid in evaluating the reliability and validity of existing security management tools, and to create an evidence base.

[FB-P3]: I get feedback from customers, but actually I don't know very well what works and what doesn't. ( … ) If I would be better informed on certain triggers in the brain, or feelings of anxiety and positivity that could have certain influences. If I would know more about what possibilities there are to influence behaviour, what action has a positive outcome and what doesn't, then I would be able to make far more effective programs.

At times, security professionals worried about the reliability of future applications. Moreover, they were concerned about harming privacy if they were to adopt such hypothetical devices.

[FG3-P2]: We may be looking whether [a civilian] is thinking of a bomb. But what to do with any other information that you may acquire while scanning? ( … ) We say that we'll only collect a certain type of information because we want to use it for this certain goal. But in practice we know we fortuitously ( … ) also have access to other information you really should not use, but the question is who checks that? Who guarantees that the other information is not also used?

They seem aware of the complexity and messiness of the real world they operate in, and employ a more holistic approach rather than a reductionist one.

[FG3-P1]: Someone approaches the border and starts thinking ‘now I shouldn't think of a bomb', but then he has already thought of the bomb while walking through the gate. ( … ) You can't control your thoughts. ( … ) I would always combine it with multiple other sources. It is not just that [scan].

They explained this by referring to the ample sources of uncertainty and ignorance they are dealing with on a daily basis, and to the experience that people are different.

[FG2-P4]: But of course not everyone reacts in the same way. If you can see someone's brain activity, you don't know whether someone reacts in this way or the other way around.

4. The potential of a technological fix

As the quote by [I-3] illustrates, social psychology seems to be losing out to the lure of neuroscience and neuroimaging technologies, yet it remains an appropriate scientific field from which security management could draw methods and approaches. Social psychology offers a more situated approach than neuroscience. Furthermore, neuroscience seems to foreground the ‘patient’ in need of care, whereas the practice of security management revolves around the ‘control’ of the potentially dangerous subject in order to protect ‘potential victims’. When looking from a perspective of patients and care, the ‘complex of behaviours’ is put into view, not all of them dangerous, but related to the ‘diseased’ state of the patient and their subsequent suffering. Within operational settings, only the single adverse behaviour is important. Social psychology can thus be used as an instrument in the deliberation activity to avoid an under-critical stance towards neuroimaging in shaping a shared imaginary of neuroimaging in security management.

5. Early deliberation

The main actor groups were brought together – in the presence of actors from the intermediary/developer organization, the social psychologist and the ethicist – to explore the potential of a shared imaginary. A first important step in this process, however, was to get acquainted with each other.

5.1. Getting to know each other

The main actor groups tried to find common ground, by developing a shared language. For example, what constitutes ‘security’ was seen in quite a different light by security professionals than by neuroscientists, who approached it more in terms of ‘safety’ (which is more in line with their dominant image of neuroscience in clinical terms, see above).

[D-P4; social psychologist]: To return to the question [D-P1] posed at the beginning, I am not an expert in the field of security and what I immediately start asking myself is, whose safety?

( … ) [D-P13; ethicist]: And what kind of incidents are we talking about? Someone falling off a scaffolding because he doesn't have an accurate awareness of occupational dangers? Or are we talking about crime and preventing criminal acts?

( … ) [D-P5; security professional]: Indeed, what kind of security are we looking at? We, as security managers, we look at the security slash continuity of our organisation. We don't look at the safety of some criminal that climbs over a fence and falls down.

( … ) [D-P4; social psychologist]: But if you are hired by the mafia as a security expert, then you are concerned with the security of the criminal?

[D-P5; security professional]: Well, yes, or … 

( … ) [D-P1; neuroscientist]: But then I ask again: ‘What is security?' And then we can look at those issues.

At a certain point this discussion settled down, with security being interpreted more in the vein of the security management practice for the duration of the dialogue. Another contested term was ‘intentions’ and how these connect to mental states. The neuroscientists deemed the security professionals' conceptualization too vague and not an interesting target, if what counts is the behaviour itself.

[D-P6; security professional]: And the question is of course, ‘How you can measure intent?', for intent determines a lot, I think. There is the risk factor.

[D-P2; neuroscientist]: ( … ) But ultimately the true measure you're interested in, is the behaviour itself.

[D-P6; security professional]: ( … ) Intention is very interesting to us.

( … ) [D-P13; ethicist]: But what is an intention then?

[D-P6; security professional]: Well, what underlies whether someone is going to do something. That is the intention.

[D-P13; ethicist]: But is that an emotion, or … 

[D-P6; security professional]: Probably, we don't know. Because that's the difference between science and us. I'm not interested in what exactly underlies it. I'm only interested in how you can use it for something concrete.

[D-P4; social psychologist]: So you want to be able to predict, but it doesn't matter how, as long as you can predict.

[Several security professionals]: Yes!

The previous quotes also illustrate that scientists and security professionals work within different paradigms. The scientists were looking for definitions, whereas security professionals sometimes lacked interest in the scientists' desire for operationalizing problems into observable or measurable phenomena. Another notable difference related to dealing with uncertainties and ignorance; scientists were actively trying to remove sources of variation, whereas security professionals conveyed that their daily practice does not allow them to do so.

[D-P4; social psychologist]: Then, actually, you don't know what you don't identify.

[D-P7; security professional]: True, but that's security.

[D-P5; security professional]: Correct.

[D-P7; security professional, now louder]: that [uncertainty] IS security!

5.2. A shared imaginary

The exploration of a shared imaginary was aided by identifying instances where neuroimaging would have added value. These options were discussed in terms of societal desirability.

5.2.1. The low-technological alternative

The options for neuroimaging applications raised throughout the session were confronted with methods from social psychology. This aided in determining where neuroimaging would indeed be of added value, and where low-technological options would be more suitable. For example, at a certain point in the dialogue, an imaginary was emerging in which neuroimaging should be focused on the security professional rather than on citizens, as this was considered fruitful, and much less controversial.

[D-P12; intermediary]: Then I really want to emphasize that I think chances mainly lie in this domain, to retain future support. Because the human brain is a social brain, therefore training and better recognition of your [the security official's] own brain responses to observing socially deviant behaviour, I think that is the quick win here.

[D-P7; security professional]: Yes, but we are not so interested in our own mind, but in the mind of the criminal.

[Protest noises of D-P5 and D-P9; security professionals]

[D-P5; security professional]: That is partly true, that's partly true.

[Facilitator]: So now we hear different things, because it is also said here, ‘especially the brain of the regulator is interesting'.

[D-P5; security professional]: Yes, and I indeed see there is potential here for security managers, if you want to assemble a good security force, in the manner [D-P12] indicates.

Besides training security officials, another example revolved around determining beforehand whether a new applicant would be suitable, by using algorithms based on brain data of successful security officials. However, by exploring the alternative of social psychology, it was discovered that this approach would not necessarily involve brain data, but could also involve behavioural data, which would probably be easier and cheaper.

[D-P1; neuroscientist]: Even without any input, closing your eyes and ears, you can determine a lot of things from the brain. But I agree with you, I do not see why one should use brain [data] if it would be cheaper and simpler to use [behavioural] observations.

In the course of the conversation, the training option was abandoned as the participants were unable to reach closure. The input of the social psychologist kept the controversy open on whether a brain-based approach would be fruitful or not.

5.2.2. Discussing societal desirability

Two main target groups were considered. Neuroimaging technologies could be applied to gain more knowledge about the functioning of the security professional or to acquire more insights into the crowd security professionals are managing. Concerns regarding societal desirability surfaced particularly in the discussion on which target group should be preferred. With respect to the latter group, fears were expressed related to excluding citizens based on a certain potential, rather than a certain act. Implementations of technologies in a state with a high degree of surveillance were discussed.

[D-P7; security professional, in response to a neuroscientist]: ‘[ … ] yes, the surveillance society. I [usually] assume that [a certain technology] is used fairly and with integrity. But yes, that is, again, an assumption.'

( … ) [D-P11; ethicist]: ‘But a lot of predictive techniques, such as profiling, are developed for terrorism, but are now being applied for other [less threatening] things. But these techniques are designed with this purpose in mind.'

[D-P6; security professional]: ‘That's true.'

Particular concerns were privacy and reductionism.

[D-P10, intermediary]: ‘Maybe we will gain security, but we stand to lose a lot of privacy.'

In response to these voiced concerns, security professionals were challenged to open the ‘black box’ of their daily practice and to explain how their vision of a prospective neurotechnological device would fit into it. For them it was ‘obvious’ that they would not label people as dangerous or remove them from a certain space on the basis of a technology alone. Instead, they see it as providing an entry point for where to direct their (quite soft) investigations, such as beginning a conversation.

From the perspective of societal desirability, the target group of security professionals was considered less problematic. Although at the beginning of the dialogue the emphasis lay on the crowd being managed, during the dialogue this emphasis was questioned by the intermediary/developer, scientists and ethicist. As such, the emphasis shifted towards the security professionals themselves. Actors from the intermediary/developer organization relayed examples of possible improvements in the practice of security management achieved by focusing on the professionals themselves.

5.2.3. Theory-informed practices

A shared imaginary was found in the shape of the application of neuroimaging for theory-informed practices. A catalyst for this imaginary was the concern of the applied ethicist regarding discriminatory practices in surveillance. This was later followed up by one neuroscientist:

[D-P3; neuroscientist]: I introduced myself as the critical one, but maybe I'm actually the most optimistic. Because, what do we need [brain insights] for? Well, for theory-informed practices ( … ) Some people say, ‘Pay attention to signs of nerves', others claim ‘Pay more attention to signs of cognitive load'. ( … ) To resolve this, you can do pretty nice neuroscience research. You can test theories using neuroscience, and based on those you can decide which behaviour we should look for.

And by a security professional:

[D-P8; security professional]: If we could infuse extra information into training programs ( … ) that would make it a lot easier to [decide who to] start a conversation with [to find out more about them].

( … ) [D-P11, applied ethicist]: ‘Currently, a lot of employees in security have, for example, certain discriminatory ideas. If [this would yield] new ways to train professionals to decrease this tendency, it is here where opportunities [for neuroimaging] may be found.'

Using neuroimaging and neuroscientific insights for theory-informed practices would allow an information-centred approach, which would be more effective and could counter the current reliance on ‘intuition'. As it is difficult to assess when someone acts based on intuition and when on discriminatory ideas, new theories could provide new modes of action and, as such, less discriminatory practices. Security professionals saw opportunities for a safer society. Participants settled on the necessity of theory development for further professionalization of security management practices, although concerns remained with respect to the reliability of the technology and the implementation in the security paradigm. The latter was particularly, but not exclusively, voiced by the ethicist. Neuroscientific knowledge and neuroimaging were deemed suitable as they can measure additional phenomena and provide alternative insights with respect to behavioural measures (from social psychology), opening the possibility of formulating better hypotheses for theory development. Note that this imaginary, though shared, is still quite blurry. The shared imaginary contains a relevant knowledge question in the domain of private security for the application of neuroscientific knowledge and neuroimaging technologies, but it is in need of higher contextualisation. Furthermore, some relevant questions concerning societal desirability are not yet sufficiently resolved. Nevertheless, it does provide a relevant point of departure for continued deliberation with wider stakeholders.

6. Evaluation

Different types of impacts of the deliberation process are possible, specifically regarding participants' thinking, the forging of relationships and the actions undertaken after the dialogue. The latter two, relationships and actions, give insight into whether the imaginaries were materially powerful. The willingness to act is also related to responsiveness. Insight into participants' (changed) thinking will give us an idea whether reflexivity has been achieved.

6.1. Reflexive thinking by research participants

The dialogue was experienced as fascinating by the security professionals. Two of the neuroscientists said that the meeting did not yield anything concrete, but that it was nevertheless experienced as interesting and pleasant. The other neuroscientist experienced the meeting as ‘odd' and had not given the event much thought since. In contrast, all the other interviewed participants had talked about the meeting with peers on one or more occasions.

The absence of a yield as perceived by neuroscientists related to a lack of action plans at the end of the meeting, and the neuroscientists’ perceived failure in educating security professionals on the opportunities and limitations of neuroimaging. In this light, it is interesting that the event in fact had an eye-opening effect on security professionals. One, for example, reported that it alerted him to taken-for-granted approaches towards security management, especially related to decision-making based on feelings and intuitions in the absence of hard facts. Another became more aware of potential detrimental effects of applying technologies in security management. This enhanced reflexivity was attributed to multiple aspects of the dialogue: the presence of scientific stakeholders with such different perspectives; the value game that introduced new societal viewpoints; and the participation of the applied ethicist as a ‘thorn in the side'.

This heightened reflexivity was not observed among the neuroscientists. They did not report any intensified awareness with respect to their own scientific practice, nor any significant shift in their perceived role in interacting with societal stakeholders. Nevertheless, at the end of each feedback interview, something interesting happened that made us suspect that the perceived lack of yield may relate to a lack of capacity among the neuroscientists to make sense of what happens during this type of meeting, as they may not have been exposed to such thinking. As the neuroscientists expressed curiosity about our perceptions of the meetings, we gave them our view on the points they had put forward in the reflection interview, after all evaluation topics had been addressed. We described, for example, how we saw that the creation of a shared language had been taking place. We recapitulated that in this event we had been including societal stakeholders during the problem definition phase instead of the implementation phase, and how we observed that this problem definition changed and gained in focus through multi-stakeholder interaction. We explained that we saw this as progress, even though we did not – in this one dialogue meeting – reach the phase of formulating concrete action plans. We addressed how we saw security professionals drawing lessons for their own practice. For all the neuroscientists, this led to increased understanding of the purpose and character of upstream multi-stakeholder engagement in innovation processes. One, for example, expressed that he may have been ‘naïve' in his expectations of how much could be achieved in a single meeting. A second neuroscientist became very enthusiastic about this type of transdisciplinary research. The transformation of the third neuroscientist was maybe even more drastic. While initially stating that he had forgotten most about the event, except that it was ‘odd' and of no use, the conversation ended with him voicing different kinds of experiments he would like to embark upon with security professionals, and he inquired after a follow-up of this study.

6.2. Relationships

Security professionals experienced the neuroscientists as interested in their practice, but also emphasised the paradigm clash between them, which they perceive as difficult to bridge. Two of the security management participants made appointments with scientists after the meeting, as the interaction with the scientists in the dialogue meeting intensified their interest in professionalizing security management. Another was on the verge of making such an appointment. Importantly, none of these appointments were with neuroscientists, and two of them concerned the social psychologist. The neuroscientists had not formed relationships with participating or other security professionals. However, the one neuroscientist who had had previous encounters with security professionals now viewed security professionals in a more favourable light. This influences the potential of relationship formation later on.

6.3. Action

Since the event, relatively little seems to have changed in the practice of the security professionals and neuroscientists interviewed. For the security professionals, their interest in professionalizing security management through collaborating with scientists has intensified. They appear to have gained in action preparedness after the dialogue, and undertake action on their own. The reflection interviews seem to have contributed somewhat to the action preparedness of the neuroscientists, with respect to participating in follow-up activities.

The intermediary development party had a ‘dormant brain file' prior to the dialogue. After the dialogue, however, they deemed the participating neuroscientists not ‘enthusiastic' enough for them to invest in knowledge translation. Still, if the security professionals had a clearer picture of a desired application of neuroimaging in their domain, they would be willing to facilitate, or take the lead in, this process. For now, the brain file remains dormant.

7. Discussion

This paper provides insights into dealing with the interactive exploration and formulation of knowledge questions when conditions are present for a technology in search of an application in the contested security paradigm. A two-pronged approach was taken to escape the lure of a technological fix and an under-critical stance towards the security paradigm. First, potential applications of neuroimaging were juxtaposed with low-technological alternatives from social psychology, which led to the dismissal of some neuroimaging applications for lacking added value. Secondly, the contentious security paradigm was put into focus by discussing the societal desirability of the remaining neuroimaging options, addressing which values would be harmed or respected by a certain application. Both steps contributed to reflexivity among practitioners. The shared imaginary constructed through deliberation provides a point of departure for continued wide deliberation with – and unpacking of the imaginary by – societal stakeholders.

7.1. Linking two worlds

Security professionals want to get a grip on their preventive task. They are looking for solutions based on authoritative knowledge to ensure an evidence base, but at the same time they seem to lack access to the academic world. Cognitive neuroscience and neuroimaging, being highly visible in popular culture, have caught the imagination of security professionals, thereby triggering a high-technology interest (Gunter 2014). Illustrative of this was one security professional's account of his disappointing attempt to convince a notable Dutch neuroscientist to jointly develop applications in security management. The relevant contribution of this paper to security and policy is that emphasis on and interest in neuroscience can be misplaced when it comes to security technology applications, as low-technological options are available. Although low-technology options, such as behavioural measures based on social psychology, are often more suitable, they appear to be overlooked. As a biological approach, neuroscience seems to fit the recent shift towards individualization (for example, the importance of personalized medicine or personalized education). By contrast, social psychology is a field in decline, apparently because of its diminishing capacity to make empirically compelling links between macro-level processes and micro-level phenomena, such as human behaviour (House 2008; Schnittker 2013). Social psychology is less likely to spark interest than neuroscience. As such, security professionals are in danger of buying into a high-technology fix for an under-defined problem; the latter being indicative of the current early phase.

Upstream deliberation with a wider group of actors was useful for increasing the definition of the problem while drawing on multiple perspectives. The use of a low-technology alternative was also valuable in making approaches based on technological fixes more explicit and reducing their allure. Setting up engagement at this early phase allows researchers to begin an RRI process from the problem definition stage onwards. We were not committed to the position that neuroscience should be applied in security management. In the juxtaposition of different perspectives, with a wider set of stakeholders being in the same location, we may have created circumstances for potential neuroimaging applications to arise. Thus we were shaping, rather than just unpacking, an imaginary that could guide technology development. Furthermore, in creating a corridor between parties that usually do not have access to each other, we were creating a space for a practice to emerge.

7.2. Shaping an imaginary

The dialogue setting enabled us to elicit deliberation on desirable and acceptable direct or indirect applications of neuroimaging in security management, while at the same time assessing their potential added value relative to alternatives from the field of social psychology. Elsewhere, it has been formulated that security professionals' interest in neurotechnologies is related to the desire to ‘improve the present capabilities of inferring higher cognitive functions, personality traits and psychological states of an individual, even without the individual's knowledge, cooperation or consent' (Hüsing, Jäncke, and Tag 2006, 159). This is encouraged by the perception of neuroimaging as direct, objective and accurate. Also, Canli (2006) projects that it is only a matter of time before neuroimaging technologies will be used for employment screening. Our observations add nuance for the Dutch context. Personality traits appear to be of less importance for the application of neuroimaging in this field. The interest of Dutch security professionals in recognizing intent and predicting behaviour seems to correspond with the interest in psychological states as described. They see how this could help them in their profession, but at the same time they are capable of identifying concerns. Importantly, through inclusive deliberation another imaginary takes shape. In this imaginary, security professionals themselves are targeted and neuroimaging is seen as a suitable option for theory-informed practice. This could change the real world for the better in two ways: more effective interventions and a potential decrease in discriminatory practices. However, it should be noted that theory-informed security practices and discriminatory practices are not automatically mutually exclusive. In ongoing deliberations with wider stakeholders, attention needs to be paid throughout the technology development and implementation phases to existing discriminatory practices that may be further instantiated by ‘validated’ theory, or to different discriminatory practices that may emerge in the process.

7.3. Practice in the making

Through our interventions, we have created a corridor for information to travel (Bal and Mastboom 2007) between scientists, security professionals and an intermediary technology development party. In initiating the creation of a shared language, and the exploration of shared problem definitions and goals, we may have created the foundation for a practice in the making. It is unclear whether this corridor will remain open, but if so, what happens in this corridor is more or less uncontrollable, especially once we, RRI researchers, pull out.

7.4. Making and doing

In this process, we helped shape a new imaginary shared by a wider set of actors and influenced participants’ activities and (potential) relationships. Neuroscientists and security professionals still endorse the idea of a theory-informed security management practice. However, this imaginary is not fixed, but rather a temporary shared agreement within on-going conversations. Without continued facilitation to bring people together and crystallize the imaginary, it is likely to dissolve for lack of on-going multi-stakeholder conversations.

The security professionals seem more ready to undertake activities towards their more general goal of professionalizing their practice. We did not instigate action among the neuroscientists, beyond continued willingness to participate in similar dialogue meetings. The intermediary technology development party seems ready to take action if the security professionals or neuroscientists take the initiative. However, the imaginary requires higher definition to instigate significant action among the different stakeholders.

Is the instigation of action probable at all in such an early phase? There is relatively wide manoeuvring space, as there are relatively few actors making and doing. This also seems to mean that RRI researchers should frequently facilitate interactions for action to arise, as also argued by Roelofsen (2011). These can be small-scale activities, such as the reflection interviews described here. The facilitator's view on the progress achieved seems to be an important motivator. But larger and intelligently designed activities are also required. In this study, for example, facilitation would be needed with respect to paradigm differences. Security professionals are used to dealing with uncertainties and ignorance in their daily practice, whereas the neuroscientists try to eliminate these in their laboratories. The intermediary party could function as a liaison, as their tolerance for uncertainty seemed to lie in the middle of these extremes during the dialogue. Another important role of the facilitator would be to introduce new stakeholders, vital for steering towards more fruitful, desirable and acceptable avenues.

7.5. Contributions to RRI

Macnaghten, Kearnes, and Wynne (2005) previously conveyed experiences with Responsible Innovation in action. Theirs was a case study of a technoscience-in-the-making relating to nanotechnology. However, the project's conception was already in place before the framework of RRI was introduced, rendering the framework prone to instrumental conditioning. We created this site with the ideal of Responsible Innovation as part of its very conception. This site has already been shaped by wider ethical and constructive discussion and is available for more of this type of modulation.

Constructive Technology Assessment (CTA) and Interactive Learning & Action are tools that can possibly create conducive circumstances for RRI (Rip and Te Kulve 2008). van Merkerk and Smits (2008) used a concrete application, Lab-on-a-chip, as the focal point for deliberation. However, concrete applications are typically lacking in a very early phase. Others have constructed guiding visions among the scientific community, which are then reflected upon by societal stakeholders (Arentshorst et al. 2014; Edelenbosch, Kupper, and Broerse 2013; Roelofsen et al. 2008). But what if there is no coherent community of practice to speak of among the scientists? This is of relevance to RRI, as the conversation on RRI puts the contribution to a societal challenge at centre stage. When taking the societal challenge as a point of departure, there may not be a community of practice within science to serve as a key site for vision assessment. This study proposes that the process should then start by unpacking imaginaries among the societal stakeholders who are positioned more closely to the societal problem.

However, our study has limitations. For one, as participation was voluntary, we mainly attracted those security professionals who are the frontrunners of the current trend in security management to professionalize the practice. As we included security professionals representing various activities within security management, we did achieve saturation within this group of frontrunners. We also encountered recurring themes in the interviews with neuroscientists. Secondly, the dialogue meeting was attended by one to six members of each stakeholder group; the resulting imaginary is therefore possibly not generalizable. However, our purpose was to achieve a responsible imaginary, rather than a generalizable one. We aimed to ensure scientific rigour by transcribing verbatim, sending member checks, involving multiple researchers in the analysis and discussion of the data, and using qualitative software.

Despite these limitations, to what extent were we able to contribute to an increased likelihood of responsible neuroimaging technologies in security management, if they were to emerge in this field? For one, have we contributed to more responsible actors? The security professionals clearly became more reflexive with respect to their practice. We did not see a similar tendency among the neuroscientists, although we do seem to have enriched their view on innovation processes and their perceived role therein. Considering the limited scope of this study, impacts are mainly on the individual actor level, although institutional effects cannot be altogether excluded. As the community of practice literature (Wenger Citation2002) teaches, actors learn through sense making with peers in their own practice. The reported conversations with peers by all but one interviewed participant is indicative of this. This could become a source of a spillover effect. Secondly, have we created an imaginary that is more likely to lead to responsible outcomes, in the sense that they would be more likely to be ethically acceptable and socially desirable? Privacy issues seem to be a major concern among the security professionals: they are aware of the balance between respecting civil rights and the regulatory space to undertake their security activities. Directing neuroimaging applications onto the security professionals themselves for higher performance, instead of onto civilians to screen for harmful intents, makes this balance somewhat less precarious. Social factors are possibly positively influenced by our introduction of social psychology as an alternative source of security innovations, to lessen the interest in a technological fix. This is supported by the observation that security professionals have started forming relations with other scientists rather than neuroscientists. In that sense, we appear to have reduced the allure of neuroscience. 
A more theory-informed practice within security management could lead both to less discrimination and to more efficient or effective approaches within the field. This coincides with the societal relevance formulated for the Grand Challenge of Secure Societies of the European Commission (EC Citation2014). However, considering the strong rationale for pre-emption within security management, a theory-informed practice could still turn into one in which decisions are based on potential rather than action, or in which prejudice is validated by ‘theory’. From the perspective of RRI, this continues to warrant reflection because of its implications for privacy and social justice. On-going inclusive deliberation is therefore needed, given its potential to increase reflexivity, as shown to have taken place among the security professionals in this study. Continued deliberation should not remain restricted to the parties we involved here. Although the interaction between practitioners, scientists and technology developers (in the presence of an applied ethicist) was useful for problem structuring and defining knowledge questions, the resulting imaginary needs wider shaping by those who do not necessarily stand to gain from the technology's creation. Stakeholders such as public authorities, non-governmental organisations and the public at large should be included. This is not to say that societal perspectives were not taken into account in this deliberation process. The applied ethicist contributed a different perspective, as illustrated by the description of the ethicist as a ‘thorn in the side'. But the other participants also drew on societal perspectives during the conversations: even the security professionals we initially felt so apprehensive about.

Disclosure statement

No potential conflict of interest was reported by the authors.

Notes on contributors

Irja Marije de Jong holds master's degrees in chemistry and in health and life sciences. She wrote her PhD thesis on Responsible Research and Innovation in the context of emerging neuroimaging technologies in justice and security. This research project was funded by the Netherlands Organisation for Scientific Research.

Frank Kupper studied biomedical sciences and philosophy of science and wrote his PhD thesis on reflective learning processes on animal biotechnology. He is an assistant professor of science communication and founded a company dedicated to the use of theatre and dialogue as instruments of playful reflection.

Jacqueline Broerse is a professor of innovation and communication in the health and life sciences. She holds a master's degree in biomedical sciences and obtained her PhD on the interactive development of research agendas. She specializes in science-society dialogue and systemic change for more inclusive innovation processes.

Additional information

Funding

This work was supported by the Netherlands Organisation for Scientific Research (NWO) within the thematic funding program Responsible Innovation (MVI) under Grant [313-99-180].

Notes

1. Reports were published by WODC, an advisory body for the Ministry of Security and Justice, for example, de Kogel (Citation2008). The Dutch Journal of Criminology published a special issue on the state of the art in bio-psychological and biosocial criminology in February 2005.

2. The programme ‘Brain & Cognition – social innovation in health care, education and safety' (HCMI) operates under the broader Dutch Initiative on Brain & Cognition, of the Netherlands Organisation for Scientific Research (NWO). The pillar ‘safety’ is devoted to matters of security and criminality. Safety is the translation used by this programme, although ‘security’ would better fit the nature of the projects.

3. This research project, Neurosciences in Dialogue, is funded by the Responsible Innovation programme of the Netherlands Organisation for Scientific Research (NWO) and focuses on the responsible development and use of neuroimaging in three application domains: health care, education, and justice & security.

4. We follow here Lockean social contract theory. In short: citizens give up the least possible amount of personal freedom to ensure the freedom of all citizens, trading this loss of personal freedom and autonomy for protection by the state.

5. For the remaining five we were unable to raise this topic due to time constraints.

6. Another 55% mentioned security-related applications outside of private security, in public security and justice settings such as fire departments and ambulance services.

7. This usually means talking to the person, asking a question such as ‘Good morning, sir. Can I help you with something?' This became even clearer during the dialogue session, although it is not specifically mentioned in that section of this paper. In response to concerns of other participants (mainly the neuroscientists) about reductionism, security professionals were prompted to open the ‘black box' of their daily experience. For them it was ‘obvious' that they would not label people as dangerous or remove them from a certain space on the basis of a technology alone.

References

  • Ansari, Daniel, Bert De Smedt, and Roland H. Grabner. 2012. “Neuroeducation – A Critical Overview of an Emerging Field.” Neuroethics 5 (2): 105–117. doi:10.1007/s12152-011-9119-3.
  • Arentshorst, Marlous E., Jacqueline E. W. Broerse, Anneloes Roelofsen, and Tjard de Cock Buning. 2014. “Towards Responsible Neuroimaging Applications in Health Care: Guiding Visions of Scientists and Technology Developers.” In Responsible Innovation 1, edited by Jeroen van den Hoven, Neelke Doorn, Tsjalling Swierstra, Bert-Jaap Koops, and Henny Romijn, 255–280. Dordrecht: Springer.
  • Bal, Roland, and Femke Mastboom. 2007. “Engaging with Technologies in Practice: Travelling the Northwest Passage.” Science as Culture 16 (3): 253–266. doi:10.1080/09505430701568651.
  • Belcher, Annabelle, and Walter Sinnott-Armstrong. 2010. “Neurolaw.” Wiley Interdisciplinary Reviews: Cognitive Science 1 (1): 18–22. doi:10.1002/wcs.8.
  • Borup, Mads, Nik Brown, Kornelia Konrad, and Harro van Lente. 2006. “The Sociology of Expectations in Science and Technology.” Technology Analysis & Strategic Management 18 (3–4): 285–298. doi:10.1080/09537320600777002.
  • Canli, Turhan. 2006. “When Genes and Brains Unite: Ethical Implications of Genomic Neuroimaging.” In Neuroethics: Defining the Issues in Theory, Practice, and Policy, edited by Judy Illes, 169–184. Oxford: Oxford University Press.
  • Clinard, Marshall, and Robert Meier. 2015. Sociology of Deviant Behavior. 13th ed. Belmont, CA: Thomson Wadsworth.
  • EC (European Commission). 2014. “Secure Societies – Protecting Freedom and Security of Europe and its Citizens.” Accessed November 20. http://ec.europa.eu/programmes/horizon2020/en/h2020-section/secure-societies-%E2%80%93-protecting-freedom-and-security-europe-and-its-citizens.
  • Edelenbosch, Rosanne, Frank Kupper, and Jacqueline E. W. Broerse. 2013. “The Application of Neurogenomics to Education: Analyzing Guiding Visions.” New Genetics and Society 32 (3): 285–301. doi:10.1080/14636778.2013.808033.
  • Farah, Martha J., and Paul Root Wolpe. 2004. “Monitoring and Manipulating Brain Function: New Neuroscience Technologies and their Ethical Implications.” Hastings Center Report 34 (3): 35–45. doi:10.2307/3528418.
  • Gunter, Tracy D. 2014. “Can We Trust Consumers with Their Brains? Popular Cognitive Neuroscience, Brain Images, Self-Help and the Consumer.” Indiana Health Law Review 11 (2): 483–552. doi:10.18060/18887.
  • Hedgecoe, Adam, and Paul Martin. 2003. “The Drugs Don't Work: Expectations and the Shaping of Pharmacogenetics.” Social Studies of Science 33 (3): 327–364. doi:10.1177/03063127030333002.
  • Hollander, Jocelyn A. 2004. “The Social Contexts of Focus Groups.” Journal of Contemporary Ethnography 33 (5): 602–637. doi:10.1177/0891241604266988.
  • House, James S. 2008. “Social Psychology, Social Science, and Economics: Twentieth Century Progress and Problems, Twenty-First Century Prospects.” Social Psychology Quarterly 71 (3): 232–256. doi:10.1177/019027250807100306.
  • Hüsing, Bärbel, Lutz Jäncke, and Brigitte Tag. 2006. Impact Assessment of Neuroimaging: Final Report. Vol. 50. Zürich: vdf Hochschulverlag AG.
  • de Jong, Irja Marije, Frank Kupper, Anneloes Roelofsen, and Jacqueline E. W. Broerse. 2015. “Exploring Responsible Innovation as a Guiding Concept: The Case of Neuroimaging in Justice and Security.” In Responsible Innovation 2: Concepts, Approaches and Applications, edited by Bert-Jaap Koops, Ilse Oosterlaken, Henny Romijn, Tsjalling Swierstra, and Jeroen van den Hoven, 57–84. Dordrecht: Springer.
  • Kearnes, Matthew, Robin Grove-White, Phil Macnaghten, James Wilsdon, and Brian Wynne. 2006. “From Bio to Nano: Learning Lessons from the UK Agricultural Biotechnology Controversy.” Science as Culture 15 (4): 291–307. doi:10.1080/09505430601022619.
  • Kiran, Asle H. 2012. “Does Responsible Innovation Presuppose Design Instrumentalism? Examining the Case of Telecare at Home in the Netherlands.” Technology in Society 34 (3): 216–226. doi:10.1016/j.techsoc.2012.07.001.
  • de Kogel, C. H. 2008. “De hersenen in beeld: neurobiologisch onderzoek en vraagstukken op het gebied van verklaring, reductie en preventie van criminaliteit.” In Onderzoek en Beleid, edited by WODC, 1–199. Den Haag: WODC.
  • Koops, Bert-Jaap. 2015. “Introduction.” In Responsible Innovation 2: Concepts, Approaches and Applications, edited by Jeroen van den Hoven, Ilse Oosterlaken, Bert-Jaap Koops, and Henny Romijn, 1–15. Dordrecht: Springer.
  • Lee, Nick, Amanda J. Broderick, and Laura Chamberlain. 2007. “What Is ‘Neuromarketing’? A Discussion and Agenda for Future Research.” International Journal of Psychophysiology 63 (2): 199–204. doi:10.1016/j.ijpsycho.2006.03.007.
  • Levi, Michael, and David S. Wall. 2004. “Technologies, Security, and Privacy in the Post-9/11 European Information Society.” Journal of Law and Society 31 (2): 194–220. doi:10.1111/j.1467-6478.2004.00287.x.
  • Lodge, Juliet. 2007. “Biometrics: A Challenge For Privacy Or Public Policy-Certified Identity And Uncertainties.” Minority, Politics, Society 1: 193–206.
  • Lyon, David, ed. 2003. Surveillance as Social Sorting: Privacy, Risk, and Digital Discrimination. London: Routledge.
  • Macnaghten, Phil, Matthew B. Kearnes, and Brian Wynne. 2005. “Nanotechnology, Governance, and Public Deliberation: What Role for the Social Sciences?” Science Communication 27 (2): 268–291. doi:10.1177/1075547005281531.
  • Nordmann, Alfred. 2010. “A Forensics of Wishing: Technology Assessment in the Age of Technoscience.” Poiesis & Praxis 7 (1–2): 5–15. doi:10.1007/s10202-010-0081-7.
  • Owen, Richard, John Bessant, and Maggy Heintz, eds. 2013. Responsible Innovation: Managing the Responsible Emergence of Science and Innovation in Society. Chichester: Wiley.
  • Owen, Richard, Phil Macnaghten, and Jack Stilgoe. 2012. “Responsible Research and Innovation: From Science in Society to Science for Society, with Society.” Science and Public Policy 39 (6): 751–760. doi:10.1093/scipol/scs093.
  • Patton, Michael Quinn. 1990. Qualitative Evaluation and Research Methods. 2nd ed. Newbury Park, CA: Sage.
  • Peissl, Walter. 2010. “Privacy and Security – A Way to Manage the Dilemma.” In ISSE 2009 Securing Electronic Business Processes, edited by Norbert Pohlmann, Helmut Reimer, and Wolfgang Schneider, 187–196. Wiesbaden: Vieweg+Teubner.
  • Pickersgill, Martyn, Sarah Cunningham-Burley, and Paul Martin. 2011. “Constituting Neurologic Subjects: Neuroscience, Subjectivity and the Mundane Significance of the Brain.” Subjectivity 4 (3): 346–365. doi:10.1057/sub.2011.10.
  • Pontell, Henry N. 2007. “Deviance, Reactivist Definitions of.” In Blackwell Encyclopedia of Sociology, edited by George Ritzer. Blackwell Reference Online.
  • Rip, Arie, and Haico Te Kulve. 2008. “Constructive Technology Assessment and Socio-Technical Scenarios.” In The Yearbook of Nanotechnology in Society, Volume I: Presenting Futures, 49–70. Berlin: Springer.
  • Roelofsen, Anneloes. 2011. “Exploring the Future of Ecogenomics: Constructive Technology Assessment and Emerging Technologies.” PhD thesis, Athena Institute, VU University Amsterdam.
  • Roelofsen, Anneloes, Jacqueline E. W. Broerse, Tjard de Cock Buning, and Joske F. G. Bunders. 2008. “Exploring the Future of Ecological Genomics: Integrating CTA with Vision Assessment.” Technological Forecasting and Social Change 75 (3): 334–355. doi:10.1016/j.techfore.2007.01.004.
  • Rose, Nikolas. 2001. “The Politics of Life Itself.” Theory, Culture & Society 18 (6): 1–30. doi:10.1177/02632760122052020.
  • Schnittker, Jason. 2013. “Social Structure and Personality.” In Handbook of Social Psychology, edited by John DeLamater, and Amanda Ward, 89–115. Boston, MA: Springer.
  • Shearing, Clifford, and Les Johnston. 2005. “Justice in the Risk Society.” Australian & New Zealand Journal of Criminology 38 (1): 25–38. doi:10.1375/acri.38.1.25.
  • Van Melik, Rianne, and Jan Van Weesep. 2006. “Terrorismebestrijding of criminaliteitspreventie.” Agora 22 (1): 20–23.
  • Van Merkerk, Rutger O., and Ruud E. H. M. Smits. 2008. “Tailoring CTA for Emerging Technologies.” Technological Forecasting and Social Change 75 (3): 312–333. doi:10.1016/j.techfore.2007.01.003.
  • Webb, Maureen. 2007. Illusions of Security: Global Surveillance and Democracy in the Post-9/11 World. San Francisco: City Lights Books.
  • Wenger, Etienne. 2002. “Communities of Practice and Social Learning Systems.” In How Organizations Learn: Managing the Search for Knowledge, edited by K. Starkey, S. Tempest, and A. McKinlay, 238–258. London: Thomson Learning.
  • Williams, James W. 2005. “Reflections on the Private Versus Public Policing of Economic Crime.” British Journal of Criminology 45 (3): 316–339. doi:10.1093/bjc/azh083.
  • Zedner, Lucia. 2007. “Pre-Crime and Post-Criminology?” Theoretical Criminology 11 (2): 261–281. doi:10.1177/1362480607075851.