New Genetics and Society
Critical Studies of Contemporary Biosciences
Volume 30, 2011 - Issue 1

Connecting neuroscience and law: anticipatory discourse and the role of sociotechnical imaginaries

Pages 27-40 | Published online: 03 Mar 2011

Abstract

In recent years, attempts have increasingly been made to connect neuroscience and law. Scientists and lawyers are imagining and actively fostering the realization of futures in which neuroscience will play a prominent role in the activity of courts. In this article I take these debates as my empirical object. I trace the emergence of neurolegal discourse, explore its focus on free will and lie detection, and show how expectations about the potential role neuroscience might play in the law are being embedded in new research programs and funding streams. In so doing, I analyze the role of particular “sociotechnical imaginaries” in stimulating, directing and restricting neurolegal discourse and highlight the ways in which new visions of law, science and scientists are produced in the process. Sociotechnical imaginaries are shown to be salient in structuring anticipatory discourse, and represent a key target for social scientific intervention in such debates.

Introduction

Neuroscience promises to provide new ways of understanding ourselves, social relations, and societies. Through its investigations, not least of which are powerful imaging studies, the “new brain sciences” are contributing to longstanding debates concerning selfhood, normality and pathology (Rose Citation2007, Pickersgill Citation2009). At the same time, academics and professionals from a variety of backgrounds are seeking to forge connections between neuroscience and other disciplines, such as economics, education and law. In the process, these actors, and the institutions that support their activities, invoke and produce particular visions of science, the social, and the brain.

In this article, I take one such interdisciplinary commitment as my empirical object. Specifically, I interrogate the expanding discourse on neuroscience and law. Debates regarding what some have referred to as “neurolaw” have emerged in recent years through efforts to connect neuroscientific knowledge to legal practice (Chorvat and McCabe Citation2004, Zeki and Goodenough Citation2004, Tovino Citation2007). This endeavor has seen new collaborations and engagements develop between neuroscientists, lawyers and ethicists. Here I focus both on the emergence and current topography of neurolegal discourse, and on the role of particular imaginaries of law, science and scientists in this.

In my analysis, I draw upon and elaborate the concept of “sociotechnical imaginaries,” as described by Jasanoff and Kim Citation(2009). These authors take imaginaries to be a means through which states describe and prescribe national futures within which technoscience plays a central role. Sociotechnical imaginaries are described as “collectively imagined forms of social life and social order reflected in the design and fulfilment of nation-specific scientific and/or technological projects” (ibid., p. 120). Resonant with recent sociological work on expectations (van Lente and Rip Citation1998, Brown Citation2003, Hedgecoe and Martin Citation2003, Nerlich and Halliday Citation2007, Wilkie and Michael Citation2009), we might view sociotechnical imaginaries as one means through which anticipatory discourse and practices are structured, and thus as a mechanism through which futures are designed.

In this article, I redeploy the concept of sociotechnical imaginaries, exploring the role of visions of technoscience and social order within individual and collective accounts of potential futures, of the sort (re)produced within normative debate about the potential of science to reconfigure and enhance existing social practices. I thus extend Jasanoff and Kim's analysis by highlighting the role of sociotechnical imaginaries within more micro-social processes, as well as the ways in which these may emerge in transnational discourse.

Methods

The data upon which this analysis draws primarily include published journal articles and other commentaries, alongside blog posts and other online media, and participant observation at lectures, workshops and conferences on neuroscience and society. In order to generate a corpus of articles, Web of Knowledge (WoK) was searched using the terms “neuroscience and law” and “neurolaw.” Other articles were also located through examining the references of key papers sourced from WoK, and through attending to texts commonly cited in seminars and online commentary. As Littlefield Citation(2009) has shown, such texts are saturated with assumptions about brain, self and society, and the power and potential of neuroscience.
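As a rough illustration of this kind of corpus-building step (a minimal sketch only, not the exact procedure used for this study), the snippet below filters and deduplicates exported bibliographic records against the search terms above. The file name and the "title", "abstract" and "doi" column names are hypothetical stand-ins for however WoK results happen to be exported.

```python
import csv

# Hypothetical export: assumes Web of Knowledge records saved as a CSV file
# with "title", "abstract" and "doi" columns (stand-in names, not the actual
# WoK export format).
SEARCH_TERMS = ["neurolaw", "neuroscience and law"]


def matches(record: dict) -> bool:
    """True if a record's title or abstract contains any of the search terms."""
    text = "{} {}".format(record.get("title", ""), record.get("abstract", "")).lower()
    return any(term in text for term in SEARCH_TERMS)


def build_corpus(path: str) -> list:
    """Read exported records, keeping unique matches (deduplicated by DOI, else title)."""
    corpus, seen = [], set()
    with open(path, newline="", encoding="utf-8") as handle:
        for record in csv.DictReader(handle):
            key = record.get("doi") or record.get("title", "").lower()
            if key and key not in seen and matches(record):
                seen.add(key)
                corpus.append(record)
    return corpus


if __name__ == "__main__":
    papers = build_corpus("wok_export.csv")  # hypothetical file name
    print(len(papers), "candidate neurolaw records retained")
```

Reference chaining from key papers and texts cited in seminars, as described above, would then add further items to such a corpus by hand.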

My textual analysis is also grounded in limited (participant) observation at UK, European and North American neuroscience and society events over the last five years. Some of these explicitly focused on the relationships between neuroscience and law, including a high-profile week-long conference supported by the European Science Foundation on “Law and neuroscience: our growing understanding of the human brain and its impact on our legal system” (held October 2009).Footnote1 Overall, my critical discourse analysis concerned the deciphering of implicit and explicit assumptions regarding the place, role and impact of neuroscience in and on legal systems, and the development of these statements and positions over time.

Making the connections

As lawyer Amanda Pustelnik Citation(2009) notes, “law and neuroscience have been engaged in an ill-fated and sometimes tragic affair for over two hundred years” (Pustelnik Citation2009, p. 183). However, over the last decade, a new conversation has emerged between scientists, lawyers and ethicists about the place and role of neuroscience in law. Such discussions of “neurolaw” have been contained within a variety of fora, including edited volumes, special editions of prestigious journals, and high-profile conferences and other events at celebrated universities.Footnote2 These discursive arenas act as what Molyneux-Hodgson and Meyer refer to as “community-making devices” that provide “‘glue’ to capture and begin to sustain emerging links” (Molyneux-Hodgson and Meyer Citation2009, p. 140).Footnote3

As discourses pertaining to both the potential of neuroscience to enhance law and the ways in which this enhancement might be handled in an ethically robust fashion, such discussions can be read as first, instantiations of a broader cultural understanding of brain research as holding promise to transform science, medicine and society; second, a reflection of longstanding expectations about the capacity of scientific knowledge to improve legal practice; and third, a platform from which such expectations might be consolidated and reproduced. Together, these discourses at once embed and are embedded within an emerging “community of promise” (Brown Citation2003, p. 6): the neurolegal community.

Discussions of the neuroscience/law interface are not simply part of the “grass-roots,” unfunded scholarship that still forms a common component of the contemporary work of academia. Rather, neurolegal discourse has attracted substantial investment from prestigious funding sources, such as the US John D. and Catherine T. MacArthur Foundation's sponsorship of the Law and Neuroscience Project (LANP) (Gazzaniga Citation2008). This well-funded venture explicitly aims to stimulate innovative work at the intersections of neuroscience and law. At the same time, organizations such as the European Science Foundation have also taken an interest in these debates (e.g. the conference highlighted above, where some of the data upon which this article is based were collected).

For many involved in neurolegal debate, it is clear not only that neuroscience does have a valuable contribution to make to the law, but that it will impact on legal practice. According to the LANP website, neuroscience “will increase our understanding of actions that our laws regulate and of attitudes that our laws reflect” (LANP Citation2010). As such, scientists, lawyers and others are understood as having a responsibility to ensure that existing connections between neuroscience and law are fostered and furthered in an ethically robust way:

How we apply this knowledge can have a major impact on the future of our legal system. With informed and cautious reform, our justice system could have more accurate predictions, more effective interventions, and less bias. Society could have less crime and fewer people in prisons. However, by ignoring or failing to integrate neuroscience properly, we could end up with a legal system that is worse off as a result of unreliable evidence that could send the wrong people to prison and because of widespread skepticism throughout society about law's basic assumptions. (ibid.)

The charged expectations evident in the extract above are not unusual in neurolegal discourse. In the words of psychologist Daniel Martell: “there is a place for neuroscience in the courtroom, a place that is certain to expand” (Martell Citation2009, p. 132). In the introduction to a recent special issue of Neurocase dedicated to neuroscience and crime, psychologist Hans Markowitsch confidently wrote how “Jurisprudence will profit considerably from methods and applications of the neurosciences” (Markowitsch Citation2008, p. 1). While some lawyers and scientists are cautious when evaluating the connections between neuroscience and law and are careful “not to speculate about the potential of neuroscience” (Tovino Citation2009, p. 473; see also Tovino Citation2008), discourse on neurolaw is proliferating, and even the New York City Bar Foundation Citation(2005) has come to address these issues.

Within the neurolegal literatures, arguably provocative statements come to be mundane. For instance, the claim of MacArthur Project Director, Michael Gazzaniga (a prominent cognitive neuroscientist and former member of the US President's Council on Bioethics) that “Neuroscience, like it or not, is enmeshed with the core issues of criminal law” (Gazzaniga Citation2008, p. 415) has a commonsense logic within this set of debates which elides the considerable sociotechnical work necessary to align the practices and concerns of neuroscientists with those of lawyers. However, within the current topography of neurolaw, well-meaning assertions that the “publicly spirited neuroscientist” must “help guide our society to the proper use of its accurate and growing base of scientific knowledge” (ibid.) are regarded as unproblematic, even as they reconfigure ideas about the nature of scientific expertise and the role of both science and scientists in the law.

In what follows, I discuss further these emerging issues, grounding my analysis in debates centering on two conceptual and substantive foci for neurolaw: neuroscience and free will, and neurotechnological deception detection. As will become clear, though debates regarding neuroscience and law are by no means unilateral, as a whole they are (re)producing particular sociotechnical imaginaries that legitimate, order and propel such discourse.

Neuroscience and free will

Considerable debate on neurolaw has focused on free will, with a number of scientists, lawyers and ethicists arguing that neuroscience is powerfully illuminating and even challenging dominant philosophical perspectives on free will, as well as on related concepts such as moral responsibility and intentionality (Greene et al. Citation2001, Casebeer Citation2003, O'Hara Citation2004, Tancredi Citation2007, Heisenberg Citation2009, Kaposy Citation2009). Max Planck scientist Chun Sion Soon and colleagues Citation(2008) have used functional magnetic resonance imaging (fMRI) to illustrate the role of the frontal and parietal cortex in the formation of decisions before they have been consciously made. These findings continue an established scientific tradition; in 1985, for instance, celebrated physiologist Benjamin Libet used electroencephalography (EEG) to demonstrate electrical activity in the brains of research participants prior to their conscious decisions to perform an action (in this case, the pressing of a button) (Libet Citation1985). Libet's data were thought to suggest that ostensibly deliberate choices were initiated subconsciously even before individuals were aware of having made them, raising profound questions about free will.

These and other studies have been argued to have important implications for the law. Neuroscientist Maxwell Bennett, for instance, has argued that “failure of appropriate restraint” (Bennett Citation2009, p. 289), leading to violence, is related to synaptic failure – a finding he suggests might enjoin reflection on the part of courts ruling on an individual's responsibility for their crime. Others have likewise advanced analyses regarding the degree to which neuroscience might nuance and add to psychological and psychiatric evidence regarding the moral culpability of a criminal defendant.Footnote4 As lawyer Kelly Burns and neuroscientist Antoine Bechara put it, “the idea of freedom of will on which our legal system is based is not supported by the neuroscience of decision making” (Burns and Bechara Citation2007, p. 263; see also Greene and Cohen Citation2004).

Debates about free will are exemplified in discourse on the moral responsibility of “the psychopath.” As bioethicist Walter Glannon Citation(2008) has pointed out, considerable interest has been shown in the questions of whether, and to what extent, psychopaths can be held responsible for their antisocial behavior. Within moral philosophy there is a strong tradition of using psychopathy as a case study through which to discuss the nature of morality – a tradition that has been enriched through the links ethicists have made with cutting-edge neuroscientific data.Footnote5 Glannon himself feels, like many others, that “The cognitive and affective impairment in psychopaths is enough to justify mitigated responsibility, but not excuse” (Glannon Citation2008, p. 159).

Yet, some of the scientists conducting research germane to this area are less certain. In particular, neuroscientist James Blair, an authority on psychopathy based at the US National Institute of Mental Health, has resisted giving definitive answers to questions centering on the responsibility of psychopaths, regarding such issues as “out of the scope” (Blair Citation2008, p. 153) of his research. Pockett reached a similar conclusion regarding responsibility in a range of individuals, though she argued it was nevertheless

a good idea […] to start discussing the question of how the law would or should be affected if the ultimate conclusion turns out to be that all so-called voluntary behavior is in fact unconsciously initiated. (Pockett Citation2007, p. 292)

The reflections of Pockett and others outlined here illustrate a key assumption within much neurolegal discourse; namely, that law should become more “open” to neuroscientific expertise, or, in some cases, even be reconfigured so as to “catch up” with neuroscience. These arguments imagine the epistemic foundations of law to be subordinate to those of neuroscience, and law itself to be somewhat plastic: it is viewed not only as something that should be changed, but imagined as an institution that could be (though not necessarily easily). At the same time, imaginaries of scientific experts are produced which cast researchers as having expertise that is intrinsically normative.

Nonetheless, as we have begun to see, not everyone agrees with claims that neuroscience will play a role in legal futures, nor that some neuroscientists have expertise relevant to law. In particular, lawyer Stephen Morse Citation(2005) has argued that only profound discoveries by scientists are likely to recast legal doctrine, while bioethicist Adina Roskies has more boldly stated that despite “hand-wringing about freedom and moral responsibility” (Roskies Citation2006, p. 419) neuroscience presents little challenge to conceptions of free will.Footnote6 Sociologist Nikolas Rose is similarly doubtful about the extent to which neuroscience will transform the law, believing it “unlikely to overturn or radically transform legal reasoning or the premises of the criminal justice system” (Rose Citation2007, p. 226). Others state plainly that “human behaviour cannot be described adequately in physical terms of cause and effect” (Kroeber Citation2007, p. 251).

Not all analysts, therefore, believe neuroscience will transform legal praxis. Rather, some consider the “challenge” presented by neuroscience to law to be somewhat mundane. These comments may, perhaps, be accurate, but it is also clear that they too are structured by particular imaginaries. Specifically, rather than constructing law as plastic, legal institutions are instead constructed as strongly resistant to exogenous change. However, it remains the case that these are not the dominant sociotechnical imaginaries structuring neurolegal discourse: it is precisely because of this that caveats or even opposition to more hyperbolic accounts of the neuroscience/law interface have been advanced. Within much neurolegal discourse, many continue to ascribe profundity to neuroscientific data.

Technologies of truth

While neurologic hopes and the fears that go hand in hand with them are readily evident in discourse on free will, nowhere are they more apparent than in debates on neurotechnological developments in lie detection. Here the legal applications of neuroscience have been argued to be far more immediate, and certainly there is some empirical support for this assertion: though these developments are incremental, it is clear that some courts have employed neuroscientific techniques to detect deception (Erickson and Felthous Citation2009). Such practices are situated within and help to amplify longstanding sociotechnical spaces wherein the role of polygraphy, EEG and molecular genetics in the courts has been articulated, explored and embedded (Alder Citation2007, Dressing et al. Citation2008).

Some research in this area draws on longstanding EEG methodologies, though several scientists are employing more recent neuroimaging tools. Largely, this research draws on fMRI cognitive subtraction techniques to examine changes in brain activity during deception. These changes are then used to infer which parts of the brain are implicated in lying.Footnote7 DARPA-funded researcher Bruce Luber and colleagues Citation(2009) have suggested that transcranial magnetic stimulation (TMS) may also hold potential value as a means of detecting deception.
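Purely to make the subtraction logic concrete (this is not a description of any published pipeline, and the data are synthetic), the sketch below compares simulated “lie” and “truth” trial responses voxel by voxel with a two-sample t-test and thresholds the resulting difference map; the effect size, threshold and all names are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)

# Synthetic data only: 1,000 "voxels", 20 trials per condition. A small block
# of voxels is given extra signal in the "lie" condition so the contrast has
# something to find; effect size and threshold are arbitrary illustrations.
n_trials, n_voxels = 20, 1000
truth = rng.normal(0.0, 1.0, size=(n_trials, n_voxels))
lie = rng.normal(0.0, 1.0, size=(n_trials, n_voxels))
lie[:, :50] += 1.5  # assumed "deception-related" signal in the first 50 voxels

# Cognitive subtraction in its simplest form: the lie-minus-truth difference
# per voxel, assessed with a two-sample t-test and an uncorrected threshold.
t_vals, p_vals = stats.ttest_ind(lie, truth, axis=0)
contrast = lie.mean(axis=0) - truth.mean(axis=0)
suprathreshold = np.where(p_vals < 0.001)[0]

print(len(suprathreshold), "voxels exceed the illustrative threshold")
if suprathreshold.size:
    print("mean lie-minus-truth difference in those voxels:",
          round(float(contrast[suprathreshold].mean()), 2))
```

In practice, fMRI analyses model hemodynamic responses and correct for multiple comparisons; the sketch is only meant to show the elementary contrast on which such inferences rest.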

Given the established use of polygraphy, it is perhaps unsurprising that legal interest in these developments has been acute. This is particularly the case in the US, where post-9/11 anxieties have stimulated public and political desires for neurologic deception detectors (Littlefield Citation2009). Lawyer Charles Keckler, for instance, has argued that “the legal profession should seek to foster” the potential of neuroimaging to “encourage factual veracity” in the courts “through structuring the correct incentives and rules for admissibility” (Keckler Citation2006, p. 509). The promise of neurotechnological lie detection is not just located within academic discourse, however. Rather, it is evident too in the growth of companies like No Lie MRI and Cephos Corp which are seeking to bring neurologic deception detection to the market.Footnote8

Alongside these expectations and new research trajectories, calls for anticipatory ethical appraisals of neuroscience-based lie detection have become powerfully resonant. University of Pennsylvania bioethicist Paul Root Wolpe has been an especially energetic analyst of neuroethical issues, including neurotechnologies for lie detection. His engagement has been stimulated by the potential for such techniques to “be used routinely in the criminal justice system” (Stoller and Wolpe Citation2007, p. 364). As Wolpe and colleagues (including deception-detection researcher Daniel Langleben) remarked in a paper on the “promises and perils” of neurotechnological lie detection, it is time to begin a “social conversation about the appropriate parameters of its civil, forensic, and security use” (Wolpe et al. Citation2005, p. 39; see also Fox Citation2009). Writing with Sarah Stoller, he set out a number of issues demanding consideration, including

the general admissibility of a variety of deception-detecting technologies, the acceptability of brain imaging evidence to determine subjective states of mind, and the use of brain imaging to determine levels of competence or capacity. (Stoller and Wolpe Citation2007, p. 375)

Others, too, have called for bioethical attention to neuroscience-based lie detection; for instance, lawyer Brent Garland and neuroscientist Paul W. Glimcher suggest it is “a pending advance” that “raises serious legal and ethical questions” (Garland and Glimcher Citation2006, p. 132).Footnote9 Specific points in need of attention include the degree to which constitutional protections might be violated by such technologies. Like Wolpe and colleagues, these authors, in effect, outline a series of regulatory obstacles that scientists and lawyers must navigate to ensure the widespread use of neuroscience-based lie detection.

However, Garland and Glimcher do not merely suggest that neuroscientists may come to play a role in these debates; they actively encourage it:

If scientists do not participate in a dialogue with the legal community, then how neuroscience and neurobiology enters the courtroom will be a discussion largely left to lawyers, judges, legal scholars and policy makers. (ibid., p. 134)

The place of neuroscientists in such discussion is a direct consequence of Garland and Glimcher's assumptions about the inevitability of neuroscientific knowledge becoming embedded within legal practice. Even lawyer and psychiatrist Nigel Eastman and neuroscientist Colin Campbell, who are more cautious about the certainty of legal applications for neuroscience, envision possible futures whereby neuroscientists are “drawn (or perhaps seduced) into” (Eastman and Campbell Citation2006, p. 315) applying their expertise to legal problems. Some scientists themselves have also sought to foster ethical debate on the use of neurotechnologies in court. For example, Daniel V. Meegan noted in the American Journal of Bioethics that in spite of the failure of current tools and techniques to meet US criteria for legal admissibility, they nevertheless have considerable “forensic potential” (Meegan Citation2008, p. 9) and therefore demand ethical reflection and discussion.

Once more, therefore, sociotechnical imaginaries that construct law as plastic and neuroscience as intrinsically normative both structure and direct neurolegal commentary. The new imaginary of scientific expertise emergent in discourse on free will and responsibility has gained particular traction within debates regarding lie detection: both scientists and lawyers are imagining and instating neuroscience researchers as experts not only in the brain, but in ethics as well.

Yet, there is also skepticism about the promise of neuroscience.Footnote10 Sociologists have argued that neurologic techniques will be subject to high levels of interrogation before they become a legal mainstay (e.g. Rose Citation2007, p. 300). Some lawyers are likewise cautious about their implications (Moriarty Citation2009). Scientists too have outlined caveats to these technologies.Footnote11 One of the leading investigators in the area, University of Sheffield psychiatrist Sean Spence, has himself pointed to a number of limitations to this research. These include various methodological issues, and a failure to replicate the published studies (Appelbaum Citation2007, Spence Citation2008). However, by delineating limitations, researchers like Spence draw attention to the considerable promissory discourse surrounding neuroscience and its legal potential, implicitly suggesting current drawbacks can and will one day be overcome, and legitimating the dominant sociotechnical imaginary of neurolaw.

It is thus clear not only that neuroscientific research into lie detection is being actively pursued, but also that there is a demand for it from some legal professionals. New expectations are being generated, co-produced with financial investment and ethical engagement. Neuroscientists are being configured by ethicists, lawyers, and, not least, by themselves as playing important roles in ensuring that emerging techniques for lie detection translate into legal practice in ethically robust ways. Together, these new research programs and ethical discourses are, it seems, imagining sociotechnological spaces that new lie detection technologies might one day occupy. In so doing, these imaginaries contribute to forging and strengthening connections between neuroscience and law.

Discussion

In this paper I have analyzed the emergence and topography of a new normative discourse: neurolaw. Sharply illuminating the “current preoccupation with interdisciplinarity” (Barry et al. Citation2008, p. 21) within and beyond the academy, this is an attempt to connect neuroscience and the law through new scholarly and professional engagements and research programs. Within neurolegal discourse, significant questions are now being asked about how neuroscience will reshape or challenge dominant ideas about free will; how effective neuroscientific techniques are at detecting deception; what role these knowledges and technologies could and should play in the courts; and how they can be regulated in ethically robust ways. Neurolaw can thus be seen as governed by what Barry and colleagues Citation(2008) refer to as a “logic of ontology” within interdisciplinary enterprises: it seeks to redefine what counts as knowledge relevant to courts, and, in more extreme modes, legal practice itself. Accordingly, the roles of legal and scientific practitioners themselves are also candidates for reappraisal. In this sense, we can view neurolegal praxis as a form of what might be called “ontological work.”

The discourses of neurolaw are distributed across bioethical, legal and scientific communities, though they are being played out more within scientific than legal literatures, and primarily in North American – but increasingly European – contexts. Such literatures and, in particular, the conferences and seminars from which they often originate, are important incubators for this emergent tradition; they produce “stickiness through being able to bring together the main players in the field” (Molyneux-Hodgson and Meyer Citation2009, p. 141). Yet, new connections between neuroscience and law are not evident solely within discourse. Rather, they are being instantiated through new flows of what Thompson Citation(2005) might call “promissory capital.” Private investment into neuroscience-based lie detection is rising, and the US Defense Department and CIA have invested significantly in neurotechnologies that might contribute to the enhancement of homeland security (Moreno Citation2006, Langleben and Datillio Citation2008). Neurolaw thus emerges from and further extends a larger cultural turn towards viewing neuroscience as an authoritative and prestigious branch of biomedicine (Rose Citation2007, Pickersgill Citationin press), exemplifying a broader “conviction that science can deliver failsafe, and therefore just, legal outcomes where the law, acting on its own, might fall short” (Jasanoff Citation2006, p. 328; see also Littlefield Citation2009, Moriarty Citation2009).

Such moves largely elide the fact that neuroscience does not, a priori, have a contribution to make to the law. That some now take this discipline to be germane to the concerns of courts is a social accomplishment. In making neuroscience the “right tool for the job” (Clarke and Fujimura Citation1992, Joyce Citation2008), neuroscientists (and lawyers) have had to reconfigure amorphous concepts like “free will” or “deception” into constructs that can be operationalized within research, and prominent actors have necessarily made considerable efforts to communicate across the boundaries of science, law and ethics.

Such communications are intimately tied to what Jasanoff and Kim Citation(2009) call “sociotechnical imaginaries.” These figure neuroscience and law in particular ways, consolidating connections between these disciplines and practices. In this paper I have taken Jasanoff and Kim's concept, and used it as an analytic tool to explore the visions of technoscience and social order that are embedded in, constitutive of, and produced by the discourse of individuals and transnational collectives, not just states. This analysis therefore not only casts light on the sociology of neurolaw, but contributes further to our understandings of the ways in which anticipatory debates are structured and futures engineered.

In the case at hand, there are different figurations of the interdisciplinary matrix. However, a dominant sociotechnical imaginary is discernable. Law is imagined as first, epistemically subordinate to neuroscience, and second, highly plastic: neuroscience not only should but could enhance legal institutions and processes. At the same time, science is imagined as having an intrinsic normativity that demands attention and action, and scientists are understood to be key figures needing to be enrolled as part of the assemblage of actors that can and must effect this legal shift.

Of course, not all “stakeholders” in debates on neuroscience and law invoke or produce these imaginaries: the neurolegal terrain is hotly contested, and several key commentaries evidence understandings of scientific research and legal practice that differ markedly from those set out above. Yet, the invocation and generation of such “counter-imaginaries” are intimately tied to the dominance of the former and thus do little to silence debate. Rather, as “negative expectations” (Nerlich and Halliday Citation2007) they too have a performative force, acting to further propel neurolegal discourse and research. In particular, anticipatory ethical appraisals might act as “roadmaps” (Hedgecoe and Martin Citation2003), promising smooth passage of neuroscience into the courts if they are followed. Anticipatory discourse or “upstream” ethics thus help to embed imaginaries within social action.

For this case at least, when conducted in isolation from empirical sociological research on science and law, anticipatory discourses can be seen to be strongly structured by sociotechnical imaginaries that develop, shape and restrict debate. This is problematic: the neurolegal enterprise revolves around controversial issues of significant societal importance and thus needs to be informed by the rich empirical work of sociologists of law and science, rather than abstractions which may act to legitimize rather than problematize current developments. The importance of continued attention to and engagement with “the promotion and reception of science and technology by non-scientific actors and institutions” (Jasanoff and Kim Citation2009, p. 119) is thereby underscored.

Acknowledgements

I thank Sarah Cunningham-Burley for an insightful reading of an earlier version of this article, and the referees for their very helpful comments. Funding is gratefully acknowledged from the ESRC, through the grant “Constituting Neurologic Subjects: Neuroscience, Identity and Society after the ‘Decade of the Brain’” (RES-000-22-3501).

Notes

1. See: http://www.esf.org/index.php?id=5679 [Accessed 11 November 2009].

2. Such journals include Philosophical Transactions of the Royal Society and Neurocase and books like Neuroscience and the law (Garland Citation2004). Events on legal aspects of neuroscience have been held at centers such as Harvard University, University College London, the University of Akron School of Law, and the Scottish Institute of Advanced Studies, with other conferences sponsored by important public bodies like the European Science Foundation.

3. Brigitte Chamak Citation(1999), in her rich study of the development of cognitive science, has also pointed towards the important role of (funded) conferences and workshops in the emergence and consolidation of (inter)disciplines.

4. See, for example, Baird and Fugelsang Citation(2004), Sapolsky Citation(2004), Aharoni et al. Citation(2008), Batts Citation(2009), Knabb et al. Citation(2009), Fabian Citation(2010), Vincent Citation(2010). See also Lederman Citation(2010).

5. See, for example, Maibom Citation(2008) and references therein.

6. See also Aronson Citation(2007), Kawohl and Habermeyer Citation(2007), Silva Citation(2007).

7. See Spence et al. Citation(2001), Kozel et al. Citation(2004), Lee et al. Citation(2005).

8. See: http://www.cephoscorp.com/ and http://www.noliemri.com/ [Accessed 11 November 2009].

9. See also Kerr et al. Citation(2008).

10. As there is in the mental health professions, regarding the therapeutic promise of neuroscience (Pickersgill Citationin press).

11. Though for some lawyers, the question of whether neuroscience is ready for legal use is a question for the courts, not scientists, to decide (Schauer Citation2010).

References

  • Aharoni, E., et al., 2008. Can neurological evidence help courts assess criminal responsibility? Lessons from law and neuroscience, Annals of the New York Academy of Sciences 1124 (2008), pp. 145–160.
  • Alder, K., 2007. The lie detectors: the history of an American obsession. New York: Free Press; 2007.
  • Appelbaum, P. S., 2007. The new lie detectors: neuroscience, deception, and the courts, Psychiatric Services 58 (2007), pp. 460–462.
  • Aronson, D. D., 2007. Brain imaging, culpability and the juvenile death penalty, Psychology, Public Policy, and Law 13 (2007), pp. 115–142.
  • Baird, A. A., and Fugelsang, J. A., 2004. The emergence of consequential thought: evidence from neuroscience, Philosophical Transactions of the Royal Society of London B: Biological Sciences 359 (2004), pp. 1797–1804.
  • Barry, A., Born, G., and Weszkalnys, G., 2008. Logics of interdisciplinarity, Economy and Society 37 (2008), pp. 20–49.
  • Batts, S., 2009. Brain lesions and their implications in criminal responsibility, Behavioral Sciences and the Law 27 (2009), pp. 261–272.
  • Bennett, M., 2009. Criminal law as it pertains to “mentally incompetent defendants”: a McNaughton Rule in the light of cognitive neuroscience, Australian and New Zealand Journal of Psychiatry 43 (2009), pp. 289–299.
  • Blair, R. J.R., 2008. The cognitive neuroscience of psychopathy and implications for judgments of responsibility, Neuroethics 1 (2008), pp. 149–157.
  • Brown, N., 2003. Hope against hype: accountability in biopasts, presents and futures, Science Studies 16 (2003), pp. 3–21.
  • Burns, K., and Bechara, A., 2007. Decision making and free will: a neuroscience perspective, Behavioral Sciences and the Law 25 (2007), pp. 263–280.
  • Casebeer, W. D., 2003. Moral cognition and its neural constituents, Nature Reviews Neuroscience 4 (2003), pp. 840–847.
  • Chamak, B., 1999. The emergence of cognitive science in France: a comparison with the USA, Social Studies of Science 29 (1999), pp. 643–684.
  • Chorvat, T., and McCabe, K., 2004. The brain and the law, Philosophical Transactions of the Royal Society of London B: Biological Sciences 359 (2004), pp. 1727–1736.
  • Clarke, A., and Fujimura, J., eds., 1992. The right tools for the job. Princeton: Princeton University Press; 1992.
  • Dressing, H., Sartorius, A., and Meyer-Lindenberg, A., 2008. Implications of fMRI and genetics for the law and the routine practice of forensic psychiatry, Neurocase 14 (2008), pp. 7–14.
  • Eastman, N., and Campbell, C., 2006. Neuroscience and legal determination of criminal responsibility, Nature Reviews Neuroscience 7 (2006), pp. 311–318.
  • Erickson, S. K., and Felthous, A. R., 2009. Introduction to this issue: the neuroscience and psychology of moral decision making and the law, Behavioral Sciences and the Law 27 (2009), pp. 119–121.
  • Fabian, J. M., 2010. Neuropsychological and neurological correlates in violent and homicidal offenders: a legal and neuroscience perspective, Aggression and Violent Behavior 15 (2010), pp. 209–223.
  • Fox, D., 2009. The right to silence as protecting mental control, Akron Law Review 42 (2009), pp. 763–801.
  • Garland, B., ed., 2004. Neuroscience and the law: brain, mind and the scales of justice. Chicago: University of Chicago Press; 2004.
  • Garland, B., and Glimcher, P. W., 2006. Cognitive neuroscience and the law, Current Opinion in Neurobiology 16 (2006), pp. 130–134.
  • Gazzaniga, M. S., 2008. The law and neuroscience, Neuron 60 (2008), pp. 412–415.
  • Glannon, W., 2008. Moral responsibility and the psychopath, Neuroethics 1 (2008), pp. 158–166.
  • Greene, J., and Cohen, J., 2004. For the law, neuroscience changes nothing and everything, Philosophical Transactions of the Royal Society of London B: Biological Sciences 359 (2004), pp. 1775–1785.
  • Greene, J. D., et al., 2001. An fMRI investigation of emotional engagement in moral judgement, Science 293 (2001), pp. 2105–2108.
  • Hedgecoe, A., and Martin, P., 2003. The drugs don't work: expectations and the shaping of pharmacogenetics, Social Studies of Science 33 (3) (2003), pp. 327–363.
  • Heisenberg, M., 2009. Is free will an illusion?, Nature 459 (2009), pp. 164–165.
  • Jasanoff, S., 2006. Just evidence: the limits of science in the legal process, Journal of Law, Medicine and Ethics 34 (2006), pp. 328–341.
  • Jasanoff, S., and Kim, S.-H., 2009. Containing the atom: sociotechnical imaginaries and nuclear power in the United States and South Korea, Minerva 47 (2009), pp. 119–146.
  • Joyce, K. A., 2008. Magnetic appeal: MRI and the myth of transparency. Ithaca: Cornell University Press; 2008.
  • Kaposy, C., 2009. Will neuroscientific discoveries about free will and selfhood change our ethical practices?, Neuroethics 2 (2009), pp. 51–59.
  • Kawohl, W., and Habermeyer, E., 2007. Free will: reconciling German Civil Law with Libet's neurophysiological studies on the readiness potential, Behavioral Sciences and the Law 25 (2007), pp. 309–320.
  • Keckler, C. N.W., 2006. Cross-examining the brain: a legal analysis of neural imaging for credibility impeachment, Hastings Law Journal 57 (2006), pp. 509–556.
  • Kerr, I., Binnie, M., and Akoi, C., 2008. Tessling on my brain: the future of lie detection and brain privacy in the criminal justice system, Canadian Journal of Criminology and Criminal Justice 50 (2008), pp. 367–387.
  • Knabb, J. J., et al., 2009. Neuroscience, moral reasoning, and the law, Behavioral Sciences and the Law 27 (2009), pp. 219–236.
  • Kozel, F. A., Padgett, T. M., and George, M. S., 2004. A replication study of the neural correlates of deception, Behavioral Neuroscience 118 (2004), pp. 852–856.
  • Kroeber, H., 2007. The historical debate on brain and legal responsibility – revisited, Behavioral Sciences and the Law 25 (2007), pp. 251–261.
  • Langleben, D. D., and Datillio, F. M., 2008. Commentary: the future of forensic functional brain imaging, Journal of the American Academy of Psychiatry and Law 36 (2008), pp. 502–504.
  • LANP (Law and Neuroscience Project), 2010. Mission [online]. Available from: http://www.lawandneuroscienceproject.org/About-Us/Mission.aspx [Accessed 10 June 2010].
  • Lederman, C., 2010. Science in the courtroom: vital to best interests and reasonable efforts, Juvenile and Family Court Journal 61 (2010), pp. 63–68.
  • Lee, T. M.C., et al., 2005. Neural correlates of feigned memory impairment, NeuroImage 28 (2005), pp. 305–313.
  • Libet, B., 1985. Unconscious cerebral initiative and the role of conscious will in voluntary action, Behavioral and Brain Sciences 8 (1985), pp. 529–566.
  • Littlefield, M., 2009. Constructing the organ of deceit: the rhetoric of fMRI and brain fingerprinting in post-9-11 America, Science, Technology and Human Values 34 (2009), pp. 365–392.
  • Luber, B., et al., 2009. Non-invasive brain stimulation in the detection of deception: scientific challenges and ethical consequences, Behavioral Sciences and the Law 27 (2009), pp. 191–208.
  • Maibom, H. L., 2008. The mad, the bad and the psychopath, Neuroethics 1 (2008), pp. 167–184.
  • Markowitsch, H. J., 2008. Neuroscience and crime, Neurocase 14 (2008), pp. 1–6.
  • Martell, D. A., 2009. Neuroscience and the law: philosophical differences and practical constraints, Behavioral Sciences and the Law 27 (2009), pp. 123–136.
  • Meegan, D. V., 2008. Neuroimaging techniques for memory detection: scientific, ethical and legal issues, American Journal of Bioethics 8 (2008), pp. 9–20.
  • Molyneux-Hodgson, S., and Meyer, M., 2009. Tales of emergence: synthetic biology as a scientific community in the making, BioSocieties 4 (2009), pp. 129–145.
  • Moreno, J. D., 2006. Mind wars: brain research and national defense. New York and Washington, DC: The Dana Foundation; 2006.
  • Moriarty, J. C., 2009. Visions of deception: neuroimages and the search for truth, Akron Law Review 42 (2009), pp. 739–761.
  • Morse, S., 2005. New neuroscience, old problems: legal implications of brain science, Cerebrum 6 (2005), pp. 81–90.
  • Nerlich, B., and Halliday, C., 2007. Avian flu: the creation of expectations at the interplay between science and the media, Sociology of Health and Illness 29 (2007), pp. 46–65.
  • New York City Bar Foundation Committee on Science and Law, 2005. Are your thoughts your own? “Neuroprivacy” and the legal implications of brain imaging [online]. Available from: http://www.abcny.org/pdf/report/Neuroprivacy-revisions.pdf [Accessed 25 June 2009].
  • O'Hara, E. A., 2004. How neuroscience might advance the law, Philosophical Transactions of the Royal Society of London B: Biological Sciences 359 (2004), pp. 1677–1684.
  • Pickersgill, M., 2009. Between soma and society: neuroscience and the ontology of psychopathy, BioSocieties 4 (2009), pp. 45–60.
  • Pickersgill, M. D., in press. “Promising” therapies: neuroscience, clinical practice, and the treatment of psychopathy, Sociology of Health and Illness (in press).
  • Pockett, S., 2007. The concept of free will: philosophy, neuroscience and the law, Behavioral Sciences and the Law 25 (2007), pp. 281–293.
  • Pustelnik, A., 2009. Violence on the brain: a critique of neuroscience in criminal law, Wake Forest Law Review 44 (2009), pp. 183–238.
  • Rose, N., 2007. The politics of life itself: biomedicine, power, and subjectivity in the twenty-first century. Princeton and Oxford: Princeton University Press; 2007.
  • Roskies, A., 2006. Neuroscientific challenges to free will and responsibility, Trends in Cognitive Sciences 10 (2006), pp. 419–423.
  • Sapolsky, R. M., 2004. The frontal cortex and the criminal justice system, Philosophical Transactions of the Royal Society of London B: Biological Sciences 359 (2004), pp. 1787–1796.
  • Schauer, F., 2010. Neuroscience, lie-detection, and the law: contrary to the prevailing view, the suitability of brain-based lie-detection for courtroom or forensic use should be determined according to legal and not scientific standards, Trends in Cognitive Sciences 14 (2010), pp. 101–103.
  • Silva, J. A., 2007. The relevance of neuroscience to forensic psychiatry, Journal of the American Academy of Psychiatry and Law 35 (2007), pp. 6–9.
  • Soon, C. S., et al., 2008. Unconscious determinants of free decisions in the human brain, Nature Neuroscience 11 (2008), pp. 543–545.
  • Spence, S. A., 2008. Playing devil's advocate: the case against fMRI lie detection, Legal and Criminological Psychology 13 (2008), pp. 11–25.
  • Spence, S. A., et al., 2001. Behavioural and functional anatomical correlates of deception in humans, NeuroReport 12 (2001), pp. 2849–2853.
  • Stoller, S. E., and Wolpe, P. R., 2007. Emerging neurotechnologies for lie detection and the fifth amendment, American Journal of Law and Medicine 33 (2007), pp. 359–375.
  • Tancredi, L. R., 2007. The neuroscience of “free will”, Behavioral Sciences and the Law 25 (2007), pp. 295–308.
  • Thompson, C., 2005. Ontological choreography: reproductive technologies and their economies. Cambridge: MIT Press; 2005.
  • Tovino, S. A., 2007. Functional neuroimaging and the law: trends and directions for future scholarship, American Journal of Bioethics 7 (2007), pp. 44–56.
  • Tovino, S. A., 2008. The impact of neuroscience on health law, Neuroethics 1 (2008), pp. 101–117.
  • Tovino, S. A., 2009. Neuroscience and health law: an integrative approach?, Akron Law Review 42 (2009), pp. 469–517.
  • Van Lente, H., and Rip, A., 1998. The rise of membrane technology: from rhetorics to social reality, Social Studies of Science 28 (1998), pp. 221–254.
  • Vincent, N. A., 2010. On the relevance of neuroscience to criminal responsibility, Criminal Law and Philosophy 4 (2010), pp. 77–98.
  • Wilkie, A., and Michael, M., 2009. Expectation and mobilisation: enacting future users, Science, Technology and Human Values 34 (2009), pp. 502–522.
  • Wolpe, P. R., Foster, K. R., and Langleben, D. D., 2005. Emerging neurotechnologies for lie-detection: promises and perils, American Journal of Bioethics 5 (2005), pp. 39–49.
  • Zeki, S., and Goodenough, O. R., 2004. Law and the brain: introduction, Philosophical Transactions of the Royal Society of London B: Biological Sciences 359 (2004), pp. 1661–1665.
