Target Article

What an International Declaration on Neurotechnologies and Human Rights Could Look like: Ideas, Suggestions, Desiderata

Abstract

International institutions such as UNESCO are deliberating on a new standard-setting instrument for neurotechnologies. This will likely lead to the adoption of a soft law document which will be the first global document specifically tailored to neurotechnologies, setting the tone for further international or domestic regulations. While some stakeholders have been consulted, these developments have so far evaded the broader attention of the neuroscience, neurotech, and neuroethics communities. To initiate a broader debate, this target article puts to discussion twenty-five considerations and desiderata for recognition by a future instrument. They are formulated at different levels of abstraction, from the big picture to technical details, and seek to widen the perspective of preparatory reports and to transcend the narrow debate about “neurorights”, which overshadows many richer and more relevant aspects. These desiderata are not an exhaustive enumeration but a starting point for discussions about what deserves and what requires protection by an international instrument.

This article is referred to by:
Beyond Substance: Structural and Political Questions for Neurotechnologies and Human Rights
Valuing Subjectivity Beyond the Brain, but Also Beyond Psychology and Phenomenology: Why an International Declaration on Neurotechnologies Should Incorporate Insights From Social Theory as Well
Neurorights: The Land of Speculative Ethics and Alarming Claims?
The Global Governance of Neurotechnology: The Need for an Ecosystem Approach

INTRODUCTION

Ethical and legal worries arising from novel neurotechnological applications have reached the level of international human rights institutions and prompted ongoing deliberations about a new legal instrument that sets international standards for the development, regulation, and use of neurotechnologies. In a recent report on human rights implications of neurotechnologies, the International Bioethics Committee of UNESCO (IBC) considers the idea of a “governance framework set forth in a future UNESCO Universal Declaration on the Human Brain and Human Rights” or a “New Universal Declaration on Human Rights and Neurotechnology” (2021, at 184c). Other human rights agencies have been concerned with the matter, hosted hearings, and commissioned reports (especially OECD Citation2019; see also Ienca Citation2021; OECD Citation2017; Sosa et al. Citation2022). The UN Human Rights Council (Citation2022) mandated its Advisory Committee to prepare a comprehensive study on neurotechnologies and human rights. A novel international instrument will likely emerge from these debates (cf. UNESCO Docs. 216 EX/Dec.9 and EX/50). As the first global instrument specifically tailored to neurotechnologies, it will set the tone for further regulations at domestic, supranational, and international levels. Although some stakeholders have been consulted in previous proceedings, the development has so far largely evaded the broader attention of the neuroscience, neurotech, and neuroethics communities.Footnote1 This is unfortunate, as academic input is vital to identify problems, frame debates, and develop solutions, not least because international agencies lack subject matter expertise and have relied on a limited number of experts so far. The timing is critical. Once debates move to the political arena and intergovernmental negotiations, the room for academic and big-picture debates narrows as matters tend to become increasingly technical and arguments interest-based.
Accordingly, the time for impactful academic interventions is now. To facilitate it and to widen the perspective of current debates, this target article puts to discussion twenty-five considerations and desiderata for a future instrument. In particular, it wishes to transcend the confines of the debate about so-called neurorights that dominates the current discourse (e.g., Borbón and Borbón Citation2021; Bublitz Citation2022c; Genser, Herrmann, and Yuste Citation2022; Ienca Citation2021; Ligthart et al. Citation2023; Rommelfanger, Pustilnik, and Salles Citation2022; Yuste et al. Citation2017; Zúñiga-Fajuri et al. Citation2021). Proceeding on the basis of existing rights, the following remains uncommitted on whether novel rights are needed. This debate overshadows a broader and richer field of relevant questions, and it is time to turn to them.

Setting the stage, the nature and the limits of a future instrument should be clarified. It will likely be a soft law instrument such as a recommendation by UNESCO or a resolution by the UN General Assembly. Such documents are not legally binding and lack enforcement mechanisms. Whether they qualify as law at all depends on legal theory’s perennial question about the nature of law and may be answered differently with respect to different types of documents (Andorno Citation2012; Shelton Citation2008). Suffice it to note here that such documents understand themselves as more than mere ethical statements because they demand compliance by signatory States without creating enforceable legal obligations. Theoretical matters aside, soft law instruments can be practically effective governance tools that draw attention to problems and set standards which are often observed by States and other stakeholders. They may, for instance, affect governmental research funding, decisions by ethics committees, or the regulatory conditions for market approval of devices. Soft law may also turn into hard law in several ways. It may provide guidance for courts in interpreting norms, rendering the content of rights more concrete and resolving normative conflicts. It may inform secondary soft law such as general comments by treaty bodies, and inspire further binding acts at domestic or international levels. Soft law’s greater flexibility is an advantageous feature in fast-moving fields without firm normative underpinnings such as neurotechnologies; soft law has therefore become the prime legal-regulatory tool for technology governance at both the international and the domestic level (Hagemann and Skees Citation2018; Marchant and Tournas Citation2019).
At any rate, because of the often insurmountable political hurdles that binding treaties of international law face, especially in the current geopolitical climate, soft law instruments are the best form of international governance of neurotechnologies that is realistically attainable in the near future.

The nature of an instrument shapes its content. In contrast to the abstract and elegantly worded Universal Declaration of Human Rights and the international covenants that followed it, soft law instruments allow for more aspirational goals and broader scopes but also for more concrete norms and standards. In addition, they are not only directed at States as the protagonists of international law but also at other stakeholders: private actors such as businesses that may threaten human rights, individuals whose rights may have been violated, and other relevant parties such as engineers and developers of neurotechnologies. Moreover, given the aspiration of global applicability and the need for consensus in matters about which countries and cultures may reasonably disagree, instruments must allow for local adaptability, value pluralism, and compromises, and gravitate toward the smallest common denominator. These conditions are reflected in the texts of such documents, which are often replete with references to general values of the human rights system, not always entirely coherent, and sometimes even intentionally vague at critical points. But despite and because of these weaknesses, soft law instruments can set norms and standards that are observed and steer the course of the future development of a field. The UNESCO Recommendation on Artificial Intelligence (AI), adopted in 2021, may serve as a model for a future neurotech instrument. It contains recommendations at different levels of abstraction, from broad values through principles to actionable policy options. Although not free from textual weaknesses, the Recommendation provides some novel, concrete, and surprisingly far-reaching standards.Footnote2

It is further worth noting that international norms for the regulation of neurotechnologies already exist. Current debates sometimes evoke the impression that they develop in a legal vacuum, but this is misleading. For instance, placing devices on markets is regulated by domestic and supranational device regulation, such as the EU Medical Device Regulation, which covers neurotechnologies for medical and some non-medical purposes (European Union Citation2017). It leaves neurodevices for non-medical neuroimaging outside of its scope, but this is not a gap but rather an intentional regulatory decision. At the international human rights level, the Oviedo Convention on Human Rights and Biomedicine (1997), a legally binding international treaty signed by more than 30 States, seeks to safeguard the dignity and integrity of persons “with regard to the application of biology and medicine” (Council of Europe Citation1997, preamble). Likewise, the non-binding UNESCO Universal Declaration on Bioethics and Human Rights (2005) was adopted in view of the “rapid advances in science and their technological applications” (2005, preamble). Both instruments contain various norms about human rights and informed consent that apply to neurobiological interventions. The same is true for the Recommendation on Responsible Innovation in Neurotechnology (OECD Citation2019). This leads to the first desideratum:

(i) A future instrument should cohere with existing instruments but not merely repeat them; it should neither contradict them without compelling reasons, nor address similar points by different terms, and should strive to go beyond them by suggesting more concrete norms or addressing substantially different aspects.Footnote3

The following presents further desiderata and considerations for a future instrument. It proceeds from the general to the particular, from meta-considerations to concrete rights and technical suggestions, and at least partially attempts to deduce the latter from the former. The points are thus interwoven rather than distinct; they are sometimes couched in the idiosyncratic style of international documents and should not be understood as conclusive but as an invitation for criticism and additions.

Valuing Subjectivity

The most general point to reflect on concerns the objectives of an instrument on neurotechnologies and the brain. What is its ultimate concern, what is the ultimate object of protection? One of them, sometimes overlooked by a narrow reductionist focus on the brain, is human subjectivity. In the end, individuals and society are concerned about what people think, how they feel, why they are happy or depressed, in other words, with a wide range of mental and often conscious phenomena, or more broadly, with human subjectivity. The brain, and the body more broadly, are the physical realizers of subjectivity, or perhaps its constitutive causes, but this does not make them per se the prime objects of concern. Brains are only contingently or instrumentally relevant insofar as, and only to the degree to which, they generate mental processes and other subjective properties deemed valuable. This becomes clear when neurotechnologies complement or replace brain functionality and enable or sustain mental processes, e.g., an implanted brain stimulator that regulates a person’s moods: The resulting affectivity matters, not its biological underpinning. More generally, what counts is that people reason clearly, think freely, remember accurately, or feel well, not the mechanism or biological processes that underlie these mental functions.Footnote4 Negotiations about the future instrument should be framed in a way that recognizes that human subjectivity, broadly speaking, is what is of concern and partially at stake through neurotechnological interventions. This framing contrasts with the emphasis on the brain and brain activity that abounds in current debates, which tends to confuse causes of the object of protection with the object itself.Footnote5 Centering subjectivity calibrates the perspective of the instrument.

(ii) A future instrument should emphasize human subjectivity as a primary object of concern.

Adopting Mind–Brain Pluralism

People might agree with the significance of subjectivity but argue that it is caused by or dependent on the brain, so that both levels are inseparable. This is indeed very probable. But the implications of this dependence-relation are often overstated. It is important to acknowledge that the precise relationship between mind and brain or mental and neural functions is still an open scientific and metaphysical question. Surely, crude forms of Cartesian substance dualism according to which the mind is an immaterial substance unrelated to the brain are no longer tenable, not least because alterations at the brain level can demonstrably cause alterations at the mental level (bottom-up causation). This is beyond dispute. But how this interconnection works more precisely, whether top-down mental causation or emergent higher-order properties exist, remains nothing short of a great mystery. Neuroscientific approaches are often characterized by reductive assumptions, e.g., that causal relations between mental events are fully explainable by causal relata at the neurobiological level. Reductionism is to some degree inherent to neuroscience.Footnote6 Alternative non-reductionist views are sometimes too easily dismissed as “dualist” based on a fallacious generalization: That Cartesian dualism is wrong neither refutes other forms of dualism, nor proves reductionism or the identity of mind and brain.Footnote7 In fact, it seems hard to avoid acknowledging that prima facie humans have a dual nature: they are both physical objects and conscious subjects, entities with an inner first-person perspective that diverges from the properties observable from the external, third-person perspective (Habermas Citation2007). So far, reducing one to the other has not proven successful, and until this is achieved, stronger forms of reductionism run the danger of missing the full picture. To be clear, reductionism is a valid scientific hypothesis that merits investigation.
But scientific hypotheses should neither be conflated with established facts, nor with adequate grounds for public policy. Until the “explanatory gap” (Levine Citation1983) is closed and the “hard problem of consciousness” (Chalmers Citation1995) solved, until it is settled how neurons, brain circuits, and everything else measurable from a third-person perspective relate to the lived experience of persons, neither individuals nor society at large can manage to operate without some form of dualism. Contemporary debates, especially those with a neuroscientific bent, sometimes fall short of grasping the complexities of the possible interrelations between mind and brain, and the plurality of positions developed in philosophy of mind (e.g., emergence, supervenience, or panpsychism as popular non-reductive positions). Although these debates are well-informed by neuroscience, no position has a clear advantage over others. The divergence of philosophical views should instill some humility and call for avoiding oversimplifications such as “the mind is the brain” and its variations, or overly reductive assumptions of “neuroessentialism” or “brainhood” (Vidal Citation2009). This is not only a demand of epistemic rationality, but also a conditio sine qua non for an international instrument to be endorsed by various cultures, traditions, and more holistic worldviews, many of which are seemingly hard to reconcile with strong reductionist positions. Strictly speaking, not even the existence of an immaterial soul—a central element of many worldviews—has been disproven by neuroscience. Accordingly, the western naturalist perspective should be enriched and complemented by other perspectives, and a future instrument should be open to holistic, non-reductive worldviews.Footnote8 This requires, among others, avoiding brain-centric language and reflecting on the persuasiveness of the content of the instrument from non-reductionist perspectives.

(iii) A future instrument should be aware of the limits of current knowledge and the complexities of the debate about the mind-brain relationship, avoid reductionist tones and commitments, and strive to be open and hospitable to non-reductionist worldviews by embracing pluralism in these matters.

Avoiding Neuro-Objectification

The ethical corollary of a reductive overemphasis on neurobiology is the neglect of subjectivity, which threatens to objectify humans. Objectification is a multifaceted term drawn upon by feminism, critical theory, and Kantianism in various ways. In the present context, objectification may be understood as treating an entity as a mere object although it should not be treated as such; disregard for subjectivity is one of its hallmarks (Nussbaum Citation1995). Reductionist positions holding that the brain level has causal or explanatory primacy over the mental level will seek explanations of mental phenomena and interventions altering them primarily at the neural level. Again, this might be a fruitful approach for fields such as biological psychiatry, but is not a sound basis for transcultural policy. The key normative question is, pointedly, whether people should treat each other primarily as biological objects or as beings with subjectivity. The latter requires engaging with the subjective level, e.g., with the contents of thoughts and emotions of people, their experiences, and the meaning they give to the world and their lives. By contrast, modes of interaction that disregard subjective aspects and engage at the material level only tend to objectify persons and reduce them to their bodies (and in this case, their brains), endangering valuable forms of interpersonal engagement (Bublitz Citation2020a; Hoffman Citation2013). A touch of objectification is inherent to neurotechnologies, which common concepts of the field indicate: networks and neurotransmitters, circuits, currents, and connectomes, milliampere and millimeters, blood-oxygen-level dependent signals. The target of neurointerventions is the brain as a physical object and its electric or magnetic properties, the mind is approached and accessed through its physiological correlates. Should this become the main mode of engagement with people, objectification looms large.

(iv) A future instrument should conceptualize and recognize the dangers of objectification through neurotechnologies.

Promoting Phenomenology

One way of avoiding objectification and valuing subjectivity is placing stronger emphasis on exploring and understanding the lived experience of persons, traditionally the field of phenomenology. Surprisingly, although neurotechnologies alter brains and minds, their mental and phenomenological effects are not systematically examined; effects on mental properties unrelated to the target of an intervention are studied only rarely. As an example, neurointerventions for disorders such as depression are measured on scales and inventories for depressive symptoms but not for other mental effects, which therefore remain largely unexamined. Even less attention is given to phenomenology in the stricter sense, as the analysis of how it feels, e.g., to be subjected to brain stimulation or to embody neuroimplants.Footnote9 Neurodevices may also affect the self-perception—and perhaps the self-relation—of users; it might well be that users adopt a technical-mechanistic attitude toward their minds, possibly a form of self-objectification or a source of self-alienation (Gilbert Citation2018; Gilbert et al. Citation2019; Hoffman Citation2013; Leuenberger Citation2021). Examining subjectivity in these richer and more comprehensive senses demands psychological and phenomenological methods uncommon in contemporary neurotechnological research. A future instrument should call for investigations and funding of such research.

(v) Subjective effects of neurotechnologies should be systematically examined; States should encourage, support, and fund psychological and phenomenological studies into technological effects on subjectivity, self-relations, and alienation of users.

Situating Neurotechnologies and Avoiding Neurotech-Exceptionalism

Moreover, the future instrument should avoid the impression of what one may call “neurotech-exceptionalism”, the idea that neurotechnologies are the only, main, or prime means to change brains and minds. Instead, neurotechnologies should be situated within the broader landscape of mind-altering technologies. This raises the general question about the defining features of neurotechnologies. A consensus definition does not exist, but their key characteristic is that they directly interact with the neurobiological (brain) level by measuring or physically altering electrochemical properties or activity of the (central) nervous system. The contention that these technologies are the prime means to alter the brain, and are therefore the main method of intervention to produce mental changes, is false. Neurotechnologies are neither the most powerful nor the most precise technology to alter minds. Pharmacological, psychotherapeutic, cognitive, or behavioral interventions often target mental processes more efficiently and accurately. A basic lesson of neuroscience is that a host of influences—social relations, walking in nature, driving taxis, even poverty—may change and shape the brain (Farah Citation2018; Linden Citation2006; Maguire et al. Citation2000; Sudimac, Sale, and Kühn Citation2022). These influences differ from neurotechnologies in their causal pathways; they primarily run through the external senses and are processed by a variety of mental mechanisms, which are bypassed by neurotechnologies that work more directly on the brain. While this difference may justify different normative treatments (Bublitz Citation2020b), the instrument should make clear—also in the interest of public education—that other influences affect the brain as well, often more significantly and permanently.
An instrument entitled “Declaration on the Human Brain” (suggested by the IBC report 2021, at 167) that would only address neurotechnologies runs into the dangers of neurotech-exceptionalism. If the object of concern is the brain, the numerous other influences from poor diet to poverty would need to be recognized as well.

Furthermore, suspicion is warranted with regard to statements broadly alluding to unprecedented powers of neurotechnologies. While some neurodevices indeed confer spectacular powers over brains, their supposed novelty must be contextualized. As an example, an often-voiced fear is memory manipulation through neurotechnologies, based on pioneering studies with optogenetic memory implantation in mice (Ramirez et al. Citation2013). While this research is stunning, memory research in humans has demonstrated time and again that common psychotherapeutic techniques may implant false memories, sometimes even dramatic ones of abuse (Loftus and Ketcham Citation1996). These ordinary, non-neurotechnological means to manipulate memories are far more effective and ethically worrisome than speculative neurotech, and have been known for decades. This point generalizes to many other means to change and alter minds. In light of the long history of dubious mind interventions, from “brainwashing” (Taylor Citation2017), psychosurgery, ideological conversion, Skinnerian behavioral modification, and electroconvulsive therapy, to many psychotherapeutic methods, neurotechnologies regularly appear as their more sophisticated and effective continuation rather than as something unprecedented.

In particular, the inclusion of mind-altering pharmaceuticals (“psychotropics”) in the instrument should be considered, as they may qualify as neurotechnologies as well. They alter electrochemical properties and activity of the brain via the metabolic route and are often the most potent available means to directly change minds. Accordingly, they raise normative questions similar to electric-magnetic brain stimulation. In clinical practice, pharmacological and electric interventions are increasingly used side by side; if one fails, recourse is taken to the other, so that categorical distinctions seem unwarranted. The ideal of coherence in normative orders suggests treating like means alike unless good reasons speak to the contrary (Levy Citation2007). And even if the instrument left psychopharmaceuticals aside for pragmatic reasons, their history has valuable lessons to offer that should bear on debates and negotiations. Consider only the ethical and social problems stemming from illicit drug use and their criminalization, profit-driven mechanisms and lack of regulatory oversight, e.g., leading to the recent opioid crisis in the US (Humphreys et al. Citation2022), worries about the medicalization of everyday nuisances through Prozac (Kramer Citation1994), coercive medication, and more. As the sociologists Rose and Abi-Rached remark (Citation2014, 24), by “the end of the twentieth century, for every problem of everyday existence, in almost every region where the management of mental health was a governmental problem, pharmacological interventions was the first resort”. This is the historic constellation in which neurotechnologies arise. It allows for some speculation about their trajectory and should be reflected upon. Neurotech-exceptionalism tends to obscure continuities and similarities between different mind-interventions, which may bury valuable lessons of the past.

Furthermore, the tendency of neurotech-exceptionalism to sideline other means of intervention may distort political choices about interventions. With greater availability of neurotechnologies, societies will face choices between different means to target a state of affairs and concomitant tradeoffs between harms and benefits. For instance, staggering rates of depression might be mitigated by a wide array of means, from improving job security and other social determinants of health to accessible psychological counseling or offering brain stimulation. A preference for neurobiological interventions based on the idea that they are more effective or closer to the source of the problem is unwarranted (supra) and may often have detrimental effects. Other interventions at psycho-social levels may be more effective, less harmful, and more beneficial overall. Although the biopsychosocial model is widely accepted in psychiatry, real-life practices in many places focus on biological interventions that appear less costly and more effective in the short term. However, these interventions replace other potentially valuable forms of interpersonal engagement or social-environmental changes which might be more beneficial in the long term. More generally, human minds are embedded in the social and environmental contexts that shape them. Neurotech-exceptionalism focuses on proximate causes and lets broader causal factors fall out of sight, although they may be more suitable targets for interventions. This grounds the worry that neurointerventions, embodying the allure of the quick technological fix, will be preferred for economic and related reasons, to the detriment of other interventions and the values they promote.

(vi) A future instrument should recall that minds and brains are affected by and might be altered through many social and environmental factors, that neurotechnological interventions are not per se more effective than or preferable to other interventions, and that interventions at environmental and societal levels may often be more beneficial for individuals and societies, all things considered.

Neurodiversity and Posthuman Minds

Any normative assessment of neurotechnologies which alter mental properties requires evaluative standards for these properties. But surprisingly, standards that substantially go beyond trivial statements of the kind that pain is ceteris paribus undesirable and improved cognition is good hardly exist. A more general framework for valuing mental states and processes is still outstanding, and neuroethics has not even formulated significant parts of it (Metzinger Citation2009). Without it, however, evaluating and regulating neurotechnologies may soon run into fundamental problems of uncertainty. In fact, uncertainties about valuing mental states underlie familiar bioethical controversies about the proper scope of mental disorders (“pathologizing sadness”), the non-medical use of drugs, or the disvalue of living an inauthentic life. In political terms, uncertainties and disagreement about normative standards in a domain often suggest refraining from stipulating binding norms and leaving decisions to affected individuals. The absence of standards may thus motivate the normative position of cognitive liberty or mental self-determination (Boire Citation2001; Bublitz Citation2013; Farahany Citation2019). Nonetheless, individuals may not always be in a position to understand what is best or desirable for them, for lack of experience and cultural familiarity with neurotechnologically-altered minds. The perhaps most appropriate way forward in such situations is an experimental approach that facilitates medical, socioeconomic, and cultural conditions for individual and collective learning about valuable and problematic ways to alter minds. Such an approach speaks for regulatory restraint and against comprehensive, far-ranging uniform regulations. This bears directly on the scope of a future international instrument.

Moreover, unregulated choices might lead to a diversity of outcomes—here: diverse mental properties—that should prima facie be appreciated. The lasting contribution of the neurodiversity movement was to reframe mental differences in terms of diversity rather than disability. A broader use of neurotechnologies could extend this reasoning beyond the medical domain toward trans- or posthuman minds. However, the extent to which diversity of minds is desirable from a societal point of view remains open. After all, the combination of many individual minds weaves the mental fabric of society (Merkel et al. Citation2007). How it might be affected by radically different minds is hard to anticipate.

(vii) States should prima facie appreciate differences and diversity in mental properties, promote the development of standards for their evaluation at the individual and societal level, recognizing a broad range of personal, social, and cultural factors, and create favorable conditions for experimenting with technologically-altered mental states.

Securing Boundaries between Persons and Technologies

A further intriguing feature with potentially profound implications, currently neglected in law, is that bio- and neurotechnologies blur the boundary between the person, her body and mind on the one side, and the technological artifact on the other. Where do the former end and the latter begin? Firm boundaries between body and world have been deconstructed in many disciplines in recent years, where it is said that humans and technologies mutually depend on, or co-constitute, each other. The law, by contrast, draws a categorical distinction between persons and things. Persons are the objects of concern and the ground of the entire human rights system, whereas things are the objects of property law, goods to be made, sold, and discarded. Both entities are subject to different legal regimes; persons and their parts cannot be property, they form a class of their own. Some biotechnologies are hard to categorize. In some countries, objects implanted inside the body and integrated with bodily functions (pacemakers, dental implants) become legally part of the body (Akmazoglu and Chandler Citation2021; Bublitz Citation2022a; Quigley and Ayihongbe Citation2018). Neurotechnologies may do so even more. Sophisticated devices, e.g., closed-loop BCIs that detect and modulate moods of users, may become deeply functionally integrated with human minds, perhaps to a point where it might be said that they run on two hardware systems, the organic brain and the neurotechnology, which jointly generate mental functioning; some authors speak of hybrid minds (Soekadar et al. Citation2021). In such minds, the neurotechnology becomes part of the mind or the person in a strict sense, with several ramifications. For instance, to preserve the special legal protection of the person, manufacturers may lose property in the device that has become part of the person, and intellectual property in the software running it (Bublitz Citation2022b).
These questions need to be clarified at the level of national law. A future instrument should draw attention to the merging of minds and technology, provide guidance to locate and secure the boundaries of the person, and suggest solutions for further ramifications.

(viii) Observing that neurotechnological devices may become so deeply integrated with persons that they turn into parts of them, States should protect the freedom of the person by ensuring that third-parties lose claims and rights in such devices.

Sovereignty Over Minds – No Control of Others

The foregoing points were meta-considerations that form the philosophical, ethical, and legal background of a future instrument. They are the material of preambles, recitals, or explanatory memoranda as they explain and justify the following more concrete points. If the structure of the new instrument is modeled after the recent UNESCO Recommendation on Artificial Intelligence, the foregoing points may function as broad values from which principles, rights, and more concrete actionable points are deduced.

The most abstract principle implied in human rights law that should serve as a maxim of the future instrument may appear self-evident but is worth emphasizing: the sovereignty of the individual over her mind, essentially a liberty from normative claims or factual control by others. It arises from the general idea of autonomy underlying the human rights order, in conjunction with the value of subjectivity (supra). From the perspective of users of neurotechnologies, giving up control over parts of their brain/mind system may be a pivotal issue. It is essential to compensate their loss of factual control over their minds through guarantees of legal control, that is, sovereignty.

As neurotechnologies afford access to the mind in two ways, sovereignty and control have two main reference points in this context: (a) the detection and recording of electrochemical properties and activity of the brain that allow inferences about mental properties (“neuroimaging”); (b) interventions into brains that alter mental properties (“neurointerventions”). The sovereignty principle states that minds should not be accessed in either way without the approval of the person, i.e., others should refrain from factually interfering with minds through interventions or imaging. So much is clear. The challenge lies in defining the scope and limits of sovereignty more concretely. Sovereignty over minds cannot be an absolute principle in a strict sense because people influence each other’s minds in myriad ways, many of which are part of ordinary social interaction and beyond normative concern. Moreover, some legitimate interests of others, including governments and society at large, may constrain sovereignty, from mandatory school education to mental health treatment or forensic assessments. Thus, defining the contours of the normative boundaries between the person and the social sphere more concretely would be an impressive result of a future instrument (and will be further specified in subsequent considerations).

(ix) A future instrument should emphasize the principle of the sovereignty of persons over their minds. This entails the absence of de jure claims of others—including the State—, and the absence of de facto control of minds through neuroimaging or neurointerventions without consent.

Reaffirming Established Human Rights

A key objective of any instrument under the auspices of the United Nations is reaffirming and strengthening established human rights. These rights render foregoing desiderata such as mental sovereignty more concrete. Although various existing human rights apply to neurotechnologies, courts have not elaborated on them so far, mostly for a lack of cases. Legal scholarship has charted how fundamental rights may apply to cases involving neurotechnologies, mainly with respect to specific legally relevant applications in the context of domestic constitutional law (e.g., Blitz Citation2017; Ligthart Citation2022; Moriarty Citation2008). A few voices, however, suggest the insufficiency of established rights (Genser, Herrmann, and Yuste Citation2022), and some call for novel rights (Goering et al. Citation2021; Ienca and Andorno Citation2017). In light of these contentions, a future instrument should reaffirm and strengthen established human rights by showing that and how they apply to neurotechnologies. This entails clarifying their scope and content, and further developing them by the canones of legal interpretation with the aim of deriving more concrete standards from them. This provides guidance for courts interpreting and applying rights and counters the narrative of the insufficiency of rights. Here is a sketch of how the most important rights can be interpreted in the neurotechnology context:

Human dignity is a prime principle in the UN system, laid out in the preamble of the UN Charter as well as Article 1 of the Charter of Fundamental Rights of the European Union (CFR), and several bioethics instruments.Footnote10 Although the concept is concededly vague, attention shall be drawn to some interpretations that apply to the present context.Footnote11 In a Kantian-flavored understanding, human dignity commands respect for persons as subjects and protects them against instrumentalization and objectification, linking the right with the meta-considerations developed supra, especially the valuing of subjectivity and the avoidance of objectification. Dignity then bars neurotechnological control of a person, in paradigmatic form, the remote control of a person’s bodily movements (as successfully demonstrated in animals by José Delgado decades ago, see Delgado Citation1964; Talwar et al. Citation2002). It further bars direct brain interventions affecting persons’ capacities to set ends for themselves, or in modern terms, their decision-making, and interventions undermining or bypassing subject-defining mental features, e.g., by implanting memories, beliefs, or affective states.Footnote12 Moreover, it may protect against interventions that disrupt a person’s relation to herself, e.g., by inducing self-estrangement or self-alienation (Gilbert Citation2018). Finally, dignity may command alleviating disorder and disability (infra). These interpretations of human dignity provide founding elements of a wider legal architecture protecting the person against neurotechnological challenges.

Another right that can easily be interpreted to cover neurointerventions is the integrity of the person. It is implied in the security of the person, e.g., Article 9 of the International Covenant on Civil and Political Rights (ICCPR), and sometimes further divided into rights to bodily/physical and mental/psychological integrity in regional instruments (e.g., Article 5.1 American Convention on Human Rights; Article 3.1 CFR, Article 8 European Convention on Human Rights). Neurotechnologies that alter electrochemical properties of the brain (brain stimulation, psychopharmaceuticals) cause changes in the body and may thus be deemed to interfere with bodily integrity; when they also alter mental states, they may interfere with mental integrity. The notable consequence is that virtually all neurointerventions fall within the ambit of these rights. Accordingly, no serious structural gap in existing law exists.

With respect to the second mode of accessing minds—neuroimaging that detects brain states—the distinct right to privacy or private life complements integrity rights. It is guaranteed in almost all international instruments (e.g., Articles 12 UDHR, 17.1 ICCPR, 8 ECHR, 11 ACHR). By the logic of abstract rights, the right to privacy entails privacy in particular domains, be that the bedroom, the telephone, the mind or the brain (Ligthart et al. Citation2021; Marshall Citation2008). A right to mental privacy thus already exists.

Furthermore, the rights to freedom of thought and opinion, Articles 18 and 19 UDHR & ICCPR, bar interventions into thinking, the holding and forming of opinions, and protect against involuntarily revealing thoughts or opinions. The precise meaning of these freedoms is not settled, but they clearly provide protection against severe interferences (Alegre Citation2017; Bublitz Citation2021; Ligthart Citation2023; Special Rapporteur on Freedom of Belief or Religion Citation2021). In contrast to preceding rights, these freedoms are absolute, with the effect that interferences can never be justified. In addition, further rights may apply to neurotechnologies in specific contexts such as the right to fair trial or against self-incrimination (Farahany Citation2012; Ligthart et al. Citation2021).

These examples demonstrate that existing rights can be interpreted to encompass most, if not all, neurotechnological interferences and close alleged gaps. They also provide a framework that allows for reasonable decisions, with an unconditionally protected core of the mind—freedom of thought and opinion—, and qualified rights to mental integrity and privacy that cover less severe interferences. It would be meritorious of a future instrument to affirm this and provide guidance for courts and lawmakers navigating this unfamiliar territory. This approach advances the law without introducing novel rights.

(x) A future instrument should reaffirm established human rights by emphasizing their applicability to neurotechnological interferences through clarifying their scope and demonstrating how neurotechnologies may interfere with them. It should lay out that, and the way in which, all serious interferences fall under the scope of established rights.

Drawing Absolute Limits to Interferences

It bears noting that an interference with a right is not the same as its violation. Interferences can be justified by countervailing rights or public interests. How such conflicts of rights ought to be balanced cannot be decided in the abstract but requires context-specific assessment of concrete cases. Nonetheless, rights may have absolute boundaries, red lines that cannot be crossed, areas in which the sovereignty of the person over her mind strictly prevails over other interests. Drawing the contours of these absolutely guaranteed areas would be a significant achievement of the instrument. The UNESCO Recommendation on AI succeeded in defining some absolute prohibitions. The points of departure in the present context are human dignity and the freedoms of thought and opinion. Drawing on them, the sovereignty and no-control principles suggest two more concrete norms along these lines:

(xi) Neurotechnologies should never be used to infer occurrent mental states of a person (e.g., a particular emotion), the content of her thoughts, the type of mental action she is performing (e.g., dreaming, calculating), or her mental capacities and dispositions, without consent.

(xii) Neurotechnologies should never be used to intervene into brains of persons to alter mental states or processes (e.g., thoughts, forming of opinions, emotions), without consent.

These two absolute limits warrant some explanation and may require exceptions. Norm (xi) is triggered when inferences about mental states, processes, or capacities are drawn from information about brain activity. It does not prohibit mere neurological examinations. The norm pays tribute to the value of subjectivity. In practical terms, it restricts the involuntary use of applications such as vigilance monitoring at the workplace or involuntary brain-based lie detection in the justice system.

As intervening into someone’s mind without consent is surely among the most invasive interferences, the reasons supporting norm (xii) are self-evident (cf. OECD Citation2019, IV. 9. C). However, there might be a need for exceptions because a few established and prima facie legitimate practices, such as coercive medication in psychiatry or in the justice system to restore competence, run afoul of it. Without delving into these complex and thorny issues, it is worth noting that even when such exceptions are affirmed, they should be clearly enumerated and limited.

Mental Self-Determination and Its Limits

Human rights are traditionally negative claims to noninterference. Because of this perspective, the liberty of rightholders to change their bodies and minds through their own actions is not in the focus of human rights documents. Explicit entitlements of persons to alter their brains and minds through neurotechnologies or other means are absent. The international drug control regime may even convey the impression that such entitlements are non-existent with respect to mind-altering substances. But this may be a misunderstanding which overlooks the principle of sovereignty over one’s mind (ix) and the autonomy of the person, a general principle underlying the human rights system. Both entail a liberty to change one’s mind (Boire Citation2001; Farahany Citation2019; OECD Citation2019; Rommelfanger et al. Citation2018). The future instrument should sharpen the view on this dimension of sovereignty by proclaiming a pro tanto liberty to change one’s mind, also with the assistance of neurotechnologies.

This right is not absolute because there are good reasons for restrictions, such as protecting the well-understood long-term interests of the person herself (paternalism). Similar to illicit drugs, dangers to personal autonomy may arise from neurotechnologies stimulating dopamine pathways or the pleasure center (Synofzik, Schlaepfer, and Fins Citation2012). The liberty to change one’s mind can also be restricted by rights of others or public interests, but how these competing interests are to be reconciled precisely is yet to be worked out. The lack of evaluative standards for mental states (supra) calls for caution in restricting sovereignty for paternalistic reasons. The future instrument could advance the debate by introducing a (narrow) limitation clause enumerating the legitimate grounds for restricting mental sovereignty.

(xiii) The future instrument should recognize the right or liberty to freely change one’s mind through neurotechnologies and should enumerate the grounds for limitations.

Access to Medical Neurotechnologies

The third human rights dimension is self-evident but merits reiterating in a future instrument, namely the positive obligations of States to provide access to neurotechnologies for people in need. Several international rights impose such obligations with respect to illness, disorder, or disability (e.g., Article 12 International Covenant on Economic, Social and Cultural Rights, ICESCR; Article 25 Convention on the Rights of Persons with Disabilities, CRPD). It should be noted that under these instruments, rights are “progressively realized” in correspondence with available resources (Article 2.1 ICESCR, Article 4 CRPD). Reminding States of their obligations in this regard is vital, especially in cases without alternative treatment options (Visser-Vandewalle et al. Citation2022). Many neurotechnologies may facilitate full and effective participation in society on an equal basis with others, one of the central objectives of the CRPD. A future instrument should emphasize these positive obligations, taking into account the extensive catalogue of obligations under the CRPD, and press for their effective implementation.

(xiv) A future instrument should emphasize obligations of States to provide access to neurotechnologies alleviating mental disorders or disabilities in order to ensure everyone’s full and effective participation in society.

Norms Addressing Private Actors

Human rights traditionally bind only States. Many future threats of neurotechnologies, however, will likely arise from private parties, especially businesses, which are usually not bound by human rights. There are two main responses to this predicament. The first consists of attempts to extend duties of States to private persons. The most advanced approach in this direction is the Guiding Principles on Business and Human Rights by the UN Human Rights Council (Citation2011). This soft law creates the non-binding social expectation that businesses “protect, respect, and fulfill” human rights. It calls on businesses to adopt a commitment to observe human rights, to instigate due-diligence processes that identify, prevent, and mitigate human rights impacts, and to provide remedies for adverse effects. A future instrument on neurotechnologies should reaffirm these demands and define due diligence criteria for the neurotech industry more concretely. In particular, ethical and human rights impact assessments might be made an obligatory part of the requirements for market approval of neurotechnologies under local medical device regulations.

(xv) A future instrument should call for human rights impact assessments in the development of neurodevices and their placement on markets.

The second response is passing domestic laws that transform human rights into binding positive laws imposing duties on private actors. Because of the importance of the endangered interests, the norms should presumably be backed by criminal offenses, provided existing domestic offenses fall short of capturing interferences. Three suggestions for such norms:

(xvi) Offense of direct brain intervention: Intervening directly into the brain of another person through administering pharmaceuticals, electric, magnetic, or other forms of direct neurointerventions without her consent, causing non-trivial adverse mental consequences, shall be punishable (cf. Bublitz and Merkel Citation2014).

(xvii) Offense of mind-probing: Measuring neurophysiological data through neurotechnologies and drawing inferences about mental states, processes, or other mental properties from them without the consent of the person, shall be punishable.

(xviii) Offense of hacking neurotechnologies (“brainjacking”): Accessing a neurodevice without consent to extract data about its operations or its user, especially previously recorded neurophysiological data, or accessing it to alter the parameters of its operation, shall be punishable (Ienca and Haselager Citation2016; Pycroft et al. Citation2016).

These norms protect people’s sovereignty over their minds against interferences by private actors. While their precise formulation and relation to established norms in domestic law require further discussions among scholars of criminal law, the future instrument should mark the need for such norms and assist States in their drafting.

Ethics & Human Rights by Design

The foregoing has mainly addressed worries about the misuse of technologies and their potential for rights interferences. From a practical point of view, however, another dimension is equally relevant. Rather than being protected from them, many people may wish to use neurotechnologies, provided devices are good, i.e., safe, well-performing, and observant of rights and interests. Developing good products requires constructive and creative cooperation between engineers, designers, and ethicists already during the planning stages. Successful “rights-and-ethics-by-design” may significantly enhance the practical use of neurotechnologies and alleviate worries (OECD Citation2019; Pfotenhauer et al. Citation2021). The challenge is identifying how precisely design choices can reflect human rights concerns; products should be systematically screened for them. Here are some exemplary device features following from the foregoing considerations:

(xix) Technologies should respect, promote, and implement human rights by design choices, such as:

  1. Transparency of operation: Users should always be notified when and how a device is operating (e.g., reading, stimulating), including easily understandable parameters (e.g., low or high stimulation). This confers control on users and provides them with the information required to make sense of their phenomenological experience and adapt to the operation of the device. Technical implementation should ensure that notifications are easily detectable by users but not by other parties, for risk of stigmatization (e.g., integration into smartwatches). Given the varying abilities of users of medical devices, technology design should offer several options.

  2. Emergency-switch: Users should have the possibility to stop the operation of the device at any time through an easily accessible emergency switch.

  3. Veto-power: When neurotechnologies direct actuators such as BCI-controlled prostheses, users should have the power to veto the execution and stop its operation at any moment.

  4. Altering parameters by users: Users should have the power to alter the parameters of the operation of the device (on, off, intensity, etc.) to the greatest extent possible to realize sovereignty over minds. Limits might be set by potential for self-harm.

  5. Altering parameters by others: When others (physicians, engineers) alter the parameters of the device, users should be notified, and possibly some further safeguard should ensure consent, e.g., a requirement that users confirm changes through a PIN code. Special caution is required with remotely controlled teledevices.

  6. Physical appearance: To avoid stigmatization, designers should devise devices that are not visible or recognized as such by others. Cooperation with the fashion industry should be encouraged (e.g., EEG caps).

  7. Documentation: The operation of the device should be documented at all times, including alterations of parameters and malfunctions. Given that this data may allow sensitive inferences, it should observe the strictest data-classification standards. It should always be accessible to users.

  8. User perspective: Views and suggestions of users should be actively investigated and accommodated beginning with early development phases and continuing through the life-cycle of the product.
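To make the interplay of several of these features more concrete, here is a minimal, purely illustrative sketch of a device controller that implements user notification (feature 1), an emergency switch (feature 2), user and PIN-confirmed third-party parameter changes (features 4 and 5), and a user-accessible audit log (feature 7). All class and method names, and the PIN mechanism itself, are hypothetical design choices, not a description of any existing device.

```python
import time


class NeuroDeviceController:
    """Hypothetical controller sketching rights-by-design features:
    transparency, emergency stop, confirmed remote changes, documentation."""

    def __init__(self, user_pin: str):
        self._user_pin = user_pin
        self.running = False
        self.params = {"intensity": 1}
        self.notifications = []  # feature 1: transparency of operation
        self.audit_log = []      # feature 7: documentation, user-accessible

    def _record(self, event: str) -> None:
        # every operationally relevant event is logged with a timestamp
        self.audit_log.append((time.time(), event))

    def _notify_user(self, message: str) -> None:
        self.notifications.append(message)
        self._record("notified user: " + message)

    def start(self) -> None:
        self.running = True
        self._notify_user(f"stimulation started (intensity={self.params['intensity']})")

    def emergency_stop(self) -> None:
        # feature 2: always-available emergency switch
        self.running = False
        self._record("emergency stop triggered by user")

    def set_parameter_by_user(self, name: str, value: int) -> None:
        # feature 4: users may alter operating parameters directly
        self.params[name] = value
        self._record(f"user set {name}={value}")

    def set_parameter_by_third_party(self, name: str, value: int, pin: str) -> bool:
        # feature 5: remote changes require user confirmation (here: a PIN);
        # unconfirmed attempts are rejected and documented
        if pin != self._user_pin:
            self._record(f"rejected unconfirmed remote change of {name}")
            return False
        self.params[name] = value
        self._notify_user(f"clinician changed {name} to {value}")
        return True
```

The point of the sketch is structural: notification, confirmation, and documentation are not add-ons but pass through every state change of the device, so a rights-relevant event cannot occur silently.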

Research & Regulation

Innovation in neurotechnology is partly driven by venture capital naturally seeking returns and revenue. Patents and intellectual property rights allow their holders considerable power to shape technological developments and product design. But placing the trajectory of the field in the hands of market forces is not without concerns. “Moving fast and breaking things” is not an adequate motto for technologies which modify central characteristics of persons. Avoiding this requires a far-sighted and multilayered strategy that yet awaits development.

(xx) A future instrument should collect ideas and lay out steps that enable and encourage responsible innovation and avoid pitfalls of overly rapid profit-driven technology development.

Moreover, the history of pharmaceutical development provides some lessons. Without oversimplified condemnations of “big pharma”, some evident problems must be avoided with neurotechnologies. A future instrument should note some salient issues and provide ways to overcome them. For instance, research into the risks, safety, and efficacy of products relevant for market approval is largely carried out by manufacturers with vested interests in positive outcomes. Moreover, biases resulting from selectively publishing favorable studies (“publication bias”) are well known from the pharmaceutical field. Furthermore, studies may not be designed to detect harms or side-effects, especially subtle changes to mental states that are only discernible through in-depth phenomenological investigations, which are rarely conducted. Detrimental mental effects might be underestimated as a result. Paying tribute to the value of subjectivity and reiterating (v), the future instrument should demand more thorough studies of the side-effects of neurotechnologies.

Furthermore, research participants have time and again voiced disappointment about having to return test-devices after study completion. Especially in the absence of alternatives, this means depriving them of the medium of their hopes (Gilbert Citation2015; Gilbert, Ienca, and Cook Citation2023; Hansson Citation2021). Forced explantations interfere with bodily integrity.

(xxi) A future instrument should encourage long term follow-up studies providing post-trial access, possibly sponsored by manufacturers.

Furthermore, regulatory hurdles for basic research with neurotechnologies should be critically revised; some burdens might be disproportionate (Baeken et al. Citation2023). In addition, incentive structures for market approval should be scrutinized. “Off-label” use of pharmaceuticals is a common phenomenon that absolves manufacturers from expensive multi-phase studies yet allows them to sell their products. Whether this can be avoided with respect to neurodevices should be ascertained, without compromising the needs of patients with rare conditions. Moreover, the accessibility of products especially in the developing world, the problems of patents, and the promotion of open-source technology in products emerging from publicly financed basic science should be emphasized in an instrument committed to the UN Sustainable Development Goals (Goal 3, “Ensure healthy lives and promote well-being for all at all ages”).

(xxii) A future instrument should motivate critical reflections on existing regulatory frameworks and incentive structures. It should foster responsible innovation and appeal to manufacturers and companies to make technologies accessible to people in need.

Taming Neurocapitalism

A wider use of non-medical neurotechnologies may have several worrying societal consequences under capitalist conditions. The use of brain data provides an example. Many business models for non-medical applications will draw on commodifying brain data. Through sophisticated placement of stimuli and more or less direct detection of brain reactions to them, companies can create information-rich profiles of users which might allow far-ranging inferences about them. These profiles can be used for marketing and other potentially manipulative purposes. Even when data processing requires consent, many users may provide it in exchange for free services. Whether such conditional consent is valid can only be answered with respect to specific data regulations (regarding the GDPR see European Data Protection Board Citation2020). Given the importance of the principle of sovereignty over minds, a future instrument may consider banning conditional consent to non-necessary forms of data processing. More generally, lawmakers should consider classifying “brain data” under the highest category of protection in domestic data regulations, akin to medical or health data (Rainey et al. Citation2020; International Bioethics Committee of UNESCO Citation2021, sec. 130; cf. Recommendations 2–5 by Goering et al. Citation2021). This would curtail many worrisome forms of data usage.

(xxiii) Brain data that allows inferences about mental states, processes, or other properties should be classified in the same category as “medical” or “health data” in data regulation frameworks. Consent for processing of brain data in exchange for services is valid only insofar as the data is necessary for the provision of the service.
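Purely as an illustration of how norm (xxiii) could be operationalized in software that gates data processing, the following sketch encodes its two components: brain data is placed in the same special category as health data, and consent given in exchange for a service is treated as valid only insofar as the processing is necessary for that service. All names and categories are hypothetical, not drawn from any actual data-protection statute.

```python
from dataclasses import dataclass

# Hypothetical protection tiers; "brain" is grouped with "health"
# in the special (highest-protection) category, per desideratum (xxiii).
SPECIAL_CATEGORIES = {"health", "brain"}


@dataclass
class ProcessingRequest:
    data_category: str         # e.g., "brain", "health", "contact"
    consent_given: bool        # did the user consent at all?
    necessary_for_service: bool  # is the processing needed to provide the service?


def processing_permitted(req: ProcessingRequest) -> bool:
    """Conditional consent to special-category data is valid only
    insofar as the data is necessary for providing the service."""
    if not req.consent_given:
        return False
    if req.data_category in SPECIAL_CATEGORIES:
        return req.necessary_for_service
    return True
```

Under this rule, a user's consent to non-necessary processing of brain data in exchange for a free service would simply have no legal effect, which is the curtailment the norm aims at.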

Another field with potentially worrisome applications is the workplace. Vigilance and mental capacities of employees might be checked and monitored, with consent, as part of the employment contract. This may be a reasonable precautionary measure in high-risk fields, but is also the stuff of dystopian worries of surveillance. Consent and voluntary use of neurotech should be regulated in fields characterized by power inequality. The future instrument may sketch the way:

(xxiv) For monitoring mental states, processes, or capacities at the workplace, neurotechnology should only be deployed if benefits, especially the reduction of risks of harm, clearly outweigh the setbacks to privacy.

Furthermore, the ethics of technologically enhancing mental capacities has been controversially discussed in recent years without reaching common ground. Given reasonable disagreement, an international instrument striving to be embraceable by people from a variety of cultures and worldviews may not be the right place to address the matter. Local experiments and flexible regulation may offer a more promising way forward. Moreover, widespread enhancement would likely exacerbate cognitive competition in mental economies, threatening sovereignty over one’s mind (Goering et al. Citation2021; Greely et al. Citation2008). Especially worrisome are mental side-effects that people might “voluntarily” incur to stay competitive. While competition is inherent to market economies, the task of ethics and the law is to curb it when important values so require. Mental sovereignty is one of them. Personal decisions by often overburdened individuals are not the adequate instrument to regulate such dynamics; they must be addressed at the collective level. A future instrument should call for caution.

(xxv) The use of cognitive enhancement with negative mental side-effects in the workplace should be discouraged and regulated in competitive job markets.

OUTLOOK: DEMOCRATIC NEUROPOLITICS

By rendering parts of the brain and the mind accessible, neurotechnologies turn them into objects of control and choice. In critical tones, one might view this as the further subjection of parts of the world—and more worryingly, parts of the self—to the operative logic of technology, or as the IBC writes, a “subjugation of the person to the technical device” (2021, 49). How far this may go and whether benefits outweigh costs cannot reasonably be determined at this moment. This uncertainty poses a major challenge to reasonable regulation (Collingridge Citation1981). In this situation, the way forward presumably most conducive to the individual and the common good is experimental, cautious, tentative, self-reflective, and reversible.

However, the pace of development and the dynamics of the markets fueling it counteract cautious, stepwise technological innovation. This is aptly shown by the socio-economic, cultural, and psychological changes that computers, the internet, or smartphones have brought about. Never subjected to genuine informed, anticipatory democratic oversight, these technologies largely came over people and overwhelmed democratic decision-making, institutions, and civil society; the powers steering their trajectories lay mainly with companies and private actors. The visions of stakeholders about the future trajectory of neurotechnologies vary greatly. Some are interested in basic science, others in alleviating debilitating diseases, still others espouse transhumanist imaginaries of blending humans with machines. When the latter become the driving forces of technological development, backed by venture capital, one may wonder whether entrusting their adherents with great factual transformative powers is reconcilable with ideas of democratic governance and with the plurality of worldviews largely incompatible with such imaginaries. While everyday neuroethics is caught up in many smaller-scale questions about autonomy and consent, a future instrument may want to find the time and space to reflect upon the big picture: how democratic governance can be regained, and which trajectories of neuroscience would effectively benefit humanity. Beyond the medical field, nothing forces humanity to embark upon nebulous paths toward post- or transhumanism. Arguments for absolute boundaries around central human properties that technology should not encroach upon are defensible, and even a moratorium on non-medical neurotechnologies should be put to discussion. Fortunately, it seems that it might still take about a decade before worrisome technologies mature to a practically relevant degree: a time window large enough for thorough and philosophically sound reflections on the path forward.
There is no point in arguing about smartphones any longer, but there still is about non-medical neurotechnologies. This is an opportunity that neuroethics and civil society should not let pass. Perhaps the greatest achievement of the work on a future instrument would be a truly pluralist and inclusive dialogue that reestablishes democratic participation in the shaping of the things to come.

DISCLOSURE STATEMENT

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was supported by the German Ministry of Education and Research (ERA-NET 01GP2121A & 01GP2214B); Open Access Publication Fund of Universität Hamburg.

Notes

1 Recently published academic ethical guidelines for neurotechnology deserve mentioning, e.g. Goering et al. Citation2021; H. T. Greely et al. Citation2018; Rommelfanger et al. Citation2018. As the issue of brain data has received considerable attention recently (Eke et al. Citation2022; Ienca et al. Citation2022), it will only be briefly addressed in the following.

2 For instance, it rules out distinct legal personalities for AI and the use of AI for social scoring or mass surveillance (UNESCO Citation2021).

3 See also the Resolution on human rights standard setting by the UN General Assembly (Citation1986).

4 This finds support in established rights and scholarly proposals that refer to subjective states, from freedom of thought (Article 18 Universal Declaration of Human Rights, UDHR) and freedom of memory (Kolber Citation2008) to mental self-determination (Bublitz Citation2020c) or psychological integrity (Article 5.1 American Convention on Human Rights, ACHR; Article 8 European Convention on Human Rights, ECHR).

5 See e.g. the opening paragraph of the IBC report, highlighting the importance and the “centrality of brain activity to notions of human identity, freedom of thought” and well-being. It seems worth noting that this centrality is only contingent. People may formulate those notions, cherish those properties, and develop their identities without any reference to brain activity. By contrast, these concepts inherently refer to aspects of subjectivity. As an illustration: if one were to write a book about travel and mobility, but only refer to cars, trains, and planes, one may miss the subject, although these technologies are central to human mobility.

6 Otherwise, neuroscientific explanations would be incomplete, because they do not include higher-order explanations (e.g., psychological dynamics).

7 For an introduction to the mind-brain problem and the many variations see Kim (Citation2018) but also Strawson (Citation2009).

8 Rommelfanger et al. (Citation2018) make a similar point (“brain exceptionalism”). The preamble of the UNESCO Bioethics Declaration (2005) similarly bears “in mind that a person’s identity includes biological, psychological, social, cultural and spiritual dimensions”. Notably, this should not be a problem for fellow reductionists. They can simply assume that all the “mind-talk” will one day be replaced by brain talk. But the converse is not true for non-reductionists, as the irreducible mental elements would be missing in a reductive approach.

9 Examples of such studies are de Haan et al. (Citation2015); Eich, Müller, and Schulze-Bonhage (Citation2019); Shahmoon, Smith, and Jahanshahi (Citation2019); Tbalvandany et al. (Citation2019).

10 Marking its importance, human dignity is mentioned in the preamble of the UDHR, Article 1 of the Oviedo Convention, Article 3 of the UNESCO Bioethics Convention, and Article 1 of the EU Charter of Fundamental Rights and Freedoms (CFR). According to a UN General Assembly resolution, novel standards should “derive from the inherent dignity and worth of the human person” (1986: para. 4a).

11 The UNESCO AI Recommendation makes a pertinent remark: “ethical questions related to AI-powered systems for neurotechnologies and brain-computer-interfaces should be considered in order to preserve human dignity and autonomy” (at 126).

12 See the IBC suggesting to consider “any form of neurotechnological alteration, modification or manipulation as a violation of human dignity” (at 41).

REFERENCES

  • Akmazoglu, T., and J. A. Chandler. 2021. Mapping the emerging legal landscape for neuroprostheses: Human interests and legal resources. In Developments in Neuroethics and Bioethics, ed. M. Hevia, vol. 4, 63–98. Cambridge, MA: Elsevier. doi:10.1016/bs.dnb.2021.08.002.
  • Alegre, S. 2017. Rethinking freedom of thought for the 21st century. European Human Rights Law Review 3:221–33.
  • Andorno, R. 2012. Intergovernmental declarations relating to bioethics: Are they legal in nature or merely ethical? In Standing Tall. Hommages à Csaba Varga, 15–23. Budapest, Hungary: Pazmany Press.
  • Baeken, C., M. Arns, J. Brunelin, L. Chanes, I. Filipcic, A. Ganho-Ávila, M. Hirnstein, F. Rachid, A. T. Sack, J. O’Shea, et al. 2023. European reclassification of non-invasive brain stimulation as class III medical devices: A call to action. Brain Stimulation 16 (2):564–6. doi:10.1016/j.brs.2023.02.012.
  • Blitz, M. J. 2017. Searching minds by scanning brains: Neuroscience technology and constitutional privacy protection. London, UK: Palgrave.
  • Boire, R. G. 2001. On cognitive liberty. The Journal of Cognitive Liberties 2 (1):7–22.
  • Borbón, D., and L. Borbón. 2021. A critical perspective on neuro rights: Comments regarding ethics and law. Frontiers in Human Neuroscience 15:703121. doi:10.3389/fnhum.2021.703121.
  • Bublitz, C. 2013. My mind is mine!? Cognitive liberty as a legal concept. In Cognitive Enhancement, ed. Elisabeth Hildt, 233–264. Dordrecht, NL: Springer.
  • Bublitz, C. 2020a. Objectification: Ethical and epistemic concern of neurobiological approaches to the mind. In Psychiatry reborn: Biopsychosocial psychiatry in modern medicine, ed. Will Davies, Julian Savulescu, Rebecca Roache, and J. Pierre Loebe, 325–360. Oxford, New York, NY: Oxford University Press.
  • Bublitz, C. 2020b. Means matter: On the legal relevance of the distinction between direct and indirect mind-interventions. In Neuro-interventions and the law: Regulating human mental capacity, ed. Nicole Vincent. New York: Oxford University Press.
  • Bublitz, C. 2020c. The nascent right to psychological integrity and mental self-determination. In The Cambridge handbook of new human rights: Recognition, novelty, rhetoric, ed. Andreas von Arnauld, Kerstin von der Decken, and Mart Susi, 1st ed., 387–403. Cambridge, UK: Cambridge University Press.
  • Bublitz, C. 2022a. The body of law: Boundaries, extensions, and the human right to physical integrity in the biotechnical age. Journal of Law and the Biosciences 9 (2):lsac032.
  • Bublitz, C. 2022b. Might artificial intelligence become part of the person, and what are the key ethical and legal implications? AI & SOCIETY. doi:10.1007/s00146-022-01584-y.
  • Bublitz, C. 2021. Freedom of thought as an international human right: Elements of a theory of a living right. In The law and ethics of freedom of thought, Volume 1: Neuroscience, autonomy, and individual rights, ed. Marc Jonathan Blitz and Christoph Bublitz. London, UK: Palgrave.
  • Bublitz, C. 2022c. Novel Neurorights: From Nonsense to Substance. Neuroethics 15 (1):7. doi:10.1007/s12152-022-09481-3.
  • Bublitz, C., and R. Merkel. 2014. Crimes against minds: On mental manipulations, harms and a human right to mental self-determination. Criminal Law and Philosophy 8 (1):51–77. doi:10.1007/s11572-012-9172-y.
  • Chalmers, D. J. 1995. Facing up to the problem of consciousness. Journal of Consciousness Studies 2 (3):200–19.
  • Collingridge, D. 1981. The social control of technology. Milton Keynes, England: Open University Press.
  • Council of Europe. 1997. Convention on human rights and biomedicine (Oviedo Convention). Adopted 04 April 1997.
  • de Haan, S., E. Rietveld, M. Stokhof, and D. Denys. 2015. Effects of deep brain stimulation on the lived experience of obsessive-compulsive disorder patients: In-depth interviews with 18 patients. PLOS One 10 (8):e0135524. doi:10.1371/journal.pone.0135524.
  • Delgado, J. M. R. 1964. Free behavior and brain stimulation. International Review of Neurobiology 6:349–449. doi:10.1016/S0074-7742(08)60773-4.
  • Eich, S., O. Müller, and A. Schulze-Bonhage. 2019. Changes in self-perception in patients treated with neurostimulating devices. Epilepsy & Behavior 90 (January):25–30. doi:10.1016/j.yebeh.2018.10.012.
  • Eke, D. O., A. Bernard, J. G. Bjaalie, R. Chavarriaga, T. Hanakawa, A. J. Hannan, S. L. Hill, M. E. Martone, A. McMahon, O. Ruebel, et al. 2022. International data governance for neuroscience. Neuron 110 (4):600–12. doi:10.1016/j.neuron.2021.11.017.
  • European Data Protection Board. 2020. Guidelines on consent version 1.1. https://edpb.europa.eu/sites/default/files/files/file1/edpb_guidelines_202005_consent_en.pdf.
  • European Union. 2017. Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017. Document 32017R0745.
  • Farah, M. J. 2018. Socioeconomic status and the brain: Prospects for neuroscience-informed policy. Nature Reviews-Neuroscience 19 (7):428–38. doi:10.1038/s41583-018-0023-2.
  • Farahany, N. A. 2012. Incriminating thoughts. Stanford Law Review 64:352–408.
  • Farahany, N. A. 2019. The Costs of Changing Our Minds. Emory Law Journal 69:75–110.
  • Genser, J., S. Herrmann, and R. Yuste. 2022. International human rights protection gaps in the age of neurotechnology. https://neurorightsfoundation.org/publications.
  • Gilbert, F. 2015. Self-estrangement & deep brain stimulation: Ethical issues related to forced explantation. Neuroethics 8 (2):107–14. doi:10.1007/s12152-014-9224-1.
  • Gilbert, F. 2018. Deep brain stimulation: Inducing self-estrangement. Neuroethics 11 (2):157–65. doi:10.1007/s12152-017-9334-7.
  • Gilbert, F., M. Cook, T. O'Brien, and J. Illes. 2019. Embodiment and estrangement: Results from a first-in-human “intelligent BCI” trial. Science and Engineering Ethics 25 (1):83–96. doi:10.1007/s11948-017-0001-5.
  • Gilbert, F., M. Ienca, and M. Cook. 2023. How I became myself after merging with a computer: Does human-machine symbiosis raise human rights issues? Brain Stimulation 16 (3):783–9. doi:10.1016/j.brs.2023.04.016.
  • Goering, S., E. Klein, L. Specker Sullivan, A. Wexler, B. Agüera Y Arcas, G. Bi, J. M. Carmena, J. J. Fins, P. Friesen, J. Gallant, et al. 2021. Recommendations for responsible development and application of neurotechnologies. Neuroethics 14 (3):365–86. doi:10.1007/s12152-021-09468-6.
  • Greely, H., B. Sahakian, J. Harris, R. C. Kessler, M. Gazzaniga, P. Campbell, and M. J. Farah. 2008. Towards responsible use of cognitive-enhancing drugs by the healthy. Nature 456 (7223):702–5. doi:10.1038/456702a.
  • Greely, H. T., C. Grady, K. M. Ramos, W. Chiong, J. Eberwine, N. A. Farahany, L. S. M. Johnson, B. T. Hyman, S. E. Hyman, K. S. Rommelfanger, et al. 2018. Neuroethics guiding principles for the NIH brain initiative. The Journal of Neuroscience 38 (50):10586–8. doi:10.1523/JNEUROSCI.2077-18.2018.
  • Habermas, J. 2007. The language game of responsible agency and the problem of free will: How can epistemic dualism be reconciled with ontological monism? Philosophical Explorations 10 (1):13–50. doi:10.1080/13869790601170128.
  • Hagemann, R., and J. H. Skees. 2018. Soft law for hard problems: The governance of emerging technologies in an uncertain future. Colorado Technology Law Journal 17:37–130.
  • Hansson, S. O. 2021. The ethics of explantation. BMC Medical Ethics 22 (1):121. doi:10.1186/s12910-021-00690-8.
  • Hoffman, G. A. 2013. Treating yourself as an object: Self-objectification and the ethical dimensions of antidepressant use. Neuroethics 6 (1):165–78. doi:10.1007/s12152-012-9162-8.
  • Humphreys, K., C. L. Shover, C. M. Andrews, A. S. Bohnert, M. L. Brandeau, J. P. Caulkins, et al. 2022. Responding to the opioid crisis in North America and beyond: Recommendations of the Stanford–Lancet Commission. The Lancet 399 (10324):555–604.
  • Ienca, M. 2021. Common human rights challenges raised by different applications of neurotechnologies in the biomedical field. Report for the Committee on Bioethics of the Council of Europe. https://rm.coe.int/report-final-en/1680a429f3.
  • Ienca, M., and R. Andorno. 2017. Towards new human rights in the age of neuroscience and neurotechnology. Life Sciences, Society and Policy 13 (1):5. doi:10.1186/s40504-017-0050-1.
  • Ienca, M., J. J. Fins, R. J. Jox, F. Jotterand, S. Voeneky, R. Andorno, T. Ball, C. Castelluccia, R. Chavarriaga, H. Chneiweiss, et al. 2022. Towards a governance framework for brain data. Neuroethics 15 (2):1–14. doi:10.1007/s12152-022-09498-8.
  • Ienca, M., and P. Haselager. 2016. Hacking the brain: Brain–computer interfacing technology and the ethics of neurosecurity. Ethics and Information Technology 18 (2):117–29. doi:10.1007/s10676-016-9398-9.
  • International Bioethics Committee of UNESCO. 2021. Report on the ethical issues of neurotechnology. UNESCO Doc. SHS/BIO/IBC-28/2021/3 Rev.
  • Kim, J. 2018. Philosophy of mind. 3rd ed. New York: Routledge.
  • Kolber, A. 2008. Freedom of memory today. Neuroethics 1 (2):145–8. doi:10.1007/s12152-008-9011-y.
  • Kramer, P. D. 1994. Listening to Prozac. London: Fourth Estate London.
  • Leuenberger, M. 2021. Losing meaning: Philosophical reflections on neural interventions and their influence on narrative identity. Neuroethics 14 (3):491–505. doi:10.1007/s12152-021-09469-5.
  • Levine, J. 1983. Materialism and Qualia: The explanatory gap. Pacific Philosophical Quarterly 64 (4):354–61. doi:10.1111/j.1468-0114.1983.tb00207.x.
  • Levy, N. 2007. Neuroethics: Challenges for the 21st century. Cambridge, UK: Cambridge University Press.
  • Ligthart, S. 2022. Coercive brain-reading in criminal justice: An analysis of European human rights law. Cambridge, UK: Cambridge University Press.
  • Ligthart, S. 2023. Mental privacy as part of the human right to freedom of thought? https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4464655.
  • Ligthart, S., T. Douglas, C. Bublitz, T. Kooijmans, and G. Meynen. 2021. Forensic brain-reading and mental privacy in European human rights law: Foundations and challenges. Neuroethics 14 (2):191–203. doi:10.1007/s12152-020-09438-4.
  • Ligthart, S., M. Ienca, G. Meynen, F. Molnar-Gabor, R. Andorno, C. Bublitz, P. Catley, L. Claydon, T. Douglas, N. Farahany, et al. 2023. Minding rights: Mapping ethical and legal foundations of “Neurorights.” Cambridge quarterly of healthcare ethics 32:461–81. doi:10.1017/S0963180123000245.
  • Linden, D. E. J. 2006. How psychotherapy changes the brain–the contribution of functional neuroimaging. Molecular Psychiatry 11 (6):528–38. doi:10.1038/sj.mp.4001816.
  • Loftus, E. F., and K. Ketcham. 1996. The myth of repressed memory: False memories and allegations of sexual abuse. New York: St. Martin’s Press.
  • Maguire, E. A., D. G. Gadian, I. S. Johnsrude, C. D. Good, J. Ashburner, R. S. Frackowiak, and C. D. Frith. 2000. Navigation-related structural change in the hippocampi of taxi drivers. Proceedings of the National Academy of Sciences of the United States of America 97 (8):4398–403. doi:10.1073/pnas.070039597.
  • Marchant, G., and L. Tournas. 2019. Filling the governance gap: International principles for responsible development of neurotechnologies. AJOB Neuroscience 10 (4):176–8. doi:10.1080/21507740.2019.1665135.
  • Marshall, J. 2008. Personal freedom through human rights law?: Autonomy, identity and integrity under the European Convention on Human Rights. Leiden, NL: Nijhoff.
  • Merkel, R., G. Boer, T. Fegert, D. Hartmann, S. Nuttin, and S. Roshal, eds. 2007. Intervening in the brain: Changing psyche and society. Berlin; New York: Springer.
  • Metzinger, T. 2009. The ego tunnel: The science of the mind and the myth of the self. New York: Basic Books.
  • Moriarty, J. C. 2008. Flickering admissibility: Neuroimaging evidence in the US courts. Behavioral Sciences & the Law 26 (1):29–49. doi:10.1002/bsl.795.
  • Nussbaum, M. C. 1995. Objectification. Philosophy & Public Affairs 24 (4):249–91. doi:10.1111/j.1088-4963.1995.tb00032.x.
  • OECD. 2017. Neurotechnology and society. OECD science, technology and industry policy papers. DSTI/STP/BNCT(2016)9/FINAL. https://www.oecd-ilibrary.org/science-and-technology/neurotechnology-and-society_f31e10ab-en
  • OECD. 2019. Recommendation of the Council on responsible innovation in neurotechnology. OECD/LEGAL/0457. https://www.oecd.org/science/recommendation-on-responsible-innovation-in-neurotechnology.htm.
  • Pfotenhauer, S. M., N. Frahm, D. Winickoff, D. Benrimoh, J. Illes, and G. Marchant. 2021. Mobilizing the private sector for responsible innovation in neurotechnology. Nature Biotechnology 39 (6):661–4. doi:10.1038/s41587-021-00947-y.
  • Pycroft, L., S. G. Boccard, S. L. F. Owen, J. F. Stein, J. J. Fitzgerald, A. L. Green, and T. Z. Aziz. 2016. Brainjacking: Implant security issues in invasive neuromodulation. World Neurosurgery 92 (August):454–62. doi:10.1016/j.wneu.2016.05.010.
  • Quigley, M., and S. Ayihongbe. 2018. Everyday cyborgs: On integrated persons and integrated goods. Medical Law Review 26 (2):276–308. doi:10.1093/medlaw/fwy003.
  • Rainey, S., K. McGillivray, S. Akintoye, T. Fothergill, C. Bublitz, and B. Stahl. 2020. Is the European data protection regulation sufficient to deal with emerging data concerns relating to neurotechnology? Journal of Law and the Biosciences 7 (1):lsaa051. doi:10.1093/jlb/lsaa051.
  • Ramirez, S., X. Liu, P.-A. Lin, J. Suh, M. Pignatelli, R. L. Redondo, T. J. Ryan, and S. Tonegawa. 2013. Creating a false memory in the hippocampus. Science 341 (6144):387–91. doi:10.1126/science.1239073.
  • Rommelfanger, K. S., A. Pustilnik, and A. Salles. 2022. Mind the gap: Lessons learned from neurorights. Science & Diplomacy. doi:10.1126/scidip.ade6797.
  • Rommelfanger, K. S., S.-J. Jeong, A. Ema, T. Fukushi, K. Kasai, K. M. Ramos, A. Salles, I. Singh, J. Amadio, G.-Q. Bi, et al. 2018. Neuroethics questions to guide ethical research in the international brain initiatives. Neuron 100 (1):19–36. doi:10.1016/j.neuron.2018.09.021.
  • Rose, N., and J. Abi-Rached. 2014. Governing through the brain: Neuropolitics, neuroscience and subjectivity. The Cambridge Journal of Anthropology 32 (1):3–32. doi:10.3167/ca.2014.320102.
  • Shahmoon, S., J. A. Smith, and M. Jahanshahi. 2019. The lived experiences of deep brain stimulation in Parkinson’s disease: An interpretative phenomenological analysis. Parkinson’s Disease 2019 (February):1–7. doi:10.1155/2019/1937235.
  • Shelton, D. 2008. Soft law. In Handbook of international law, ed. D. Armstrong, J. Brunée, J. Jackson, and D. Kennedy, 68–81. Milton Park, UK: Routledge.
  • Soekadar, S., J. Chandler, M. Ienca, and C. Bublitz. 2021. On the verge of the hybrid mind. Morals & Machines 1 (1):30–43. doi:10.5771/2747-5174-2021-1-30.
  • Sosa, N., M. Salvador Dura-Bernal, G. Carla Maria, and S. Clare, eds. 2022. The risks and challenges of neurotechnologies for human rights. UNESCO Report. doi:10.54678/POGS7778.
  • Special Rapporteur on Freedom of Belief or Religion. 2021. Annual report to the general assembly on freedom of thought. UN Doc. A/76/380.
  • Strawson, G. 2009. Realistic monism: Why physicalism entails panpsychism. Journal of Consciousness Studies 13 (10–11):3–31.
  • Sudimac, S., V. Sale, and S. Kühn. 2022. How nature nurtures: Amygdala activity decreases as the result of a one-hour walk in nature. Molecular Psychiatry 27:4446–52.
  • Synofzik, M., T. E. Schlaepfer, and J. J. Fins. 2012. How happy is too happy? Euphoria, neuroethics, and deep brain stimulation of the nucleus accumbens. AJOB Neuroscience 3 (1):30–6. doi:10.1080/21507740.2011.635633.
  • Talwar, S. K., S. Xu, E. S. Hawley, S. A. Weiss, K. A. Moxon, and J. K. Chapin. 2002. Rat navigation guided by remote control. Nature 417 (6884):37–8. doi:10.1038/417037a.
  • Taylor, K. E. 2017. Brainwashing: The science of thought control. Second edition. Oxford: Oxford Landmark Science, Oxford University Press.
  • Tbalvandany, S. S., B. S. Harhangi, A. W. Prins, and M. H. N. Schermer. 2019. Embodiment in neuro-engineering endeavors: Phenomenological considerations and practical implications. Neuroethics 12 (3):231–42. doi:10.1007/s12152-018-9383-6.
  • UN General Assembly. 1986. Resolution on Setting international standards in the field of human rights. Adopted 4 December 1986. UN Doc. A/RES/41/120.
  • UN Human Rights Council. 2011. Guiding principles on business and human rights: Implementing the United Nations “protect, respect and remedy” framework. UN Doc. A/HRC/17/31.
  • UN Human Rights Council. 2022. Neurotechnology and human rights. UN Doc. A/HRC/51/L.3.
  • UNESCO. 2021. Recommendation on the ethics of artificial intelligence. General Conference SHS/BIO/PI/2021/1.
  • Vidal, F. 2009. Brainhood, anthropological figure of modernity. History of the Human Sciences 22 (1):5–36. doi:10.1177/0952695108099133.
  • Visser-Vandewalle, V., P. Andrade, P. E. Mosley, B. D. Greenberg, R. Schuurman, N. C. McLaughlin, V. Voon, P. Krack, K. D. Foote, H. S. Mayberg, et al. 2022. Deep brain stimulation for obsessive–compulsive disorder: A crisis of access. Nature Medicine 28 (8):1529–32. doi:10.1038/s41591-022-01879-z.
  • Yuste, R., S. Goering, B. A. Y. Arcas, G. Bi, J. M. Carmena, A. Carter, J. J. Fins, P. Friesen, J. Gallant, J. E. Huggins, et al. 2017. Four ethical priorities for neurotechnologies and AI. Nature 551 (7679):159–63. doi:10.1038/551159a.
  • Zúñiga-Fajuri, A., L. V. Miranda, D. Z. Miralles, and R. S. Venegas. 2021. Neurorights in Chile: Between neuroscience and legal science. In Developments in Neuroethics and Bioethics, ed. M. Hevia, vol. 4, 165–79. Cambridge, MA: Elsevier. doi:10.1016/bs.dnb.2021.06.001.