Editorial

Stimulating Autonomy: DBS and the Prospect of Choosing to Control Ourselves Through Stimulation

Implanted neural devices that stimulate our brains to restore functioning or address unwanted symptoms of disease (seizures, hand tremors, obsessive behaviors) that are resistant to other therapeutic efforts have become more common in the past 15 or so years. Although we aren't quite sure exactly why or how they work (Chiken and Nambu 2015), deep brain stimulation (DBS) devices are in many respects quite promising for a variety of conditions. Nonetheless, the devices have had some issues of safety or effectiveness and have also led to some interesting side effects, including mania, impulsivity, and personality alterations, as well as reports from individuals who simply don't feel "like themselves" postoperatively (Schüpbach et al. 2006; Kraemer 2013; Mecacci and Haselager 2014). As the articles and commentaries in this issue suggest, neuroethicists are rightly interested in how DBS devices interact with patient autonomy.

One part of respect for patient autonomy has to do with informed consent prior to receiving the implants. What kinds of information do prospective users of DBS devices need in order to make good choices about whether or not to have the surgery? Knowledge of likely risks and benefits is a minimum, but as Müller and colleagues (2015) point out, we don't yet have reliable data regarding the effects of DBS on many conditions for which it has been considered. With respect to anorexia nervosa (AN), for instance, relatively few cases are published, and not all of them seem to fit the standard criterion of the condition being treatment resistant. Without clinical trials or at least case registries with systematic data collection, it will be difficult to ensure that patients have a robust sense of what they are consenting to and of the likelihood that it will successfully treat their symptoms. Likewise, patients considering the implantation of neural devices for epilepsy—devices designed to predict epileptic seizures, and then either advise about next steps or automatically trigger treatment (as Gilbert [2015] explores)—will need to know the relative risks and benefits of the devices, including how to conceive of the risks of permanent brain monitoring, and how to understand the future autonomy of an agent who has a semi-autonomous device operating within her brain. As such, in order to have a good sense of how to respect autonomy in obtaining consent for the procedure, we need to explore the potential ways that the devices might enhance or threaten autonomy when they are functioning successfully.

So how might neural devices affect our autonomy once they are implanted? Should we think of the devices as assistive, as aiding us in preserving or enhancing our autonomy so that we have more control—in the way a wheelchair aids the autonomy of a person with spinal cord injury (SCI), or an antidepressant may enhance autonomy for a depressed person, by expanding options and eliminating unwanted constraints? Or is it appropriate to think of the devices as providing benefit but also, at least in part, controlling us—such that the device is understood to introduce new constraints on a person, perhaps even to the extent of threatening to undermine some aspects of her autonomy (as Gilbert wonders about the automated therapy option)?

Part of the challenge of understanding the effect of DBS on autonomy is that conditions targeted by DBS can already involve disrupted autonomy. The piece by Müller and colleagues (2015) suggests that part of the problem of AN is compromised autonomy: Individuals get caught in a vicious loop, from which they cannot break free. Even though AN is self-inflicted in a certain sense, once it develops, it is notoriously difficult to undo. Though individuals with AN seem to want to starve themselves, at a higher level they may recognize that such behavior is unhealthy and potentially life-threatening. Part of the problem is that they cannot bring their first-order desires into alignment with their higher order, considered preferences (like the addicted person who wishes he didn't have the addiction, but still can't stop himself from using the drug) (Frankfurt 1971). Understood this way, DBS that successfully treats the symptoms of AN may seem to restore or at least enhance autonomy, by allowing the individual to act in ways that align with her higher order values, to gain control of her desires and actions. Of course, this presumes that the individual does in fact desire the healthier body. If her preferred body image or self-perception does not change, but the DBS is successful in getting her to gain weight, the intervention may only create what Wu and colleagues (2013) call a "psychological hell." Part of the problem here is that the DBS is aiming to break an unhealthy connection between the individual's self-perception and her eating behavior, but who she is (the entity that we aim to respect when we respect autonomy) may be caught up in that connection. For DBS to be successful it has to allow for her either to remake her self-perception, or to gain control of her desires such that she can behave in ways that align with her higher order preferences for a healthy body. The latter frames DBS as an assistive device that enables autonomy; the former may significantly change the individual's very sense of self, making ascriptions of autonomy enhancement questionable at best, and perhaps even undermining the individual's existing autonomy.

Gilbert raises similar questions in the context of closed-loop devices that are both predictive and advisory, and possibly also autonomous in administering a needed drug dose or stimulation to individuals with epilepsy. Even purportedly successful implants—ones that serve their predictive roles effectively—may nonetheless give us reason to question patients' postoperative autonomy. Gilbert warns of overreliance on the predictive device, in much the way that drivers now often rely excessively on global positioning system (GPS) devices, sometimes to their peril, when they could have observed and used independent evidence to help them arrive safely at their destinations. On the one hand, overreliance on an implanted device might mean that individuals will wrongly believe that the devices will successfully predict all seizures, and thus these individuals will participate in risky behavior they would otherwise have avoided, with the threat of a possible accident if the device doesn't function as reliably as they come to expect. Overreliance might, on the other hand, mean that every time the device predicts a seizure, the individuals will stop what they are doing and take excessive precautions, even though no seizure may follow. This could leave the individual in a state of "decisional vulnerability," in which she is unsure either how much to rely on the device, or how to make decisions on her own (i.e., noting what the device says but not being fully controlled by it). Of course, we rely on all kinds of information (from GPS devices, from weather forecasts, from Internet searches) without necessarily losing or decreasing our autonomy. We just have to learn how to be savvy consumers of information, and how to tailor the kinds of information we seek. Perhaps the same thing applies here. How sure would you want to be that a seizure would follow, in order to get a warning? Some of us might prefer to play it safe, and err on the side of many false positives. Others, however, might be frustrated by unnecessary warnings, and learn to ignore them (as so many of us ignore warnings regarding downloads on our computers), and so would want to adjust the parameters to trigger warnings only when seizures are almost guaranteed to occur. Appropriate predictive parameters might be a matter of individual preference.

To my eye, though, Gilbert is more concerned with the possibility of automated therapy, in which the device does not simply offer advice, but automatically supplies the relevant response (increased medication dose, seizure-averting stimulation, etc.). Such a closed-loop system would be intended to work more efficiently, to decrease the needed amount of stimulation or drug dose (given that it would be applied only when necessary, rather than at a set level over time). But individuals who receive such devices seem to have an autonomous loop running inside of them—and, in a way, outside of their own autonomous control. They don't get to choose to take each dose or not; they simply receive it automatically. Of course, they would consent to have the device implanted, but postsurgery, the individual would be taken "out of the loop," so to speak. Should that matter? We might think that it would be ideal—no need to remember to take a pill, flexibility for when the system is needed, and so on. Perhaps we could consider implantable birth control as an analogous intervention. But implantable birth control isn't likely to alter our thinking, or our sense of self, in the same way. When the semi-autonomous implanted device works to produce ends that the individual endorses, perhaps she won't care about not being "in the loop" directly. But when she's less sure of the end, or even when she's less sure of the means of achieving that end, she may feel that her autonomy is compromised by the device; perhaps she feels less like she is choosing or controlling herself, and more like she is being constrained (even if it is for a good end). We may wish for a way (a device, a drug, a behavioral strategy) to control ourselves, but perhaps what we really want is the capacity for ourselves to be in control, to do the controlling. When we control ourselves, we are autonomous; when we are controlled, we are not (or at least less so). And so we need to figure out whether (or when) such closed-loop systems are restoring autonomy or undermining it, and what makes that difference. Assistive devices can open up new options and enhance autonomy, but as individuals, we want to remain in control.

REFERENCES

  • Chiken, S., and A. Nambu. 2015. Mechanism of deep brain stimulation: Inhibition, excitation or disruption. Neuroscientist. Published online April 17. doi:10.1177/1073858415581986
  • Frankfurt, H. 1971. Freedom of the will and the concept of a person. Journal of Philosophy 68(1): 5–20.
  • Gilbert, F. 2015. A threat to autonomy? The intrusion of predictive brain implants. AJOB Neuroscience 6(4): 4–11.
  • Kraemer, F. 2013. Me, myself and my brain implant: Deep brain stimulation raises questions of personal authenticity and alienation. Neuroethics 6: 483–97.
  • Mecacci, G., and W. F. G. Haselager. 2014. Stimulating the self: The influence of conceptual frameworks on reactions to deep brain stimulation. AJOB Neuroscience 5(4): 30–39.
  • Müller, S., R. Riedmüller, H. Walter, and M. Christen. 2015. An ethical evaluation of stereotactic neurosurgery for anorexia nervosa. AJOB Neuroscience 6(4): 50–65.
  • Schüpbach, M., M. Gargiulo, M. Welter, et al. 2006. Neurosurgery in Parkinson disease: A distressed mind in a repaired body? Neurology 66: 1811–16.
  • Wu, H., P. J. Van Dyck-Lippens, R. Santegoeds, et al. 2013. Deep-brain stimulation for anorexia nervosa. World Neurosurgery 80(3–4): S29.e1–10.
