Insight

Crowdsourced tDCS Research: Feasible or Fanciful?


Abstract

In recent years, there has been an increase in “crowdsourced” health studies, wherein patients and individuals are not merely passive subjects but active participants in driving research (Swan 2012; Ranard et al. 2013). Given the existence of a community of individuals who use brain stimulation—particularly transcranial direct current stimulation (tDCS)—at home for self-improvement purposes (Jwa 2015; Wexler 2016), several scholars have raised the question of whether data from such a population could be aggregated and transformed into scientifically credible knowledge (Davis 2016; Treene et al. 2015). The possibility has also been raised by several home tDCS users on the Reddit tDCS forum (www.reddit.com/r/tDCS). However, as of yet, no data aggregation initiative has been undertaken. Here, we argue that although data from the home-use tDCS community could in theory be scientifically useful (at least to some extent), in practice there are a number of methodological obstacles that would be difficult to surmount. In particular, we discuss how the control of variability achieved in scientific tDCS studies would be difficult to replicate among home users, and how a “crowdsourced” tDCS study would differ from previous participant-led studies in other domains. We conclude by discussing the ethical issues that would arise if a tDCS data aggregation initiative were undertaken, and how they would vary depending on whether the initiative is driven by a research institution, a company, or home users themselves.

In recent years, there has been an increase in “crowdsourced” health studies, wherein patients and individuals are not merely passive subjects, but rather active participants in conducting research (Swan 2012; Ranard et al. 2013). Most notably, in one study an individual with amyotrophic lateral sclerosis (ALS) convinced fellow ALS sufferers who frequented the website PatientsLikeMe to take an experimental treatment of lithium and track their results. The website owners subsequently aggregated and reported these data—in which no effect of lithium was observed—in Nature Biotechnology (Wicks et al. 2011). Other crowdsourced studies have involved an analysis of the effects of off-label drugs (Frost et al. 2011), an examination of Internet-based behavioral interventions for individuals with mental illnesses (Naslund et al. 2015), and a correlation of patient-reported phenotypic data related to Parkinson's disease with genetic information (Do et al. 2011). The increase in participant-led research studies has been attributed in part to the rise of online social communities and greater access to technological tools (Vayena and Tasioulas 2013a).

Given the existence of a group of individuals who use brain stimulation—particularly transcranial direct current stimulation (tDCS)—at home for self-improvement purposes (Jwa 2015; Wexler 2016), several scholars have raised the question of whether data from such a population could be aggregated and transformed into scientifically credible knowledge (Davis 2016; Treene et al. 2015). The possibility has also been raised by several home tDCS users on the Reddit tDCS forum (www.reddit.com/r/tDCS). However, as of yet, no data aggregation initiative has been undertaken (Wexler 2016).

We argue that although data from the home use of tDCS could in theory be scientifically useful (at least to some extent), in practice there are a number of methodological obstacles that may be difficult to surmount. Overcoming such obstacles would require someone—or a company or institution—to take control over the direction of research, and would raise a host of complicated ethical questions.

METHODOLOGICAL OBSTACLES

Variability Across User Goals, Stimulation Protocols, and Devices

Although the home use of tDCS (sometimes known as “do it yourself” tDCS) is often written about as a single community, in reality it is a heterogeneous group with tremendous variability across individual goals (e.g., cognitive enhancement, self-treatment) and stimulation protocols (e.g., electrode placement, current amplitude, session duration, frequency). In addition, although a tDCS device is thought of as a single tool, in the last 5 years lay individuals have purchased and utilized at least 25 different tDCS devices, not including home-built ones. Current distribution likely varies significantly across devices and across electrodes, which users often purchase separately. Furthermore, unlike crowdsourced studies on pharmaceuticals, where a dose of medication is a relatively well-defined measure, in tDCS, dosage is thought of as a constellation of at least a dozen factors related to how a stimulation device distributes electromagnetic fields (e.g., shape, size, position of electrodes; pulse shape, amplitude, width, polarity, interval, duration of stimulation; see Peterchev et al. 2012).
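To make the combinatorial nature of this variability concrete, one could imagine the record a data aggregation effort would have to collect for a single session. The sketch below is purely illustrative—the field names are our assumptions, not drawn from any published dosage standard—but it shows that every field is an axis along which home users differ, so the number of distinct "protocols" multiplies with each added dimension.

```python
from dataclasses import dataclass, fields

@dataclass
class TDCSSessionRecord:
    """Hypothetical per-session record; field names are illustrative only."""
    device_model: str          # one of 25+ consumer devices, or home-built
    electrode_shape: str       # e.g., "rectangular sponge"
    electrode_size_cm2: float  # contact area, often from separately bought pads
    anode_position: str        # free text; no standardized placement protocol
    cathode_position: str
    current_ma: float          # nominal only; rarely verified on consumer devices
    duration_min: float
    sessions_per_week: int
    goal: str                  # e.g., "enhancement" vs. "self-treatment"

# Even this simplified sketch has nine free parameters; Peterchev et al.
# enumerate roughly a dozen for stimulation dose alone.
print(len(fields(TDCSSessionRecord)))
```

Even with coarse binning (say, four values per field), nine fields already yield on the order of 4^9 ≈ 260,000 possible protocol combinations, which frames the sample-size problem discussed next.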

Insufficient Sample Size

Variability across stimulation protocols is not in and of itself an issue, provided that there exists a sufficiently large sample that allows for comparisons across subgroups. However, existing evidence indicates that the magnitude of the home use of tDCS has likely been overstated (Jwa Citation2015), and informal observations suggest that there is a relatively high attrition rate among home users. Furthermore, there is a reason why no data aggregation initiative has taken off: Home users are primarily interested in self-improvement, not making new scientific knowledge (Wexler Citation2016). Thus, finding a sufficient sample to account for the already-mentioned variation will likely prove challenging.
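As a rough illustration of the scale involved, the standard normal-approximation formula for a two-sample comparison shows that even a single subgroup contrast at a medium effect size requires dozens of users per arm; multiplied across the many device and protocol subgroups described above, the totals quickly exceed any plausible pool of active home users. The effect size and error rates below are conventional placeholders, not estimates derived from tDCS data.

```python
import math

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate per-group n for a two-sample z-test (normal approximation)."""
    def z(p: float) -> float:
        # Standard normal quantile via bisection on the erf-based CDF
        # (avoids a SciPy dependency for this sketch).
        lo, hi = -10.0, 10.0
        while hi - lo > 1e-10:
            mid = (lo + hi) / 2
            if 0.5 * (1 + math.erf(mid / math.sqrt(2))) < p:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    z_alpha = z(1 - alpha / 2)
    z_beta = z(power)
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Medium effect (Cohen's d = 0.5): roughly 63 users per arm for ONE comparison;
# a small effect (d = 0.2) pushes this into the hundreds per arm.
print(n_per_group(0.5), n_per_group(0.2))
```

Given high attrition and the self-improvement (rather than research) orientation of most home users, recruiting even one adequately powered subgroup comparison would be a challenge, let alone many.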

Validity of Stimulation Protocols

Unlike most devices used in scientific studies, consumer devices do not measure resistance or display the total amount of current flowing through the circuit; thus, it would be difficult to confirm whether current was actually flowing in a subject-reported protocol. While crowdsourced studies often rely on self-report to validate that subjects did indeed perform a given intervention (e.g., took a dose of medication at a specific time), as noted earlier, the notion of dosage in tDCS is complex, and in the absence of standardized placement protocols it would be difficult to confirm where subjects were placing electrodes.

Selection and Self-Report Biases

Those who utilize tDCS at home are not naive to the technique—most devices require individuals to do their “homework” to figure out how to stimulate their brains. Such individuals have been hopeful enough about tDCS to spend money and/or time to acquire a device, and their personal investment could lead to a greater perceived placebo effect or other self-report biases. In addition, subjects would likely skew toward one of two groups: adventurous early adopters willing to try brain stimulation to enhance their cognition, or those who have been frustrated enough with their medical condition to resort to home stimulation.

Lack of Controls

In laboratory studies of tDCS, the control group is often administered a “sham” session, wherein current is applied for up to a minute at the start of a session but not during the session itself. Currently, only one direct-to-consumer tDCS device provides a (limited) sham option. Furthermore, while laboratory subjects are generally naive to tDCS and may not be able to differentiate sham from real stimulation, experienced home users would likely be able to discriminate between the two, as stimulation (particularly with direct-to-consumer devices) provides a continuous sensation of heat and tingling.

It should be noted that some of the methodological obstacles outlined here are not unique to crowdsourced tDCS but also apply to tDCS experiments conducted in laboratory settings. For example, there are no “standardized” tDCS devices or stimulation protocols, and research studies have their own selection biases (e.g., often using populations of undergraduate students). Indeed, there is currently a debate within the field of tDCS with respect to the reliability and reproducibility of data (see, e.g., Horvath, Forte, and Carter 2015; Price and Hamilton 2015; Mancuso et al. 2016). On the whole, however, laboratory tDCS studies attempt to adjust for the lack of standardization and to control their methodologies as tightly as possible.

DISCUSSION AND ETHICAL CONSIDERATIONS

One method that has been proposed for aggregating data from the home use of tDCS is the creation of a database (Treene et al. 2015) in which users could input variables such as device type, electrode placement, and stimulation protocol, as well as the subjective effects of stimulation. However, for the reasons outlined earlier—in particular, variability and insufficient sample size—such a database would likely be of limited utility for producing scientifically legitimate data, and would instead be more akin to an organized version of data already reported on the Reddit tDCS forum.
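A minimal version of such a self-report database might look like the sketch below. The table and column names are hypothetical assumptions on our part—nothing here follows from the Treene et al. proposal beyond the general idea of user-entered protocol and outcome fields—but the schema makes visible why the result would resemble an organized forum log: every field is unverified free-form self-report.

```python
import sqlite3

# In-memory sketch of a user-entered tDCS session log.
# Column names are illustrative assumptions, not a published schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE session (
    id INTEGER PRIMARY KEY,
    device_model TEXT,       -- self-reported; 25+ consumer devices exist
    anode_position TEXT,     -- free text: no standardized placement protocol
    cathode_position TEXT,
    current_ma REAL,         -- nominal only; consumer devices rarely verify it
    duration_min REAL,
    goal TEXT,               -- e.g., enhancement vs. self-treatment
    subjective_effect TEXT   -- unvalidated self-report
);
""")
conn.execute(
    "INSERT INTO session (device_model, anode_position, cathode_position,"
    " current_ma, duration_min, goal, subjective_effect)"
    " VALUES (?, ?, ?, ?, ?, ?, ?)",
    ("example-device", "F3", "right supraorbital", 2.0, 20.0,
     "enhancement", "mild tingling"),
)
print(conn.execute("SELECT COUNT(*) FROM session").fetchone()[0])
```

Note that none of these columns can be validated against the device itself, which is precisely the gap that the oversight measures discussed below would have to fill.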

In theory, it may be possible to overcome many of the mentioned methodological obstacles and achieve greater control over variability. For example, a standardized device could be used, set stimulation protocols could be established (and validated by the device itself and/or via video monitoring), well-defined baseline measures and training tasks could be conducted across users, and (blinded) participants could be split into intervention and control groups. Even though there would still be a number of issues—selection bias, reliability of self-report, and nonnaive users being able to distinguish sham from nonsham—such issues would be similar to those reported in other crowdsourced studies. While the results of such experimentation would not be a substitute for a double-blind randomized controlled trial, they could be useful as complementary or exploratory data.

Yet overcoming such obstacles would require an entity (e.g., individual, company, institution) to step in and control the direction of the study. Indeed, the notion of a “crowdsourced” study is somewhat misleading (Vayena and Tasioulas 2013a), as all peer-reviewed published crowdsourced studies to date have had significant oversight or direction at some stage of the process. For example, in the crowdsourced ALS/lithium study, it was an ALS patient who persuaded fellow patients to ingest lithium, but PatientsLikeMe matched patients to historical controls, performed all analyses, wrote the article, and submitted it for publication (Wicks et al. 2011). In the already-mentioned genetic association study on Parkinson's disease, data were collected from the database of the personal genetics company 23andMe, but members of the company performed all analyses (Do et al. 2011). Other crowdsourced studies have involved users playing publicly available “games” to advance a university-based research project: For example, players of EyeWire reconstruct neurons in three dimensions (Kim et al. 2014), and players of FoldIt help model the folding properties of certain proteins (Cooper et al. 2010).

Thus, terms such as “crowdsourced” or “citizen science” have been used to refer to a surprisingly large variety of studies, ranging from those that are primarily led by participants to those primarily led by investigators (Vayena and Tasioulas 2013b; Swan 2012). What crowdsourced studies do have in common, however, is a kind of decentralization of scientific research; that is, contrary to the traditional model of a single researcher (or collaborative group) being responsible for conceiving, designing, conducting, analyzing, and writing, any of these tasks may be performed by those outside the research group. Given that the ethical and regulatory framework for the protection of human subjects is predicated on traditional models of scientific experimentation, the fragmentation inherent in crowdsourced studies raises tricky ethical questions (Graber and Graber 2013; Janssens and Kraft 2012; O'Connor 2013).

In the case of the home use of tDCS, we argue that the creation of a database would have both the smallest potential payoff (in terms of creating scientifically valid knowledge) and the fewest ethical concerns, as it would build off actions that people are already voluntarily performing on themselves using devices that they have freely chosen to purchase.

But transforming data from the home use of tDCS into scientifically credible information—which, as we have shown, would require some degree of oversight—would venture into ethically problematic territory. If devices were to be standardized, who would provide them, a company or institution? Would they be provided at a discounted rate or free of charge, and how would they be marketed? Providing cheap or free tDCS devices to those who are sick (or to those who cannot afford a device) may be viewed as tacit encouragement to perform an intervention for which the long-term risks are unknown. In other words, there may be significant ethical concerns about coercion and exploitation.

The ethical concerns would also vary according to whether the entity directing the research project was an individual, corporation, or part of an institution (Vayena and Tasioulas 2013a). For example, what level of oversight would be required if the tDCS standardization initiative were participant-led, but university-based researchers analyzed and wrote up the data? If such researchers also provided guidance on the experimental design, would the study then need to undergo approval from an institutional review board (IRB)? The ethical territory only becomes more muddled in the case of a study led by a private corporation, which would presumably have an interest in individuals purchasing or using their tDCS devices. Although IRB oversight would not, in most cases, be required for studies conducted by private corporations (see O'Connor 2013), ethical questions remain: Would data collection be explicit, and what privacy protections would be put into place? More generally, what is an appropriate ethical way to balance corporate interests with protection for human subjects?

There are no straightforward answers to these questions, which are becoming more and more prevalent as increased amounts of personal and health data are being collected through Internet forums, mobile apps, and wearable devices (Yom-Tov 2016). What is clear is that in the case of the home use of tDCS, better data will come from greater oversight, and the greater the oversight, the more such experimentation will fit traditional models of scientific experimentation and relevant human subject protections. Practically speaking, given the effort required to produce scientifically credible data from the home use of tDCS, it is hard to envision how such an endeavor would be a worthwhile effort on the part of tDCS researchers. Thus, while in theory it may be possible to transform data from the home use into scientifically credible knowledge, in practice the methodological limitations and ethical considerations run deeper than they appear. ▪

REFERENCES

  • Cooper, S., F. Khatib, A. Treuille, et al. 2010. Predicting protein structures with a multiplayer online game. Nature 466 (7307):756–60. https://doi.org/10.1038/nature09304
  • Davis, N. J. 2016. The regulation of consumer tDCS: Engaging a community of creative self-experimenters. Journal of Law and the Biosciences 3 (2):304–8. https://doi.org/10.1093/jlb/lsw013
  • Do, C. B., J. Y. Tung, E. Dorfman, et al. 2011. Web-based genome-wide association study identifies two novel loci and a substantial genetic component for Parkinson's disease. PLoS Genetics 7 (6):e1002141. https://doi.org/10.1371/journal.pgen.1002141
  • Frost, J., S. Okun, T. Vaughan, J. Heywood, and P. Wicks. 2011. Patient-reported outcomes as a source of evidence in off-label prescribing: Analysis of data from PatientsLikeMe. Journal of Medical Internet Research 13 (1):e6. https://doi.org/10.2196/jmir.1643
  • Graber, M. A., and A. Graber. 2013. Internet-based crowdsourcing and research ethics: The case for IRB review. Journal of Medical Ethics 39 (2):115–18. https://doi.org/10.1136/medethics-2012-100798
  • Horvath, J. C., J. D. Forte, and O. Carter. 2015. Quantitative review finds no evidence of cognitive effects in healthy populations from single-session transcranial direct current stimulation (tDCS). Brain Stimulation 8 (3):535–50. https://doi.org/10.1016/j.brs.2015.01.400
  • Janssens, A. C. J. W., and P. Kraft. 2012. Research conducted using data obtained through online communities: Ethical implications of methodological limitations. PLoS Medicine 9 (10):e1001328. https://doi.org/10.1371/journal.pmed.1001328
  • Jwa, A. 2015. Early adopters of the magical thinking cap: A study on do-it-yourself (DIY) transcranial direct current stimulation (tDCS) user community. Journal of Law and the Biosciences 2 (2):292–335. https://doi.org/10.1093/jlb/lsv017
  • Kim, J. S., M. J. Greene, A. Zlateski, et al. 2014. Space–time wiring specificity supports direction selectivity in the retina. Nature 509 (7500):331–36. https://doi.org/10.1038/nature13240
  • Mancuso, L. E., I. P. Ilieva, R. H. Hamilton, and M. J. Farah. 2016. Does transcranial direct current stimulation improve healthy working memory? A meta-analytic review. Journal of Cognitive Neuroscience 28 (8):1063–89. https://doi.org/10.1162/jocn_a_00956
  • Naslund, J. A., K. A. Aschbrenner, L. A. Marsch, G. J. McHugo, and S. J. Bartels. 2015. Crowdsourcing for conducting randomized trials of internet delivered interventions in people with serious mental illness: A systematic review. Contemporary Clinical Trials 44:77–88. https://doi.org/10.1016/j.cct.2015.07.012
  • O'Connor, D. 2013. The apomediated world: Regulating research when social media has changed research. Journal of Law, Medicine & Ethics 41 (2):470–83. https://doi.org/10.1111/jlme.12056
  • Price, A. R., and R. H. Hamilton. 2015. A re-evaluation of the cognitive effects from single-session transcranial direct current stimulation. Brain Stimulation 8 (3):663–65. https://doi.org/10.1016/j.brs.2015.03.007
  • Ranard, B. L., Y. P. Ha, Z. F. Meisel, et al. 2013. Crowdsourcing—harnessing the masses to advance health and medicine: A systematic review. Journal of General Internal Medicine 29 (1):187–203. https://doi.org/10.1007/s11606-013-2536-8
  • Swan, M. 2012. Crowdsourced health research studies: An important emerging complement to clinical trials in the public health research ecosystem. Journal of Medical Internet Research 14 (2):e46. https://doi.org/10.2196/jmir.1988
  • Treene, L., A. Wexler, and J. Giordano. 2015. Toward an integrative database of/for transcranial electrical stimulation: Defining need, and positing approaches, benefits and caveats. Poster presented at the International Neuroethics Society Annual Meeting, October 15–16, Chicago, IL.
  • Vayena, E., and J. Tasioulas. 2013a. Adapting standards: Ethical oversight of participant-led health research. PLoS Medicine 10 (3):e1001402. https://doi.org/10.1371/journal.pmed.1001402
  • Vayena, E., and J. Tasioulas. 2013b. The ethics of participant-led biomedical research. Nature Biotechnology 31 (9):786–87. https://doi.org/10.1038/nbt.2692
  • Wexler, A. 2016. The practices of do-it-yourself brain stimulation: Implications for ethical considerations and regulatory proposals. Journal of Medical Ethics 42 (4):211–15. https://doi.org/10.1136/medethics-2015-102704
  • Wicks, P., T. E. Vaughan, M. P. Massagli, and J. Heywood. 2011. Accelerated clinical discovery using self-reported patient data collected online and a patient-matching algorithm. Nature Biotechnology 29 (5):411–14. https://doi.org/10.1038/nbt.1837
  • Yom-Tov, E. 2016. Crowdsourced health. Cambridge, MA: MIT Press.
