Opinion Paper

SERIES: eHealth in primary care. Part 2: Exploring the ethical implications of its application in primary care practice

Pages 26-32 | Received 12 Apr 2019, Accepted 07 Oct 2019, Published online: 30 Oct 2019

Abstract

Background

eHealth promises to increase self-management and personalised medicine and improve cost-effectiveness in primary care. Paired with these promises are ethical implications, as eHealth will affect patients’ and primary care professionals’ (PCPs) experiences, values, norms, and relationships.

Objectives

We set out which ethical implications related to the impact of eHealth on four vital aspects of primary care could (and should) be anticipated.

Discussion

(1) EHealth influences dealing with predictive and diagnostic uncertainty. Machine-learning-based clinical decision support systems offer (seemingly) objective, quantified, and personalised outcomes. However, they also introduce new loci of uncertainty and subjectivity. The decision-making process becomes opaque, and algorithms can be invalid, biased, or even discriminatory. This has implications for professional responsibilities and judgments, justice, autonomy, and trust. (2) EHealth affects the roles and responsibilities of patients because it can stimulate self-management and autonomy. However, autonomy can also be compromised, e.g. in cases of persuasive technologies, and eHealth can increase existing health disparities. (3) The delegation of tasks to a network of technologies and stakeholders requires attention to responsibility gaps and new responsibilities. (4) The triangular patient–eHealth–PCP relationship requires a reconsideration of the role of human interaction and ‘humanness’ in primary care, as well as of the shaping of Shared Decision-Making.

Conclusion

Our analysis is an essential first step towards setting up a dedicated ethics research agenda that should be examined in parallel to the development and implementation of eHealth. The ultimate goal is to inspire the development of practice-specific ethical recommendations.

This article is part of the following collections:
EJGP Collection on eHealth in Primary Care

KEY MESSAGES

  • The impact of eHealth on primary care is paired with ethical implications including questions of autonomy and professional responsibilities

  • Practice-specific ethical guidance for the use of eHealth in primary care should be developed

  • Primary care professionals should be aware of the ethical implications when using eHealth approaches

Introduction

Information and Communication Technologies (ICT) are increasingly permeating every aspect of our lives. Algorithms determine the advertisements to which we are exposed, our phones switch our house lights on and off, and wearable sensors track our sleeping patterns. Not surprisingly, ICT also permeate health care in the form of eHealth. EHealth encompasses a broad category of ‘health services and information delivered or enhanced through the internet and related technologies’ that are employed to support or improve health care [Citation1]. Examples include health apps, wearables and tracking devices, monitoring at a distance, and digital communication interfaces. EHealth is expected to change primary care dramatically. Not only do applications developed by the health care sector formally enter the consultation room (Box 1) [Citation2–5]; commercially available eHealth applications and wearable devices used by patients also impact their interactions with healthcare professionals [Citation6,Citation7].

How should we value these developments? Generally, eHealth promises to increase self-management and personalised medicine, empower patients, and improve cost-effectiveness [Citation4,Citation8,Citation9]. eHealth supporters stress that the technology will contribute to an ideal healthcare system in which patients are empowered and the centre of their personal continuum of care [Citation10]. However, it can also be argued that rather than unleashing a revolution, eHealth supports core values and activities of primary care professionals (PCPs), who have always focussed on encouraging self-management and delivering person-centred and continuous care [Citation11]. eHealth critics stress the downsides of eHealth, such as the potential loss of human interaction, the lack of validation of the algorithms used, commercial interests, and a lack of confidentiality in commercially available applications [Citation12–14]. Furthermore, critical voices mention that most eHealth solutions today still lack a solid evidence base, that uptake by PCPs is still limited, and that eHealth raises privacy and informed consent concerns [Citation4,Citation12]. The latter have drawn considerable attention in both academic and popular media, particularly in light of the recent General Data Protection Regulation (GDPR) [Citation15].

The ethical implications of eHealth, however, extend well beyond privacy and informed consent issues [Citation6,Citation9,Citation16–18]. As new health technologies affect users’ experiences, values, norms, and relationships [Citation19,Citation20], the increasing use of digital technologies in primary care can be expected to profoundly affect professionals and patients in- and outside of the consultation room. These changes can affect day-to-day primary care practice and raise ethical questions that require attention. Therefore, in this paper, we discuss what ethical implications could (and should) be anticipated in four essential aspects of primary care: (1) dealing with predictive and diagnostic uncertainty; (2) the roles and responsibilities of (future) patients; (3) the roles and responsibilities of PCPs; (4) the patient – PCP relationship. We construct our arguments by drawing upon the academic literature regarding eHealth, ethics, digitalisation, and primary care as well as upon our diverse experiences and expertise regarding the development, ethical evaluation, implementation, and use of eHealth in primary care. Subsequently, we suggest ways forward by listing a set of empirical, conceptual, and normative ethics research questions that need to be addressed. In addition, we propose a method for the ethical accompaniment of the design and implementation of novel eHealth tools. Finally, we emphasise the importance of these ethical dimensions for PCPs when using eHealth approaches in their day-to-day practice.

Ethical implications of the clinical use of eHealth in primary care

Dealing with predictive and diagnostic uncertainty

One of the core tasks of PCPs is to diagnose diseases at an early stage to provide timely treatment and, ideally, to predict and prevent disease. When executing this task, PCPs have to deal with substantial uncertainty. In the early phases of disease, symptoms are undifferentiated, and PCPs have few predictive and diagnostic possibilities, some of which have limited accuracy [Citation21].

Box 1 Potential applications of eHealth in primary care

  • eHealth enables patients to monitor their (un)healthy behaviour and to spur lifestyle interventions and behavioural change [Citation7,Citation8,Citation16]. A plethora of apps, wearables, and online platforms aim to increase self-management [Citation32]. Examples include monitoring of adherence to medication and remote monitoring of patients with complex medical and social problems at home instead of in the hospital [Citation33].

  • eHealth approaches can facilitate and improve the interaction between patients and healthcare providers and among healthcare professionals [Citation7]. Online consultation via patient portals and digital communication interfaces, such as teleconferencing and online patient access to medical files, are increasingly adopted.

  • Data are expected to inform medical decision-making [Citation7]. Patients increasingly become generators of data via apps, wearables, and smartphones. These data, if combined with other types of information (e.g. information derived from genomics, environmental and socio-cultural data) and machine learning approaches, can personalise medical decision-making.

EHealth can alter how PCPs deal with such uncertainties. Machine learning approaches can use large amounts of data to predict or diagnose disease more accurately through a quantified and personalised calculation. When integrated with clinical decision support systems (CDSSs), these machine learning approaches function as decision aids [Citation22].

Whereas eHealth may resolve some aspects of uncertainty, it simultaneously introduces novel loci of uncertainty, because the decision-making process becomes opaque (this is sometimes referred to as a ‘black box’) [Citation22–24]. The quality of the data may be unclear, and the algorithms and the weighting of different parameters and trade-offs may not be transparent. This is particularly so because in machine learning the system changes and develops over time depending on the data to which it is exposed [Citation23].

The described opacity has several ethical implications. First, it makes it difficult for PCPs to judge the validity of the outcome, not least because the sensitivity and specificity of a test cannot be calculated, even as the validity of the algorithms themselves may be compromised [Citation23]. This not only leads to the epistemological question of what evidence-based practice in primary care is, but it can also conflict with some of the principal moral duties of PCPs, namely to do no harm and to do good to their patients. If you cannot judge whether algorithms make valid and fair decisions, how can you make sure that you use the decision aid responsibly? [Citation23] Moreover, it can be unclear whether the outcome applies to the specific patient in the consultation room, which may hamper person-centred care. Certain groups may have been excluded from the data analysis, as bias and discrimination can be inherent to machine learning approaches [Citation22]. This potentially threatens the PCP’s duty to secure equal accessibility for their patients. Lastly, if the algorithms are embedded in CDSSs that generate recommendations regarding diagnostic follow-up or treatment, the system provides a seemingly objective choice that is in fact value-laden [Citation24]. In developing rule-based algorithms, implicit or explicit choices are made regarding, e.g. the acceptability of risk or treatment preferences. These choices require a trade-off between different values. If ‘safety first’ is the mandate, cut-off points for referral may differ from a situation in which cost-effectiveness is the guiding value. This raises the question of which values should be guiding and who should decide the norm [Citation22,Citation24,Citation25]. Also, PCPs and patients need to know how algorithms determine their recommendations in order to allow for autonomous decision-making [Citation24]. However, this is precisely the difficulty with opaque machine learning systems (Table 1).
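The paragraph above notes that, for an opaque system, the sensitivity and specificity of a test cannot be calculated. As a point of contrast, the following minimal sketch (illustrative only; the validation counts are hypothetical and not drawn from this paper) shows how these measures are conventionally derived from a confusion matrix, which is exactly the information an opaque CDSS often withholds from the PCP.

```python
# Illustrative sketch: conventional derivation of sensitivity and
# specificity from confusion-matrix counts. For an opaque
# machine-learning CDSS, these counts are often unavailable,
# which is precisely the validity problem described in the text.

def sensitivity_specificity(tp, fn, tn, fp):
    """Return (sensitivity, specificity) from confusion-matrix counts.

    tp: true positives, fn: false negatives,
    tn: true negatives, fp: false positives.
    """
    sensitivity = tp / (tp + fn)  # true-positive rate
    specificity = tn / (tn + fp)  # true-negative rate
    return sensitivity, specificity

# Hypothetical validation counts for a diagnostic test:
sens, spec = sensitivity_specificity(tp=90, fn=10, tn=80, fp=20)
print(sens, spec)  # 0.9 0.8
```

Without access to counts like these for a representative population, a PCP cannot make this standard validity check, whatever the vendor claims.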

Table 1. The ethical implications of the clinical use of eHealth in primary care.

Roles and responsibilities of patients

EHealth approaches influence the autonomy, roles, and responsibilities of patients because they can promote self-management and lifestyle interventions [Citation8,Citation9,Citation18].

The use of eHealth to enhance self-management fits well within the scope of primary care, where patient empowerment and the stimulation of patient autonomy are among the core values [Citation11]. Diagnosing or monitoring disease without the participation of PCPs can make patients more involved in their own care [Citation23]. Also, patients can become more familiar with their own body and disease symptoms, and become increasingly able to respond appropriately to changing symptoms [Citation2,Citation16,Citation17]. Lifestyle trackers can help patients to meet their own goals [Citation12,Citation26]. On the flip side, patient autonomy can also be compromised. For lifestyle applications to enhance autonomy, patients should be able to set their own goals, it should be transparent how advice is generated, and patients should be able to choose whether or not to respond to advice [Citation26]. Some of the currently available lifestyle applications lack these characteristics. Examples include applications that automatically adjust daily goals based on a real-time feedback loop of patients’ data and online behaviour [Citation12].

Furthermore, if PCPs attribute new responsibilities to patients through employing eHealth, they should be aware that eHealth-mediated self-management could also be harmful. It can be (too) burdensome, the quality of measurements can decrease, and it can cause feelings of isolation unless eHealth offers the possibility to share worries and seek immediate contact [Citation18]. Also, in some cases the (moral) responsibility of patients for the quality of measurements, the adherence to medication or lifestyle interventions, or for choosing (not) to consult a physician or go to work can be unjustly heavy, particularly if factors that are beyond their control are insufficiently taken into account.

In addition, eHealth approaches raise questions of accessibility, equality, and inclusivity of care [Citation27]. The primary care population is highly heterogeneous and includes patients from diverse cultural backgrounds, with different socioeconomic status, health literacy, educational levels, and age groups. EHealth applications may not be equally accessible to all of these groups, either because they are unsuitable or because they are unaffordable. This may increase existing health disparities [Citation27]. Together with potential bias and discrimination in algorithms, these aspects may raise issues of representativity and the exclusion of certain groups from appropriate healthcare provision.

Roles and responsibilities of PCPs

If PCPs use eHealth, they delegate tasks to a network of technologies (e.g. to CDSSs and remote monitoring technologies) and people (e.g. designers, data scientists, patients, other caregivers). This delegation of tasks urges a reconsideration of professional responsibilities [Citation22,Citation28].

If PCPs delegate tasks to eHealth applications such as remote monitoring technologies, new responsibilities arise. For example, in applications that have a triage function, who is responsible for monitoring the monitors? Also, PCPs should be aware of potential responsibility gaps [Citation29]. Who is responsible or accountable if an alert generated by a remote monitoring technology, such as the home monitoring of cardiac rhythm, is missed: the patient, the PCP, the manufacturer, the digital infrastructure, or the data scientist? These are pressing questions, not least because current digital systems frequently have insufficient interoperability.

Additionally, the ways PCPs fulfil their duty to foster the autonomy of their patients should be reconsidered. PCPs should pay extra attention to those patients who have low health or digital literacy. Moreover, an adequate balance should be struck between using persuasive lifestyle applications in the best interest of the patient and fostering patient autonomy. Although PCPs have a duty to do good, we should be wary of a return to paternalistic medicine, for instance through strongly persuasive lifestyle applications (see previous paragraph) or through value-laden decision aids (see next paragraph) [Citation24,Citation29].

Patient–PCP relationship

When using digital technologies in primary care, a triangular patient–eHealth–PCP relationship comes into being. This means that the direct interaction between patients and PCPs can be influenced by eHealth, e.g. if face-to-face contact is replaced by an eHealth medium or if a CDSS is used to support decision-making [Citation3,Citation22]. Simultaneously, PCPs may be bypassed if eHealth is employed by patients for self-management. These changes have ethical implications, for example regarding the ‘humanness’ of primary care and the role of trust and confidentiality.

Replacement of face-to-face contact by eHealth implies that the role, importance, and meaning of human interaction in primary care must be reconsidered [Citation11,Citation23]. Attention to the human elements of care and a holistic attitude are considered core values of primary care [Citation11]. Direct human interaction between PCPs and patients has a vital role in shaping these attitudes. Furthermore, the contact between the doctor and the patient can influence how a patient responds to illness and treatment [Citation30]. Also, direct interaction is necessary for some aspects of adequate diagnosis and triage, such as an overall sensory impression of the patient (e.g. smell) and physical examination [Citation18]. The (partial) replacement of direct human interaction by eHealth may undermine these aspects of primary care. Alternatively, eHealth may lead to a reinterpretation of (or a complementary approach to) human interaction. Patients could, for example, feel more at ease discussing sensitive topics, such as psychological or sexual complaints, via a digital interface [Citation18]. Notwithstanding, attention should be paid to upholding the ‘humanness’ of primary care. To this end, it should be carefully considered what types of care can be delegated to eHealth approaches without losing quality or compromising safety (e.g. in case of triage) and how eHealth can be blended with face-to-face care [Citation4].

Furthermore, eHealth may support, strengthen, or compromise Shared Decision-Making (SDM) [Citation24]. SDM is increasingly considered of crucial importance in primary care [Citation11]. EHealth can improve SDM processes, because it can stimulate self-management and self-monitoring and it makes medical knowledge more widely available. A combination of increased self-knowledge and medical knowledge can enable patients to identify and articulate their goals, values, and preferences, which spurs the SDM process. Furthermore, if algorithms generate personalised risk scores or treatment options, these outcomes are quantified and made explicit, which could be the starting point of an SDM discussion. Alternatively, eHealth may compromise SDM. CDSSs can generate seemingly objective and personalised rankings, for instance, regarding the preferred treatment modality [Citation24]. However, these rankings are based on general decision principles that do not necessarily align with the patient’s preferences. Although patients can decide to deviate from these recommendations, the apparent objectivity of algorithms may make this problematic. Also, in SDM the values of patients should ideally serve as the starting point for the process, rather than an afterthought [Citation24].

Lastly, eHealth can affect the notions of trust and confidentiality within the patient-physician relationship. Trust in clinical judgment can be either enhanced because CDSSs are (seemingly) objective, or decreased because of the opaque decision-making process [Citation23]. Additionally, most eHealth applications are developed by commercial companies whose business model is frequently based on the collection, use, and sale of data [Citation6]. This reality changes the nature of confidentiality in primary care. It should be discussed to what extent PCPs have a duty to judge the trustworthiness of commercially available apps and to guide their patients in responsible and safe use of these applications.

Towards morally responsible innovation and implementation of eHealth

The ethical implications we have discussed form an essential first step towards developing practice-specific ethical guidance for the implementation and use of eHealth in primary care.

We suggest that, to move forward, a dedicated ethics research agenda should be set up addressing empirical, conceptual, and normative questions (Table 2). Empirical questions encourage an investigation of how eHealth affects primary care (e.g. how specific tools enable patients to self-manage their conditions and how this influences their perception of autonomy). Conceptual questions focus on the reinterpretation of moral principles and norms in medical ethics in the digital era. Lastly, normative questions concern the desirability and moral acceptability of specific eHealth practices. The examples of questions in Table 2 serve as a stepping-stone to set up a dedicated ethics research agenda for different types of eHealth.

Table 2. Examples of potential empirical, conceptual, and normative questions.

Moreover, we contend that, because eHealth technologies fundamentally change primary care practice and values, these ethical questions should already receive attention during the development, design, and implementation process. This goal can be achieved through ‘parallel ethics research’ [Citation19,Citation31]. In this approach, ethicists, together with other stakeholders (e.g. PCPs, patients, biotech companies, data scientists), identify and evaluate the ethical challenges associated with an eHealth technology in parallel to its development and implementation [Citation19,Citation31]. The output of this interdisciplinary ethical analysis can directly influence the design of the eHealth intervention. If ethical implications specific to the eHealth intervention are anticipated (e.g. a potential compromise of patient autonomy) and desirable use is defined (e.g. stimulating patient autonomy and improving lifestyle), the eHealth intervention can be designed such that the ethical hurdles are avoided and desirable use is propagated. Additionally, conditions can be formulated to ensure that the technology is embedded responsibly in the healthcare system (e.g. conditions for shaping blended care and improving SDM).

Additionally, we advise that formal spaces be created for PCPs to raise awareness of the impacts of eHealth on their own and their patients’ norms and values when using eHealth approaches in their day-to-day practice. To this end, we recommend the setup of specific training and learning opportunities. In a follow-up paper in this eHealth series, we will further elaborate on the newly required professional skills and educational needs.

Concluding remarks

In this paper, we have taken a first step towards morally responsible use of eHealth in primary care by providing an exploration of its ethical implications in primary care practice. EHealth may not necessarily revolutionise primary care, but it does bring about necessary reconfigurations of the practice. We have identified four areas of change that need attention, specifying the ethical implications that result from each of them. Our analysis has shown that eHealth approaches call for a reconsideration of some of the core moral values inherent to primary care. We have also delineated the boundaries for a much-needed dedicated ethics research agenda, providing examples of empirical, conceptual, and normative research questions specific to the use of eHealth in primary care. This is an essential first step towards the formulation of practice-based and practice-specific ethical recommendations for the clinical implementation and use of eHealth in primary care.

Disclosure statement

The authors declare no conflict of interest.

References

  • Eysenbach G. What is e-health? J Med Internet Res. 2001;3:e20.
  • Tan R , Cvetkovski B , Kritikos V , et al. Identifying an effective mobile health application for the self-management of allergic rhinitis and asthma in Australia. J Asthma. 2019;24:1–12.
  • Pinnock H , Hanley J , McCloughan L , et al. Effectiveness of telemonitoring integrated into existing clinical services on hospital admission for exacerbation of chronic obstructive pulmonary disease: researcher blind, multicentre, randomised controlled trial. BMJ. 2013;347:f6070.
  • Van der Kleij R , Kasteleyn MJ , Meijer E , et al. eHealth in primary care: hype or revolution? Eur J Gen Pract. 2019. DOI:10.1080/13814788.2019.1658190
  • Castle-Clarke S , Imison C. The Digital Patient: Transforming Primary Care? 2016 [accessed 2019 Oct]. Available from: https://www.nuffieldtrust.org.uk/files/2019-08/nt-the-digital-patient-web-update-august-2019.pdf
  • Sharon T. The Googlization of health research: from disruptive innovation to disruptive ethics. Per Med. 2016;13(6):563–574.
  • Shaw T , McGregor D , Brunner M , et al. What is eHealth? Development of a conceptual model for ehealth: qualitative study with key informants. J Med Internet Res. 2017;19(10):e324.
  • Lupton D. The digitally engaged patient: self-monitoring and self-care in the digital health era. Soc Theory Health. 2013;11(3):256–270.
  • Moerenhout T , Devisch I , Cornelis GC. E-health beyond technology: analyzing the paradigm shift that lies beneath. Med Health Care and Philos. 2018;21(1):31–41.
  • Topol EJ. The patient will see you now. The future of medicine is in your hands. New York (NY): Basic Books; 2015.
  • WONCA Europe . The European definition of general practice/family medicine. 2011 ed. [accessed 2019 Aug 19]. Available from: https://www.woncaeurope.org/sites/default/files/documents/Definition%203rd%20ed%202011%20with%20revised%20wonca%20tree.pdf
  • Lanzing M. “Strongly recommended” revisiting decisional privacy to judge hypernudging in self-tracking technologies. Philos Technol. 2019;32:549–568.
  • Chan S , Torous J , Hinton L , et al. Towards a framework for evaluating mobile mental health apps. Telemed e-Health. 2015;21(12):1038–1041.
  • Kleinpeter E. Four ethical issues of eHealth. IRBM. 2017;38(5):245–249.
  • General Data Protection Regulation (GDPR). Article 25: Data Protection by Design and by Default. 2016 [accessed 2019 Aug 19]. Available from: https://gdpr-info.eu/art-25-gdpr/
  • Sharon T. Self-tracking for health and the quantified self: re-articulating autonomy, solidarity, and authenticity in an age of personalized healthcare. Philos Technol. 2017;30(1):93–121.
  • Sharon T. Let’s move beyond critique-but please let’s not depoliticize the debate. Am J Bioeth. 2018;18(2):20–22.
  • Lucivero F , Jongsma KR. A mobile revolution for healthcare? Setting the agenda for bioethics. J Med Ethics. 2018;44(10):685–689.
  • van der Burg S. Taking the “Soft Impacts” of technology into account: broadening the discourse in research practice. Soc Epistemol. 2009;23(3-4):301–316.
  • Verbeek PP. Accompanying technology: philosophy of technology after the ethical turn. Techné. 2010;14(1):49–54.
  • Siegel Sommers L , Launer J. Clinical uncertainty in primary care. The challenge of collaborative engagement. New York: Springer-Verlag; 2013.
  • Mittelstadt BD , Allo P , Taddeo M , et al. The ethics of algorithms: mapping the debate. Big Data Soc. 2016;3(2). DOI:10.1177/2053951716679679
  • Preliminary Study on the Ethics of Artificial Intelligence . United Nations Educational, Scientific and Cultural Organization (UNESCO) and the World Commission on the Ethics of Scientific Knowledge and Technology (COMEST); 2019 [accessed 2019 Aug 19]. SHS/COMEST/EXTWG-ETHICS-AI/2019/1.
  • McDougall RJ. Computer knows best? The need for value-flexibility in medical AI. J Med Ethics. 2019;45(3):156–169.
  • Roeser S. Risk, technology, and moral emotions. New York (NY): Routledge; 2018.
  • Thaler R , Sunstein C. Nudge: improving decisions about health, wealth, and happiness. New Haven (CT): Yale University Press; 2008.
  • Bodie G , Dutta M. Understanding health literacy for strategic health marketing: eHealth literacy, health disparities, and the digital divide. Heal Mark Q. 2008;25(1-2):175–203.
  • Nyholm S. Attributing agency to automated systems: reflection on human-robot collaborations and responsibility loci. Sci Eng Ethics. 2018;24:1201–1219.
  • Avitzour D , Barnea R , Avitzour E , et al. Nudging in the clinic: the ethical implications of differences in doctors’ and patients’ point of view. J Med Ethics. 2019; 45(3):183–189.
  • Kelley JM , Kraft-Todd G , Schapira L , et al. The influence of the patient-clinician relationship on healthcare outcomes: a systematic review and meta-analysis of randomized controlled trials. PLoS One. 2014;9(4):e94207.
  • Jongsma KR , Bredenoord AL , Lucivero F. Digital medicine: an opportunity to revisit the role of bioethicists. Am J Bioeth. 2018;18(9):69–70.
  • Wu X , Guo X , Zhang Z. The efficacy of mobile phone apps for lifestyle modification in diabetes: systematic review and meta-analysis. JMIR Mhealth Uhealth. 2019;15(7):e12297.
  • Jeminiwa R , Hohmann L , Jingjing Q , et al. Impact of eHealth on medication adherence among patients with asthma: a systematic review and meta-analysis. Respir Med. 2019;149:59–68.