Open Peer Commentaries

Personalized Patient Preference Predictors Are Neither Technically Feasible nor Ethically Desirable

Pages 62-65 | Published online: 24 Jun 2024
This article refers to:
A Personalized Patient Preference Predictor for Substituted Judgments in Healthcare: Technically Feasible and Ethically Desirable

DISCLOSURE STATEMENT

No potential conflict of interest was reported by the author(s).

Notes

1 Earp et al. discuss the autonomy problem, but they do not say how their proposal aims to avoid what elsewhere I have called the "scope" and "multiple models" problems (Sharadin 2019). Indeed, P4s appear to exacerbate both. The scope problem: Earp et al. suggest a P4 might be implemented using a system trained on an individual's "emails, blog posts, or social media posts […] or even Facebook 'liking' activity" (6). But which emails and posts? Only public ones? Which social media posts? Which platforms? The multiple models problem: Earp et al. suggest at least five different implementations of the P4 (6), and there are multiple reasonable ways of executing each of these implementations, each of which will vary in its predictions concerning patient care. What principled way can there be for clinicians and other healthcare providers to decide between these models? Worse, the kinds of ML models Earp et al. propose to use (LLMs) are particularly prone to producing widely divergent outputs depending on technical design choices made in deploying the model, such as those involving the inference procedure and other "background conditions." For discussion, see Harding and Sharadin (2024). I say more about this issue below, in Section "TECHNICAL ISSUES."

2 Earp et al.'s view seems to be that the primary technical challenge to using LLMs as P4s involves acquiring (enough, good) data: "In general, the primary function of LLMs is prediction: given data of a sufficient quality and relevance, prima facie LLMs should be able to predict medical preferences, too" (9). I agree that (enough, good) data will be a barrier to using any LLM as a P4. Here, I'll set this (very big) issue to one side.

3 For related work on the difficulties associated with evaluating the capabilities of LLMs, see Harding and Sharadin (2024).
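The sensitivity to "background conditions" flagged in Note 1 can be illustrated with a toy sketch. The following is a hypothetical example, not drawn from Earp et al. or from any real P4 system: it shows how one common inference-time choice, the sampling temperature, changes the output distribution of a model even when the model's underlying scores (logits) for two candidate predictions are held fixed. The option labels and logit values are invented for illustration.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Rescale logits by temperature, then normalize to probabilities.

    Lower temperatures concentrate probability mass on the highest-scoring
    option; higher temperatures flatten the distribution toward uniform.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical model scores for two candidate predictions about a
# patient's preference (illustrative numbers only).
options = ["continue treatment", "withdraw treatment"]
logits = [1.0, 0.8]

low_t = softmax_with_temperature(logits, temperature=0.1)
high_t = softmax_with_temperature(logits, temperature=10.0)

# Same model, same data, same scores: at low temperature the prediction
# is near-deterministic; at high temperature it is close to a coin flip.
print(options[0], "at T=0.1:", round(low_t[0], 3))
print(options[0], "at T=10:", round(high_t[0], 3))
```

The point of the sketch is that nothing in the training data adjudicates between these deployments: the choice of temperature (and of decoding procedure generally) is made by whoever operates the system, which is one concrete form the multiple models problem takes.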

Additional information

Funding

The author(s) reported there is no funding associated with the work featured in this article.
