Target Article

A Personalized Patient Preference Predictor for Substituted Judgments in Healthcare: Technically Feasible and Ethically Desirable

REFERENCES

  • Allen, J., B. D. Earp, J. J. Koplin, and D. Wilkinson. 2023. Consent-GPT: Is it ethical to delegate procedural consent to conversational AI? Journal of Medical Ethics. Online ahead of print. doi: 10.1136/jme-2023-109347.
  • Askell, A., Y. Bai, A. Chen, D. Drain, D. Ganguli, T. Henighan, A. Jones, N. Joseph, B. Mann, N. DasSarma, et al. 2021. A general language assistant as a laboratory for alignment. arXiv preprint (1):1–48. doi: 10.48550/arXiv.2112.00861.
  • Bakker, M., M. Chadwick, H. Sheahan, M. Tessler, L. Campbell-Gillingham, J. Balaguer, N. McAleese, A. Glaese, J. Aslanides, M. M. Botvinick, et al. 2022. Fine-tuning language models to find agreement among humans with diverse preferences. Advances in Neural Information Processing Systems 35:38176–38189.
  • Benzinger, L., J. Epping, F. Ursin, and S. Salloch. 2023. Artificial intelligence to support ethical decision-making for incapacitated patients: A survey among German anesthesiologists and internists. Preprint available at https://www.researchgate.net/publication/374530025.
  • Berger, J. T. 2005. Patients’ interests in their family members’ well-being: An overlooked, fundamental consideration within substituted judgments. The Journal of Clinical Ethics 16 (1):3–10. doi: 10.1086/JCE200516101.
  • Biller-Andorno, N., A. Ferrario, S. Joebges, T. Krones, F. Massini, P. Barth, G. Arampatzis, and M. Krauthammer. 2022. AI support for ethical decision-making around resuscitation: Proceed with care. Journal of Medical Ethics 48 (3):175–183. doi: 10.1136/medethics-2020-106786.
  • Biller-Andorno, N., and A. Biller. 2019. Algorithm-aided prediction of patient preferences – an ethics sneak peek. The New England Journal of Medicine 381 (15):1480–1485. doi: 10.1056/NEJMms1904869.
  • Bleher, H., and M. Braun. 2022. Diffused responsibility: Attributions of responsibility in the use of AI-driven clinical decision support systems. AI and Ethics 2 (4):747–761. doi: 10.1007/s43681-022-00135-x.
  • Braun, M. 2021. Represent me: Please! Towards an ethics of digital twins in medicine. Journal of Medical Ethics 47 (6):394–400. doi: 10.1136/medethics-2020-106134.
  • Braun, M. 2022. Ethics of digital twins: Four challenges. Journal of Medical Ethics 48 (9):579–580. doi: 10.1136/medethics-2021-107675.
  • Brock, D. W. 2014. Reflections on the patient preference predictor proposal. The Journal of Medicine and Philosophy 39 (2):153–60. doi: 10.1093/jmp/jhu002.
  • Christian, B. 2020. The alignment problem. New York: W. W. Norton & Company.
  • Church, K. W., Z. Chen, and Y. Ma. 2021. Emerging trends: A gentle introduction to fine-tuning. Natural Language Engineering 27 (6):763–778. doi: 10.1017/S1351324921000322.
  • Ciroldi, M., A. Cariou, C. Adrie, D. Annane, V. Castelain, Y. Cohen, A. Delahaye, L. M. Joly, R. Galliot, M. Garrouste-Orgeas, et al. 2007. Ability of family members to predict patient’s consent to critical care research. Intensive Care Medicine 33 (5):807–813. doi: 10.1007/s00134-007-0582-6.
  • de Kerckhove, D. 2021. The personal digital twin, ethical considerations. Philosophical Transactions. Series A, Mathematical, Physical, and Engineering Sciences 379 (2207):20200367. doi: 10.1098/rsta.2020.0367.
  • Demaree-Cotton, J., B. D. Earp, and J. Savulescu. 2022. How to use AI ethically for ethical decision-making. The American Journal of Bioethics 22 (7):1–3. doi: 10.1080/15265161.2022.2075968.
  • Ditto, P. H., and C. J. Clark. 2014. Predicting end-of-life treatment preferences: Perils and practicalities. The Journal of Medicine and Philosophy 39 (2):196–204. doi: 10.1093/jmp/jhu007.
  • Dresser, R. 2014. Law, ethics, and the patient preference predictor. The Journal of Medicine and Philosophy 39 (2):178–186. doi: 10.1093/jmp/jhu004.
  • Earp, B. D. 2022. Meta-surrogate decision making and artificial intelligence. Journal of Medical Ethics 48 (5):287–289. doi: 10.1136/medethics-2022-108307.
  • Earp, B. D., J. Demaree-Cotton, M. Dunn, V. Dranseika, J. A. C. Everett, A. Feltz, G. Geller, I. R. Hannikainen, L. A. Jansen, J. Knobe, et al. 2020. Experimental philosophical bioethics. AJOB Empirical Bioethics 11 (1):30–33. doi: 10.1080/23294515.2020.1714792.
  • Earp, B. D., J. Lewis, V. Dranseika, and I. R. Hannikainen. 2021. Experimental philosophical bioethics and normative inference. Theoretical Medicine and Bioethics 42 (3-4):91–111. doi: 10.1007/s11017-021-09546-z.
  • Earp, B. D., J. Lewis, J. A. Skorburg, I. Hannikainen, and J. A. C. Everett. 2022. Experimental philosophical bioethics of personal identity. In Experimental philosophy of identity and the self, by K. Tobia, 183–202. London: Bloomsbury.
  • Ferrario, A., S. Gloeckler, and N. Biller-Andorno. 2023a. Ethics of the algorithmic prediction of goal of care preferences: From theory to practice. Journal of Medical Ethics 49 (3):165–174. doi: 10.1136/jme-2022-108371.
  • Ferrario, A., S. Gloeckler, and N. Biller-Andorno. 2023b. AI knows best? Avoiding the traps of paternalism and other pitfalls of AI-based patient preference prediction. Journal of Medical Ethics 49 (3):185–186. doi: 10.1136/jme-2023-108945.
  • Gabriel, I. 2020. Artificial intelligence, values, and alignment. Minds and Machines 30 (3):411–437. doi: 10.1007/s11023-020-09539-2.
  • Giubilini, A., and J. Savulescu. 2018. The artificial moral advisor. The “ideal observer” meets artificial intelligence. Philosophy & Technology 31 (2):169–188. doi: 10.1007/s13347-017-0285-z.
  • Gloeckler, S., A. Ferrario, and N. Biller-Andorno. 2022. An ethical framework for incorporating digital technology into advance directives: Promoting informed advance decision making in healthcare. The Yale Journal of Biology and Medicine 95 (3):349–353.
  • Houts, R. M., W. D. Smucker, J. A. Jacobson, P. H. Ditto, and J. H. Danks. 2002. Predicting elderly outpatients’ life-sustaining treatment preferences over time: The majority rules. Medical Decision Making: An International Journal of the Society for Medical Decision Making 22 (1):39–52. doi: 10.1177/0272989X0202200104.
  • Hubbard, R., and J. Greenblum. 2020. Surrogates and artificial intelligence: Why AI trumps family. Science and Engineering Ethics 26 (6):3217–3227. doi: 10.1007/s11948-020-00266-6.
  • Jardas, E. J., D. Wasserman, and D. Wendler. 2022. Autonomy-based criticisms of the patient preference predictor. Journal of Medical Ethics 48 (5):304–310. doi: 10.1136/medethics-2021-107629.
  • John, S. 2014. Patient preference predictors, apt categorization, and respect for autonomy. The Journal of Medicine and Philosophy 39 (2):169–177. doi: 10.1093/jmp/jhu008.
  • John, S. D. 2018. Messy autonomy: Commentary on patient preference predictors and the problem of naked statistical evidence. Journal of Medical Ethics 44 (12):864. doi: 10.1136/medethics-2018-104941.
  • Jongsma, K. R., and S. van de Vathorst. 2015. Beyond competence: Advance directives in dementia research. Monash Bioethics Review 33 (2-3):167–180. doi: 10.1007/s40592-015-0034-y.
  • Jost, L. A. 2023. Affective experience as a source of knowledge. PhD Thesis, University of St Andrews. doi: 10.17630/sta/387.
  • Kang, W. C., J. Ni, N. Mehta, M. Sathiamoorthy, L. Hong, E. Chi, and D. Z. Cheng. 2023. Do LLMs understand user preferences? Evaluating LLMs on user rating prediction. arXiv preprint (1):1–11. doi: 10.48550/arXiv.2305.06474.
  • Kenton, Z., T. Everitt, L. Weidinger, I. Gabriel, V. Mikulik, and G. Irving. 2021. Alignment of language agents. arXiv preprint (1):1–18. doi: 10.48550/arXiv.2103.14659.
  • Kim, S. Y. 2014. Improving medical decisions for incapacitated persons: Does focusing on ‘accurate predictions’ lead to an inaccurate picture? The Journal of Medicine and Philosophy 39 (2):187–195. doi: 10.1093/jmp/jhu010.
  • Kim, J., and B. Lee. 2023. AI-augmented surveys: Leveraging large language models for opinion prediction in nationally representative surveys. arXiv preprint (1):1–18. doi: 10.48550/arXiv.2305.09620.
  • Kirk, H. R., B. Vidgen, P. Röttger, and S. A. Hale. 2023. Personalisation within bounds: A risk taxonomy and policy framework for the alignment of large language models with personalized feedback. arXiv preprint (1):1–37. doi: 10.48550/arXiv.2303.05453.
  • Lamanna, C., and L. Byrne. 2018. Should artificial intelligence augment medical decision making? The case for an autonomy algorithm. AMA Journal of Ethics 20 (9):E902–E910. doi: 10.1001/amajethics.2018.902.
  • Lewis, J., J. Demaree-Cotton, and B. D. Earp. 2023. Bioethics, experimental approaches. In Encyclopedia of the philosophy of law and social philosophy, by M. Sellers and S. Kirste. Dordrecht: Springer. doi: 10.1007/978-94-007-6730-0_1053-1.
  • Lindemann, H., and J. L. Nelson. 2014. The surrogate’s authority. The Journal of Medicine and Philosophy 39 (2):161–168. doi: 10.1093/jmp/jhu003.
  • Mainz, J. T. 2023. The patient preference predictor and the objection from higher-order preferences. Journal of Medical Ethics 49 (3):221–222. doi: 10.1136/jme-2022-108427.
  • O'Neil, C. 2022. Commentary on ‘Autonomy-based criticisms of the patient preference predictor’. Journal of Medical Ethics 48 (5):315–316. doi: 10.1136/medethics-2022-108288.
  • Perry, J. E., L. R. Churchill, and H. S. Kirshner. 2005. The Terri Schiavo case: Legal, ethical, and medical perspectives. Annals of Internal Medicine 143 (10):744–748. doi: 10.7326/0003-4819-143-10-200511150-00012.
  • Porsdam Mann, S., B. D. Earp, N. Møller, V. Suren, and J. Savulescu. 2023. AUTOGEN: A personalized large language model for academic enhancement—Ethics and proof of principle. The American Journal of Bioethics 23 (10):28–41. doi: 10.1080/15265161.2023.2233356.
  • Rid, A., and D. Wendler. 2014a. Treatment decision making for incapacitated patients: Is development and use of a patient preference predictor feasible? The Journal of Medicine and Philosophy 39 (2):130–152. doi: 10.1093/jmp/jhu006.
  • Rid, A., and D. Wendler. 2014b. Use of a patient preference predictor to help make medical decisions for incapacitated patients. The Journal of Medicine and Philosophy 39 (2):104–129. doi: 10.1093/jmp/jhu001.
  • Ryan, M. 2004. Discrete choice experiments in health care. BMJ (Clinical Research ed.) 328 (7436):360–361. doi: 10.1136/bmj.328.7436.360.
  • Sacchi, L., S. Rubrichi, C. Rognoni, S. Panzarasa, E. Parimbelli, A. Mazzanti, C. Napolitano, S. G. Priori, and S. Quaglini. 2015. From decision to shared-decision: Introducing patients’ preferences into clinical decision analysis. Artificial Intelligence in Medicine 65 (1):19–28. doi: 10.1016/j.artmed.2014.10.004.
  • Savulescu, J., and H. Maslen. 2015. Moral enhancement and artificial intelligence: Moral AI? In Beyond artificial intelligence. Topics in intelligent engineering and informatics, by J. Romportl, E. Zackova, and J. Kelemen, vol. 9, 79–95. Cham: Springer. doi: 10.1007/978-3-319-09668-1_6.
  • Schwartz, S. M., K. Wildenhaus, A. Bucher, and B. Byrd. 2020. Digital twins and the emerging science of self: Implications for digital health experience design and “small” data. Frontiers in Computer Science 2:31. doi: 10.3389/fcomp.2020.00031.
  • Schwitzgebel, E., D. Schwitzgebel, and A. Strasser. 2023. Creating a large language model of a philosopher. arXiv preprint (1):1–36. doi: 10.48550/arXiv.2302.01339.
  • Senthilnathan, I., and W. Sinnott-Armstrong. Forthcoming. Patient preference predictors: Options, implementations, and policies. Working paper.
  • Shalowitz, D. I., E. Garrett-Mayer, and D. Wendler. 2006. The accuracy of surrogate decision makers: A systematic review. Archives of Internal Medicine 166 (5):493–497. doi: 10.1001/archinte.166.5.493.
  • Shalowitz, D. I., E. Garrett-Mayer, and D. Wendler. 2007. How should treatment decisions be made for incapacitated patients, and why? PLoS Medicine 4 (3):E35. doi: 10.1371/journal.pmed.0040035.
  • Sharadin, N. P. 2018. Patient preference predictors and the problem of naked statistical evidence. Journal of Medical Ethics 44 (12):857–862. doi: 10.1136/medethics-2017-104509.
  • Silveira, M. J. 2022. Advance care planning and advance directives. UpToDate. https://www.uptodate.com/contents/advance-care-planning-and-advance-directives.
  • Sinnott-Armstrong, W., and J. A. Skorburg. 2021. How AI can aid bioethics. Journal of Practical Ethics 9 (1):1–22. doi: 10.3998/jpe.1175.
  • Smucker, W. D., R. M. Houts, J. H. Danks, P. H. Ditto, A. Fagerlin, and K. M. Coppola. 2000. Modal preferences predict elderly patients’ life-sustaining treatment choices as well as patients’ chosen surrogates do. Medical Decision Making: An International Journal of the Society for Medical Decision Making 20 (3):271–280. doi: 10.1177/0272989X0002000303.
  • Stocking, C. B., G. W. Hougham, D. D. Danner, M. B. Patterson, P. J. Whitehouse, and G. A. Sachs. 2006. Speaking of research advance directives: Planning for future research participation. Neurology 66 (9):1361–1366. doi: 10.1212/01.wnl.0000216424.66098.55.
  • Tomasello, M., M. Carpenter, J. Call, T. Behne, and H. Moll. 2005. Understanding and sharing intentions: The origins of cultural cognition. The Behavioral and Brain Sciences 28 (5):675–691. doi: 10.1017/S0140525X05000129.
  • Toomey, J., J. Lewis, I. Hannikainen, and B. D. Earp. 2023. Advance medical decision making differs across first- and third-person perspectives. PsyArXiv. https://osf.io/preprints/psyarxiv/pcrmd/
  • Tooming, U., and K. Miyazono. 2023. Affective forecasting and substantial self-knowledge. In Emotional self-knowledge. New York: Routledge. doi: 10.4324/9781003310945-3.
  • van Kinschot, C. M. J., V. R. Soekhai, E. W. de Bekker-Grob, W. E. Visser, R. P. Peeters, T. M. van Ginhoven, and C. van Noord. 2021. Preferences of patients and clinicians for treatment of Graves’ disease: A discrete choice experiment. European Journal of Endocrinology 184 (6):803–812. doi: 10.1530/EJE-20-1490.
  • Wasserman, D., and D. Wendler. 2023. Response to commentaries: ‘Autonomy-based criticisms of the patient preference predictor’. Journal of Medical Ethics 49 (8):580–582. doi: 10.1136/jme-2022-108707.
  • Wendler, D., B. Wesley, M. Pavlick, and A. Rid. 2016. A new method for making treatment decisions for incapacitated patients: What do patients think about the use of a patient preference predictor? Journal of Medical Ethics 42 (4):235–241. doi: 10.1136/medethics-2015-103001.
  • Wu, L., Y. Chen, K. Shen, X. Guo, H. Gao, S. Li, J. Pei, and B. Long. 2023. Graph neural networks for natural language processing: A survey. Foundations and Trends® in Machine Learning 16 (2):119–328. doi: 10.1561/2200000096.