Correspondences

AUTOGEN and the Ethics of Co-Creation with Personalized LLMs—Reply to the Commentaries

REFERENCES

  • Allen, J. W., B. D. Earp, J. Koplin, and D. Wilkinson. 2024. Consent-GPT: Is it ethical to delegate procedural consent to conversational AI? Journal of Medical Ethics 50:77–83. doi:10.1136/jme-2023-109347.
  • Bakker, M. A., M. J. Chadwick, H. R. Sheahan, M. H. Tessler, L. Campbell-Gillingham, J. Balaguer, N. McAleese, A. Glaese, J. Aslanides, M. M. Botvinick, et al. 2022. Fine-tuning language models to find agreement among humans with diverse preferences. arXiv. doi:10.48550/arXiv.2211.15006.
  • Earp, B. D., S. Porsdam Mann, M. A. Khan, Y. Chu, J. Savulescu, P. Liu, and I. Hannikainen. Under review. Personalizing AI reduces credit-blame asymmetries across cultures.
  • Earp, B. D., S. Porsdam Mann, J. Allen, S. Salloch, V. Suren, K. Jongsma, M. Braun, D. Wilkinson, W. Sinnott-Armstrong, A. Rid, et al. 2024. A personalized patient preference predictor for substituted judgment in healthcare: Technically feasible and ethically desirable. The American Journal of Bioethics. Online ahead of print. doi:10.1080/15265161.2023.2296402.
  • Erler, A. 2023. Publish with AUTOGEN or perish? Some pitfalls to avoid in the pursuit of academic enhancement via personalized large language models. The American Journal of Bioethics 23 (10):94–6. doi:10.1080/15265161.2023.2250291.
  • Ganguli, D., A. Askell, N. Schiefer, T. I. Liao, K. Lukošiūtė, A. Chen, A. Goldie, A. Mirhoseini, C. Olsson, D. Hernandez, et al. 2023. The capacity for moral self-correction in large language models. arXiv. doi:10.48550/arXiv.2302.07459.
  • Grice, H. P. 1968. Utterer’s meaning, sentence-meaning, and word-meaning. Foundations of Language 4 (3):225–42.
  • Grice, H. P. 1969. Utterer’s meaning and intention. The Philosophical Review 78 (2):147–77. doi:10.2307/2184179.
  • Kar, R. B., and M. J. Radin. 2019. Pseudo-contract and shared meaning analysis. Harvard Law Review 132 (4):1135–219.
  • Laacke, S., and C. Gauckler. 2023. Why personalized large language models fail to do what ethics is all about. The American Journal of Bioethics 23 (10):60–3. doi:10.1080/15265161.2023.2250292.
  • Liedtke, W. 2004. Rembrandt’s “workshop” revisited. Oud Holland – Quarterly for Dutch Art History 117 (1–2):48–73. doi:10.1163/187501704X00278.
  • McMillan, J. 2023. Generative AI and ethical analysis. The American Journal of Bioethics 23 (10):42–4. doi:10.1080/15265161.2023.2249852.
  • Nyholm, S. 2023a. Artificial intelligence and human enhancement: Can AI technologies make us more (artificially) intelligent? Cambridge Quarterly of Healthcare Ethics 33 (1):76–88. doi:10.1017/S0963180123000464.
  • Nyholm, S. 2023b. Is academic enhancement possible by means of generative AI-based digital twins? The American Journal of Bioethics 23 (10):44–7. doi:10.1080/15265161.2023.2249846.
  • Ostertag, G. 2023. Meaning by courtesy: LLM-generated texts and the illusion of content. The American Journal of Bioethics 23 (10):91–3. doi:10.1080/15265161.2023.2249851.
  • Pavlick, E. 2023. Symbols and grounding in large language models. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 381:1–19. doi:10.1098/rsta.2022.0041.
  • Piantadosi, S. T., and F. Hill. 2022. Meaning without reference in large language models. arXiv. Accessed January 12, 2024. https://arxiv.org/abs/2208.02957v2.
  • Porsdam Mann, S., P. de Lora Deltoro, T. Cochrane, and C. Mitchell. 2018. Is the use of Modafinil, a pharmacological cognitive enhancer, cheating? Ethics and Education 13 (2):251–67. doi:10.1080/17449642.2018.1443050.
  • Porsdam Mann, S., B. D. Earp, N. Møller, S. Vynn, and J. Savulescu. 2023a. AUTOGEN: A personalized large language model for academic enhancement—ethics and proof of principle. The American Journal of Bioethics 23 (10):28–41. doi:10.1080/15265161.2023.2233356.
  • Porsdam Mann, S., B. D. Earp, S. Nyholm, J. Danaher, N. Møller, H. Bowman-Smart, J. Hatherley, J. Koplin, M. Plozza, D. Rodger, et al. 2023b. Generative AI entails a credit–blame asymmetry. Nature Machine Intelligence 5 (5):472–5. doi:10.1038/s42256-023-00653-1.
  • Resnik, D. B., and M. Hosseini. 2023. The impact of AUTOGEN and similar fine-tuned large language models on the integrity of scholarly writing. The American Journal of Bioethics 23 (10):50–2. doi:10.1080/15265161.2023.2250276.
  • Sorin, V., D. Brin, B. Yiftach, E. Konen, A. Charney, G. Nadkarni, and E. Klang. 2023. Large language models (LLMs) and empathy: A systematic review. medRxiv 2023.08.07.23293769.
  • Tietze, H. 1939. Master and workshop in the Venetian renaissance. Parnassus 11 (8):34–45. doi:10.2307/772019.
  • United Nations. 1969. Vienna Convention on the Law of Treaties (adopted 23 May 1969, entry into force 27 January 1980) 1155 UNTS 331 (VCLT).
  • Van Veen, D., C. Van Uden, L. Blankemeier, J.-B. Delbrouck, A. Aali, C. Bluethgen, A. Pareek, M. Polacin, E. P. Reis, A. Seehofnerová, et al. 2023. Clinical text summarization: Adapting large language models can outperform human experts. Research Square. doi:10.21203/rs.3.rs-3483777/v1.
  • Varma, S. 2023. Large language models and inclusivity in bioethics scholarship. The American Journal of Bioethics 23 (10):105–7. doi:10.1080/15265161.2023.2250286.
  • Zhang, T., F. Ladhak, E. Durmus, P. Liang, K. McKeown, and T. B. Hashimoto. 2023. Benchmarking large language models for news summarization. arXiv. Accessed January 11, 2024. https://arxiv.org/abs/2301.13848v1.
  • Zohny, H. 2023. Reimagining scholarship: A response to the ethical concerns of AUTOGEN. The American Journal of Bioethics 23 (10):96–9. doi:10.1080/15265161.2023.2250315.
  • Zohny, H., S. Porsdam Mann, B. D. Earp, and J. McMillan. 2024. Generative AI and medical ethics: The state of play. Journal of Medical Ethics. https://www.researchgate.net/publication/376757174_Generative_AI_and_Medical_Ethics_The_State_of_Play.