References
- Aaronson, S. 2022, November 29. “My AI Safety Lecture for UT Effective Altruism.” Shtetl-Optimized. Accessed 11 January 2023. https://scottaaronson.blog/?p=6823.
- Adomaitis, L., A. Grinbaum, and D. Lenzi. 2022. TechEthos D2.2: Identification and Specification of Potential Ethical Issues and Impacts and Analysis of Ethical Issues of Digital Extended Reality, Neurotechnologies, and Climate Engineering. CEA Paris Saclay. Accessed 25 October 2022. https://hal-cea.archives-ouvertes.fr/cea-03710862.
- AI Safety Summit. 2023, November 1. The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023. GOV.UK. Accessed 23 December 2023. https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023.
- Anthropic. 2023, July 26. “Frontier Threats Red Teaming for AI Safety.” Accessed 31 July 2023. https://www.anthropic.com/index/frontier-threats-red-teaming-for-ai-safety.
- Atlas, R. M. 2002. “National Security and the Biological Research Community.” Science 298 (5594): 753–754. https://doi.org/10.1126/science.1078329.
- Bagdasaryan, E., and V. Shmatikov. 2022. “Spinning Language Models: Risks of Propaganda-As-A-Service and Countermeasures.” In 2022 IEEE Symposium on Security and Privacy (SP), San Francisco, CA, USA, 769–786. https://doi.org/10.1109/SP46214.2022.9833572.
- Berg, P., D. Baltimore, S. Brenner, R. O. Roblin, and M. F. Singer. 1975. “Asilomar Conference on Recombinant DNA Molecules.” Science 188 (4192): 991–994. https://doi.org/10.1126/science.1056638.
- Bommasani, R., D. A. Hudson, E. Adeli, R. Altman, S. Arora, S. von Arx, et al. 2022, July 12. On the Opportunities and Risks of Foundation Models. arXiv. https://doi.org/10.48550/arXiv.2108.07258
- Bostrom, N., and E. Yudkowsky. 2014. “The Ethics of Artificial Intelligence.” In The Cambridge Handbook of Artificial Intelligence, edited by K. Frankish and W. M. Ramsey, 316–334. Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9781139046855.020
- Brundage, M., S. Avin, J. Clark, H. Toner, P. Eckersley, B. Garfinkel, et al. 2018. The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. Accessed 4 May 2023. https://www.repository.cam.ac.uk/handle/1810/275332.
- C, D., and J, P. 2023, March 14. “ChatGPT and Large Language Models: What’s the Risk?” National Cyber Security Centre. Accessed 5 May 2023. https://www.ncsc.gov.uk/blog-post/chatgpt-and-large-language-models-whats-the-risk.
- Caliskan, A., J. J. Bryson, and A. Narayanan. 2017. “Semantics Derived Automatically from Language Corpora Contain Human-Like Biases.” Science 356: 183–186. https://doi.org/10.1126/science.aal4230.
- Christie, E. H., A. Ertan, L. Adomaitis, and M. Klaus. 2023. “Regulating Lethal Autonomous Weapon Systems: Exploring the Challenges of Explainability and Traceability.” AI and Ethics, 1–17. https://doi.org/10.1007/s43681-023-00261-0.
- Clark, E., T. August, S. Serrano, N. Haduong, S. Gururangan, and N. A. Smith. 2021, July 7. All That’s “Human” Is Not Gold: Evaluating Human Evaluation of Generated Text. arXiv. https://doi.org/10.48550/arXiv.2107.00061
- Colleoni, E., A. Rozza, and A. Arvidsson. 2014. “Echo Chamber or Public Sphere? Predicting Political Orientation and Measuring Political Homophily in Twitter Using Big Data.” Journal of Communication 64 (2): 317–332. https://doi.org/10.1111/jcom.12084.
- Collins, F., and A. Fauci. 2012, January 19. NIH Statement on H5N1. Accessed 3 April 2023. https://www.nih.gov/about-nih/who-we-are/nih-director/statements/nih-statement-h5n1.
- Coulter, M., and S. Mukherjee. 2023, April 17. “EU Lawmakers Call for Summit to Control 'Very Powerful' AI.” Reuters. Accessed 10 May 2023. https://www.reuters.com/technology/eu-lawmakers-call-political-attention-powerful-ai-2023-04-17/.
- Council of the European Union. 2023, December 9. “Artificial Intelligence Act: Council and Parliament Strike a Deal on the First Rules for AI in the World.” Press release. Accessed 23 December 2023. https://www.consilium.europa.eu/en/press/press-releases/2023/12/09/artificial-intelligence-act-council-and-parliament-strike-a-deal-on-the-first-worldwide-rules-for-ai/.
- Dance, A. 2021. “The Shifting Sands of ‘Gain-of-Function’ Research.” Nature 598 (7882): 554–557. https://doi.org/10.1038/d41586-021-02903-x.
- Davis, E. 2016. “AI Amusements.” AI Matters 2 (4): 20–24. https://doi.org/10.1145/3008665.3008674.
- Dignum, V. 2019. “Taking Responsibility.” In Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way, edited by V. Dignum, 47–69. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-030-30371-6_4
- European Commission. 2022. “Proposal for a Directive of the European Parliament and of the Council on Adapting Non-Contractual Civil Liability Rules to Artificial Intelligence (AI Liability Directive).” COM(2022) 496 final. Accessed 31 July 2023. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52022PC0496.
- European Parliament. 2023a. “Artificial Intelligence Act.” Amendments adopted by the European Parliament on 14 June 2023, P9_TA(2023)0236.
- European Parliament. 2023b. “Artificial Intelligence Act: Deal on Comprehensive Rules for Trustworthy AI.” Press release, 9 December 2023. https://www.europarl.europa.eu/news/en/press-room/20231206IPR15699/artificial-intelligence-act-deal-on-comprehensive-rules-for-trustworthy-ai.
- EUROPOL. 2023. ChatGPT – The Impact of Large Language Models on Law Enforcement. Publications Office of the European Union. Accessed 5 May 2023. https://www.europol.europa.eu/publications-events/publications/chatgpt-impact-of-large-language-models-law-enforcement.
- Evans, N. G. 2013. “‘But Nature Started It’: Examining Taubenberger and Morens’ View on Influenza A Virus and Dual-Use Research of Concern.” mBio 4 (4): e00547-13. https://doi.org/10.1128/mBio.00547-13.
- Evans, N. G. 2020. “Dual-Use and Infectious Disease Research.” In Infectious Diseases in the New Millennium: Legal and Ethical Challenges, edited by M. Eccleston-Turner and I. Brassington, 193–215. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-030-39819-4_9
- Field, A., S. L. Blodgett, Z. Waseem, and Y. Tsvetkov. 2021. “A Survey of Race, Racism, and Anti-Racism in NLP.” In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), 1905–1925. Online: Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.acl-long.149
- Future of Life Institute. 2023, March 22. Pause Giant AI Experiments: An Open Letter. Accessed 3 April 2023. https://futureoflife.org/open-letter/pause-giant-ai-experiments/.
- Galdzicki, M., C. Rodriguez, D. Chandran, H. M. Sauro, and J. H. Gennari. 2011. “Standard Biological Parts Knowledgebase.” PLOS ONE 6 (2): e17005. https://doi.org/10.1371/journal.pone.0017005.
- Geissler, E., and J. E. van Courtland Moon, eds. 1999. Biological and Toxin Weapons: Research, Development, and Use from the Middle Ages to 1945. Oxford: Oxford University Press.
- Gregory, S. 2022. “Deepfakes, Misinformation and Disinformation and Authenticity Infrastructure Responses: Impacts on Frontline Witnessing, Distant Witnessing, and Civic Journalism.” Journalism 23 (3): 708–729. https://doi.org/10.1177/14648849211060644.
- Grinbaum, A. 2019. Les Robots et le mal. Paris: Desclée de Brouwer.
- Grinbaum, A. 2023. Parole de Machines. Paris: HumenSciences.
- Grinbaum, A., and L. Adomaitis. 2022a. “Moral Equivalence in the Metaverse.” NanoEthics 16 (3): 257–270. https://doi.org/10.1007/s11569-022-00426-x.
- Grinbaum, A., and L. Adomaitis. 2022b, September 7. The Ethical Need for Watermarks in Machine-Generated Language. arXiv. https://doi.org/10.48550/arXiv.2209.03118
- Grinbaum, A., R. Chatila, L. Devillers, J.-G. Ganascia, C. Tessier, and M. Dauchet. 2017. “Ethics in Robotics Research: CERNA Mission and Context.” IEEE Robotics & Automation Magazine 24 (3): 139–145. https://doi.org/10.1109/MRA.2016.2611586.
- Grinbaum, A., L. Devillers, G. Adda, R. Chatila, C. Martin, C. Zolynski, and S. Villata. 2021. Agents conversationnels: Enjeux d’éthique. Report of the Comité national pilote d’éthique du numérique. Paris: CCNE.
- Grinbaum, A., and C. Groves. 2013. “What Is ‘Responsible’ about Responsible Innovation? Understanding the Ethical Issues.” In Responsible Innovation: Managing the Responsible Emergence of Science and Innovation in Society, edited by R. Owen, J. Bessant, and M. Heintz, 119–142. Chichester: John Wiley & Sons. https://doi.org/10.1002/9781118551424.ch7.
- Hazell, J. 2023, May 12. Large Language Models Can Be Used To Effectively Scale Spear Phishing Campaigns. arXiv. https://doi.org/10.48550/arXiv.2305.06972
- Heaven, W. D. 2023, May 2. “Geoffrey Hinton Tells Us Why He’s Now Scared of the Tech He Helped Build.” MIT Technology Review. Accessed 3 May 2023. https://www.technologyreview.com/2023/05/02/1072528/geoffrey-hinton-google-why-scared-ai/.
- Heikkilä, M. 2022, December 19. “How to Spot AI-Generated Text.” MIT Technology Review. Accessed 5 May 2023. https://www.technologyreview.com/2022/12/19/1065596/how-to-spot-ai-generated-text/.
- Herfst, S., E. J. A. Schrauwen, M. Linster, S. Chutinimitkul, E. de Wit, V. J. Munster, et al. 2012. “Airborne Transmission of Influenza A/H5N1 Virus Between Ferrets.” Science 336 (6088): 1534–1541. https://doi.org/10.1126/science.1213362.
- Hern, E., W. Glazebrook, and M. Beckett. 2005. “Reducing Knife Crime.” BMJ 330 (7502): 1221–1222. https://doi.org/10.1136/bmj.330.7502.1221.
- Imai, M., T. Watanabe, M. Hatta, S. C. Das, M. Ozawa, K. Shinya, et al. 2012. “Experimental Adaptation of an Influenza H5 HA Confers Respiratory Droplet Transmission to a Reassortant H5 HA/H1N1 Virus in Ferrets.” Nature 486 (7403): 420–428. https://doi.org/10.1038/nature10831.
- Kirchenbauer, J., J. Geiping, Y. Wen, J. Katz, I. Miers, and T. Goldstein. 2023, January 27. A Watermark for Large Language Models. arXiv. https://doi.org/10.48550/arXiv.2301.10226
- Koplin, J. J. 2023. “Dual-Use Implications of AI Text Generation.” Ethics and Information Technology 25 (2): 32. https://doi.org/10.1007/s10676-023-09703-z.
- Korn, H., O. Pironneau, A. Fagot-Largeault, B. d’Artemare, N. Becard, S. Zini, et al. 2019. Recherches duales à risque: Recommandations pour leur prise en compte dans les processus de conduite de recherche en biologie. Paris: Académie des Sciences.
- LAION e.V. 2023, May 4. An Open Letter to the European Parliament.
- Lipsitch, M., and B. R. Bloom. 2012. “Rethinking Biosafety in Research on Potential Pandemic Pathogens.” mBio 3 (5): e00360-12. https://doi.org/10.1128/mBio.00360-12.
- Lipsitch, M., and A. P. Galvani. 2014. “Ethical Alternatives to Experiments with Novel Potential Pandemic Pathogens.” PLOS Medicine 11 (5): e1001646. https://doi.org/10.1371/journal.pmed.1001646.
- Mireshghallah, F., K. Goyal, A. Uniyal, T. Berg-Kirkpatrick, and R. Shokri. 2022, November 3. Quantifying Privacy Risks of Masked Language Models Using Membership Inference Attacks. arXiv. https://doi.org/10.48550/arXiv.2203.03929
- Mollman, S. 2023, May 4. “OpenAI Ignored the ‘Have a Problem to Solve’ Rule, Says President Greg Brockman. It’s Now Worth Nearly $30 Billion.” Fortune. Accessed 10 May 2023. https://fortune.com/2023/05/04/openai-success-chatgpt-business-technology-rule-greg-brockman/.
- Moradi, M., and M. Samwald. 2021, August 27. Evaluating the Robustness of Neural Language Models to Input Perturbations. arXiv. https://doi.org/10.48550/arXiv.2108.12237
- National Research Council. 2004. Biotechnology Research in an Age of Terrorism. Washington, DC: National Academies Press. https://doi.org/10.17226/10827
- National Research Council. 2007. “Biosecurity and Dual-Use Research in the Life Sciences.” In Science and Security in a Post-9/11 World: A Report Based on Regional Discussions Between the Science and Security Communities. Washington, DC: National Academies Press. Accessed 11 May 2023. https://www.ncbi.nlm.nih.gov/books/NBK11496/.
- NSABB. 2007. Proposed Framework for the Oversight of Dual Use Life Sciences Research: Strategies for Minimizing the Potential Misuse of Research Information. A Report of the National Science Advisory Board for Biosecurity (NSABB). National Science Advisory Board for Biosecurity, Office of Biotechnology Activities.
- Oak, R. 2022. “Poster – Towards Authorship Obfuscation with Language Models.” In Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security, 3435–3437. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/3548606.3563512
- OpenAI. 2023a, July 21. Moving AI Governance Forward. Accessed 27 July 2023. https://openai.com/blog/moving-ai-governance-forward.
- OpenAI. 2023b, July 26. Frontier Model Forum. Accessed 31 July 2023. https://openai.com/blog/frontier-model-forum.
- Osband, I., S. M. Asghari, B. Van Roy, N. McAleese, J. Aslanides, and G. Irving. 2022, November 2. Fine-Tuning Language Models via Epistemic Neural Networks. arXiv. https://doi.org/10.48550/arXiv.2211.01568
- Pichai, S., and D. Hassabis. 2023, December 6. “Introducing Gemini: Our Largest and Most Capable AI Model.” Google. Accessed 23 December 2023. https://blog.google/technology/ai/google-gemini-ai/.
- Qian, R., C. Ross, J. Fernandes, E. Smith, D. Kiela, and A. Williams. 2022. Perturbation Augmentation for Fairer NLP. arXiv. https://doi.org/10.48550/arXiv.2205.12586
- Regulation (EU) 2021/821. 2021. OJ L 206. Accessed 10 May 2023. http://data.europa.eu/eli/reg/2021/821/oj/eng.
- Reisach, U. 2021. “The Responsibility of Social Media in Times of Societal and Political Manipulation.” European Journal of Operational Research 291 (3): 906–917. https://doi.org/10.1016/j.ejor.2020.09.020.
- Resnik, D. B. 2010. “Can Scientists Regulate the Publication of Dual Use Research?” Studies in Ethics, Law, and Technology 4 (1), Art. 6. https://doi.org/10.2202/1941-6008.1124.
- Ropek, L. 2023, January 20. “ChatGPT Is Pretty Good at Writing Malware, It Turns Out.” Gizmodo. Accessed 5 May 2023. https://gizmodo.com/chatgpt-ai-polymorphic-malware-computer-virus-cyber-1850012195.
- Sanger, D. E. 2023, May 5. “The Next Fear on A.I.: Hollywood’s Killer Robots Become the Military’s Tools.” The New York Times. Accessed 10 May 2023. https://www.nytimes.com/2023/05/05/us/politics/ai-military-war-nuclear-weapons-russia-china.html.
- Scharre, P. 2018. Army of None: Autonomous Weapons and the Future of War. New York: W. W. Norton & Company.
- Schweber, S. S. 2000. In the Shadow of the Bomb: Oppenheimer, Bethe, and the Moral Responsibility of the Scientist. Princeton: Princeton University Press.
- Scouras, J. 2019. “Nuclear War as a Global Catastrophic Risk.” Journal of Benefit-Cost Analysis 10 (2): 274–295. https://doi.org/10.1017/bca.2019.16.
- Selgelid, M. J. 2007. “A Tale of Two Studies: Ethics, Bioterrorism, and the Censorship of Science.” Hastings Center Report 37 (3): 35–43. https://doi.org/10.1353/hcr.2007.0046.
- Smith-Ruiu, J. 2023, April 5. “My Dinners with GPT-4.” The Hinternet (Substack newsletter). Accessed 10 May 2023. https://justinehsmith.substack.com/p/my-dinners-with-gpt-4.
- Taddeo, M., and L. Floridi. 2018. “Regulate Artificial Intelligence to Avert Cyber Arms Race.” Nature 556 (7701): 296–298. https://doi.org/10.1038/d41586-018-04602-6.
- Tamkin, A., M. Brundage, J. Clark, and D. Ganguli. 2021, February 4. Understanding the Capabilities, Limitations, and Societal Impact of Large Language Models. arXiv. https://doi.org/10.48550/arXiv.2102.02503
- Taori, R., I. Gulrajani, T. Zhang, Y. Dubois, X. Li, C. Guestrin, et al. 2023, April 3. Stanford Alpaca: An Instruction-Following LLaMA Model. GitHub repository. Accessed 3 April 2023. https://github.com/tatsu-lab/stanford_alpaca.
- The White House. 2023. Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Accessed 23 December 2023. https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/.
- Touvron, H., L. Martin, K. Stone, P. Albert, A. Almahairi, Y. Babaei, et al. 2023, July 19. Llama 2: Open Foundation and Fine-Tuned Chat Models. arXiv. https://doi.org/10.48550/arXiv.2307.09288
- Tucker, J. B. 2012. Innovation, Dual Use, and Security: Managing the Risks of Emerging Biological and Chemical Technologies. Cambridge, MA: MIT Press.
- Tumpey, T. M., C. F. Basler, P. V. Aguilar, H. Zeng, A. Solórzano, D. E. Swayne, et al. 2005. “Characterization of the Reconstructed 1918 Spanish Influenza Pandemic Virus.” Science 310 (5745): 77–80. https://doi.org/10.1126/science.1119392.
- Urbina, F., F. Lentzos, C. Invernizzi, and S. Ekins. 2022. “Dual Use of Artificial-Intelligence-Powered Drug Discovery.” Nature Machine Intelligence 4: 189–191. https://doi.org/10.1038/s42256-022-00465-9.
- US Department of Commerce. 2023. “Export Administration Regulations.” Accessed 14 January 2024. https://www.bis.doc.gov/index.php/regulations/export-administration-regulations-ear.
- US HHS. 2017. Framework for Guiding Funding Decisions About Proposed Research Involving Enhanced Potential Pandemic Pathogens. Washington, DC: US Department of Health and Human Services.
- Vannuccini, S., and E. Prytkova. 2021. Artificial Intelligence’s New Clothes? From General Purpose Technology to Large Technical System. SSRN Scholarly Paper. Rochester, NY: SSRN. https://doi.org/10.2139/ssrn.3860041
- Wei, J., Y. Tay, R. Bommasani, C. Raffel, B. Zoph, S. Borgeaud, et al. 2022, October 26. Emergent Abilities of Large Language Models. arXiv. https://doi.org/10.48550/arXiv.2206.07682
- Weidinger, L., J. Mellor, M. Rauh, C. Griffin, J. Uesato, P.-S. Huang, et al. 2021. Ethical and Social Risks of Harm from Language Models. arXiv. https://doi.org/10.48550/arXiv.2112.04359
- White House. 2023a, May 4. “Readout of White House Meeting with CEOs on Advancing Responsible Artificial Intelligence Innovation.” Accessed 10 May 2023. https://www.whitehouse.gov/briefing-room/statements-releases/2023/05/04/readout-of-white-house-meeting-with-ceos-on-advancing-responsible-artificial-intelligence-innovation/.
- White House. 2023b, July 21. “FACT SHEET: Biden-Harris Administration Secures Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI.” Accessed 31 July 2023. https://www.whitehouse.gov/briefing-room/statements-releases/2023/07/21/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-leading-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai/.
- Xu, A. Y. 2020, June 22. “Creating Fake News with OpenAI’s Language Models.” Towards Data Science. Accessed 30 August 2022. https://towardsdatascience.com/creating-fake-news-with-openais-language-models-368e01a698a3.
- Zhang, H., B. L. Edelman, D. Francati, D. Venturi, G. Ateniese, and B. Barak. 2023, November 14. Watermarks in the Sand: Impossibility of Strong Watermarking for Generative Models. arXiv. https://doi.org/10.48550/arXiv.2311.04378
- Zhang, R., J. Han, A. Zhou, X. Hu, S. Yan, P. Lu, et al. 2023, March 28. LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention. arXiv. https://doi.org/10.48550/arXiv.2303.16199