Articles

Responsible use of AI in military systems: prospects and challenges

Pages 1719-1729 | Received 15 Mar 2023, Accepted 29 Oct 2023, Published online: 06 Nov 2023

References

  • Arrieta, Alejandro Barredo, Natalia Díaz-Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, Alberto Barbado, Salvador Garcia, Sergio Gil-Lopez, Daniel Molina, Richard Benjamins, Raja Chatila, and Francisco Herrera. 2020. “Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI.” Information Fusion 58: 82–115. doi:10.1016/j.inffus.2019.12.012.
  • Bainbridge, L. 1983. “Ironies of Automation.” Automatica 19 (6): 775–779. doi:10.1016/0005-1098(83)90046-8.
  • Chen, J. Y. C., K. Procci, M. Boyce, J. Wright, A. Garcia, and M. Barnes. 2014. Situation Awareness-Based Agent Transparency. Aberdeen Proving Ground, MD: Army Research Laboratory. https://apps.dtic.mil/sti/pdfs/ADA600351.pdf.
  • Cummings, M. L. 2019. “Lethal Autonomous Weapons: Meaningful Human Control or Meaningful Human Certification?” IEEE Technology and Society Magazine 38 (4): 20–26. doi:10.1109/MTS.2019.2948438.
  • Dunnmon, J., B. Goodman, P. Kirechu, C. Smith, and A. Van Deusen. 2021. Responsible AI Guidelines in Practice: Lessons Learned from the DIU Portfolio. Washington, DC: Defense Innovation Unit.
  • Ekelhof, M. 2019. “Moving beyond Semantics on Autonomous Weapons: Meaningful Human Control in Operation.” Global Policy 10 (3): 343–348. doi:10.1111/1758-5899.12665.
  • Endsley, M. R. 2017. “From Here to Autonomy: Lessons Learned from Human-Automation Research.” Human Factors 59 (1): 5–27. doi:10.1177/0018720816681350.
  • Endsley, M. R. 2023. “Supporting human-AI Teams: Transparency, Explainability, and Situation Awareness.” Computers in Human Behavior 140: 107574. doi:10.1016/j.chb.2022.107574.
  • Endsley, M. R., B. Bolte, and D. G. Jones. 2003. Designing for Situation Awareness: An Approach to Human-Centered Design. London: Taylor and Francis.
  • Feigenbaum, E. A., P. McCorduck, and H. P. Nii. 1988. The Rise of the Expert Company. New York: Times Books.
  • Friedman, B., and D. G. Hendry. 2019. Value Sensitive Design: Shaping Technology with Moral Imagination. Cambridge, MA: The MIT Press.
  • Goldstein, I., and S. Papert. 1977. “Artificial Intelligence, Language, and the Study of Knowledge.” Cognitive Science 1 (1): 84–123. doi:10.1207/s15516709cog0101_5.
  • Hagendorff, T. 2020. “The Ethics of AI Ethics: An Evaluation of Guidelines.” Minds and Machines 30 (1): 99–120. doi:10.1007/s11023-020-09517-8.
  • Hickok, M. 2021. “Lessons Learned from AI Ethics Principles for Future Actions.” AI and Ethics 1 (1): 41–47. doi:10.1007/s43681-020-00008-1.
  • Human Rights Watch. 2023. Review of the 2023 US Policy on Autonomy in Weapons Systems. New York: Human Rights Watch. hrw.org
  • Hwang, W., and G. Salvendy. 2010. “Number of People Required for Usability Evaluation: The 10±2 Rule.” Communications of the ACM 53 (5): 130–133. doi:10.1145/1735223.1735255.
  • Jobin, A., M. Ienca, and E. Vayena. 2019. “The Global Landscape of AI Ethics Guidelines.” Nature Machine Intelligence 1 (9): 389–399. doi:10.1038/s42256-019-0088-2.
  • Johnson, M., and A. Vera. 2019. “No AI is an Island: The Case for Teaming Intelligence.” AI Magazine 40 (1): 16–28. doi:10.1609/aimag.v40i1.2842.
  • Koshiyama, A., E. Kazim, and P. Treleaven. 2022. “Algorithm Auditing: Managing the Legal, Ethical, and Technological Risks of Artificial Intelligence, Machine Learning, and Associated Algorithms.” Computer 55 (4): 40–50. doi:10.1109/MC.2021.3067225.
  • Kox, E. S., J. H. Kerstholt, T. F. Hueting, and P. W. De Vries. 2021. “Trust Repair in Human-Agent Teams: The Effectiveness of Explanations and Expressing Regret.” Autonomous Agents and Multi-Agent Systems 35 (2): 1–20. doi:10.1007/s10458-021-09515-9.
  • Leith, P. 2016. “The Rise and Fall of the Legal Expert System.” International Review of Law, Computers & Technology 30 (3): 94–106. doi:10.1080/13600869.2016.1232465.
  • Moosavi-Dezfooli, S. M., A. Fawzi, and P. Frossard. 2016. “DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks.” In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2574–2582.
  • Moray, N. 1995. “Ergonomics and the Global Problems of the Twenty-First Century.” Ergonomics 38 (8): 1691–1707. doi:10.1080/00140139508925220.
  • National Academies of Sciences, Engineering, and Medicine. 2021. Human-AI Teaming: State of the Art and Research Needs. Washington, DC: The National Academies Press.
  • Nilsson, N. J. 2009. The Quest for Artificial Intelligence. Cambridge: Cambridge University Press.
  • North Atlantic Treaty Organization. 2021. Summary of the NATO Artificial Intelligence Strategy, October 22. Brussels, Belgium: NATO.
  • Peeters, M. M. M., J. Van Diggelen, K. Van den Bosch, A. Bronkhorst, M. A. Neerincx, J. M. Schraagen, and S. Raaijmakers. 2021. “Hybrid Collective Intelligence in a Human-AI Society.” AI & Society 36 (1): 217–238. doi:10.1007/s00146-020-01005-y.
  • Roth, E. M., K. B. Bennett, and D. D. Woods. 1987. “Human Interaction with an ‘Intelligent’ Machine.” International Journal of Man-Machine Studies 27 (5-6): 479–525. doi:10.1016/S0020-7373(87)80012-3.
  • Roth, E. M., A. M. Bisantz, X. Wang, T. Kim, and A. Z. Hettinger. 2021. “A Work-Centered Approach to System User-Evaluation.” Journal of Cognitive Engineering and Decision Making 15 (4): 155–174. doi:10.1177/15553434211028474.
  • Russell, S. J., and P. Norvig. 2021. Artificial Intelligence: A Modern Approach. 4th ed. Hoboken, NJ: Pearson Education, Inc.
  • Salmon, P. M. 2019. “The Horse Has Bolted! Why Human Factors and Ergonomics Has to Catch up with Autonomous Vehicles (and Other Advanced Forms of Automation).” Ergonomics 62 (4): 502–504. doi:10.1080/00140139.2018.1563333.
  • Schmettow, M., W. Vos, and J. M. Schraagen. 2013. “With How Many Users Should You Test a Medical Infusion Pump? Sampling Strategies for Usability Tests on High-Risk Systems.” Journal of Biomedical Informatics 46 (4): 626–641. doi:10.1016/j.jbi.2013.04.007.
  • Schraagen, J. M. C., ed. forthcoming. The Responsible Use of AI in Military Systems. Boca Raton, FL: CRC Press.
  • Schraagen, J. M. C., S. Kerwien Lopez, C. Schneider, V. Schneider, S. Tonjes, and E. Wiechmann. 2021. “The Role of Transparency and Explainability in Automated Systems.” In Proceedings of the 2021 HFES 65th International Annual Meeting, 27–31. Santa Monica, CA: Human Factors and Ergonomics Society.
  • Schraagen, J. M. C., and J. Van Diggelen. 2021. “A Brief History of the Relationship between Expertise and Artificial Intelligence.” In Expertise at Work: Current and Emerging Trends, edited by M.-L. Germain and R. S. Grenier, 149–175. Cham, Switzerland: Palgrave Macmillan.
  • Strauch, B. 2018. “Ironies of Automation: Still Unresolved after All These Years.” IEEE Transactions on Human-Machine Systems 48 (5): 419–433. doi:10.1109/THMS.2017.2732506.
  • Sushereba, C. E., L. G. Militello, S. Wolf, and E. S. Patterson. 2021. “Use of Augmented Reality to Train Sensemaking in High-Stakes Medical Environments.” Journal of Cognitive Engineering and Decision Making 15 (2–3): 55–65. doi:10.1177/15553434211019234.
  • Taddeo, M., and A. Blanchard. 2022. “A Comparative Analysis of the Definitions of Autonomous Weapons Systems.” Science and Engineering Ethics 28 (5): 37–59. doi:10.1007/s11948-022-00392-3.
  • Textor, C., R. Zhang, J. Lopez, B. G. Schelble, N. J. McNeese, G. Freeman, R. Pak, C. Tossell, and E. J. de Visser. 2022. “Exploring the Relationship Between Ethics and Trust in Human–Artificial Intelligence Teaming: A Mixed Methods Approach.” Journal of Cognitive Engineering and Decision Making 16 (4): 252–281. doi:10.1177/15553434221113964.
  • Thatcher, A., P. Waterson, A. Todd, and N. Moray. 2018. “State of Science: Ergonomics and Global Issues.” Ergonomics 61 (2): 197–213. doi:10.1080/00140139.2017.1398845.
  • U.S. Department of Defense. 2022. Responsible Artificial Intelligence Strategy and Implementation Pathway. Washington, DC: Department of Defense.
  • U.S. Department of Defense. 2023. DoD Directive 3000.09: Autonomy in Weapon Systems, January 25. Washington, DC: Department of Defense.
  • Van de Merwe, K., S. Mallam, and S. Nazir. 2022. “Agent Transparency, Situation Awareness, Mental Workload, and Operator Performance: A Systematic Literature Review.” Human Factors, 187208221077804. doi:10.1177/00187208221077804.
  • Woods, D. D. 2016. “The Risks of Autonomy: Doyle’s Catch.” Journal of Cognitive Engineering and Decision Making 10 (2): 131–133. doi:10.1177/1555343416653562.
  • Xu, W., M. J. Dainoff, L. Ge, and Z. Gao. 2023. “Transitioning to Human Interaction with AI Systems: New Challenges and Opportunities for HCI Professionals to Enable Human-Centered AI.” International Journal of Human–Computer Interaction 39 (3): 494–518. doi:10.1080/10447318.2022.2041900.
