References
- Amershi, S., Weld, D., Vorvoreanu, M., Fourney, A., Nushi, B., Collisson, P., Suh, J., Iqbal, S., Bennett, P. N., Inkpen, K., Teevan, J., Kikin-Gil, R., & Horvitz, E. (2019). Guidelines for human-AI interaction. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (Paper 3, pp. 1-13). ACM.
- Apple Inc. (2019). Human interface guidelines. Cupertino, CA: Apple. https://developer.apple.com/design/human-interface-guidelines/ios/overview/themes/
- Bainbridge, L. (1983). Ironies of automation. Automatica, 19(6), 775–779. https://doi.org/10.1016/0005-1098(83)90046-8
- Bennett, K. B., & Hoffman, R. R. (2015). Principles for interaction design, Part 3: Spanning the creativity gap. IEEE Intelligent Systems, 30(6), 82–91. https://doi.org/10.1109/MIS.2015.108
- Berry, J. C., Davis, J. T., Bartman, T., Hafer, C. C., Lieb, L. M., Khan, N., & Brilli, R. J. (2016). Improved safety culture and teamwork climate are associated with decreases in patient harm and hospital mortality across a hospital system. Journal of Patient Safety. https://doi.org/10.1097/PTS.0000000000000251
- Blackhurst, J. L., Gresham, J. S., & Stone, M. O. (2011). The autonomy paradox. Armed Forces Journal, 20–40. http://armedforcesjournal.com/the-autonomy-paradox/
- Bradshaw, J. M., Hoffman, R. R., Woods, D. D., & Johnson, M. (2013). The seven deadly myths of autonomous systems. IEEE Intelligent Systems, 28(3), 54–61. https://doi.org/10.1109/MIS.2013.70
- Brooks, R. (2017, July 27). The big problem with self-driving cars is people. IEEE Spectrum: Technology, Engineering, and Science News.
- Calvo, R. A., Peters, D., Vold, V., & Ryan, R. M. (2020). Supporting human autonomy in AI systems: A framework for ethical enquiry. In C. Burr & L. Floridi (Eds.), Ethics of digital well-being: A multidisciplinary approach. Springer Open. https://doi.org/10.17863/CAM.45866
- Canadian Government. (2019). Responsible use of artificial intelligence (AI). Canada.ca. https://www.canada.ca/en/government/system/digital-government/modern-emerging-technologies/responsible-use-ai.html
- Candy, L. (2020). Creating with the digital: Tool, medium, mediator, partner. In A. L. Brooks & C. Sylla (Eds.), Interactivity, game creation, design, learning, and innovation. Springer (to appear). http://lindacandy.com/wp-content/uploads/2019/12/ArtsIT-LCandy.pdf
- Defense Science Board. (2016). Summer study on autonomy. Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics, Department of Defense.
- Du, M., Liu, N., & Hu, X. (2020). Techniques for interpretable machine learning. Communications of the ACM, 63(1), 68–77. https://doi.org/10.1145/3359786
- Dudley, J. J., & Kristensson, P. O. (2018). A review of user interface design for interactive machine learning. ACM Transactions on Interactive Intelligent Systems (TiiS), 8(2), 8. https://doi.org/10.1145/3185517
- Edmonds, E. A. (2020). Computation for creativity. In J. S. Gero & M. L. Maher (Eds.), Computational and cognitive models of creative design. Springer (to appear). http://www.ernestedmonds.com/www/Research/Download/Edmonds_HI19.pdf
- Edmonds, E. A., & Candy, L. (2002). Creativity, art practice and knowledge. Communications of the ACM Special Section on Creativity and Interface, 45(10), 91–95. https://doi.org/10.1145/570907.570939
- Endsley, M. R. (2017). From here to autonomy: Lessons learned from human–automation research. Human Factors, 59(1), 5–27. https://doi.org/10.1177/0018720816681350
- Endsley, M. R. (2018). Level of automation forms a key aspect of autonomy design. Journal of Cognitive Engineering and Decision Making, 12(1), 29–34. https://doi.org/10.1177/1555343417723432
- Fischer, G. (2018). Design trade-offs for quality of life. ACM Interactions, XXV(1), 26–33. https://doi.org/10.1145/3170706
- Fraser, P., Moultrie, J., & Gregory, M. (2002). The use of maturity models/grids as a tool in assessing product development capability. Proceedings of IEEE International Engineering Management Conference (Vol.1, pp. 244–249). IEEE.
- Fukuyama, F. (1995). Trust: The social virtues and the creation of prosperity. Free Press.
- Giuliani, M., Lenz, C., Müller, T., Rickert, M., & Knoll, A. (2010). Design principles for safety in human-robot interaction. International Journal of Social Robotics, 2(3), 253–274. https://doi.org/10.1007/s12369-010-0052-0
- Guldenmund, F. W. (2000). The nature of safety culture: A review of theory and research. Safety Science, 34(1–3), 215–257. https://doi.org/10.1016/S0925-7535(00)00014-X
- Hancock, P. A. (2017). Imposing limits on autonomous systems. Ergonomics, 60(2), 284–291. https://doi.org/10.1080/00140139.2016.1190035
- Heer, J. (2019). Agency plus automation: Designing artificial intelligence into interactive systems. Proceedings of the National Academy of Sciences, 116(6), 1844–1850. https://doi.org/10.1073/pnas.1807184115
- Hoffman, R. R., & Johnson, M. (2019). The quest for alternatives to “levels of automation” and “task allocation.” In M. Mouloua & P. A. Hancock (Eds.), Human performance in automated and autonomous systems (pp. 43–68). CRC Press.
- Hoffman, R. R., Cullen, T. M., & Hawley, J. K. (2016). The myths and costs of autonomous weapon systems. Bulletin of the Atomic Scientists, 72(4), 247–255. https://doi.org/10.1080/00963402.2016.1194619
- Horvitz, E. (1999). Principles of mixed-initiative user interfaces. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 159–166). ACM.
- IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. (2019). Ethically aligned design: A vision for prioritizing human well-being with autonomous and intelligent systems (First ed.). IEEE. https://standards.ieee.org/content/ieee-standards/en/industry-connections/ec/autonomous-systems.html
- Johnson, M., Bradshaw, J. M., & Feltovich, P. J. (2018). Tomorrow’s human–machine design tools: From levels of automation to interdependencies. Journal of Cognitive Engineering and Decision Making, 12(1), 77–82. https://doi.org/10.1177/1555343417736462
- Johnson, M., Bradshaw, J. M., Feltovich, P. J., Jonker, C. M., Van Riemsdijk, M. B., & Sierhuis, M. (2014). Coactive design: Designing support for interdependence in joint activity. Journal of Human-Robot Interaction, 3(1), 43–69. https://doi.org/10.5898/JHRI.3.1.Johnson
- Jordan, M. I. (2018). Artificial intelligence - The revolution hasn’t happened yet. Medium.com. https://medium.com/@mijordan3/artificial-intelligence-the-revolution-hasnt-happened-yet-5e1d5812e1e7
- Kaber, D. B. (2018). Issues in human–automation interaction modeling: Presumptive aspects of frameworks of types and levels of automation. Journal of Cognitive Engineering and Decision Making, 12(1), 7–24. https://doi.org/10.1177/1555343417737203
- Klein, G., Woods, D. D., Bradshaw, J. M., Hoffman, R. R., & Feltovich, P. J. (2004). Ten challenges for making automation a “team player” in joint human-agent activity. IEEE Intelligent Systems, 19(6), 91–95. https://doi.org/10.1109/MIS.2004.74
- Konstan, J. A., & Riedl, J. (2012). Recommender systems: From algorithms to user experience. User Modeling and User-Adapted Interaction, 22(1–2), 101–123. https://doi.org/10.1007/s11257-011-9112-x
- Lacerda, T. C., & von Wangenheim, C. G. (2018). Systematic literature review of usability capability/maturity models. Computer Standards & Interfaces, 55, 95–105. https://doi.org/10.1016/j.csi.2017.06.001
- Lazer, D., Kennedy, R., King, G., & Vespignani, A. (2014). The parable of Google Flu: Traps in big data analysis. Science, 343(6176), 1203–1205. https://doi.org/10.1126/science.1248506
- Li, F.-F. (2018, March 7). How to make A.I. that’s good for people. The New York Times. https://www.nytimes.com/2018/03/07/opinion/artificial-intelligence-human.html
- Macintyre, P. E. (2001). Safety and efficacy of patient-controlled analgesia. British Journal of Anaesthesia, 87(1), 36–46. https://doi.org/10.1093/bja/87.1.36
- Markoff, J. (2016). Machines of loving grace: The quest for common ground between humans and robots. HarperCollins Publishers.
- Martin, D. (1993). The myth of the awesome thinking machine. Communications of the ACM, 36 (4), 120–133. https://doi.org/10.1145/255950.153587
- Mindell, D. (2015). Our robots, ourselves: Robotics and the myths of autonomy. Viking Press.
- Modarres, M., Kaminskiy, M. P., & Krivtsov, V. (2016). Reliability engineering and risk analysis: A practical guide. CRC Press.
- Murphy, R., & Shields, J. (2012, July). The role of autonomy in DoD systems (Defense Science Board Task Force Report). U.S. Department of Defense.
- Nicas, J., Kitroeff, N., Gelles, D., & Glanz, J. (2019, June 1). Boeing built deadly assumptions into 737 Max, blind to a late design change. The New York Times. https://www.nytimes.com/2019/06/01/business/boeing-737-max-crash.html
- O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown Publishers.
- Parasuraman, R., Sheridan, T. B., & Wickens, C. D. (2000). A model for types and levels of human interaction with automation. IEEE Transactions on Systems, Man and Cybernetics-Part A: Systems and Humans, 30(3), 286–297. https://doi.org/10.1109/3468.844354
- Sheridan, T. B. (1992). Telerobotics, automation, and human supervisory control. MIT Press.
- Sheridan, T. B. (2000). Function allocation: Algorithm, alchemy or apostasy? International Journal of Human-computer Studies, 52(2), 203–216. https://doi.org/10.1006/ijhc.1999.0285
- Sheridan, T. B., & Verplank, W. L. (1978). Human and computer control of undersea teleoperators. Massachusetts Institute of Technology Cambridge Man-Machine Systems Lab.
- Shneiderman, B. (1987). Designing the user interface: Strategies for effective human-computer interaction. Addison-Wesley.
- Shneiderman, B. (2000). Designing trust into online experiences. Communications of the ACM, 43(12), 57–59. https://doi.org/10.1145/355112.355124
- Shneiderman, B. (2016). Opinion: The dangers of faulty, biased, or malicious algorithms requires independent oversight. Proceedings of the National Academy of Sciences, 113(48), 13538–13540. https://doi.org/10.1073/pnas.1618211113
- Shneiderman, B., Plaisant, C., Cohen, M., Jacobs, S., & Elmqvist, N. (2016). Designing the user interface: Strategies for effective human-computer interaction (Sixth ed.). Pearson.
- Society of Automotive Engineers. (2014). Taxonomy and definitions for terms related to on-road motor vehicle automated driving systems (SAE Report J3016). Society of Automotive Engineers. https://www.sae.org/standards/content/j3016_201401/
- Strauch, B. (2017). Ironies of automation: Still unresolved after all these years. IEEE Transactions on Human-Machine Systems, 48(5), 419–433. https://doi.org/10.1109/THMS.2017.2732506
- Thimbleby, H. (2020). Fix IT: Stories from healthcare IT. Oxford University Press.
- U.S. National Transportation Safety Board. (2017). Collision between a car operating with automated vehicle control systems and a tractor-semitrailer truck near Williston, Florida (Report NTSB/HAR-17/02). www.ntsb.gov.
- Woods, D. D. (2017). Essential characteristics of resilience. In E. Hollnagel, D. D. Woods, & N. Leveson (Eds.), Resilience engineering: Concepts and precepts (pp. 21–34). Ashgate Publishing.
- Woods, D. D., Tittle, J., Feil, M., & Roesler, A. (2004). Envisioning human-robot coordination in future operations. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 34(2), 210–218. https://doi.org/10.1109/TSMCC.2004.826272