Design Thinking Framework for Integration of Transparency Measures in Time-Critical Decision Support

Pages 1874–1890 | Received 15 Apr 2021, Accepted 18 Apr 2022, Published online: 03 May 2022
