1,068
Views
0
CrossRef citations to date
0
Altmetric
Articles

Transparent-AI Blueprint: Developing a Conceptual Tool to Support the Design of Transparent AI Agents

, , &
Pages 1846-1873 | Received 16 Apr 2021, Accepted 11 Apr 2022, Published online: 17 Jul 2022

References

  • Abdul, A., Vermeulen, J., Wang, D., Lim, B. Y., & Kankanhalli, M. (2018). Trends and trajectories for explainable, accountable and intelligible systems: An HCI research agenda [Paper presentation]. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (pp. 1–18), Montreal, QC, Canada. https://doi.org/10.1145/3173574.3174156
  • Alonso, V., & de la Puente, P. (2018). System transparency in shared autonomy: A mini review. Frontiers in Neurorobotics, 12, 83. https://doi.org/10.3389/fnbot.2018.00083
  • Amarsy, N. (2015). Why and how organizations around the world apply the business model canvas. https://www.strategyzer.com/blog/posts/2015/2/9/why-and-how-organizations-around-the-world-apply-the-business-model-canvas
  • Amershi, S., Inkpen, K., Teevan, J., Kikin-Gil, R., Horvitz, E., Weld, D., Vorvoreanu, M., Fourney, A., Nushi, B., Collisson, P., Suh, J., Iqbal, S., & Bennett, P. N. (2019). Guidelines for human-AI interaction [Paper presentation]. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems - CHI ’19 (pp. 1–13), Glasgow, Scotland, UK. https://doi.org/10.1145/3290605.3300233
  • Anderson, J., Arbour, B., Arnold, R., Kadiofsky, T., Keeley, T., MacLeod, M., Bourdon, S., Crootof, R., Matsumura, J., Mayer, C., Roorda, M., Saariluoma, P., Scharre, P., Suarez, B., Sulzbachner, C., Theunissen, E., Tolk, A., Williams, A., & Zinner, C. (2015). Autonomous systems: Issues for defence policymakers. /paper/Autonomous-Systems%3A-Issues-for-Defence-Policymakers-Anderson-Arbour/c2abe9ec4538e532b50982729b9d0e3cf454104e
  • Anderson, A., Dodge, J., Sadarangani, A., Juozapaitis, Z., Newman, E., Irvine, J., Chattopadhyay, S., Olson, M., Fern, A., & Burnett, M. (2020). Mental models of mere mortals with explanations of reinforcement learning. ACM Transactions on Interactive Intelligent Systems, 10(2), 1–37. https://doi.org/10.1145/3366485
  • Anjomshoae, S., Najjar, A., Calvaresi, D., Främling, K. (2019). Explainable agents and robots: Results from a systematic literature review. In Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems (pp. 1078–1088). IFAAMAS.
  • Ashktorab, Z., Jain, M., Liao, Q. V., & Weisz, J. D. (2019). Resilient chatbots: Repair strategy preferences for conversational breakdowns [Paper presentation]. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (pp. 1–12), Glasgow, Scotland, UK. https://doi.org/10.1145/3290605.3300484
  • Augstein, M., Neumayr, T., Kurschl, W., Kern, D., Burger, T., & Altmann, J. (2017). A personalized interaction approach: Motivation and use case [Paper presentation]. Adjunct Publication of the 25th Conference on User Modeling, Adaptation and Personalization (pp. 221–226), Bratislava, Slovakia. https://doi.org/10.1145/3099023.3099051
  • Bahdanau, D., Cho, K., & Bengio, Y. (2015). Neural machine translation by jointly learning to align and translate. ICLR.
  • Baraka, K., Paiva, A., & Veloso, M. (2016). Expressive lights for revealing mobile service robot state. In L. P. Reis, A. P. Moreira, P. U. Lima, L. Montano, & V. Muñoz-Martinez (Eds.), Robot 2015: Second Iberian robotics conference (pp. 107–119). Springer International Publishing. https://doi.org/10.1007/978-3-319-27146-0_9
  • Barria-Pineda, J. (2020). Exploring the need for transparency in educational recommender systems [Paper presentation]. Proceedings of the 28th ACM Conference on User Modeling, Adaptation and Personalization (pp. 376–379), Genoa, Italy. https://doi.org/10.1145/3340631.3398676
  • Bass, E. J., Baumgart, L. A., & Shepley, K. K. (2013). The effect of information analysis automation display content on human judgment performance in noisy environments. Journal of Cognitive Engineering and Decision Making, 7(1), 49–65. https://doi.org/10.1177/1555343412453461
  • Bastani, O., Kim, C., & Bastani, H. (2017). Interpreting blackbox models via model extraction. ArXiv, arXiv:1705.08504 [cs.LG].
  • Beller, J., Heesen, M., & Vollrath, M. (2013). Improving the driver–automation interaction: An approach using automation uncertainty. Human Factors, 55(6), 1130–1141. https://doi.org/10.1177/0018720813482327
  • Bellotti, V., & Edwards, K. (2001). Intelligibility and accountability: Human considerations in context-aware systems. Human–Computer Interaction, 16(2–4), 193–212. https://doi.org/10.1207/S15327051HCI16234_05
  • Beneteau, E., Richards, O. K., Zhang, M., Kientz, J. A., Yip, J., & Hiniker, A. (2019). Communication breakdowns between families and alexa [Paper presentation]. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (pp. 1–13), Glasgow, Scotland, UK. https://doi.org/10.1145/3290605.3300473
  • Betzing, J. H., Tietz, M., vom Brocke, J., & Becker, J. (2020). The impact of transparency on mobile privacy decision making. Electronic Markets, 30(3), 607–625. https://doi.org/10.1007/s12525-019-00332-3
  • Bhaskara, A., Skinner, M., & Loft, S. (2020). Agent transparency: A review of current theory and evidence. IEEE Transactions on Human-Machine Systems, 50(3), 215–224. https://doi.org/10.1109/THMS.2020.2965529
  • Blass, J. A. (2018). Legal, ethical, customizable artificial intelligence [Paper presentation]. AIES '18: Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society (pp. 358–359). New Orleans, LA, USA. https://doi.org/10.1145/3278721.3278793
  • Boell, S., & Cecez-Kecmanovic, D. (2014). A hermeneutic approach for conducting literature reviews and literature searches. Communications of the Association for Information Systems, 34(1), 257–286. https://doi.org/10.17705/1CAIS.03412
  • Bondarenko, A., Borisov, A., & Zmanovska, T. (2010, June 25). Decompositional rules extraction methods from neural networks. Mendel. 16th International Conference on Soft Computing, Brno, Czech Republic.
  • Boring, R. L. (2001). User-interface design principles for experimental control software [Paper presentation]. CHI ’01 Extended Abstracts on Human Factors in Computing Systems (pp. 399–400), Seattle, Washington. https://doi.org/10.1145/634067.634302
  • Bostandjiev, S., O’Donovan, J., & Höllerer, T. (2012). TasteWeights: A visual interactive hybrid recommender system [Paper presentation]. Proceedings of the Sixth ACM Conference on Recommender Systems (pp. 35–42), Dublin, Ireland. https://doi.org/10.1145/2365952.2365964
  • Bostandjiev, S., O’Donovan, J., & Höllerer, T. (2013). LinkedVis: Exploring social and semantic career recommendations [Paper presentation]. Proceedings of the 2013 International Conference on Intelligent User Interfaces (pp. 107–116), Santa Monica, California, USA. https://doi.org/10.1145/2449396.2449412
  • Braun, M., Li, J., Weber, F., Pfleging, B., Butz, A., & Alt, F. (2020). What if your car would care? Exploring use cases for affective automotive user interfaces [Paper presentation]. 22nd International Conference on Human-Computer Interaction with Mobile Devices and Services (pp. 1–12), Oldenburg, Germany. https://doi.org/10.1145/3379503.3403530
  • Bruns, S., Valdez, A. C., Greven, C., Ziefle, M., & Schroeder, U. (2015). What should i read next? A personalized visual publication recommender system. In S. Yamamoto (Ed.), Human interface and the management of information. Information and knowledge in context (pp. 89–100). Springer International Publishing. https://doi.org/10.1007/978-3-319-20618-9_9
  • Bunt, A., Lount, M., & Lauzon, C. (2012). Are explanations always important? A study of deployed, low-cost intelligent interactive systems [Paper presentation]. Proceedings of the 2012 ACM International Conference on Intelligent User Interfaces (pp. 169–178), Lisbon, Portugal. https://doi.org/10.1145/2166966.2166996
  • Burke, M., Amento, B., & Isenhour, P. (2006). Error correction of voicemail transcripts in SCANMail [Paper presentation]. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 339–348), Montréal, Québec, Canada. https://doi.org/10.1145/1124772.1124823
  • Cai, C. J., Jongejan, J., & Holbrook, J. (2019). The effects of example-based explanations in a machine learning interface [Paper presentation]. Proceedings of the 24th International Conference on Intelligent User Interfaces (pp. 258–262), Marina del Rey, California. https://doi.org/10.1145/3301275.3302289
  • Cai, C. J., Reif, E., Hegde, N., Hipp, J., Kim, B., Smilkov, D., Wattenberg, M., Viegas, F., Corrado, G. S., Stumpe, M. C., & Terry, M. (2019). Human-centered tools for coping with imperfect algorithms during medical decision-making [Paper presentation]. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (pp. 1–14), Glasgow, Scotland, UK. https://doi.org/10.1145/3290605.3300234
  • Chatterjee, S., & Hadi, A. S. (1988). Sensitivity analysis in linear regression (1st ed.). Wiley.
  • Chen, J. Y. C., Barnes, M. J., Selkowitz, A. R., Stowers, K., Lakhmani, S. G., & Kasdaglis, N. (2016). Human-autonomy teaming and agent transparency [Paper presentation]. Companion Publication of the 21st International Conference on Intelligent User Interfaces (pp. 28–31), Sonoma, California, USA. https://doi.org/10.1145/2876456.2879479
  • Chen, J. Y. C., Lakhmani, S. G., Stowers, K., Selkowitz, A. R., Wright, J. L., & Barnes, M. (2018). Situation awareness-based agent transparency and human-autonomy teaming effectiveness. Theoretical Issues in Ergonomics Science, 19(3), 259–282. https://doi.org/10.1080/1463922X.2017.1315750
  • Chen, J., Procci, K., Boyce, M., Wright, J., Garcia, A., & Barnes, M. (2014). Situation awareness–based agent transparency. Army Research Laboratory.
  • Cheng, H.-F., Wang, R., Zhang, Z., O’Connell, F., Gray, T., Harper, F. M., & Zhu, H. (2019). Explaining decision-making algorithms through UI: Strategies to help non-expert stakeholders [Paper presentation]. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (pp. 1–12), Glasgow, Scotland, UK. https://doi.org/10.1145/3290605.3300789
  • Cho, M., Lee, S., & Lee, K.-P. (2019). Once a kind friend is now a thing: Understanding how conversational agents at home are forgotten [Paper presentation]. Proceedings of the 2019 on Designing Interactive Systems Conference (pp. 1557–1569), San Diego, CA, USA. https://doi.org/10.1145/3322276.3322332
  • Choi, E., Bahadori, M. T., Sun, J., Kulas, J., Schuetz, A., & Stewart, W. (2016). RETAIN: An Interpretable predictive model for healthcare using reverse time attention mechanism. NIPS.
  • Choi, J. (2020). Interpreting and explaining deep neural networks: A perspective on time series data [Paper presentation]. Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (pp. 3563–3564), Virtual Event, CA, USA. https://doi.org/10.1145/3394486.3406478
  • Chuang, L. L., Manstetten, D., Boll, S., & Baumann, M. (2017). 1st Workshop on understanding automation: Interfaces that facilitate user understanding of vehicle automation [Paper presentation]. Proceedings of the 9th International Conference on Automotive User Interfaces and Interactive Vehicular Applications Adjunct (pp. 1–8), Oldenburg, Germany. https://doi.org/10.1145/3131726.3131729
  • Clark, L., Pantidi, N., Cooney, O., Doyle, P., Garaialde, D., Edwards, J., Spillane, B., Gilmartin, E., Murad, C., Munteanu, C., Wade, V., & Cowan, B. R. (2019). What makes a good conversation? Challenges in designing truly conversational agents [Paper presentation]. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (pp. 1–12), Glasgow, Scotland, UK. https://doi.org/10.1145/3290605.3300705
  • Cramer, H., Evers, V., Ramlal, S., van Someren, M., Rutledge, L., Stash, N., Aroyo, L., & Wielinga, B. (2008). The effects of transparency on trust in and acceptance of a content-based art recommender. User Modeling and User-Adapted Interaction, 18(5), 455–496. https://doi.org/10.1007/s11257-008-9051-3
  • Craven, M. W. (1996). Extracting comprehensible models from trained neural networks [PhD]. The University of Wisconsin - Madison.
  • Defense Science Board (2016). Defense science board summer study on autonomy. Defense Science Board.
  • Department of Health (2000). No secrets: Guidance on protecting vulnerable adults in care. https://dementiapartnerships.com/resource/no-secrets-guidance-on-protecting-vulnerable-adults-in-care/
  • Donkers, T., Kleemann, T., & Ziegler, J. (2020). Explaining recommendations by means of aspect-based transparent memories [Paper presentation]. Proceedings of the 25th International Conference on Intelligent User Interfaces (pp. 166–176), Cagliari, Italy. https://doi.org/10.1145/3377325.3377520
  • Doshi-Velez, F., & Kim, B. (2017). A roadmap for a rigorous science of interpretability. /paper/A-Roadmap-for-a-Rigorous-Science-of-Doshi-Velez-Kim/7f2d2d14f0c07e4b3d9d636d904afa7087673b62
  • Du, M., Liu, N., & Hu, X. (2019). Techniques for interpretable machine learning. Communications of the ACM, 63(1), 68–77. https://doi.org/10.1145/3359786
  • Dzindolet, M. T., Peterson, S. A., Pomranky, R. A., Pierce, L. G., & Beck, H. P. (2003). The role of trust in automation reliance. International Journal of Human-Computer Studies, 58(6), 697–718. https://doi.org/10.1016/S1071-5819(03)00038-7
  • Edwards, J., Perrone, A., & Doyle, P. R. (2020). Transparency in language generation: Levels of automation [Paper presentation]. Proceedings of the 2nd Conference on Conversational User Interfaces (pp. 1–3), Bilbao, Spain. https://doi.org/10.1145/3405755.3406136
  • Edwards, L., & Veale, M. (2018). Enslaving the algorithm: From a ‘right to an explanation’ to a ‘right to better decisions?’ IEEE Security & Privacy, 16(3), 46–54. https://doi.org/10.1109/MSP.2018.2701152
  • Eiband, M., Schneider, H., Bilandzic, M., Fazekas-Con, J., Haug, M., & Hussmann, H. (2018). Bringing transparency design into practice [Paper presentation]. 23rd International Conference on Intelligent User Interfaces (pp. 211–223), Tokyo, Japan. https://doi.org/10.1145/3172944.3172961
  • Endsley, M. R. (1995). Toward a theory of situation awareness in dynamic systems. Human Factors: The Journal of the Human Factors and Ergonomics Society, 37(1), 32–64. https://doi.org/10.1518/001872095779049543
  • Engineering and Physical Sciences Research Council. (2010). Principles of robotics. https://epsrc.ukri.org/research/ourportfolio/themes/engineering/activities/principlesofrobotics/
  • Eslami, M. (2017). Understanding and designing around users’ interaction with hidden algorithms in sociotechnical systems [Paper presentation]. Companion of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing (pp. 57–60), Portland, Oregon, USA. https://doi.org/10.1145/3022198.3024947
  • Eslami, M., Vaccaro, K., Lee, M. K., Elazari Bar On, A., Gilbert, E., & Karahalios, K. (2019). User attitudes towards algorithmic opacity and transparency in online reviewing platforms [Paper presentation]. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (pp. 1–14), Glasgow, Scotland, UK. https://doi.org/10.1145/3290605.3300724
  • Felzmann, H., Villaronga, E. F., Lutz, C., & Tamò-Larrieux, A. (2019). Transparency you can trust: Transparency requirements for artificial intelligence between legal norms and contextual concerns. Big Data & Society, 6(1), 205395171986054. https://doi.org/10.1177/2053951719860542
  • Feng, S., & Boyd-Graber, J. (2019). What can AI do for me? Evaluating machine learning interpretations in cooperative play [Paper presentation]. Proceedings of the 24th International Conference on Intelligent User Interfaces (pp. 229–239), Marina del Rey, California. https://doi.org/10.1145/3301275.3302265
  • Fortuny, E. J., & de Martens, D. (2015). Active learning-based pedagogical rule extraction. IEEE Transactions on Neural Networks and Learning Systems, 26(11), 2664–2677. https://doi.org/10.1109/TNNLS.2015.2389037
  • Frosst, N., & Hinton, G. E. (2017). Distilling a neural network into a soft decision tree. ArXiv, arXiv:1711.09784 [cs.LG].
  • General Data Protection Regulation (GDPR) Compliance Guidelines (2016). GDPR.Eu. https://gdpr.eu/
  • Gentner, D., & Stevens, A. L. (2014). Mental models. Psychology Press.
  • Gevrey, M., Dimopoulos, I., & Lek, S. (2003). Review and comparison of methods to study the contribution of variables in artificial neural network models. Ecological Modelling, 160(3), 249–264. https://doi.org/10.1016/S0304-3800(02)00257-0
  • Grgic-Hlaca, N., Zafar, M., Gummadi, K., & Weller, A. (2018). Beyond distributive fairness in algorithmic decision making: Feature selection for procedurally fair learning. AAAI-18, 32(1), 51–60. https://doi.org/10.1609/aaai.v32i1.11296
  • Hancock, P. A. (2017). Imposing limits on autonomous systems. Ergonomics, 60(2), 284–291. https://doi.org/10.1080/00140139.2016.1190035
  • Harbers, M., van den Bosch, K., & Meyer, J. J. C. (2011). A theoretical framework for explaining agent behavior [Paper presentation]. 1st International Conference on Simulation and Modeling Methodologies, Technologies and Applications, SIMULTECH 2011, 29 July 2011 through 31 July 2011, Noordwijkerhout, 228, Noordwijkerhout, The Netherlands.
  • He, C., Parra, D., & Verbert, K. (2016). Interactive recommender systems: A survey of the state of the art and future research challenges and opportunities. Expert Systems with Applications, 56(2016), 9–27. https://doi.org/10.1016/j.eswa.2016.02.013
  • Hoffman, R. R., Mueller, S. T., Klein, G., & Litman, J. (2018). Metrics for explainable AI: Challenges and prospects. ArXiv E-Prints, 1812, arXiv:1812.04608.
  • Hohman, F., Park, H., Robinson, C., & Chau, D. H. P. (2020). Summit: Scaling deep learning interpretability by visualizing activation and attribution summarizations. IEEE Transactions on Visualization and Computer Graphics, 26(1), 1096–1106. https://doi.org/10.1109/TVCG.2019.2934659
  • Holstein, K., Wortman Vaughan, J., Daumé, H., Dudik, M., & Wallach, H. (2019). Improving fairness in machine learning systems: What do industry practitioners need? [Paper presentation]. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (pp. 1–16), Glasgow, Scotland, UK. https://doi.org/10.1145/3290605.3300830
  • Hopkin, V. D. (1995). Human factors in air traffic control (1st ed.). CRC Press.
  • Jian, J.-Y., Bisantz, A. M., & Drury, C. G. (2000). Foundations for an empirically determined scale of trust in automated systems. International Journal of Cognitive Ergonomics, 4(1), 53–71. https://doi.org/10.1207/S15327566IJCE0401_04
  • Jin, Y., Seipp, K., Duval, E., & Verbert, K. (2016). Go with the flow: Effects of transparency and user control on targeted advertising using flow charts [Paper presentation]. Proceedings of the International Working Conference on Advanced Visual Interfaces (pp. 68–75), Bari, Italy. https://doi.org/10.1145/2909132.2909269
  • Johnson, H., & Johnson, P. (1993). Explanation facilities and interactive systems [Paper presentation]. Proceedings of the 1st International Conference on Intelligent User Interfaces (pp. 159–166), Orlando, Florida, USA. https://doi.org/10.1145/169891.169951
  • Kallinen, K. (2017). The effects of transparency and task type on trust, stress, quality of work, and co-worker preference during human-autonomous system collaborative work [Paper presentation]. Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction (pp. 153–154), Vienna, Austria. https://doi.org/10.1145/3029798.3038386
  • Kangasrääsiö, A., Glowacka, D., & Kaski, S. (2015). Improving controllability and predictability of interactive recommendation interfaces for exploratory search [Paper presentation]. Proceedings of the 20th International Conference on Intelligent User Interfaces (pp. 247–251), Atlanta, Georgia, USA. https://doi.org/10.1145/2678025.2701371
  • Kaptein, F., Broekens, J., Hindriks, K., & Neerincx, M. (2019). Evaluating cognitive and affective intelligent agent explanations in a long-term health-support application for children with type 1 diabetes [Paper presentation]. 2019 8th International Conference on Affective Computing and Intelligent Interaction (ACII) (pp. 1–7), Cambridge, UK. https://doi.org/10.1109/ACII.2019.8925526
  • Karahasanovic, A., Brandtzaeg, P. B., Vanattenhoven, J., Lievens, B., Nielsen, K. T., & Pierson, J. (2009). Ensuring trust, privacy, and etiquette in web 2.0 applications. Computer Magazine. 42(6), 42–49. https://doi.org/10.1109/MC.2009.186
  • Kaufman, L., & Rousseeuw, P. J. (1987). Clustering by means of medoids. In Y. Dodge (Ed.), Statistical data analysis based on the L1-norm and related methods. (pp. 405–416).
  • Kim, B. H., Koh, S., Huh, S., Jo, S., & Choi, S. (2020). Improved explanatory efficacy on human affect and workload through interactive process in artificial intelligence. IEEE Access, 8, 189013–189024. https://doi.org/10.1109/ACCESS.2020.3032056
  • Kim, B., Khanna, R., & Koyejo, O. (2016). Examples are not enough, learn to criticize! Criticism for interpretability [Paper presentation]. Proceedings of the 30th International Conference on Neural Information Processing Systems (pp. 2288–2296), Barcelona, Spain.
  • Kim, B., Wattenberg, M., Gilmer, J., Cai, C., Wexler, J., Viegas, F., & Sayres, R. (2018). Interpretability beyond feature attribution: Quantitative Testing with Concept Activation Vectors (TCAV). International Conference on Machine Learning (pp. 2668–2677I), Stockholm, Sweden. http://proceedings.mlr.press/v80/kim18d.html
  • Kizilcec, R. F. (2016). How much information? Effects of transparency on trust in an algorithmic interface [Paper presentation]. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (pp. 2390–2395), San Jose, California, USA. https://doi.org/10.1145/2858036.2858402
  • Koelle, M., Ananthanarayan, S., & Boll, S. (2020). Social acceptability in HCI: A survey of methods, measures, and design strategies [Paper presentation]. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1–19), Honolulu, HI, USA. https://doi.org/10.1145/3313831.3376162
  • Krauss, C., Merceron, A., & Arbanowski, S. (2019). The timeliness deviation: A novel approach to evaluate educational recommender systems for closed-courses [Paper presentation]. Proceedings of the 9th International Conference on Learning Analytics & Knowledge (pp. 195–204), Tempe, AZ, USA. https://doi.org/10.1145/3303772.3303774
  • Krishnan, R., Sivakumar, G., & Bhattacharya, P. (1999). Extracting decision trees from trained neural networks. Pattern Recognition, 32(12), 1999–2009. https://doi.org/10.1016/S0031-3203(98)00181-2
  • Kuner, C., Svantesson, D. J. B., Cate, F. H., Lynskey, O., & Millard, C. (2017). Machine learning with personal data: Is data protection law smart enough to meet the challenge? International Data Privacy Law, 7(1), 1–2. https://doi.org/10.1093/idpl/ipx003
  • Lagström, T., & Malmsten Lundgren, V. (2016). AVIP-Autonomous vehicles’ interaction with pedestrians-An investigation of pedestrian-driver communication and development of a vehicle external interface [Master’s thesis].
  • Lakkaraju, H., Bach, S. H., & Leskovec, J. (2016). Interpretable decision sets: A joint framework for description and prediction [Paper presentation]. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 1675–1684), San Francisco, California, USA. https://doi.org/10.1145/2939672.2939874
  • Lane, H. C., Core, M. G., van Lent, M., Solomon, S., Gomboc, D. (2005). Explainable artificial intelligence for training and tutoring [Paper presentation]. Proceedings of the 2005 Conference on Artificial Intelligence in Education: Supporting Learning through Intelligent and Socially Informed Technology (pp. 762–764), Amsterdam, The Netherlands.
  • Large, D. R., Clark, L., Burnett, G., Harrington, K., Luton, J., Thomas, P., & Bennett, P. (2019). ‘It’s small talk, Jim, but not as we know it’: Engendering trust through human-agent conversation in an autonomous, self-driving car [Paper presentation]. Proceedings of the 1st International Conference on Conversational User Interfaces (pp. 1–7), Dublin, Ireland. https://doi.org/10.1145/3342775.3342789
  • Lauff, C., Menold, J., & Wood, K. L. (2019). Prototyping canvas: Design tool for planning purposeful prototypes. Proceedings of the Design Society: International Conference on Engineering Design, 1(1), 1563–1572. https://doi.org/10.1017/dsi.2019.162
  • Lee, J. D., & See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46(1), 50–80. https://doi.org/10.1518/hfes.46.1.50_30392
  • Lee, M. K., & Baykal, S. (2017). Algorithmic mediation in group decisions: Fairness perceptions of algorithmically mediated vs. discussion-based social division [Paper presentation]. Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing (pp. 1035–1048), Portland, Oregon, USA. https://doi.org/10.1145/2998181.2998230
  • Lee, M. K., Jain, A., Cha, H. J., Ojha, S., & Kusbit, D. (2019). Procedural justice in algorithmic fairness: Leveraging transparency and outcome control for fair algorithmic mediation. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), 1–182. https://doi.org/10.1145/3359284
  • Lee, S., Cho, M., & Lee, S. (2020). What if conversational agents became invisible? Comparing users’ mental models according to physical entity of AI speaker. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 4(3), 1–24. https://doi.org/10.1145/3411840
  • Lee, S., Kim, S., & Lee, S. (2019). ‘What does your Agent look like?’: A drawing study to understand users’ perceived persona of conversational agent [Paper presentation]. Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems (pp. 1–6), Glasgow, Scotland, UK. https://doi.org/10.1145/3290607.3312796
  • Lettl, B., & Schulte, A. (2013). Self-explanation capability for cognitive agents on-board of UCAVs to improve cooperation in a manned-unmanned fighter team [Paper presentation]. AIAA Infotech@Aerospace (I@A) Conference (Vol. 1–0). American Institute of Aeronautics and Astronautics, Boston, MA. https://doi.org/10.2514/6.2013-4898
  • Lewis, J. W. (1983). An effective graphics user interface for rules and inference mechanisms [Paper presentation]. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 139–143), Boston, Massachusetts, USA. https://doi.org/10.1145/800045.801598
  • Li, T. J.-J., Chen, J., Xia, H., Mitchell, T. M., & Myers, B. A. (2020). Multi-modal repairs of conversational breakdowns in task-oriented dialogs [Paper presentation]. Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology (pp. 1094–1107), Virtual Event, USA. https://doi.org/10.1145/3379337.3415820
  • Lim, B. Y., & Dey, A. K. (2009). Assessing demand for intelligibility in context-aware applications [Paper presentation]. Proceedings of the 11th International Conference on Ubiquitous Computing (pp. 195–204), Orlando, Florida, USA. https://doi.org/10.1145/1620545.1620576
  • Lim, B. Y., & Dey, A. K. (2011). Design of an intelligible mobile context-aware application [Paper presentation]. Proceedings of the 13th International Conference on Human Computer Interaction with Mobile Devices and Services (pp. 157–166), Stockholm, Sweden. https://doi.org/10.1145/2037373.2037399
  • Lim, B. Y., Dey, A. K., & Avrahami, D. (2009). Why and why not explanations improve the intelligibility of context-aware intelligent systems [Paper presentation]. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 2119–2128), Boston, MA, USA. https://doi.org/10.1145/1518701.1519023
  • Lim, B., Sarkar, A., Smith-Renner, A., & Stumpf, S. (2019 ). ExSS: Explainable smart systems 2019 [Paper presentation]. Proceedings of the 24th International Conference on Intelligent User Interfaces: Companion (pp. 125–126), Marina del Rey, California. https://doi.org/10.1145/3308557.3313112
  • Lim, B., Yang, Q., Abdul, A., & Wang, D. (2019). Why these explanations? Selecting intelligibility types for explanation goals. IUI Workshops, Los Angeles, CA, USA.
  • Lindberg, S., Wärnestål, P., Nygren, J., & Svedberg, P. (2014 ). Designing digital peer support for children: Design patterns for social interaction [Paper presentation]. Proceedings of the 2014 Conference on Interaction Design and Children (pp. 47–56), Aarhus, Denmark. https://doi.org/10.1145/2593968.2593972
  • Lipton, Z. C. (2018). The mythos of model interpretability: In machine learning, the concept of interpretability is both important and slippery. Queue, 16(3), 31–57. https://doi.org/10.1145/3236386.3241340
  • Lombrozo, T. (2009). Explanation and categorization: How “why?” informs “what? Cognition, 110(2), 248–253. https://doi.org/10.1016/j.cognition.2008.10.007
  • Lou, Y., Caruana, R., & Gehrke, J. (2012). Intelligible models for classification and regression [Paper presentation]. Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 150–158), Beijing, China. https://doi.org/10.1145/2339530.2339556
  • Luger, E., & Sellen, A. (2016). ‘ Like having a really bad PA’: The Gulf between user expectation and experience of conversational agents [Paper presentation]. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (pp. 5286–5297), San Jose, California, USA. https://doi.org/10.1145/2858036.2858288
  • Lyons, J. (2013). Being transparent about transparency: A Model for human-robot interaction. AAAI Spring Symposium: Trust and Autonomous Systems, Menlo Park, CA.
  • Maaten, L., & van der Hinton, G. (2008). Visualizing data using t-SNE. Journal of Machine Learning Research, 9(86), 2579–2605. https://www.jmlr.org/papers/v9/vandermaaten08a.html
  • Marcinkowski, F., Kieslich, K., Starke, C., & Lünich, M. (2020). Implications of AI (un-)fairness in higher education admissions: The effects of perceived AI (un-)fairness on exit, voice and organizational reputation [Paper presentation]. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 122–130), Barcelona, Spain. https://doi.org/10.1145/3351095.3372867
  • Markie, P. (2017). Rationalism vs. Empiricism. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Fall 2017). Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/fall2017/entries/rationalism-empiricism/
  • Mars, N. J. I. (1995). Towards very large knowledge bases (1st ed.). IOS Press.
  • Maurya, A. (2012). Running lean: Iterate from plan A to a plan that works (2nd ed.). O’Reilly Media.
  • Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. (2021). A survey on bias and fairness in machine learning. ACM Computing Surveys, 54(6), 115.
  • Merat, N., Louw, T., Madigan, R., Wilbrink, M., & Schieben, A. (2018). What externally presented information do VRUs require when interacting with fully Automated Road Transport Systems in shared space? Accident; Analysis and Prevention, 118, 244–252. https://doi.org/10.1016/j.aap.2018.03.018
  • Mercado, J. E., Rupp, M. A., Chen, J. Y. C., Barnes, M. J., Barber, D., & Procci, K. (2016). Intelligent agent transparency in human–agent teaming for multi-UxV management. Human Factors, 58(3), 401–415. https://doi.org/10.1177/0018720815621206
  • Miglani, A., Diels, C., & Terken, J. (2016). Compatibility between trust and non-driving related tasks in UI design for highly and fully automated driving [Paper presentation]. Adjunct Proceedings of the 8th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (pp. 75–80), Ann Arbor, MI, USA. https://doi.org/10.1145/3004323.3004331
  • Mikhail, B., Aleksei, M., & Ekaterina, S. (2018). On the way to legal framework for AI in public sector [Paper presentation]. Proceedings of the 11th International Conference on Theory and Practice of Electronic Governance (pp. 682–684), Galway, Ireland. https://doi.org/10.1145/3209415.3209448
  • Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1–38. https://doi.org/10.1016/j.artint.2018.07.007
  • Milton, S., Keen, C., & Kurnia, S. (2010). Understanding the benefits of ontology use for Australian industry: A conceptual study. ACIS 2010 Proceedings, Brisbane, Australia. https://aisel.aisnet.org/acis2010/64
  • Mohseni, S., Zarei, N., & Ragan, E. D. (2018). A multidisciplinary survey and framework for design and evaluation of explainable AI systems. https://doi.org/10.48550/arXiv.1811.11839
  • Montero, C. S., Alexander, J., Marshall, M. T., & Subramanian, S. (2010). Would you do that? Understanding social acceptance of gestural interfaces [Paper presentation]. Proceedings of the 12th International Conference on Human Computer Interaction with Mobile Devices and Services (pp. 275–278), Lisbon, Portugal. https://doi.org/10.1145/1851600.1851647
  • Morland, D. V. (1983). Human factors guidelines for terminal interface design. Communications of the ACM, 26(7), 484–494. https://doi.org/10.1145/358150.358156
  • Murmann, P. (2018). Usable transparency for enhancing privacy in mobile health apps [Paper presentation]. Proceedings of the 20th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct (pp. 440–442), Barcelona, Spain. https://doi.org/10.1145/3236112.3236184
  • Nowacka, D., & Kirk, D. (2014). Tangible autonomous interfaces (TAIs): Exploring autonomous behaviours in TUIs [Paper presentation]. Proceedings of the 8th International Conference on Tangible, Embedded and Embodied Interaction (pp. 1–8), Munich, Germany. https://doi.org/10.1145/2540930.2540942
  • Olah, C., Satyanarayan, A., Johnson, I., Carter, S., Schubert, L., Ye, K., & Mordvintsev, A. (2018). The Building Blocks of Interpretability. Distill, 3(3). https://distill.pub/2018/building-blocks/ https://doi.org/10.23915/distill.00010
  • Osterwalder, A. (2004). The business model ontology a proposition in a design science approach. Undefined. /paper/The-business-model-ontology-a-proposition-in-a-Osterwalder/87bbedf0efbf010515ed54086bdf31c7cb33e4a3
  • Osterwalder, A., & Pigneur, Y. (2010). Business model generation: A handbook for visionaries, game changers, and challengers (1st ed.). John Wiley and Sons.
  • Parra, D., Brusilovsky, P., & Trattner, C. (2014). See what you want to see: Visual user-driven approach for hybrid recommendation [Paper presentation]. Proceedings of the 19th International Conference on Intelligent User Interfaces (pp. 235–240), Haifa, Israel. https://doi.org/10.1145/2557500.2557542
  • Pöppel, E. (1994). Temporal mechanisms in perception. International Review of Neurobiology, 37, 185–202. https://doi.org/10.1016/s0074-7742(08)60246-9
  • Porcheron, M., Fischer, J. E., Reeves, S., & Sharples, S. (2018). Voice interfaces in everyday life [Paper presentation]. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (pp. 1–12), Montreal, QC, Canada. https://doi.org/10.1145/3173574.3174214
  • Prakken, H. (2017). On making autonomous vehicles respect traffic law: A case study for Dutch law [Paper presentation]. Proceedings of the 16th International Conference on Artificial Intelligence and Law (pp. 241–244), London, UK. https://doi.org/10.1145/3086512.3086542
  • Pu, P., & Chen, L. (2006). Trust building with explanation interfaces [Paper presentation]. Proceedings of the 11th International Conference on Intelligent User Interfaces (pp. 93–100), Sydney, Australia. https://doi.org/10.1145/1111449.1111475
  • Raweni, A. M., & Majstorović, V. D. (2017). ISO CERTIFICATIONS DIFFUSION IN EUROPEAN COUNTRIES 2007–2014 AND FORECASTINNG FOR 2022-STARE OF THE ART. International Journal “Advanced Quality”, 44(1), 53–58. https://doi.org/10.25137/IJAQ.n1.v44.y2016.p53-58
  • Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). Nothing else matters: Model-agnostic explanations by identifying prediction invariance. ArXiv, arXiv:1611.05817 [stat.ML].
  • Ribeiro, M., Singh, S., & Guestrin, C. (2016). “ Why should i trust you?”: Explaining the predictions of any classifier [Paper presentation]. Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations (pp. 97–101), San Diego, California. https://doi.org/10.18653/v1/N16-3020
  • Saltelli, A., Tarantola, S., Campolongo, F., & Ratto, M. (2004). Sensitivity analysis in practice: A guide to assessing scientific models. Halsted Press.
  • Schaefer, K. E., Chen, J. Y. C., Szalma, J. L., & Hancock, P. A. (2016). A meta-analysis of factors influencing the development of trust in automation: Implications for understanding autonomy in future systems. Human Factors, 58(3), 377–400. https://doi.org/10.1177/0018720816634228
  • Schaffer, J., O’Donovan, J., Michaelis, J., Raglin, A., & Höllerer, T. (2019). I can do better than your AI: Expertise and explanations [Paper presentation]. Proceedings of the 24th International Conference on Intelligent User Interfaces (pp. 240–251), Marina del Rey, California. https://doi.org/10.1145/3301275.3302308
  • Schlesinger, A., O’Hara, K. P., & Taylor, A. S. (2018). Let’s talk about race: Identity, chatbots, and AI [Paper presentation]. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (pp. 1–14), Montreal, QC, Canada. https://doi.org/10.1145/3173574.3173889
  • Selkowitz, A. R., Lakhmani, S. G., Larios, C. N., & Chen, J. Y. C. (2016). Agent transparency and the autonomous squad member. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 60(1), 1319–1323. https://doi.org/10.1177/1541931213601305
  • Selvaraju, R. R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., & Batra, D. (2017). Grad-CAM: Visual explanations from deep networks via gradient-based localization [Paper presentation]. 2017 IEEE International Conference on Computer Vision (ICCV) (pp. 618–626), Venice, Italy. https://doi.org/10.1109/ICCV.2017.74
  • Sendak, M., Elish, M. C., Gao, M., Futoma, J., Ratliff, W., Nichols, M., Bedoya, A., Balu, S., & O’Brien, C. (2020). ‘ The human body is a black box’: Supporting clinical decision-making with deep learning [Paper presentation]. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 99–109), Barcelona, Spain. https://doi.org/10.1145/3351095.3372827
  • Shishehchi, S., Banihashem, S. Y., & Zin, N. A. M. (2010). A proposed semantic recommendation system for e-learning: A rule and ontology based e-learning recommendation system. Proceedings of the 2010 International Symposium on Information Technology (pp. 1–5). https://doi.org/10.1109/ITSIM.2010.5561329
  • Shneiderman, B. (2020). Bridging the gap between ethics and practice: Guidelines for reliable, safe, and trustworthy human-centered AI systems. ACM Transactions on Interactive Intelligent Systems, 10(4), 1–31. https://doi.org/10.1145/3419764
  • Simonyan, K., Vedaldi, A., & Zisserman, A. (2014). Deep inside convolutional networks: visualising image classification models and saliency maps. ICLR.
  • Springer, A., & Whittaker, S. (2019). Progressive disclosure: Empirically motivated approaches to designing effective transparency [Paper presentation]. Proceedings of the 24th International Conference on Intelligent User Interfaces (pp. 107–120), Marina del Rey, California. https://doi.org/10.1145/3301275.3302322
  • Springer, A., & Whittaker, S. (2020). Progressive disclosure: When, why, and how do users want algorithmic transparency information? ACM Transactions on Interactive Intelligent Systems, 10(4), 1–32. https://doi.org/10.1145/3374218
  • Tam, N. T., Huy, N. T., Thoa, L. T. B., Long, N. P., Trang, N. T. H., Hirayama, K., & Karbwang, J. (2015). Participants’ understanding of informed consent in clinical trials over three decades: Systematic review and meta-analysis. Bulletin of the World Health Organization, 93(3), 186–198H. https://doi.org/10.2471/BLT.14.141390
  • Tan, S., Caruana, R., Hooker, G., Koch, P., & Gordo, A. (2018). Learning global additive explanations for neural nets using model distillation.
  • Tang, A., Finke, M., Blackstock, M., Leung, R., Deutscher, M., & Lea, R. (2008). Designing for bystanders: Reflections on building a public digital forum [Paper presentation]. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 879–882), Florence, Italy. https://doi.org/10.1145/1357054.1357193
  • Thelisson, E., Sharma, K., Salam, H., & Dignum, V. (2018). The general data protection regulation: An opportunity for the HCI community? Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems (pp. 1–8), Montreal, QC, Canada. https://doi.org/10.1145/3170427.3170632
  • Theodorou, A., Wortham, R. H., & Bryson, J. J. (2017). Designing and implementing transparency for real time inspection of autonomous robots. Connection Science, 29(3), 230–241. https://doi.org/10.1080/09540091.2017.1310182
  • Topol, E. J. (2019). High-performance medicine: The convergence of human and artificial intelligence. Nature Medicine, 25(1), 44–56. https://doi.org/10.1038/s41591-018-0300-7
  • Tullio, J., Dey, A. K., Chalecki, J., & Fogarty, J. (2007). How it works: A field study of non-technical users interacting with an intelligent system [Paper presentation]. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 31–40), San Jose, California, USA. https://doi.org/10.1145/1240624.1240630
  • Turek, M. (2016). Explainable artificial intelligence. https://www.darpa.mil/program/explainable-artificial-intelligence
  • Uschold, M., & Gruninger, M. (1996). Ontologies: Principles, methods and applications. The Knowledge Engineering Review, 11(2), 93–136. https://doi.org/10.1017/S0269888900007797
  • van Capelleveen, G., Amrit, C., Yazan, D. M., & Zijm, H. (2019). The recommender canvas: A model for developing and documenting recommender system design. Expert Systems with Applications, 129, 97–117. https://doi.org/10.1016/j.eswa.2019.04.001
  • Van Mechelen, M., Baykal, G. E., Dindler, C., Eriksson, E., & Iversen, O. S. (2020). 18 Years of ethics in child-computer interaction research: A systematic literature review [Paper presentation]. Proceedings of the Interaction Design and Children Conference (pp. 161–183), London, UK. https://doi.org/10.1145/3392063.3394407
  • Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Systems (pp. 6000–6010). arXiv:1706.03762
  • Veale, M., Van Kleek, M., & Binns, R. (2018). Fairness and accountability design needs for algorithmic support in high-stakes public sector decision-making. In Proceedings of the 2018 CHI conference on human factors in computing systems (pp. 1–14). Association for Computing Machinery. https://doi.org/10.1145/3173574.3174014
  • Verbert, K., Parra, D., Brusilovsky, P., & Duval, E. (2013). Visualizing recommendations to support exploration, transparency and controllability [Paper presentation]. Proceedings of the 2013 International Conference on Intelligent User Interfaces (pp. 351–362), Santa Monica, California, USA. https://doi.org/10.1145/2449396.2449442
  • Vermeulen, J., Vanderhulst, G., Luyten, K., & Coninx, K. (2010). PervasiveCrystal: Asking and answering why and why not questions about pervasive computing applications [Paper presentation]. 2010 Sixth International Conference on Intelligent Environments (pp. 271–276), Kuala Lumpur, Malaysia. https://doi.org/10.1109/IE.2010.56
  • Vines, J., McNaney, R., Clarke, R., Lindsay, S., McCarthy, J., Howard, S., Romero, M., & Wallace, J. (2013). Designing for- and with- vulnerable people [Paper presentation]. CHI ’13 Extended Abstracts on Human Factors in Computing Systems (pp. 3231–3234), Paris, France. https://doi.org/10.1145/2468356.2479654
  • Vlachos, M., & Svonava, D. (2012). Graph embeddings for movie visualization and recommendation. /paper/Graph-Embeddings-for-Movie-Visualization-and-Vlachos-Svonava/1d8273a4276682067ac6de0bffb4609721e63bed
  • Wachter, S., Mittelstadt, B., & Russell, C. (2018). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. ArXiv:1711.00399 [Cs]. http://arxiv.org/abs/1711.00399
  • Wang, D., Yang, Q., Abdul, A., & Lim, B. Y. (2019). Designing Theory-driven user-centric explainable AI [Paper presentation]. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (pp. 1–15), Glasgow, Scotland, UK. https://doi.org/10.1145/3290605.3300831
  • Wang, L., Jamieson, G. A., & Hollands, J. G. (2009). Trust and reliance on an automated combat identification system. Human Factors, 51(3), 281–291. https://doi.org/10.1177/0018720809338842
  • Wang, N., Pynadath, D. V., & Hill, S. G. (2016). The impact of POMDP-generated explanations on trust and performance in human-robot teams [Paper presentation]. Proceedings of the 2016 International Conference on Autonomous Agents & Multiagent Systems (pp. 997–1005), Singapore, Singapore.
  • Wang, W., & Benbasat, I. (2007). Recommendation agents for electronic commerce: Effects of explanation facilities on trusting beliefs. Journal of Management Information Systems, 23(4), 217–246. https://doi.org/10.2753/MIS0742-1222230410
  • Wang, X., Liu, S., Liu, J., Chen, J., Zhu, J., & Guo, B. (2016). TopicPanorama: A full picture of relevant topics. IEEE Transactions on Visualization and Computer Graphics, 22(12), 2508–2521. https://doi.org/10.1109/TVCG.2016.2515592
  • Weitz, K., Schiller, D., Schlagowski, R., Huber, T., & André, E. (2021). “Let me explain!”: Exploring the potential of virtual agents in explainable AI interaction design. Journal on Multimodal User Interfaces, 15, 87–98. https://doi.org/10.1007/s12193-020-00332-0
  • Weller, A. (2019). Transparency: Motivations and challenges. ArXiv:1708.01870 [Cs]. http://arxiv.org/abs/1708.01870
  • West, S. M. (2019). Data capitalism: Redefining the logics of surveillance and privacy. Business & Society, 58(1), 20–41. https://doi.org/10.1177/0007650317718185
  • Wood, S. N. (2006). Generalized additive models: An introduction with R (1st ed.). Chapman and Hall/CRC.
  • Woodruff, A., Fox, S. E., Rousso-Schindler, S., & Warshaw, J. (2018). A qualitative exploration of perceptions of algorithmic fairness [Paper presentation]. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (pp. 1–14), Montreal, QC, Canada. https://doi.org/10.1145/3173574.3174230
  • Wouters, N., Kelly, R., Velloso, E., Wolf, K., Ferdous, H. S., Newn, J., Joukhadar, Z., & Vetere, F. (2019). Biometric mirror: Exploring ethical opinions towards facial analysis and automated decision-making [Paper presentation]. Proceedings of the 2019 on Designing Interactive Systems Conference (pp. 447–461), San Diego, CA, USA. https://doi.org/10.1145/3322276.3322304
  • Wright, J. L., Chen, J. Y. C., & Lakhmani, S. G. (2020). Agent transparency and reliability in human–robot interaction: The influence on user confidence and perceived reliability. IEEE Transactions on Human-Machine Systems, 50(3), 254–263. https://doi.org/10.1109/THMS.2019.2925717
  • Wright, J. L., Chen, J. Y. C., Barnes, M. J., & Boyce, M. W. (2015). The effects of information level on human-agent interaction for route planning. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 59(1), 811–815. https://doi.org/10.1177/1541931215591247
  • Wu, B., Xu, C., Dai, X., Wan, A., Zhang, P., Yan, Z., Tomizuka, M., Gonzalez, J., Keutzer, K., & Vajda, P. (2020). Visual transformers: Token-based image representation and processing for computer vision. ArXiv:2006.03677 [Cs, Eess]. http://arxiv.org/abs/2006.03677
  • Wu, H. C., Luk, R. W. P., Wong, K. F., & Kwok, K. L. (2008). Interpreting TF-IDF term weights as making relevance decisions. ACM Transactions on Information Systems, 26(3), 1–37. https://doi.org/10.1145/1361684.1361686
  • Yang, C., Rangarajan, A., & Ranka, S. (2018). Global model interpretation via recursive partitioning [Paper presentation]. 2018 IEEE 20th International Conference on High Performance Computing and Communications; IEEE 16th International Conference on Smart City; IEEE 4th International Conference on Data Science and Systems (HPCC/SmartCity/DSS) (pp. 1563–1570), Exeter, UK. https://doi.org/10.1109/HPCC/SmartCity/DSS.2018.00256
  • Yang, Q., Banovic, N., & Zimmerman, J. (2018). Mapping machine learning advances from hci research to reveal starting places for design innovation [Paper presentation]. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems - CHI, 18 (pp. 1–11), Montreal, QC, Canada. https://doi.org/10.1145/3173574.3173704
  • Yang, Q., Steinfeld, A., Rosé, C., & Zimmerman, J. (2020). Re-examining whether, why, and how human-ai interaction is uniquely difficult to design [Paper presentation]. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1–13), Honolulu, HI, USA. https://doi.org/10.1145/3313831.3376301
  • Yao, Y., Basdeo, J. R., Mcdonough, O. R., & Wang, Y. (2019). Privacy perceptions and designs of bystanders in smart homes. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), 1–24. https://doi.org/10.1145/3359161
  • Yu, Z., Nakamura, Y., Jang, S., Kajita, S., & Mase, K. (2007). Ontology-based semantic recommendation for context-aware E-learning. In J. Indulska, J. Ma, L. T. Yang, T. Ungerer, & J. Cao (Eds.), Ubiquitous intelligence and computing (pp. 898–907). Springer. https://doi.org/10.1007/978-3-540-73549-6_88
  • Zeiler, M. D., & Fergus, R. (2014). Visualizing and understanding convolutional networks. In D. Fleet, T. Pajdla, B. Schiele, & T. Tuytelaars (Eds.), Computer vision – ECCV 2014 (pp. 818–833). Springer International Publishing. https://doi.org/10.1007/978-3-319-10590-1_53
  • Zhang, Q., & Zhu, S. (2018). Visual interpretability for deep learning: A survey. Frontiers of Information Technology & Electronic Engineering, 19(1), 27–39. https://doi.org/10.1631/FITEE.1700808
  • Zhang, Y., Bellamy, R., & Varshney, K. (2020). Joint optimization of AI fairness and utility: A human-centered approach [Paper presentation]. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (pp. 400–406), New York, NY, USA. https://doi.org/10.1145/3375627.3375862
  • Zhao, E., & Sukkerd, R. (2019). Interactive explanation for planning-based systems: WIP abstract [Paper presentation]. Proceedings of the 10th ACM/IEEE International Conference on Cyber-Physical Systems (pp. 322–323), Montreal, Quebec, Canada. https://doi.org/10.1145/3302509.3313322
  • Zhou, B., Khosla, A., Lapedriza, À., Oliva, A., & Torralba, A. (2015). Object detectors emerge in deep scene CNNs. ICLR.
  • Zhou, J., & Chen, F. (2015). Making machine learning useable. International Journal of Intelligent Systems Technologies and Applications, 14(2), 91–109. https://doi.org/10.1504/IJISTA.2015.074069

Reprints and Corporate Permissions

Please note: Selecting permissions does not provide access to the full text of the article, please see our help page How do I view content?

To request a reprint or corporate permissions for this article, please click on the relevant link below:

Academic Permissions

Please note: Selecting permissions does not provide access to the full text of the article, please see our help page How do I view content?

Obtain permissions instantly via Rightslink by clicking on the button below:

If you are unable to obtain permissions via Rightslink, please complete and submit this Permissions form. For more information, please visit our Permissions help page.