
Communicating the Limitations of AI: The Effect of Message Framing and Ownership on Trust in Artificial Intelligence

Pages 790–800 | Received 17 Feb 2021, Accepted 01 Mar 2022, Published online: 18 Apr 2022

References

  • Ananny, M., & Crawford, K. (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 20(3), 973–989. https://doi.org/10.1177/1461444816676645
  • Bahner, J. E., Hüper, A.-D., & Manzey, D. (2008). Misuse of automated decision aids: Complacency, automation bias and the impact of training experience. International Journal of Human-Computer Studies, 66(9), 688–699. https://doi.org/10.1016/j.ijhcs.2008.06.001
  • Brehm, J. W. (1966). A theory of psychological reactance. Academic Press.
  • Brehm, S. S., & Brehm, J. W. (1981). Psychological reactance: A theory of freedom and control. Academic Press.
  • Bunt, A., McGrenere, J., & Conati, C. (2007). Understanding the utility of rationale in a mixed-initiative system for GUI customization. In International Conference on User Modeling (pp. 147–156). Springer.
  • Burgoon, J. K., Bonito, J. A., Bengtsson, B., Cederberg, C., Lundeberg, M., & Allspach, L. (2000). Interactivity in human-computer interaction: A study of credibility, understanding, and influence. Computers in Human Behavior, 16(6), 553–574. https://doi.org/10.1016/S0747-5632(00)00029-7
  • Cai, C. J., Jongejan, J., & Holbrook, J. (2019). The effects of example-based explanations in a machine learning interface. In Proceedings of the 24th International Conference on Intelligent User Interfaces (pp. 258–262). ACM. https://doi.org/10.1145/3301275.3302289
  • Carmon, Z., Wertenbroch, K., & Zeelenberg, M. (2003). Option attachment: When deliberating makes choosing feel like losing. Journal of Consumer Research, 30(1), 15–29. https://doi.org/10.1086/374701
  • Chen, P. D., & Haynes, R. M. (2016). Transparency for whom? Impacts of accountability movements for institutional researchers and beyond. New Directions for Institutional Research, 2015(166), 11–21. https://doi.org/10.1002/ir.20127
  • Chiou, E. K., & Lee, J. D. (2021). Trusting automation: Designing for responsivity and resilience. Human Factors. Advance online publication. https://doi.org/10.1177/00187208211009995
  • Davis, M. A., & Bobko, P. (1986). Contextual effects on escalation processes in public sector decision making. Organizational Behavior and Human Decision Processes, 37(1), 121–138. https://doi.org/10.1016/0749-5978(86)90048-8
  • de Carvalho Delgado Marques, T., Noels, E., Wakkee, M., Udrea, A., & Nijsten, T. (2019). Development of smartphone apps for skin cancer risk assessment: Progress and promise. Journal of Medical Internet Research, 21(7), e13376. https://doi.org/10.2196/13376
  • de Visser, E. J., Peeters, M. M., Jung, M. F., Kohn, S., Shaw, T. H., Pak, R., & Neerincx, M. A. (2020). Towards a theory of longitudinal trust calibration in human-robot teams. International Journal of Social Robotics, 12(2), 459–478. https://doi.org/10.1007/s12369-019-00596-x
  • Dhar, R., & Wertenbroch, K. (2000). Consumer choice between hedonic and utilitarian goods. Journal of Marketing Research, 37(1), 60–71. https://doi.org/10.1509/jmkr.37.1.60.18718
  • Donders, F. (1969). On the speed of mental processes. Acta Psychologica, 30, 412–431. https://doi.org/10.1016/0001-6918(69)90065-1
  • Duchon, D., Dunegan, K. J., & Barton, S. L. (1989). Framing the problem and making decisions: The facts are not enough. IEEE Transactions on Engineering Management, 36(1), 25–27. https://doi.org/10.1109/17.19979
  • Dunegan, K. J. (1993). Framing, cognitive modes, and image theory: Toward an understanding of a glass half full. Journal of Applied Psychology, 78(3), 491–503. https://doi.org/10.1037/0021-9010.78.3.491
  • Dzindolet, M. T., Peterson, S. A., Pomranky, R. A., Pierce, L. G., & Beck, H. P. (2003). The role of trust in automation reliance. International Journal of Human-Computer Studies, 58(6), 697–718. https://doi.org/10.1016/S1071-5819(03)00038-7
  • Engs, R., & Hanson, D. J. (1989). Reactance theory: A test with collegiate drinking. Psychological Reports, 64(3, Suppl.), 1083–1086. https://doi.org/10.2466/pr0.1989.64.3c.1083
  • Fazio, R. H. (1990). Multiple processes by which attitudes guide behavior: The mode model as an integrative framework. Advances in Experimental Social Psychology, 23, 75–109.
  • Fogg, B. J., & Tseng, H. (1999). The elements of computer credibility. In Proceedings of the Sigchi Conference on Human Factors in Computing Systems (pp. 80–87). ACM. https://doi.org/10.1145/302979.303001
  • Gefen, D., Karahanna, E., & Straub, D. W. (2003). Trust and TAM in online shopping: An integrated model. MIS Quarterly, 27(1), 51–90. https://doi.org/10.2307/30036519
  • Gomez-Uribe, C. A., & Hunt, N. (2016). The Netflix recommender system: Algorithms, business value, and innovation. ACM Transactions on Management Information Systems (TMIS), 6(4), 13. https://doi.org/10.1145/2843948
  • Goodman, B., & Flaxman, S. (2017). European Union regulations on algorithmic decision-making and a “right to explanation”. AI Magazine, 38(3), 50–57. https://doi.org/10.1609/aimag.v38i3.2741
  • Gunning, D. (2017). Explainable artificial intelligence (XAI). Defense Advanced Research Projects Agency (DARPA). https://www.darpa.mil/attachments/XAIIndustryDay_Final.pptx
  • Ha, T., Kim, S., Seo, D., & Lee, S. (2020). Effects of explanation types and perceived risk on trust in autonomous vehicles. Transportation Research Part F: Traffic Psychology and Behaviour, 73, 271–280. https://doi.org/10.1016/j.trf.2020.06.021
  • Hengstler, M., Enkel, E., & Duelli, S. (2016). Applied artificial intelligence and trust—the case of autonomous vehicles and medical assistance devices. Technological Forecasting and Social Change, 105, 105–120. https://doi.org/10.1016/j.techfore.2015.12.014
  • Hertz, N., Shaw, T., de Visser, E. J., & Wiese, E. (2019). Mixing it up: How mixed groups of humans and machines modulate conformity. Journal of Cognitive Engineering and Decision Making, 13(4), 242–257. https://doi.org/10.1177/1555343419869465
  • Hoch, S. J., & Loewenstein, G. F. (1991). Time-inconsistent preferences and consumer self-control. Journal of Consumer Research, 17(4), 492–507. https://doi.org/10.1086/208573
  • James, G., Witten, D., Hastie, T., & Tibshirani, R. (2013). An introduction to statistical learning (Vol. 112, p. 18). Springer.
  • Kahneman, D., Knetsch, J. L., & Thaler, R. H. (1990). Experimental tests of the endowment effect and the Coase theorem. Journal of Political Economy, 98(6), 1325–1348. https://doi.org/10.1086/261737
  • Kahneman, D., Knetsch, J. L., & Thaler, R. H. (1991). Anomalies: The endowment effect, loss aversion, and status quo bias. Journal of Economic Perspectives, 5(1), 193–206. https://doi.org/10.1257/jep.5.1.193
  • Kizilcec, R. F. (2016). How much information? Effects of transparency on trust in an algorithmic interface. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (pp. 2390–2395). ACM.
  • Lafferty, J., Eady, P., & Pond, A. (1974). The desert survival problem: A group decision making experience for examining and increasing individual and team effectiveness: Manual. Human Synergistics/Experimental Learning Methods.
  • Lee, J. D., & Moray, N. (1992). Trust, control strategies and allocation of function in human-machine systems. Ergonomics, 35(10), 1243–1270.
  • Lee, J. D., & See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46(1), 50–80.
  • Levin, I. P., & Gaeth, G. J. (1988). How consumers are affected by the framing of attribute information before and after consuming the product. Journal of Consumer Research, 15(3), 374–378. https://doi.org/10.1086/209174
  • Levin, I. P., Schneider, S. L., & Gaeth, G. J. (1998). All frames are not created equal: A typology and critical analysis of framing effects. Organizational Behavior and Human Decision Processes, 76(2), 149–188. https://doi.org/10.1006/obhd.1998.2804
  • Levin, I. P., Schnittjer, S. K., & Thee, S. L. (1988). Information framing effects in social and personal decisions. Journal of Experimental Social Psychology, 24(6), 520–529. https://doi.org/10.1016/0022-1031(88)90050-9
  • Lewin, K. (1938). The conceptual representation and the measurement of psychological forces. Duke University Press.
  • Li, X., Hess, T. J., & Valacich, J. S. (2008). Why do we trust new technology? A study of initial trust formation with organizational information systems. The Journal of Strategic Information Systems, 17(1), 39–71. https://doi.org/10.1016/j.jsis.2008.01.001
  • Lim, B. Y., & Dey, A. K. (2011). Investigating intelligibility for uncertain context-aware applications. In Proceedings of the 13th International Conference on Ubiquitous Computing (pp. 415–424). ACM. https://doi.org/10.1145/2030112.2030168
  • Longoni, C., Bonezzi, A., & Morewedge, C. K. (2019). Resistance to medical artificial intelligence. Journal of Consumer Research, 46(4), 629–650. https://doi.org/10.1093/jcr/ucz013
  • Lui, A., & Lamb, G. W. (2018). Artificial intelligence and augmented intelligence collaboration: Regaining trust and confidence in the financial sector. Information & Communications Technology Law, 27(3), 267–283. https://doi.org/10.1080/13600834.2018.1488659
  • Lyell, D., & Coiera, E. (2017). Automation bias and verification complexity: A systematic review. Journal of the American Medical Informatics Association, 24(2), 423–431.
  • Marteau, T. M. (1989). Framing of information: Its influence upon decisions of doctors and patients. British Journal of Social Psychology, 28(1), 89–94. https://doi.org/10.1111/j.2044-8309.1989.tb00849.x
  • Mason, M. (2008). Transparency for whom? Information disclosure and power in global environmental governance. Global Environmental Politics, 8(2), 8–13. https://doi.org/10.1162/glep.2008.8.2.8
  • Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust. The Academy of Management Review, 20(3), 709–734. https://doi.org/10.2307/258792
  • McCroskey, J. C., & Teven, J. J. (1999). Goodwill: A reexamination of the construct and its measurement. Communication Monographs, 66(1), 90–103. https://doi.org/10.1080/03637759909376464
  • McKinney, S. M., Sieniek, M., Godbole, V., Godwin, J., Antropova, N., Ashrafian, H., Back, T., Chesus, M., Corrado, G. S., Darzi, A., Etemadi, M., Garcia-Vicente, F., Gilbert, F. J., Halling-Brown, M., Hassabis, D., Jansen, S., Karthikesalingam, A., Kelly, C. J., King, D., … Shetty, S. (2020). International evaluation of an AI system for breast cancer screening. Nature, 577(7788), 89–94. https://doi.org/10.1038/s41586-019-1799-6
  • Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1–38. https://doi.org/10.1016/j.artint.2018.07.007
  • Moon, Y., & Nass, C. (1996). How “real” are computer personalities? Psychological responses to personality types in human-computer interaction. Communication Research, 23(6), 651–674. https://doi.org/10.1177/009365096023006002
  • Narayanan, M., Chen, E., He, J., Kim, B., Gershman, S., & Doshi-Velez, F. (2018). How do humans understand explanations from machine learning systems? An evaluation of the human-interpretability of explanation. arXiv preprint arXiv:1802.00682. https://doi.org/10.48550/arXiv.1802.00682
  • Nass, C., Fogg, B., & Moon, Y. (1996). Can computers be teammates? International Journal of Human-Computer Studies, 45(6), 669–678. https://doi.org/10.1006/ijhc.1996.0073
  • Papenmeier, A., Englebienne, G., & Seifert, C. (2019). How model accuracy and explanation fidelity influence user trust. arXiv preprint arXiv:1907.12652. https://doi.org/10.48550/arXiv.1907.12652
  • Parasuraman, R., & Manzey, D. H. (2010). Complacency and bias in human use of automation: An attentional integration. Human Factors, 52(3), 381–410.
  • Parasuraman, R., Molloy, R., & Singh, I. L. (1993). Performance consequences of automation-induced ‘complacency’. The International Journal of Aviation Psychology, 3(1), 1–23. https://doi.org/10.1207/s15327108ijap0301_1
  • Plous, S. (1993). The psychology of judgment and decision making. McGraw-Hill.
  • Pu, P., & Chen, L. (2006). Trust building with explanation interfaces. In Proceedings of the 11th International Conference on Intelligent User Interfaces (pp. 93–100). ACM. https://doi.org/10.1145/1111449.1111475
  • PyQt. (2012). PyQt reference guide. https://doc.bccnsoft.com/docs/PyQt4/
  • Rains, S. A. (2013). The nature of psychological reactance revisited: A meta-analytic review. Human Communication Research, 39(1), 47–73. https://doi.org/10.1111/j.1468-2958.2012.01443.x
  • Rheu, M., Shin, J. Y., Peng, W., & Huh-Yoo, J. (2021). Systematic review: Trust-building factors and implications for conversational agent design. International Journal of Human–Computer Interaction, 37(1), 81–96. https://doi.org/10.1080/10447318.2020.1807710
  • Robinette, P., Howard, A. M., & Wagner, A. R. (2017). Effect of robot performance on human-robot trust in time-critical situations. IEEE Transactions on Human-Machine Systems, 47(4), 425–436. https://doi.org/10.1109/THMS.2017.2648849
  • Robinette, P., Li, W., Allen, R., Howard, A. M., & Wagner, A. R. (2016). Overtrust of robots in emergency evacuation scenarios. In 2016 ACM/IEEE International Conference on Human-Robot Interaction (HRI) (pp. 101–108). ACM.
  • Salomons, N., van der Linden, M., Strohkorb Sebo, S., & Scassellati, B. (2018). Humans conform to robots: Disambiguating trust, truth, and conformity. In 2018 ACM/IEEE International Conference on Human-Robot Interaction (HRI) (pp. 187–195). ACM. https://doi.org/10.1145/3171221.3171282
  • Schroeder, N. L., Chiou, E. K., & Craig, S. D. (2021). Trust influences perceptions of virtual humans, but not necessarily learning. Computers & Education, 160, 104039. https://doi.org/10.1016/j.compedu.2020.104039
  • Siau, K., Sheng, H., Nah, F., & Davis, S. (2004). A qualitative investigation on consumer trust in mobile commerce. International Journal of Electronic Business, 2(3), 283–300. https://doi.org/10.1504/IJEB.2004.005143
  • Siau, K., & Wang, W. (2018). Building trust in artificial intelligence, machine learning, and robotics. Cutter Business Technology Journal, 31(2), 47–53.
  • Smith-Renner, A., Fan, R., Birchfield, M., Wu, T., Boyd-Graber, J., Weld, D. S., & Findlater, L. (2020, April). No explainability without accountability: An empirical study of explanations and feedback in interactive ML. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1–13). ACM.
  • Stowers, K., Kasdaglis, N., Rupp, M., Chen, J., Barber, D., & Barnes, M. (2017). Insights into human-agent teaming: Intelligent agent transparency and uncertainty. In J. Chen (Ed.), Advances in human factors in robots and unmanned systems (pp. 149–160). Springer.
  • Strahilevitz, M. A., & Loewenstein, G. (1998). The effect of ownership history on the valuation of objects. Journal of Consumer Research, 25(3), 276–289. https://doi.org/10.1086/209539
  • Van Rossum, G., & Drake Jr, F. L. (1995). Python tutorial (Vol. 620). Centrum voor Wiskunde en Informatica.
  • Wickens, C. D., Hollands, J. G., Banbury, S., & Parasuraman, R. (2015). Engineering psychology and human performance. Psychology Press.
  • Wong, R. S. (2020). An alternative explanation for attribute framing and spillover effects in multidimensional supplier evaluation and supplier termination: Focusing on asymmetries in attention. Decision Sciences, 52(2), 262–282. https://doi.org/10.1111/deci.12435
  • Wu, K., Zhao, Y., Zhu, Q., Tan, X., & Zheng, H. (2011). A meta-analysis of the impact of trust on technology acceptance model: Investigation of moderating influence of subject and context type. International Journal of Information Management, 31(6), 572–581. https://doi.org/10.1016/j.ijinfomgt.2011.03.004
  • Yokoi, R., Eguchi, Y., Fujita, T., & Nakayachi, K. (2021). Artificial intelligence is trusted less than a doctor in medical treatment decisions: Influence of perceived care and value similarity. International Journal of Human–Computer Interaction, 37(10), 981–990. https://doi.org/10.1080/10447318.2020.1861763
  • Zhang, J., & Curley, S. P. (2018). Exploring explanation effects on consumers’ trust in online recommender agents. International Journal of Human–Computer Interaction, 34(5), 421–432. https://doi.org/10.1080/10447318.2017.1357904
