Research Article

Evaluating Trust in Recommender Systems: A User Study on the Impacts of Explanations, Agency Attribution, and Product Types

Received 22 Aug 2023, Accepted 29 Jan 2024, Published online: 14 Feb 2024

References

  • Binns, R., Van Kleek, M., Veale, M., Lyngs, U., Zhao, J., & Shadbolt, N. (2018, April 21–26). ‘It’s reducing a human being to a percentage’: Perceptions of justice in algorithmic decisions [Paper presentation]. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (pp. 1–14), Montreal, QC, Canada. https://doi.org/10.1145/3173574.3173951
  • Botti, S., & McGill, A. L. (2011). The locus of choice: Personal causality and satisfaction with hedonic and utilitarian decisions. Journal of Consumer Research, 37(6), 1065–1078. https://doi.org/10.1086/656570
  • Burke, R. (2002). Hybrid recommender systems: Survey and experiments. User Modeling and User-Adapted Interaction, 12(4), 331–370. https://doi.org/10.1023/A:1021240730564
  • Burton, J. W., Stein, M., & Jensen, T. B. (2020). A systematic review of algorithm aversion in augmented decision making. Journal of Behavioral Decision Making, 33(2), 220–239. https://doi.org/10.1002/bdm.2155
  • Calhoun, C. S., Bobko, P., Gallimore, J. J., & Lyons, J. B. (2019). Linking precursors of interpersonal trust to human-automation trust: An expanded typology and exploratory experiment. Journal of Trust Research, 9(1), 28–46. https://doi.org/10.1080/21515581.2019.1579730
  • Castelo, N., Bos, M. W., & Lehmann, D. R. (2019). Task-dependent algorithm aversion. Journal of Marketing Research, 56(5), 809–825. https://doi.org/10.1177/0022243719851788
  • Chancey, E. T., Bliss, J. P., Yamani, Y., & Handley, H. A. (2017). Trust and the compliance–reliance paradigm: The effects of risk, error bias, and reliability on trust and dependence. Human Factors, 59(3), 333–345.
  • Chang, S., Harper, F. M., & Terveen, L. G. (2016, September 15–19). Crowd-based personalized natural language explanations for recommendations [Paper presentation]. Proceedings of the 10th ACM Conference on Recommender Systems (pp. 175–182), Boston, MA, USA. https://doi.org/10.1145/2959100.2959153
  • Chmielewski, M., & Kucker, S. C. (2020). An MTurk crisis? Shifts in data quality and the impact on study results. Social Psychological and Personality Science, 11(4), 464–473. https://doi.org/10.1177/1948550619875149
  • Connors, M. (1989). Crew system dynamics: Combining humans and automation. SAE Transactions, 98, 923–930.
  • Connors, M. M., Harrison, A. A., & Summit, J. (1994). Crew systems: Integrating human and technical subsystems for the exploration of space. Behavioral Science, 39(3), 183–212. https://doi.org/10.1002/bs.3830390303
  • Coutts, J. J., & Hayes, A. F. (2022). Questions of value, questions of magnitude: An exploration and application of methods for comparing indirect effects in multiple mediator models. Behavior Research Methods, 55(7), 3772–3785. https://doi.org/10.3758/s13428-022-01988-0
  • Dawes, R. M., Faust, D., & Meehl, P. E. (1989). Clinical versus actuarial judgment. Science, 243(4899), 1668–1674. https://doi.org/10.1126/science.2648573
  • Dijkstra, J. J. (1999). User agreement with incorrect expert system advice. Behaviour & Information Technology, 18(6), 399–411. https://doi.org/10.1080/014492999118832
  • Drolet, A., Williams, P., & Lau-Gesk, L. (2007). Age-related differences in responses to affective vs. rational ads for hedonic vs. utilitarian products. Marketing Letters, 18(4), 211–221. https://doi.org/10.1007/s11002-007-9016-z
  • Dzindolet, M. T., Beck, H. P., Pierce, L. G., & Dawe, L. A. (2001). A framework of automation use. Army Research Laboratory. Aberdeen Proving Ground.
  • Dzindolet, M. T., Pierce, L. G., Beck, H. P., & Dawe, L. A. (2002). The perceived utility of human and automated aids in a visual detection task. Human Factors, 44(1), 79–94. https://doi.org/10.1518/0018720024494856
  • Hamilton, K. A., & Yao, M. Z. (2018). Blurring boundaries: Effects of device features on metacognitive evaluations. Computers in Human Behavior, 89, 213–220. https://doi.org/10.1016/j.chb.2018.07.044
  • Hamilton, K. A., Lee, S. Y., Chung, U. C., Liu, W., & Duff, B. R. (2021). Putting the “Me” in endorsement: Understanding and conceptualizing dimensions of self-endorsement using intelligent personal assistants. New Media & Society, 23(6), 1506–1526. https://doi.org/10.1177/1461444820912197
  • Herlocker, J. L., Konstan, J. A., & Riedl, J. (2000, December 1). Explaining collaborative filtering recommendations [Paper presentation]. Proceedings of the 2000 ACM Conference on Computer Supported Cooperative Work (pp. 241–250), Philadelphia, PA, USA. https://doi.org/10.1145/358916.358995
  • Hirschman, E. C., & Holbrook, M. B. (1982). Hedonic consumption: Emerging concepts, methods and propositions. Journal of Marketing, 46(3), 92–101. https://doi.org/10.2307/1251707
  • Hou, Y. T.-Y., & Jung, M. F. (2021). Who is the expert? Reconciling algorithm aversion and algorithm appreciation in AI-supported decision making. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW2), 1–25. https://doi.org/10.1145/3479864
  • Jourard, S. M. (1971). Self-disclosure: An experimental analysis of the transparent self. John Wiley.
  • Kleinmuntz, B. (1990). Why we still use our heads instead of formulas: Toward an integrative approach. Psychological Bulletin, 107(3), 296–310. https://doi.org/10.1037/0033-2909.107.3.296
  • Kleinmuntz, D. N., & Schkade, D. A. (1993). Information displays and decision processes. Psychological Science, 4(4), 221–227. https://doi.org/10.1111/j.1467-9280.1993.tb00265.x
  • Konstan, J. A., & Riedl, J. (2012). Recommender systems: From algorithms to user experience. User Modeling and User-Adapted Interaction, 22(1–2), 101–123. https://doi.org/10.1007/s11257-011-9112-x
  • Kouki, P., Schaffer, J., Pujara, J., O’Donovan, J., & Getoor, L. (2019, March 17–20). Personalized explanations for hybrid recommender systems [Paper presentation]. Proceedings of the 24th International Conference on Intelligent User Interfaces (pp. 379–390), Marina del Rey, CA, USA. https://doi.org/10.1145/3301275.3302306
  • Lee, M. K. (2018). Understanding perception of algorithmic decisions: Fairness, trust, and emotion in response to algorithmic management. Big Data & Society, 5(1), 205395171875668. https://doi.org/10.1177/2053951718756684
  • Lelkes, Y., Krosnick, J. A., Marx, D. M., Judd, C. M., & Park, B. (2012). Complete anonymity compromises the accuracy of self-reports. Journal of Experimental Social Psychology, 48(6), 1291–1299. https://doi.org/10.1016/j.jesp.2012.07.002
  • Lerch, F. J., Prietula, M. J., & Kulik, C. T. (1997). The Turing effect: The nature of trust in expert systems advice. In P. J. Feltovich, K. M. Ford, & R. R. Hoffman (Eds.), Expertise in context: Human and machine (pp. 417–448). MIT Press.
  • Liu, B. (2021). In AI we trust? Effects of agency locus and transparency on uncertainty reduction in human–AI interaction. Journal of Computer-Mediated Communication, 26(6), 384–402. https://doi.org/10.1093/jcmc/zmab013
  • Liu, W., Xu, K., & Yao, M. Z. (2023). “Can you tell me about yourself?” The impacts of chatbot names and communication contexts on users’ willingness to self-disclose information in human-machine conversations. Communication Research Reports, 40(3), 122–133. https://doi.org/10.1080/08824096.2023.2212899
  • Logg, J. M., Minson, J. A., & Moore, D. A. (2019). Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes, 151, 90–103. https://doi.org/10.1016/j.obhdp.2018.12.005
  • Madhavan, P., & Wiegmann, D. A. (2007). Similarities and differences between human–human and human–automation trust: An integrative review. Theoretical Issues in Ergonomics Science, 8(4), 277–301. https://doi.org/10.1080/14639220500337708
  • Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust. The Academy of Management Review, 20(3), 709–734. https://doi.org/10.5465/amr.1995.9508080335
  • McKnight, D. H., & Chervany, N. L. (2001, January 6). Conceptualizing trust: A typology and e-commerce customer relationships model [Paper presentation]. Proceedings of the 34th Annual Hawaii International Conference on System Sciences (10 pp.), Maui, HI, USA. https://doi.org/10.1109/HICSS.2001.927053
  • McKnight, D. H., Carter, M., Thatcher, J. B., & Clay, P. F. (2011). Trust in a specific technology: An investigation of its components and measures. ACM Transactions on Management Information Systems, 2(2), 1–25. https://doi.org/10.1145/1985347.1985353
  • McKnight, D. H., Choudhury, V., & Kacmar, C. (2002). Developing and validating trust measures for e-commerce: An integrative typology. Information Systems Research, 13(3), 334–359. https://doi.org/10.1287/isre.13.3.334.81
  • Mennecke, B. E., & Valacich, J. S. (1998). Information is what you make of it: The influence of group history and computer support on information sharing, decision quality, and member perceptions. Journal of Management Information Systems, 15(2), 173–197. https://doi.org/10.1080/07421222.1998.11518213
  • Miyazaki, K., Murayama, T., Uchiba, T., An, J., & Kwak, H. (2023). Public perception of generative AI on Twitter: An empirical study based on occupation and usage. EPJ Data Science, 13(1), 1–20. https://doi.org/10.48550/arXiv.2305.09537
  • Molina, M. D., & Sundar, S. S. (2022). When AI moderates online content: Effects of human collaboration and interactive transparency on user trust. Journal of Computer-Mediated Communication, 27(4), zmac010. https://doi.org/10.1093/jcmc/zmac010
  • Morewedge, C. K. (2022). Preference for human, not algorithm aversion. Trends in Cognitive Sciences, 26(10), 824–826. https://doi.org/10.1016/j.tics.2022.07.007
  • Murphy-Hill, E., & Murphy, G. C. (2014). Recommendation delivery: Getting the user interface just right. In M. P. Robillard, W. Maalej, R. J. Walker, & T. Zimmermann (Eds.), Recommendation systems in software engineering (pp. 223–242). Springer. https://doi.org/10.1007/978-3-642-45135-5_9
  • Nass, C., & Moon, Y. (2000). Machines and mindlessness: Social responses to computers. Journal of Social Issues, 56(1), 81–103. https://doi.org/10.1111/0022-4537.00153
  • Ozok, A. A., Fan, Q., & Norcio, A. F. (2010). Design guidelines for effective recommender system interfaces based on a usability criteria conceptual model: Results from a college student population. Behaviour & Information Technology, 29(1), 57–83. https://doi.org/10.1080/01449290903004012
  • Pazzani, M. J., & Billsus, D. (2007). Content-based recommendation systems. In P. Brusilovsky, A. Kobsa, & W. Nejdl (Eds.), The adaptive web: Methods and strategies of web personalization (pp. 325–341). Springer. https://doi.org/10.1007/978-3-540-72079-9_10
  • Petty, R. E., Briñol, P., & Tormala, Z. L. (2002). Thought confidence as a determinant of persuasion: The self-validation hypothesis. Journal of Personality and Social Psychology, 82(5), 722–741. https://doi.org/10.1037//0022-3514.82.5.722
  • Phelps, J., Nowak, G., & Ferrell, E. (2000). Privacy concerns and consumer willingness to provide personal information. Journal of Public Policy & Marketing, 19(1), 27–41. https://doi.org/10.1509/jppm.19.1.27.16941
  • Prahl, A., & Van Swol, L. (2017). Understanding algorithm aversion: When is advice from automation discounted? Journal of Forecasting, 36(6), 691–702. https://doi.org/10.1002/for.2464
  • Rubin, Z. (1975). Disclosing oneself to a stranger: Reciprocity and its limits. Journal of Experimental Social Psychology, 11(3), 233–260. https://doi.org/10.1016/S0022-1031(75)80025-4
  • Schafer, J. B., Frankowski, D., Herlocker, J., & Sen, S. (2007). Collaborative filtering recommender systems. In P. Brusilovsky, A. Kobsa, & W. Nejdl (Eds.), The adaptive web: Methods and strategies of web personalization (pp. 291–324). Springer. https://doi.org/10.1007/978-3-540-72079-9_9
  • Sinha, R., & Swearingen, K. (2002, April). The role of transparency in recommender systems [Paper presentation]. CHI ’02 Extended Abstracts on Human Factors in Computing Systems (pp. 830–831), Minneapolis, MN, USA. https://doi.org/10.1145/506443.506619
  • Solomon, J., & Wash, R. (2014). Human-what interaction? Understanding user source orientation. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 58(1), 422–426. https://doi.org/10.1177/1541931214581088
  • Sundar, S. S. (2008). The MAIN model: A Heuristic approach to understanding technology effects on credibility (pp. 73–100). MacArthur Foundation Digital Media and Learning Initiative.
  • Sundar, S. S., & Kim, J. (2019, May 4–9). Machine heuristic: When we trust computers more than humans with our personal information [Paper presentation]. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (pp. 1–9), Glasgow, Scotland, UK.
  • Sundar, S. S., & Nass, C. (2000). Source orientation in human-computer interaction: Programmer, networker, or independent social actor. Communication Research, 27(6), 683–703. https://doi.org/10.1177/009365000027006001
  • Tintarev, N., & Masthoff, J. (2012). Evaluating the effectiveness of explanations for recommender systems. User Modeling and User-Adapted Interaction, 22(4–5), 399–439. https://doi.org/10.1007/s11257-011-9117-5
  • Voss, K. E., Spangenberg, E. R., & Grohmann, B. (2003). Measuring the hedonic and utilitarian dimensions of consumer attitude. Journal of Marketing Research, 40(3), 310–320. https://doi.org/10.1509/jmkr.40.3.310.19238
  • Wheeless, L. R., & Grotz, J. (1977). The measurement of trust and its relationship to self-disclosure. Human Communication Research, 3(3), 250–257. https://doi.org/10.1111/j.1468-2958.1977.tb00523.x
  • Wiegmann, D. A., Rich, A., & Zhang, H. (2001). Automated diagnostic aids: The effects of aid reliability on users’ trust and reliance. Theoretical Issues in Ergonomics Science, 2(4), 352–367. https://doi.org/10.1080/14639220110110306
  • Wiener, E. L., & Curry, R. E. (1980). Flight-deck automation: Promises and problems. Ergonomics, 23(10), 995–1011. https://doi.org/10.1080/00140138008924809
  • Zaichkowsky, J. L. (1994). The personal involvement inventory: Reduction, revision, and application to advertising. Journal of Advertising, 23(4), 59–70. https://doi.org/10.1080/00913367.1943.10673459
