Research Article

Understanding the Effects of Personalized Recommender Systems on Political News Perceptions: A Comparison of Content-Based, Collaborative, and Editorial Choice-Based News Recommender Systems

References

  • Afridi, A. H. (2019). Transparency for beyond-accuracy experiences: A novel user interface for recommender systems. Procedia Computer Science, 151, 335–344. https://doi.org/10.1016/j.procs.2019.04.047
  • Appelman, A., & Sundar, S. S. (2016). Measuring message credibility: Construction and validation of an exclusive scale. Journalism & Mass Communication Quarterly, 93(1), 59–79. https://doi.org/10.1177/1077699015606057
  • Araujo, T., Helberger, N., Kruikemeier, S., & de Vreese, C. H. (2020). In AI we trust? Perceptions about automated decision-making by artificial intelligence. AI & Society, 35(3), 611–623. https://doi.org/10.1007/s00146-019-00931-w
  • Bakshy, E., Messing, S., & Adamic, L. A. (2015). Exposure to ideologically diverse news and opinion on Facebook. Science, 348(6239), 1130–1132. https://doi.org/10.1126/science.aaa1160
  • Beam, M. A. (2014). Automating the news: How personalized news recommender system design choices impact news reception. Communication Research, 41(8), 1019–1041. https://doi.org/10.1177/0093650213497979
  • Cacioppo, J. T., & Petty, R. E. (1982). The need for cognition. Journal of Personality and Social Psychology, 42(1), 116–131. https://doi.org/10.1037/0022-3514.42.1.116
  • Cho, J., Ahmed, S., Hilbert, M., Liu, B., & Luu, J. (2020). Do search algorithms endanger democracy? An experimental investigation of algorithm effects on political polarization. Journal of Broadcasting & Electronic Media, 64(2), 150–172. https://doi.org/10.1080/08838151.2020.1757365
  • Cramer, H., Evers, V., Ramlal, S., Someren, M., Rutledge, L., Stash, N., Aroyo, L., & Wielinga, B. (2008). The effects of transparency on trust in and acceptance of a content-based art recommender. User Modeling and User-Adapted Interaction, 18(5), 455–496. https://doi.org/10.1007/s11257-008-9051-3
  • Cummings, M. L. (2004). Automation bias in intelligent time critical decision support systems. AIAA 3rd Intelligent Systems Conference, Chicago, Illinois, 2004–6313.
  • Diakopoulos, N., & Koliska, M. (2017). Algorithmic transparency in the news media. Digital Journalism, 5(7), 809–828. https://doi.org/10.1080/21670811.2016.1208053
  • Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 144(1), 114–126. https://doi.org/10.1037/xge0000033
  • Eveland, W. P., Shah, D. V., & Kwak, N. (2003). Assessing causality in the cognitive mediation model: A panel study of motivations, information processing, and learning during campaign 2000. Communication Research, 30(4), 359–386. https://doi.org/10.1177/0093650203253369
  • Faul, F., Erdfelder, E., Lang, A.-G., & Buchner, A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39(2), 175–191. https://doi.org/10.3758/BF03193146
  • Feezell, J. T., Wagner, J. K., & Conroy, M. (2021). Exploring the effects of algorithm-driven news sources on political behavior and polarization. Computers in Human Behavior, 116, 106626. https://doi.org/10.1016/j.chb.2020.106626
  • Fletcher, R., & Nielsen, R. K. (2019). Generalised scepticism: How people navigate news on social media. Information, Communication & Society, 22(12), 1751–1769. https://doi.org/10.1080/1369118X.2018.1450887
  • Gil de Zúñiga, H., Weeks, B., & Ardèvol-Abreu, A. (2017). Effects of the news-finds-me perception in communication: Social media use implications for news seeking and learning about politics. Journal of Computer-Mediated Communication, 22(3), 105–123. https://doi.org/10.1111/jcc4.12185
  • Gillespie, T., Boczkowski, P. J., & Foot, K. A. (2014). Media technologies: Essays on communication, materiality, and society. MIT Press.
  • Gsenger, R., & Strle, T. (2021). Trust, automation bias and aversion: Algorithmic decision-making in the context of credit scoring. Interdisciplinary Description of Complex Systems, 19(4), 540–558. https://doi.org/10.7906/indecs.19.4.7
  • Guess, A., Lyons, B., Nyhan, B., & Reifler, J. (2018). Avoiding the echo chamber about echo chambers. Knight Foundation, 2(1), 1–25.
  • Hoff, K. A., & Bashir, M. (2015). Trust in automation: Integrating empirical evidence on factors that influence trust. Human Factors, 57(3), 407–434. https://doi.org/10.1177/0018720814547570
  • Jones-Jang, S. M., & Park, Y. J. (2023). How do people react to AI failure? Automation bias, algorithmic aversion, and perceived controllability. Journal of Computer-Mediated Communication, 28(1), zmac029. https://doi.org/10.1093/jcmc/zmac029
  • Jürgens, P., Jungherr, A., & Schoen, H. (2011). Small worlds with a difference: New gatekeepers and the filtering of political information on Twitter. Proceedings of the 3rd International Web Science Conference, 1–5. https://doi.org/10.1145/2527031.2527034
  • Kizilcec, R. F. (2016). How much information? Effects of transparency on trust in an algorithmic interface. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, 2390–2395. https://doi.org/10.1145/2858036.2858402
  • Liao, M., & Sundar, S. S. (2022). When E-Commerce personalization systems show and tell: Investigating the relative persuasive appeal of content-based versus collaborative filtering. Journal of Advertising, 51(2), 256–267. https://doi.org/10.1080/00913367.2021.1887013
  • Litman, L., Robinson, J., & Abberbock, T. (2017). TurkPrime.com: A versatile crowdsourcing data acquisition platform for the behavioral sciences. Behavior Research Methods, 49(2), 433–442. https://doi.org/10.3758/s13428-016-0727-z
  • Llamero, L. (2014). Conceptual mindsets and heuristics in credibility evaluation of e-word of mouth in tourism. Online Information Review, 38(7), 954–968. https://doi.org/10.1108/OIR-06-2014-0128
  • Lu, Z., Dou, Z., Lian, J., Xie, X., & Yang, Q. (2015). Content-based collaborative filtering for news topic recommendation. Proceedings of the AAAI Conference on Artificial Intelligence, 29(1), Article 1. https://ojs.aaai.org/index.php/AAAI/article/view/9183
  • McCroskey, J. C., & Teven, J. J. (1999). Goodwill: A reexamination of the construct and its measurement. Communication Monographs, 66(1), 90–103. https://doi.org/10.1080/03637759909376464
  • Meraz, S., & Papacharissi, Z. (2013). Networked gatekeeping and networked framing on #Egypt. The International Journal of Press/Politics, 18(2), 138–166. https://doi.org/10.1177/1940161212474472
  • Merritt, S. M., Ako-Brew, A., Bryant, W. J., Staley, A., McKenna, M., Leone, A., & Shirase, L. (2019). Automation-induced complacency potential: Development and validation of a new scale. Frontiers in Psychology, 10, 225. https://doi.org/10.3389/fpsyg.2019.00225
  • Möller, J., Trilling, D., Helberger, N., & Es, B. V. (2018). Do not blame it on the algorithm: An empirical assessment of multiple recommender systems and their impact on content diversity. Information, Communication & Society, 21(7), 959–977. https://doi.org/10.1080/1369118X.2018.1444076
  • Molina, M. D., & Sundar, S. S. (2022). When AI moderates online content: Effects of human collaboration and interactive transparency on user trust. Journal of Computer-Mediated Communication, 27(4), zmac010. https://doi.org/10.1093/jcmc/zmac010
  • Newman, N. (2017). Journalism, media, and technology trends and predictions 2017. Reuters Institute for the Study of Journalism. https://ora.ox.ac.uk/objects/uuid:c46faa43-eed0-4708-b607-fb5d3a12a70f
  • Nowak, K. L., & Rauh, C. (2005). The influence of the avatar on online perceptions of anthropomorphism, androgyny, credibility, homophily, and attraction. Journal of Computer-Mediated Communication, 11(1), 153–178. https://doi.org/10.1111/j.1083-6101.2006.tb00308.x
  • Ochi, P., Rao, S., Takayama, L., & Nass, C. (2010). Predictors of user perceptions of web recommender systems: How the basis for generating experience and search product recommendations affects user responses. International Journal of Human-Computer Studies, 68(8), 472–482. https://doi.org/10.1016/j.ijhcs.2009.10.005
  • Paepcke, S., & Takayama, L. (2010). Judging a bot by its cover: An experiment on expectation setting for personal robots. 2010 5th ACM/IEEE International Conference on Human-Robot Interaction (HRI), 45–52. https://doi.org/10.1109/HRI.2010.5453268
  • Parasuraman, R., & Manzey, D. H. (2010). Complacency and bias in human use of automation: An attentional integration. Human Factors, 52(3), 381–410. https://doi.org/10.1177/0018720810376055
  • Pariser, E. (2011). The filter bubble: How the new personalized web is changing what we read and how we think. Penguin.
  • Perdomo, G., & Rodrigues-Rouleau, P. (2022). Transparency as metajournalistic performance: The New York Times' Caliphate podcast and new ways to claim journalistic authority. Journalism, 23(11), 2311–2327. https://doi.org/10.1177/1464884921997312
  • Pu, P., Chen, L., & Hu, R. (2012). Evaluating recommender systems from the user’s perspective: Survey of the state of the art. User Modeling and User-Adapted Interaction, 22(4), 317–355. https://doi.org/10.1007/s11257-011-9115-7
  • Raza, S., & Ding, C. (2022). News recommender system: A review of recent progress, challenges, and opportunities. Artificial Intelligence Review, 55(1), 749–800. https://doi.org/10.1007/s10462-021-10043-x
  • Resnick, P., Iacovou, N., Suchak, M., Bergstrom, P., & Riedl, J. (1994). GroupLens: An open architecture for collaborative filtering of netnews. Proceedings of the 1994 ACM Conference on Computer Supported Cooperative Work, 175–186. https://doi.org/10.1145/192844.192905
  • Schaffer, J., Hollerer, T., & O’Donovan, J. (2015, April 6). Hypothetical recommendation: A study of interactive profile manipulation behavior for recommender systems. The Twenty-Eighth International FLAIRS Conference. https://www.aaai.org/ocs/index.php/FLAIRS/FLAIRS15/paper/view/10444
  • Shin, D., Hameleers, M., Park, Y. J., Kim, J. N., Trielli, D., Diakopoulos, N., Helberger, N., Lewis, S. C., Westlund, O., & Baumann, S. (2022). Countering algorithmic bias and disinformation and effectively harnessing the power of AI in media. Journalism & Mass Communication Quarterly, 99(4), 887–907. https://doi.org/10.1177/10776990221129245
  • Shin, D., & Park, Y. J. (2019). Role of fairness, accountability, and transparency in algorithmic affordance. Computers in Human Behavior, 98, 277–284. https://doi.org/10.1016/j.chb.2019.04.019
  • Shin, D., Zaid, B., Biocca, F., & Rasul, A. (2022). In platforms we trust? Unlocking the black-box of news algorithms through interpretable AI. Journal of Broadcasting & Electronic Media, 66(2), 235–256. https://doi.org/10.1080/08838151.2022.2057984
  • Shoemaker, P. J., & Vos, T. (2009). Gatekeeping theory. Routledge. https://doi.org/10.4324/9780203931653
  • Stroud, N. J., & Lee, J. K. (2013). Perceptions of cable news credibility. Mass Communication and Society, 16(1), 67–88. https://doi.org/10.1080/15205436.2011.646449
  • Sundar, S. S. (2008). The MAIN model: A heuristic approach to understanding technology effects on credibility. In M. J. Metzger & A. J. Flanagin (Eds.), Digital media, youth, and credibility (pp. 72–100). The MIT Press. http://mitpress2.mit.edu/books/chapters/0262294230chap4.pdf
  • Sundar, S. S., & Nass, C. (2001). Conceptualizing sources in online news. Journal of Communication, 51(1), 52–72. https://doi.org/10.1111/j.1460-2466.2001.tb02872.x
  • Sundar, S. S., Oeldorf-Hirsch, A., & Xu, Q. (2008). The bandwagon effect of collaborative filtering technology. CHI ’08 Extended Abstracts on Human Factors in Computing Systems, 3453–3458. https://doi.org/10.1145/1358628.1358873
  • Thurman, N., Moeller, J., Helberger, N., & Trilling, D. (2019). My friends, editors, algorithms, and I: Examining audience attitudes to news selection. Digital Journalism, 7(4), 447–469. https://doi.org/10.1080/21670811.2018.1493936
  • Tomsett, R., Preece, A., Braines, D., Cerutti, F., Chakraborty, S., Srivastava, M., Pearson, G., & Kaplan, L. (2020). Rapid trust calibration through interpretable and uncertainty-aware AI. Patterns, 1(4), 100049. https://doi.org/10.1016/j.patter.2020.100049
  • Tran, T. N. T., Le, V. M., Atas, M., Felfernig, A., Stettinger, M., & Popescu, A. (2021). Do users appreciate explanations of recommendations? An analysis in the movie domain. In Fifteenth ACM Conference on Recommender Systems (pp. 645–650). Association for Computing Machinery. https://doi.org/10.1145/3460231.3478859
  • Yeo, S. K., Cacciatore, M. A., & Scheufele, D. A. (2015). News selectivity and beyond: Motivated reasoning in a changing media environment. In O. Jandura, T. Petersen, C. Mothes, & A.-M. Schielicke (Eds.), Publizistik und gesellschaftliche Verantwortung: Festschrift für Wolfgang Donsbach (pp. 83–104). Springer Fachmedien. https://doi.org/10.1007/978-3-658-04704-7_7
  • Zimmer, F., Scheibe, K., Stock, M., & Stock, W. G. (2019). Fake news in social media: Bad algorithms or biased users? Journal of Information Science Theory and Practice, 7(2), 40–53. https://doi.org/10.1633/JISTaP.2019.7.2.4
  • Zuiderveen Borgesius, F., Trilling, D., Moeller, J., Bodó, B., de Vreese, C. H., & Helberger, N. (2016). Should we worry about filter bubbles? (SSRN Scholarly Paper ID 2758126). Social Science Research Network. https://papers.ssrn.com/abstract=2758126
