References
- Alexander, V., Blinder, C., & Zak, P. (2018). Why trust an algorithm? Computers in Human Behavior, 89, 279–288. https://doi.org/10.1016/j.chb.2018.07.026
- Ananny, M., & Crawford, K. (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 20(3), 973–989. https://doi.org/10.1177/1461444816676645
- Bedi, P., & Vashisth, P. (2014). Empowering recommender systems using trust and argumentation. Information Sciences, 279, 569–586. https://doi.org/10.1016/j.ins.2014.04.012
- Beer, D. (2017). The social power of algorithms. Information, Communication & Society, 20(1), 1–13. https://doi.org/10.1080/1369118X.2016.1216147
- Bolin, G., & Schwarz, J. (2015). Heuristics of the algorithm: Big data, user interpretation and institutional translation. Big Data & Society, 2(2), 1–12. https://doi.org/10.1177/2053951715608406
- Chaiken, S. (1980). Heuristic versus systematic information processing and the use of source versus message cues in persuasion. Journal of Personality and Social Psychology, 39(5), 752. https://doi.org/10.1037/0022-3514.39.5.752
- Chaiken, S., & Ledgerwood, A. (2012). A theory of heuristic and systematic information processing. In P. A. M. van Lange, A. W. Kruglanski, & E. T. Higgins (Eds.), Handbook of theories of social psychology (Vol. 1, pp. 246–266). SAGE Publishing.
- Courtois, C., & Timmermans, E. (2018). Cracking the Tinder code. Journal of Computer-Mediated Communication, 23(1), 1–16. https://doi.org/10.1093/jcmc/zmx001
- Crain, M. (2018). The limits of transparency. New Media & Society, 20(1), 88–104. https://doi.org/10.1177/1461444816657096
- Cramer, H., Evers, V., Ramlal, S., van Someren, M., Rutledge, L., Stash, N., Aroyo, L., & Wielinga, J. (2008). The effects of transparency on trust in and acceptance of a content-based art recommender. User Modeling and User-Adapted Interaction, 18(5), 455–496. https://doi.org/10.1007/s11257-008-9051-3
- Diakopoulos, N. (2016). Accountability in algorithmic decision making. Communications of the ACM, 59(2), 58–62. https://doi.org/10.1145/2844110
- Diakopoulos, N., & Koliska, M. (2016). Algorithmic transparency in the news media. Digital Journalism, 5(7), 809–828. https://doi.org/10.1080/21670811.2016.1208053
- Dörr, K., & Hollnbuchner, K. (2017). Ethical challenges of algorithmic journalism. Digital Journalism, 5(4), 404–419. https://doi.org/10.1080/21670811.2016.1167612
- Ehsan, U., & Riedl, M. (2019). On design and evaluation of human-centered explainable AI systems. Workshop paper presented at CHI 2019, Glasgow, Scotland. ACM.
- Ferrario, A., Loi, M., & Viganò, E. (2020). In AI we trust incrementally. Philosophy & Technology, 33(3), 523–539. https://doi.org/10.1007/s13347-019-00378-3
- Graefe, A., Haim, M., Haarmann, B., & Brosius, H. (2018). Readers’ perception of computer-generated news. Journalism, 19(5), 595–610. https://doi.org/10.1177/1464884916641269
- Gursoy, D., Chi, O., Lu, L., & Nunkoo, R. (2019). Consumers acceptance of artificially intelligent device use in service delivery. International Journal of Information Management, 49, 157–169. https://doi.org/10.1016/j.ijinfomgt.2019.03.008
- Just, N., & Latzer, M. (2017). Governance by algorithms. Media, Culture & Society, 39(2), 238–258. https://doi.org/10.1177/0163443716643157
- Kemper, J., & Kolkman, D. (2019). Transparent to whom? Information, Communication & Society, 22(14), 2081–2096. https://doi.org/10.1080/1369118X.2018.1477967
- Kitchin, R. (2017). Thinking critically about and researching algorithms. Information, Communication & Society, 20(1), 14–29. https://doi.org/10.1080/1369118X.2016.1154087
- Kizilcec, R. (2016, May 7–12). How much information? In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. ACM. https://doi.org/10.1145/2858036.2858402
- Knijnenburg, B., Willemsen, M., Gantner, Z., Soncu, H., & Newell, C. (2012). Explaining the user experience of recommender systems. User Modeling and User-Adapted Interaction, 22(4–5), 441–504. https://doi.org/10.1007/s11257-011-9118-4
- Lee, M. (2018). Understanding perception of algorithmic decisions. Big Data & Society, 5(1), 1–16. https://doi.org/10.1177/2053951718756684
- Meijer, A. (2014). Transparency. In M. Bovens, R. E. Goodin, & T. Schillemans (Eds.), The Oxford handbook of public accountability (pp. 661–672). Oxford University Press. https://doi.org/10.1093/oxfordhb/9780199641253.013.0043
- Möller, J., Trilling, D., Helberger, N., & van Es, B. (2018). Do not blame it on the algorithm. Information, Communication & Society, 21(7), 959–977. https://doi.org/10.1080/1369118X.2018.1444076
- Rai, A. (2020). Explainable AI. Journal of the Academy of Marketing Science, 48(1), 137–141. https://doi.org/10.1007/s11747-019-00710-5
- Renjith, S., Sreekumar, A., & Jathavedan, M. (2020). An extensive study on the evolution of context-aware personalized travel recommender systems. Information Processing & Management, 57(1), 102078. https://doi.org/10.1016/j.ipm.2019.102078
- Shin, D. (2010). The effects of trust, security and privacy in social networking. Interacting with Computers, 22(5), 428–438. https://doi.org/10.1016/j.intcom.2010.05.001
- Shin, D. (2019). Toward fair, accountable, and transparent algorithms. Javnost: The Public, 26(3), 274–290. https://doi.org/10.1080/13183222.2019.1589249
- Shin, D. (2021). The effects of explainability and causability on perception, trust, and acceptance: Implications for explainable AI. International Journal of Human-Computer Studies, 146, 102551. https://doi.org/10.1016/j.ijhcs.2020.102551
- Shin, D., & Biocca, F. (2018). Exploring immersive experience in journalism: What makes people empathize with and embody immersive journalism? New Media & Society, 20(8), 2800–2823. https://doi.org/10.1177/1461444817733133
- Shin, D., & Hwang, Y. (2020). The role of affordance in the experience of blockchain: The effects of security, privacy and traceability on affective affordance. Online Information Review, 44(4), 913–932. https://doi.org/10.1108/OIR-01-2019-0013
- Shin, D., & Park, Y. (2019). Role of fairness, accountability, and transparency in algorithmic affordance. Computers in Human Behavior, 98, 277–284. https://doi.org/10.1016/j.chb.2019.04.019
- Shin, D., Zhong, B., & Biocca, F. (2020). Beyond user experience. International Journal of Information Management, 52, 1–11. https://doi.org/10.1016/j.ijinfomgt.2019.102061
- Sloan, R., & Warner, R. (2017, May/June). When is an algorithm transparent? IEEE Security & Privacy. https://doi.org/10.2139/ssrn.3051588
- Sundar, S. (2020). Rise of machine agency. Journal of Computer-Mediated Communication, 25(1), 74–88. https://doi.org/10.1093/jcmc/zmz026
- Thurman, N., Moeller, J., Helberger, N., & Trilling, D. (2019). My friends, editors, algorithms, and I. Digital Journalism, 7(4), 447–469. https://doi.org/10.1080/21670811.2018.1493936
- Wölker, A., & Powell, T. (in press). Algorithms in the newsroom? Journalism. https://doi.org/10.1177/1464884918757072
- Zhang, B., Wang, N., & Jin, H. (2014, July 9–11). Privacy concerns in online recommender systems. Symposium on Usable Privacy and Security 2014, Menlo Park, CA.
- Zheng, L., Yang, F., & Li, T. (2014). Modeling and broadening temporal user interest in personalized news recommendation. Expert Systems with Applications, 41(7), 3168–3177. https://doi.org/10.1016/j.eswa.2013.11.020