Research Article

Users’ Experiences of Algorithm-Mediated Public Services: Folk Theories, Trust, and Strategies in the Global South

Received 04 Jan 2024, Accepted 13 May 2024, Published online: 29 May 2024

References

  • Aldoney, D., & Prieto, F. (2023). Paternal and maternal predictors of affective and cognitive involvement of three-year-old Chilean children in Chile (Predictores del involucramiento afectivo y cognitivo de padres y madres con sus hijas/os de tres años en Chile). Infancia y Aprendizaje, 46(2), 385–414. https://doi.org/10.1080/02103702.2022.2159614
  • Andrews, L. (2019). Public administration, public leadership and the construction of public value in the age of the algorithm and ‘big data’. Public Administration, 97(2), 296–310. https://doi.org/10.1111/padm.12534
  • Ayoub, J., Yang, X. J., & Zhou, F. (2021). Modeling dispositional and initial learned trust in automated vehicles with predictability and explainability. Transportation Research Part F: Traffic Psychology and Behaviour, 77, 102–116. https://doi.org/10.1016/j.trf.2020.12.015
  • Bedué, P., & Fritzsche, A. (2022). Can we trust AI? An empirical investigation of trust requirements and guide to successful AI adoption. Journal of Enterprise Information Management, 35(2), 530–549. https://doi.org/10.1108/JEIM-06-2020-0233
  • Benlian, A., Wiener, M., Cram, W. A., Krasnova, H., Maedche, A., Möhlmann, M., Recker, J., & Remus, U. (2022). Algorithmic management: Bright and dark sides, practical implications, and research opportunities. Business & Information Systems Engineering, 64(6), 825–839. https://doi.org/10.1007/s12599-022-00764-w
  • Bernales, B., Bravo, S., Causa, L., Gómez, N., & Valdés, M. (2021). Aporte de un sistema predictivo de contraloría médica en la gestión de licencias médicas electrónicas. Revista Chilena de Salud Pública, 24(2), 115–126. https://doi.org/10.5354/0719-5281.2020.61265
  • Bland, G., Brinkerhoff, D., Romero, D., Wetterberg, A., & Wibbels, E. (2023). Public services, geography, and citizen perceptions of government in Latin America. Political Behavior, 45(1), 125–152. https://doi.org/10.1007/s11109-021-09691-0
  • Boddy, C. R. (2016). Sample size for qualitative research. Qualitative Market Research: An International Journal, 19(4), 426–432. https://doi.org/10.1108/QMR-06-2016-0053
  • Brauneis, R., & Goodman, E. P. (2018). Algorithmic transparency for the smart city. Yale Journal of Law and Technology, 20, 103–176. https://doi.org/10.2139/ssrn.3012499
  • Bucchi, M. (2004). Science in society: An introduction to social studies of science. Routledge.
  • Bucknell, C., Schmidt, A., Cruz, D., & Muñoz, J. C. (2017). Identifying and visualizing congestion bottlenecks with automated vehicle location systems: Application to Transantiago, Chile. Transportation Research Record: Journal of the Transportation Research Board, 2649(1), 61–70. https://doi.org/10.3141/2649-07
  • Burkell, J., & Bailey, J. (2018). Unlawful distinctions? In Canadian yearbook for human rights (Vol. II). Human Rights Research and Education Centre.
  • Busuioc, M. (2021). Accountable artificial intelligence: Holding algorithms to account. Public Administration Review, 81(5), 825–836. https://doi.org/10.1111/puar.13293
  • Cabalin, C., Saldaña, M., & Fernández, M. B. (2023). Framing school choice and merit: News media coverage of an education policy in Chile. Discourse: Studies in the Cultural Politics of Education, 44(6), 927–942. https://doi.org/10.1080/01596306.2023.2218272
  • Cabiddu, F., Moi, L., Patriotta, G., & Allen, D. G. (2022). Why do users trust algorithms? A review and conceptualization of initial trust and trust over time. European Management Journal, 40(5), 685–706. https://doi.org/10.1016/j.emj.2022.06.001
  • Cath, C. (2018). Governing artificial intelligence: Ethical, legal and technical opportunities and challenges. Philosophical Transactions. Series A, Mathematical, Physical, and Engineering Sciences, 376(2133), 20180080. https://doi.org/10.1098/rsta.2018.0080
  • Chen, K. (2023). If it is bad, why don’t I quit? Algorithmic recommendation use strategy from folk theories. Global Media and China. Advance online publication. https://doi.org/10.1177/20594364231209354
  • Chen, Y. N. K., & Wen, C. H. R. (2021). Impacts of attitudes toward government and corporations on public trust in artificial intelligence. Communication Studies, 72(1), 115–131. https://doi.org/10.1080/10510974.2020.1807380
  • Cho, H. (2022). Privacy helplessness on social media: Its constituents, antecedents and consequences. Internet Research, 32(1), 150–171. https://doi.org/10.1108/INTR-05-2020-0269
  • Choung, H., David, P., & Ross, A. (2022). Trust in AI and its role in the acceptance of AI technologies. International Journal of Human–Computer Interaction, 39(9), 1727–1739. https://doi.org/10.1080/10447318.2022.2050543
  • Cobbe, J. (2021). Algorithmic censorship by social platforms: Power and resistance. Philosophy & Technology, 34(4), 739–766. https://doi.org/10.1007/s13347-020-00429-0
  • Colquitt, J. A. (2001). On the dimensionality of organizational justice: A construct validation of a measure. The Journal of Applied Psychology, 86(3), 386–400. https://doi.org/10.1037/0021-9010.86.3.386
  • Colquitt, J. A., & Rodell, J. B. (2011). Justice, trust, and trustworthiness: A longitudinal analysis integrating three theoretical perspectives. Academy of Management Journal, 54(6), 1183–1206. https://doi.org/10.5465/amj.2007.0572
  • Correa, T., Pavez, I., & Contreras, J. (2020). Digital inclusion through mobile phones?: A comparison between mobile-only and computer users in internet access, skills and use. Information, Communication and Society, 23(7), 1074–1091. https://doi.org/10.1080/1369118X.2018.1555270
  • Correa, T., Valenzuela, S., & Pavez, I. (2024). For better and for worse: A panel survey of how mobile-only and hybrid Internet use affects digital skills over time. New Media & Society, 26(2), 995–1017. https://doi.org/10.1177/14614448211059114
  • Corvalán, J. G. (2017). Administración Pública digital e inteligente: Transformaciones en la era de la inteligencia artificial. Revista de Direito Econômico e Socioambiental, 8(2), 26. https://doi.org/10.7213/rev.dir.econ.soc.v8i2.19321
  • Cotter, K., & Reisdorf, B. C. (2020). Algorithmic knowledge gaps: A new horizon of (digital) inequality. International Journal of Communication, 14, 745–765. https://ijoc.org/index.php/ijoc/article/view/12450/2952
  • Davenport, T., Guha, A., Grewal, D., & Bressgott, T. (2020). How artificial intelligence will change the future of marketing. Journal of the Academy of Marketing Science, 48(1), 24–42. https://doi.org/10.1007/s11747-019-00696-0
  • Dogruel, L. (2021). What is algorithm literacy? A conceptualization and challenges regarding its empirical measurement. In M. Taddicken & C. Schumann (Eds.), Algorithms and communication (pp. 67–93). Digital Communication Research. https://doi.org/10.48541/dcr.v9.3
  • Eagly, A. H., & Chaiken, S. (1993). The psychology of attitudes. Harcourt, Brace, Jovanovich.
  • Engin, Z., & Treleaven, P. (2019). Algorithmic government: Automating public services and supporting civil servants in using data science technologies. The Computer Journal, 62(3), 448–460. https://doi.org/10.1093/comjnl/bxy082
  • Eslami, M., Karahalios, K., Sandvig, C., Vaccaro, K., Rickman, A., Hamilton, K., & Kirlik, A. (2016, May). First I like it, then I hide it: Folk theories of social feeds. In J. Kaye (Ed.), Proceedings of the 2016 CHI conference on human factors in computing systems (pp. 2371–2382). Association for Computing Machinery. https://doi.org/10.1145/2858036.2858494
  • Eslami, M., Krishna Kumaran, S. R., Sandvig, C., & Karahalios, K. (2018, April). Communicating algorithmic process in online behavioral advertising [Paper presentation]. CHI ’18: CHI Conference on Human Factors in Computing Systems (pp. 1–13). https://doi.org/10.1145/3173574.3174006
  • European Commission. (2020). On artificial intelligence: A European approach to excellence and trust (COM(2020) 65 final) [White paper]. https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52020DC0065
  • Faúndez-Ugalde, A., Mellado-Silva, R., & Aldunate-Lizana, E. (2020). Use of artificial intelligence by tax administrations: An analysis regarding taxpayers’ rights in Latin American countries. Computer Law & Security Review, 38, 105441. https://doi.org/10.1016/j.clsr.2020.105441
  • Ferrario, A., & Loi, M. (2022, June). How explainability contributes to trust in AI [Paper presentation]. 2022 ACM Conference on Fairness, Accountability, and Transparency, Seoul, Korea (pp. 1457–1466). https://doi.org/10.1145/3531146.3533202
  • Festl, R. (2021). Social media literacy & adolescent social online behavior in Germany. Journal of Children and Media, 15(2), 249–271. https://doi.org/10.1080/17482798.2020.1770110
  • Fjeld, J., Achten, N., Hilligoss, H., Nagy, A., & Srikumar, M. (2020). Principled artificial intelligence: Mapping consensus in ethical and rights-based approaches to principles for AI (Berkman Klein Center Research Publication No. 2020-1). Available at SSRN: https://ssrn.com/abstract=3518482
  • Flick, U. (Ed.). (2013). The SAGE handbook of qualitative data analysis. SAGE.
  • Flores, I., Sanhueza, C., Atria, J., & Mayer, R. (2020). Top incomes in Chile: A historical perspective on income inequality, 1964–2017. Review of Income and Wealth, 66(4), 850–874. https://doi.org/10.1111/roiw.12441
  • Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C., Madelin, R., Pagallo, U., Rossi, F., Schafer, B., Valcke, P., & Vayena, E. (2018). AI4People-an ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689–707. https://doi.org/10.1007/s11023-018-9482-5
  • Foucault, M. (2023). Discipline and punish. In Social theory re-wired (pp. 291–299). Routledge.
  • Garcia Alonso, R., Thoene, U., & Davila Benavides, D. (2022). Digital health and artificial intelligence: Advancing healthcare provision in Latin America. IT Professional, 24(2), 62–68. https://doi.org/10.1109/MITP.2022.3143530
  • García Zaballos, A., Iglesias Rodriguez, E., & Puig Gabarró, P. (2022). Informe anual del Indice de Desarrollo de la Banda Ancha: Brecha digital en América Latina y el Caribe: IDBA 2021. Banco Interamericano de Desarrollo.
  • Geertz, C. (1983). Local knowledge: Further essays in interpretive anthropology. BasicBooks.
  • Glaser, B. G., & Strauss, A. L. (1967). The discovery of grounded theory: Strategies for qualitative research. Routledge.
  • Glikson, E., & Woolley, A. W. (2020). Human trust in artificial intelligence: Review of empirical research. Academy of Management Annals, 14(2), 627–660. https://doi.org/10.5465/annals.2018.00577
  • GobLab Universidad Adolfo Ibáñez. (2023). Repositorio algoritmos públicos. Informe anual 2023. https://goblab.uai.cl/wp-content/uploads/2023/04/Informe_Repositorio_Algoritmos_Publicos_2023.pdf
  • Graells-Garrido, E., Opitz, D., Rowe, F., & Arriagada, J. (2023). A data fusion approach with mobile phone data for updating travel survey-based mode split estimates. Transportation Research Part C: Emerging Technologies, 155, 104285. https://doi.org/10.1016/j.trc.2023.104285
  • Gray, T. J., Gainous, J., & Wagner, K. M. (2017). Gender and the digital divide in Latin America. Social Science Quarterly, 98(1), 326–340. https://doi.org/10.1111/ssqu.12270
  • Gunning, D., & Aha, D. (2019). DARPA’s explainable artificial intelligence (XAI) program. AI Magazine, 40(2), 44–58. https://doi.org/10.1609/aimag.v40i2.2850
  • Gutiérrez, J. D., & Muñoz-Cadena, S. (2023). Adopción de sistemas de decisión automatizada en el sector público: Cartografía de 113 sistemas en Colombia. GIGAPP Estudios Working Papers, 10(267–272), 365–395. https://www.gigapp.org/ewp/index.php/GIGAPP-EWP/article/view/329
  • Hagendorff, T. (2020). The ethics of AI ethics: An evaluation of guidelines. Minds and Machines, 30(1), 99–120. https://doi.org/10.1007/s11023-020-09517-8
  • Han, B., Buchanan, G., & Mckay, D. (2022, November). Learning in the Panopticon [Paper presentation]. OzCHI ’22: 34th Australian Conference on Human-Computer Interaction, Australia. https://doi.org/10.1145/3572921.3572937
  • Hargittai, E. (2009). An update on survey measures of web-oriented digital literacy. Social Science Computer Review, 27(1), 130–137. https://doi.org/10.1177/0894439308318213
  • Hargittai, E., & Litt, E. (2013). New strategies for employment? Internet skills and online privacy practices during people’s job search. IEEE Security & Privacy, 11(3), 38–45. https://doi.org/10.1109/MSP.2013.64
  • Hatlevik, O. E., Guðmundsdóttir, G. B., & Loi, M. (2015). Digital diversity among upper secondary students: A multilevel analysis of the relationship between cultural capital, self-efficacy, strategic use of information and digital competence. Computers & Education, 81, 345–353. https://doi.org/10.1016/j.compedu.2014.10.019
  • Hermann, E. (2022). Leveraging artificial intelligence in marketing for social good—An ethical perspective. Journal of Business Ethics, 179(1), 43–61. https://doi.org/10.1007/s10551-021-04843-y
  • Hertzberg, L. (1988). On the attitude of trust. Inquiry, 31(3), 307–322. https://doi.org/10.1080/00201748808602157
  • Hoff, K. A., & Bashir, M. (2015). Trust in automation: Integrating empirical evidence on factors that influence trust. Human Factors, 57(3), 407–434. https://doi.org/10.1177/0018720814547570
  • Holton, J. (2007). The coding process and its challenges. In The SAGE handbook of grounded theory (pp. 265–289). SAGE Publications Ltd. https://doi.org/10.4135/9781848607941
  • Issaka, A. (2023). Techno-optimism: Framing data and digital infrastructure for public acceptance in Ghana. Big Data & Society, 10(2). https://doi.org/10.1177/20539517231215359
  • Janssen, M., & Kuk, G. (2016). The challenges and limits of big data algorithms in technocratic governance. Government Information Quarterly, 33(3), 371–377. https://doi.org/10.1016/j.giq.2016.08.011
  • Jarrahi, M. H., & Sutherland, W. (2019). Algorithmic management and algorithmic competencies: Understanding and appropriating algorithms in gig work. In N. G. Taylor, C. Christian-Lamb, M. H. Martin, & B. Nardi (Eds.), Information in contemporary society (pp. 578–589). Springer International Publishing.
  • Jones, K. (1996). Trust as an affective attitude. Ethics, 107(1), 4–25. https://doi.org/10.1086/233694
  • Kim, S. S. Y., Watkins, E. A., Russakovsky, O., Fong, R., & Monroy-Hernández, A. (2023, June). Humans, AI, and context: Understanding end-users’ trust in a real-world computer vision application [Paper presentation]. Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, Chicago, USA (pp. 77–88). https://doi.org/10.1145/3593013.3593978
  • Kim, T., & Song, H. (2023). Communicating the limitations of AI: The effect of message framing and ownership on trust in artificial intelligence. International Journal of Human–Computer Interaction, 39(4), 790–800. https://doi.org/10.1080/10447318.2022.2049134
  • Klumbytė, G., Piehl, H., & Draude, C. (2023). Towards feminist intersectional XAI: From explainability to response-ability. In ACM CHI workshop on Human-Centered Explainable AI (HCXAI). ACM.
  • Koenig, A. (2020). The algorithms know me and I know them: Using student journals to uncover algorithmic literacy awareness. Computers and Composition, 58, 102611. https://doi.org/10.1016/j.compcom.2020.102611
  • Lee, J. D., & See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46(1), 50–80. https://doi.org/10.1518/hfes.46.1.50_30392
  • Lee, M. K., & Rich, K. (2021, May). Who is included in human perceptions of AI: Trust and perceived fairness around healthcare AI and cultural mistrust [Paper presentation]. Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan (pp. 1–14). https://doi.org/10.1145/3411764.3445570
  • Liao, Q. V., & Sundar, S. S. (2022, June). Designing for responsible trust in AI systems: A communication perspective [Paper presentation]. Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, Seoul, Korea (pp. 1257–1268). https://doi.org/10.1145/3531146.3533182
  • Lillemäe, E., Talves, K., & Wagner, W. (2023). Public perception of military AI in the context of techno-optimistic society. AI & Society, 1–15. https://doi.org/10.1007/s00146-023-01785-z
  • Liu, B. (2021). In AI we trust? Effects of agency locus and transparency on uncertainty reduction in human–AI interaction. Journal of Computer-Mediated Communication, 26(6), 384–402. https://doi.org/10.1093/jcmc/zmab013
  • Lobschat, L., Mueller, B., Eggers, F., Brandimarte, L., Diefenbach, S., Kroschke, M., & Wirtz, J. (2021). Corporate digital responsibility. Journal of Business Research, 122, 875–888. https://doi.org/10.1016/j.jbusres.2019.10.006
  • Madiega, T. (2021). Artificial intelligence act. European Parliament: European Parliamentary Research Service. https://www.europarl.europa.eu/RegData/etudes/BRIE/2021/698792/EPRS_BRI(2021)698792_EN.pdf
  • Malone, M. F. T., & Dammert, L. (2021). The police and the public: Policing practices and public trust in Latin America. Policing and Society, 31(4), 418–433. https://doi.org/10.1080/10439463.2020.1744600
  • Mansi, G., & Riedl, M. O. (2023). Why don’t you do something about it? Outlining connections between AI explanations and user actions. In ACM CHI workshop on Human-Centered Explainable AI (HCXAI). ACM. https://doi.org/10.48550/arXiv.2305.06297
  • Mariani, M. M., Perez‐Vega, R., & Wirtz, J. (2022). AI in marketing, consumer research and psychology: A systematic literature review and research agenda. Psychology & Marketing, 39(4), 755–776. https://doi.org/10.1002/mar.21619
  • Marshall, B., Cardon, P., Poddar, A., & Fontenot, R. (2013). Does sample size matter in qualitative research?: A review of qualitative interviews in IS research. Journal of Computer Information Systems, 54(1), 11–22. https://doi.org/10.1080/08874417.2013.11645667
  • Montoya, L., & Rivas, P. (2019, November). Government AI readiness meta-analysis for Latin America and the Caribbean [Paper presentation]. 2019 IEEE International Symposium on Technology and Society (ISTAS) (pp. 1–8), Boston, USA. https://doi.org/10.1109/ISTAS48451.2019.8937869
  • Möhlmann, M., & Zalmanson, L. (2017, December). Hands on the wheel: Navigating algorithmic management and Uber drivers [Paper presentation]. Proceedings of the International Conference on Information Systems (ICIS) (pp. 10–13), Seoul, South Korea.
  • Möhlmann, M., Zalmanson, L., Henfridsson, O., & Gregory, R. W. (2021). Algorithmic management of work on online labor platforms: When matching meets control. MIS Quarterly, 45(4), 1999–2022. https://doi.org/10.25300/MISQ/2021/15333
  • Möhlmann, M., Alves de Lima Salge, C., & Marabelli, M. (2023). Algorithm sensemaking: How platform workers make sense of algorithmic management. Journal of the Association for Information Systems, 24(1), 35–64. https://doi.org/10.17705/1jais.00774
  • Munizaga, M. A., & Palma, C. (2012). Estimation of a disaggregate multimodal public transport Origin–Destination matrix from passive smartcard data from Santiago, Chile. Transportation Research Part C: Emerging Technologies, 24, 9–18. https://doi.org/10.1016/j.trc.2012.01.007
  • Nader, K., & Lee, M. K. (2022). Folk theories and user strategies on dating apps. In M. Smits (Ed.), Information for a better world: Shaping the global future (pp. 445–458). Springer International Publishing. https://doi.org/10.1007/978-3-030-96957-8_37
  • Oeldorf-Hirsch, A., & Neubaum, G. (2023). Attitudinal and behavioral correlates of algorithmic awareness among German and US social media users. Journal of Computer-Mediated Communication, 28(5), zmad035. https://doi.org/10.1093/jcmc/zmad035
  • Oudshoorn, N., & Pinch, T. J. (2003). How users matter: The co-construction of users and technologies. MIT Press.
  • Papenmeier, A., Kern, D., Englebienne, G., & Seifert, C. (2022). It’s complicated: The relationship between user trust, model accuracy and explanations in AI. ACM Transactions on Computer-Human Interaction, 29(4), 1–33. https://doi.org/10.1145/3495013
  • Pasquinelli, M. (2023). The eye of the master: A social history of artificial intelligence. Verso Books.
  • Pick, J., Sarkar, A., & Parrish, E. (2021). The Latin American and Caribbean digital divide: A geospatial and multivariate analysis. Information Technology for Development, 27(2), 235–262. https://doi.org/10.1080/02681102.2020.1805398
  • Potter, N. N. (2020). Interpersonal trust. In The Routledge handbook of trust and philosophy (pp. 243–255). Routledge.
  • Quezada, R. (2022). Chile’s digital learning strategy during the COVID-19 pandemic: Connecting policy with social realities? Current Issues in Comparative Education, 24(2), 136–150. https://journals.library.columbia.edu/index.php/cice/article/view/9495/5008
  • Rahwan, I., Cebrian, M., Obradovich, N., Bongard, J., Bonnefon, J. F., Breazeal, C., Crandall, J. W., Christakis, N. A., Couzin, I. D., Jackson, M. O., Jennings, N. R., Kamar, E., Kloumann, I. M., Larochelle, H., Lazer, D., McElreath, R., Mislove, A., Parkes, D. C., Pentland, A., … Wellman, M. (2019). Machine behaviour. Nature, 568(7753), 477–486. https://doi.org/10.1038/s41586-019-1138-y
  • Ramizo, G. (2022). Platform playbook: A typology of consumer strategies against algorithmic control in digital platforms. Information, Communication & Society, 25(13), 1849–1864. https://doi.org/10.1080/1369118X.2021.1897151
  • Riedl, M. O. (2019). Human‐centered artificial intelligence and machine learning. Human Behavior and Emerging Technologies, 1(1), 33–36. https://doi.org/10.1002/hbe2.117
  • Ruslin, R., Mashuri, S., Rasak, M. S. A., Alhabsyi, F., & Syam, H. (2022). Semi-structured interview: A methodological reflection on the development of a qualitative research instrument in educational studies. IOSR Journal of Research & Method in Education (IOSR-JRME), 12(1), 22–29. https://doi.org/10.9790/7388-1201052229
  • Sanchez-Pi, N., Martí, L., Bicharra Garcia, A., Baeza Yates, R., Vellasco, M., & Coello, C. A. (2021, November). A roadmap for AI in Latin America [Paper presentation]. Side Event AI in Latin America of the Global Partnership for AI (GPAI) Paris Summit. https://inria.hal.science/hal-03526055
  • Schlicker, N., Langer, M., Ötting, S. K., Baum, K., König, C. J., & Wallach, D. (2021). What to expect from opening up ‘black boxes’? Comparing perceptions of justice between human and automated agents. Computers in Human Behavior, 122, 106837. https://doi.org/10.1016/j.chb.2021.106837
  • Schmidt, P., Biessmann, F., & Teubner, T. (2020). Transparency and trust in artificial intelligence systems. Journal of Decision Systems, 29(4), 260–278. https://doi.org/10.1080/12460125.2020.1819094
  • Selten, F., & Klievink, B. (2024). Organizing public sector AI adoption: Navigating between separation and integration. Government Information Quarterly, 41(1), 101885. https://doi.org/10.1016/j.giq.2023.101885
  • Shen, H., DeVos, A., Eslami, M., & Holstein, K. (2021). Everyday algorithm auditing: Understanding the power of everyday users in surfacing harmful algorithmic behaviors. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW2), 1–29. https://doi.org/10.1145/3479577
  • Shin, D. (2021). The effects of explainability and causability on perception, trust, and acceptance: Implications for explainable AI. International Journal of Human-Computer Studies, 146, 102551. https://doi.org/10.1016/j.ijhcs.2020.102551
  • Shin, D. (2022). How do people judge the credibility of algorithmic sources? AI & Society, 37(1), 81–96. https://doi.org/10.1007/s00146-021-01158-4
  • Siles, I. (2023). Living with algorithms: Agency and user culture in Costa Rica. MIT Press.
  • Siles, I., Segura-Castillo, A., Solís, R., & Sancho, M. (2020). Folk theories of algorithmic recommendations on Spotify: Enacting data assemblages in the global South. Big Data & Society, 7(1), 205395172092337. https://doi.org/10.1177/2053951720923377
  • Smuha, N. A. (2019). The EU approach to ethics guidelines for trustworthy artificial intelligence. Computer Law Review International, 20(4), 97–106. https://doi.org/10.9785/cri-2019-200402
  • Stahl, B. C., Andreou, A., Brey, P., Hatzakis, T., Kirichenko, A., Macnish, K., Laulhé Shaelou, S., Patel, A., Ryan, M., & Wright, D. (2021). Artificial intelligence for human flourishing–beyond principles for machine learning. Journal of Business Research, 124, 374–388. https://doi.org/10.1016/j.jbusres.2020.11.030
  • Suárez‑Cao, J. (2021). Reconstructing legitimacy after crisis: The Chilean path to a new constitution. Hague Journal on the Rule of Law, 13(2–3), 253–264. https://doi.org/10.1007/s40803-021-00160-8
  • Swart, J. (2021). Experiencing algorithms: How young people understand, feel about, and engage with algorithmic news selection on social media. Social Media + Society, 7(2), 205630512110088. https://doi.org/10.1177/20563051211008828
  • Thornberg, R., & Charmaz, K. (2014). Grounded theory and theoretical coding. In U. Flick (Ed.), The SAGE handbook of qualitative data analysis (pp. 153–169). SAGE Publications Ltd. https://doi.org/10.4135/9781446282243
  • Tomsett, R., Preece, A., Braines, D., Cerutti, F., Chakraborty, S., Srivastava, M., Pearson, G., & Kaplan, L. (2020). Rapid trust calibration through interpretable and uncertainty-aware AI. Patterns, 1(4), 100049. https://doi.org/10.1016/j.patter.2020.100049
  • United Nations General Assembly. (2024). Seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development (Resolution A/78/L.49). United Nations. https://undocs.org/A/78/L.49
  • Veale, M., & Brass, I. (2019). Administration by algorithm? Public management meets public sector machine learning. In K. Yeung, & M. Lodge (Eds.), Algorithmic regulation (pp. 121–200). Oxford University Press. https://doi.org/10.1093/oso/9780198838494.003.0006
  • Vissenberg, J., d’Haenens, L., & Livingstone, S. (2022). Digital literacy and online resilience as facilitators of young people’s well-being: A systematic review. European Psychologist, 27(2), 76–85. https://doi.org/10.1027/1016-9040/a000478
  • von Eschenbach, W. J. (2021). Transparency and the black box problem: Why we do not trust AI. Philosophy & Technology, 34(4), 1607–1622. https://doi.org/10.1007/s13347-021-00477-0
  • Weitz, K., Schiller, D., Schlagowski, R., Huber, T., & André, E. (2019, July). Do you trust me [Paper presentation]. Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents, Paris, France (pp. 7–9). https://doi.org/10.1145/3308532.3329441
  • Whitley, R. D. (1970). Black boxism and the sociology of science: A discussion of the major developments in the field. The Sociological Review, 18(1), 61–92. https://doi.org/10.1111/j.1467-954X.1970.tb03176.x
  • Williams, B. A., Brooks, C. F., & Shmargad, Y. (2018). How algorithms discriminate based on data they lack: Challenges, solutions, and policy implications. Journal of Information Policy, 8(1), 78–115. https://doi.org/10.5325/jinfopoli.8.2018.0078
  • Winfield, A. F., & Jirotka, M. (2018). Ethical governance is essential to building trust in robotics and artificial intelligence systems. Philosophical Transactions. Series A, Mathematical, Physical, and Engineering Sciences, 376(2133), 20180085. https://doi.org/10.1098/rsta.2018.0085
  • Winner, L. (1993). Upon opening the black box and finding it empty: Social constructivism and the philosophy of technology. Science, Technology, & Human Values, 18(3), 362–378. https://doi.org/10.1177/016224399301800306
  • Wirtz, J., Kunz, W. H., Hartley, N., & Tarbit, J. (2023). Corporate digital responsibility in service firms and their ecosystems. Journal of Service Research, 26(2), 173–190. https://doi.org/10.1177/10946705221130467
  • Zhang, Y., Liao, Q. V., & Bellamy, R. K. E. (2020, January). Effect of confidence and explanation on accuracy and trust calibration in AI-assisted decision making [Paper presentation]. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, Barcelona, Spain (pp. 295–305). https://doi.org/10.1145/3351095.3372852
  • Zhu, M., Wu, C., Huang, S., Zheng, K., Young, S. D., Yan, X., & Yuan, Q. (2021). Privacy paradox in mHealth applications: An integrated elaboration likelihood model incorporating privacy calculus and privacy fatigue. Telematics and Informatics, 61, 101601. https://doi.org/10.1016/j.tele.2021.101601
