The importance of effectiveness versus transparency and stakeholder involvement in citizens’ perception of public sector algorithms


References

  • Agarwal, P. K. 2018. “Public Administration Challenges in the World of AI and Bots.” Public Administration Review 78 (6): 917–921. doi:10.1111/puar.12979.
  • Ananny, M., and K. Crawford. 2018. “Seeing Without Knowing: Limitations of the Transparency Ideal and Its Application to Algorithmic Accountability.” New Media & Society 20 (3): 973–989. doi:10.1177/1461444816676645.
  • Aoki, N. 2020. “An Experimental Study of Public Trust in AI Chatbots in the Public Sector.” Government Information Quarterly 37 (4): 101490. doi:10.1016/j.giq.2020.101490.
  • Barocas, S., and A. D. Selbst. 2016. “Big Data’s Disparate Impact.” California Law Review 104: 671–732. doi:10.2139/ssrn.2477899.
  • Bovens, M. 2007. “Analysing and Assessing Accountability: A Conceptual Framework.” European Law Journal 13 (4): 447–468. doi:10.1111/j.1468-0386.2007.00378.x.
  • Busuioc, M. 2020. “Accountable Artificial Intelligence: Holding Algorithms to Account.” Public Administration Review 81 (5). doi:10.1111/puar.13293.
  • Criado, J. I., and L. O. de Zarate-Alcarazo. 2022. “Technological Frames, CIOs, and Artificial Intelligence in Public Administration: A Socio-Cognitive Exploratory Study in Spanish Local Governments.” Government Information Quarterly 39 (3): 101688. doi:10.1016/j.giq.2022.101688.
  • de Bekker-Grob, E. W., B. Donkers, M. F. Jonker, and E. A. Stolk. 2015. “Sample Size Requirements for Discrete-Choice Experiments in Healthcare: A Practical Guide.” The Patient - Patient-Centered Outcomes Research 8 (5): 373–384. doi:10.1007/s40271-015-0118-z.
  • Dickinson, H., and S. Yates. 2021. “From External Provision to Technological Outsourcing: Lessons for Public Sector Automation from the Outsourcing Literature.” Public Management Review, online first: 1–19. doi:10.1080/14719037.2021.1972681.
  • Felzmann, H., E. Fosch-Villaronga, C. Lutz, and A. Tamò-Larrieux. 2020. “Towards Transparency by Design for Artificial Intelligence.” Science and Engineering Ethics 26 (6): 3333–3361. doi:10.1007/s11948-020-00276-4.
  • Gidengil, E., D. Stolle, and O. Bergeron-Boutin. 2021. “The Partisan Nature of Support for Democratic Backsliding: A Comparative Perspective.” European Journal of Political Research 61 (4). doi:10.1111/1475-6765.12502.
  • Giest, S. N., and B. Klievink. 2022. “More Than a Digital System: How AI is Changing the Role of Bureaucrats in Different Organizational Contexts.” Public Management Review, online first: 1–20. doi:10.1080/14719037.2022.2095001.
  • Gil-Garcia, J. R., S. S. Dawes, and T. A. Pardo. 2018. “Digital Government and Public Management Research: Finding the Crossroads.” Public Management Review 20 (5): 633–646. doi:10.1080/14719037.2017.1327181.
  • Glikson, E., and A. W. Woolley. 2020. “Human Trust in Artificial Intelligence: Review of Empirical Research.” The Academy of Management Annals 14 (2): 627–660. doi:10.5465/annals.2018.0057.
  • Graham, M. H., and M. W. Svolik. 2020. “Democracy in America? Partisanship, Polarization, and the Robustness of Support for Democracy in the United States.” The American Political Science Review 114 (2): 392–409. doi:10.1017/S0003055420000052.
  • Grimmelikhuijsen, S. 2022. “Explaining Why the Computer Says No: Algorithmic Transparency Affects the Perceived Trustworthiness of Automated Decision-Making.” Public Administration Review, online first. doi:10.1111/puar.13483.
  • Grimmelikhuijsen, S., S. J. Piotrowski, and G. G. Van Ryzin. 2020. “Latent Transparency and Trust in Government: Unexpected Findings from Two Survey Experiments.” Government Information Quarterly 37 (4): 101497. doi:10.1016/j.giq.2020.101497.
  • Grimmelikhuijsen, S., G. Porumbescu, B. Hong, and T. Im. 2013. “The Effect of Transparency on Trust in Government: A Cross-National Comparative Experiment.” Public Administration Review 73 (4): 575–586. doi:10.1111/puar.12047.
  • Guidotti, R., A. Monreale, S. Ruggieri, F. Turini, F. Giannotti, and D. Pedreschi. 2018. “A Survey of Methods for Explaining Black Box Models.” ACM Computing Surveys 51 (5): 1–42. doi:10.1145/3236009.
  • Hainmueller, J., D. Hangartner, and T. Yamamoto. 2015. “Validating Vignette and Conjoint Survey Experiments Against Real-World Behavior.” Proceedings of the National Academy of Sciences 112 (8): 2395–2400. doi:10.1073/pnas.1416587112.
  • Halvorsen, K. E. 2003. “Assessing the Effects of Public Participation.” Public Administration Review 63 (5): 535–543. doi:10.1111/1540-6210.00317.
  • Horiuchi, Y., Z. Markovich, and T. Yamamoto. 2021. “Does Conjoint Analysis Mitigate Social Desirability Bias?” Massachusetts Institute of Technology Political Science Department Research Paper 2018–15: 1–29.
  • Hou, Y. T.-Y., and M. F. Jung. 2021. “Who is the Expert? Reconciling Algorithm Aversion and Algorithm Appreciation in AI-Supported Decision Making.” Proceedings of the ACM on Human-Computer Interaction 5 (CSCW2): 1–25. doi:10.1145/3479864.
  • Juravle, G., A. Boudouraki, M. Terziyska, and C. Rezlescu. 2020. “Trust in Artificial Intelligence for Medical Diagnoses.” In Progress in Brain Research, Vol. 253, 263–282. Elsevier. doi:10.1016/bs.pbr.2020.06.006.
  • Kaminski, M. E., and G. Malgieri. 2020. “Algorithmic Impact Assessments Under the GDPR: Producing Multi-Layered Explanations.” International Data Privacy Law, online first: 1–20. doi:10.2139/ssrn.3456224.
  • Kennedy, R. P., P. D. Waggoner, and M. M. Ward. 2022. “Trust in Public Policy Algorithms.” The Journal of Politics 84 (2): 1132–1148. doi:10.1086/716283.
  • Kizilcec, R. F. 2016. “How Much Information? Effects of Transparency on Trust in an Algorithmic Interface.” Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, 2390–2395. doi:10.1145/2858036.2858402.
  • König, P. D., and G. Wenzelburger. 2021. “The Legitimacy Gap of Algorithmic Decision-Making in the Public Sector: Why It Arises and How to Address It.” Technology in Society 67: 101688. doi:10.1016/j.techsoc.2021.101688.
  • König, P. D., S. Wurster, and M. B. Siewert. 2022. “Consumers are Willing to Pay a Price for Explainable, but Not for Green AI. Evidence from a Choice-Based Conjoint Analysis.” Big Data & Society 9 (1): 1–13. doi:10.1177/20539517211069632.
  • Krafft, T. D., K. A. Zweig, and P. D. König. 2020. “How to Regulate Algorithmic Decision-Making: A Framework of Regulatory Requirements for Different Applications.” Regulation & Governance, online first. doi:10.1111/rego.12369.
  • Kroll, J. A., J. Huey, S. Barocas, E. W. Felten, J. R. Reidenberg, D. G. Robinson, and H. Yu. 2017. “Accountable Algorithms.” University of Pennsylvania Law Review 165: 633–705.
  • Lee, M. K., A. Psomas, A. D. Procaccia, D. Kusbit, A. Kahng, J. T. Kim, X. Yuan, et al. 2019. “WeBuildai: Participatory Framework for Algorithmic Governance.” Proceedings of the ACM on Human-Computer Interaction 3 (CSCW): 1–35. doi:10.1145/3359283.
  • Lepri, B., N. Oliver, E. Letouzé, A. Pentland, and P. Vinck. 2018. “Fair, Transparent, and Accountable Algorithmic Decision-Making Processes: The Premise, the Proposed Solutions, and the Open Challenges.” Philosophy & Technology 31 (4): 611–627. doi:10.1007/s13347-017-0279-x.
  • Levy, K., K. E. Chasalow, and S. Riley. 2021. “Algorithms and Decision-Making in the Public Sector.” Annual Review of Law and Social Science 17 (1): 309–334. doi:10.1146/annurev-lawsocsci-041221-023808.
  • Liu, B. 2021. “In AI We Trust? Effects of Agency Locus and Transparency on Uncertainty Reduction in Human–AI Interaction.” Journal of Computer-Mediated Communication 26 (6): 384–402. doi:10.1093/jcmc/zmab013.
  • Martin, K. 2019. “Ethical Implications and Accountability of Algorithms.” Journal of Business Ethics 160 (4): 835–850. doi:10.1007/s10551-018-3921-3.
  • Miller, S. M., and L. R. Keiser. 2021. “Representative Bureaucracy and Attitudes Toward Automated Decision Making.” Journal of Public Administration Research and Theory 31 (1): 150–165. doi:10.1093/jopart/muaa019.
  • Mohler, G. O., M. B. Short, S. Malinowski, M. Johnson, G. E. Tita, A. L. Bertozzi, and P. J. Brantingham. 2015. “Randomized Controlled Field Trials of Predictive Policing.” Journal of the American Statistical Association 110 (512): 1399–1411. doi:10.1080/01621459.2015.1077710.
  • Mussweiler, T. 2002. “The Malleability of Anchoring Effects.” Experimental Psychology 49 (1): 67–72. doi:10.1027//1618-3169.49.1.67.
  • Oetzel, M. C., and S. Spiekermann. 2014. “A Systematic Methodology for Privacy Impact Assessments: A Design Science Approach.” European Journal of Information Systems 23 (2): 126–150. doi:10.1057/ejis.2013.18.
  • Oswald, M., J. Grace, S. Urwin, and G. C. Barnes. 2018. “Algorithmic Risk Assessment Policing Models: Lessons from the Durham HART Model and ‘Experimental’ Proportionality.” Information & Communications Technology Law 27 (2): 223–250. doi:10.1080/13600834.2018.1458455.
  • Pasquale, F. 2015. The Black Box Society: The Secret Algorithms That Control Money and Information. Cambridge: Harvard University Press.
  • Peters, B. G. 2011. “Responses to NPM: From Input Democracy to Output Democracy.” In The Ashgate Research Companion to New Public Management, edited by T. Christensen and P. Lægreid, 361–373. Farnham: Ashgate Pub. Co.
  • Reed Johnson, F., E. Lancsar, D. Marshall, V. Kilambi, A. Mühlbacher, D. A. Regier, B. W. Bresnahan, B. Kanninen, and J. F. P. Bridges. 2013. “Constructing Experimental Designs for Discrete-Choice Experiments: Report of the ISPOR Conjoint Analysis Experimental Design Good Research Practices Task Force.” Value in Health 16 (1): 3–13. doi:10.1016/j.jval.2012.08.2223.
  • Roffman, D., G. Hart, M. Girardi, C. J. Ko, and J. Deng. 2018. “Predicting Non-Melanoma Skin Cancer via a Multi-Parameterized Artificial Neural Network.” Scientific Reports 8 (1): 1701. doi:10.1038/s41598-018-19907-9.
  • Rosenfeld, A., and A. Richardson. 2019. “Explainability in Human–Agent Systems.” Autonomous Agents and Multi-Agent Systems 33 (6): 673–705. doi:10.1007/s10458-019-09408-y.
  • Schiff, D. S., K. J. Schiff, and P. Pierson. 2021. “Assessing Public Value Failure in Government Adoption of Artificial Intelligence.” Public Administration 100 (3): 1–21. doi:10.1111/padm.12742.
  • Schmidt, V. 2013. “Democracy and Legitimacy in the European Union Revisited: Input, Output and Throughput.” Political Studies 61 (1): 2–22. doi:10.1111/j.1467-9248.2012.00962.x.
  • Schmidt, P., F. Biessmann, and T. Teubner. 2020. “Transparency and Trust in Artificial Intelligence Systems.” Journal of Decision Systems 29 (4): 260–278. doi:10.1080/12460125.2020.1819094.
  • Shin, D. 2021. “The Effects of Explainability and Causability on Perception, Trust, and Acceptance: Implications for Explainable AI.” International Journal of Human-Computer Studies 146: 102551. doi:10.1016/j.ijhcs.2020.102551.
  • Shin, D., and Y. J. Park. 2019. “Role of Fairness, Accountability, and Transparency in Algorithmic Affordance.” Computers in Human Behavior 98: 277–284. doi:10.1016/j.chb.2019.04.019.
  • Starke, C., and M. Lünich. 2020. “Artificial Intelligence for Political Decision-Making in the European Union: Effects on Citizens’ Perceptions of Input, Throughput, and Output Legitimacy.” Data & Policy 2: e16. doi:10.1017/dap.2020.19.
  • Strebel, M. A., D. Kübler, and F. Marcinkowski. 2018. “The Importance of Input and Output Legitimacy in Democratic Governance: Evidence from a Population-Based Survey Experiment in Four West European Countries.” European Journal of Political Research 58 (2): 488–513. doi:10.1111/1475-6765.12293.
  • Tyler, T. R. 2000. “Social Justice: Outcome and Procedure.” International Journal of Psychology 35 (2): 117–125. doi:10.1080/002075900399411.
  • van der Wal, Z., G. de Graaf, and A. Lawton. 2011. “Competing Values in Public Management.” Public Management Review 13 (3): 331–341. doi:10.1080/14719037.2011.554098.
  • Veale, M., and I. Brass. 2019. “Administration by Algorithm? Public Management Meets Public Sector Machine Learning.” In Algorithmic Regulation, edited by K. Yeung and M. Lodge, 121–149. Oxford: Oxford University Press.
  • Verbeeten, F. H. M., and R. F. Speklé. 2015. “Management Control, Results-Oriented Culture and Public Sector Performance: Empirical Evidence on New Public Management.” Organization Studies 36 (7): 953–978. doi:10.1177/0170840615580014.
  • Warren, M. E. 2014. “Accountability and Democracy.” In The Oxford Handbook of Public Accountability, edited by M. Bovens, R. E. Goodin, and T. Schillemans, 39–54. Oxford: Oxford University Press.
  • Wieringa, M. 2020. “What to Account for When Accounting for Algorithms: A Systematic Literature Review on Algorithmic Accountability.” Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 1–18. doi:10.1145/3351095.3372833.
  • Willems, J., M. J. Schmid, D. Vanderelst, D. Vogel, and F. Ebinger. 2022. “AI-Driven Public Services and the Privacy Paradox: Do Citizens Really Care About Their Privacy?” Public Management Review, online first: 1–19. doi:10.1080/14719037.2022.2063934.
  • Wirtz, B. W., and W. M. Müller. 2019. “An Integrated Artificial Intelligence Framework for Public Management.” Public Management Review 21 (7): 1076–1100. doi:10.1080/14719037.2018.1549268.
  • Wirtz, B. W., J. C. Weyerer, and C. Geyer. 2019. “Artificial Intelligence and the Public Sector—Applications and Challenges.” International Journal of Public Administration 42 (7): 596–615. doi:10.1080/01900692.2018.1498103.
  • Yeung, K., A. Howes, and G. Pogrebna. 2019. “AI Governance by Human Rights-Centred Design, Deliberation and Oversight: An End to Ethics Washing.” SSRN Electronic Journal. doi:10.2139/ssrn.3435011.
  • Yigitcanlar, T., Md. Kamruzzaman, M. Foth, J. Sabatini-Marques, E. da Costa, and G. Ioppolo. 2019. “Can Cities Become Smart Without Being Sustainable? A Systematic Review of the Literature.” Sustainable Cities and Society 45: 348–365. doi:10.1016/j.scs.2018.11.033.
  • Young, M. M., J. B. Bullock, and J. D. Lecy. 2019. “Artificial Discretion as a Tool of Governance: A Framework for Understanding the Impact of Artificial Intelligence on Public Administration.” Perspectives on Public Management and Governance, online first: 1–13. doi:10.1093/ppmgov/gvz014.
  • Zhu, H., B. Yu, A. Halfaker, and L. Terveen. 2018. “Value-Sensitive Algorithm Design: Method, Case Study, and Lessons.” Proceedings of the ACM on Human-Computer Interaction 2 (CSCW): 1–23. doi:10.1145/3274463.