Editorial

The brave new world of AI: implications for public sector agents, organisations, and governance

Justin B. Bullock & Yu-Che Chen

Over the past 10 years, public administration scholarship has begun to wrestle with the increasing capabilities of artificial intelligence (AI). This can be seen in the growing attention paid to the opportunities and issues that AI presents to public administration, including changes in discretion, management, organisational design, and process frameworks (J. Bullock et al., 2020; Busch & Henriksen, 2018; Peeters, 2023; Zuiderwijk et al., 2021). Public organisations have evolved from street-level to screen-level organisations (Bovens & Zouridis, 2002), and many have evolved further still, becoming dominated by digital systems that shape behaviour throughout the organisation.

Studies have found that AI can improve efficiency, effectiveness, and equity in some domains, while many studies have found the opposite effects in other domains (Compton et al., 2023). These systems can be well or poorly designed, implemented, and executed, sometimes reducing and sometimes increasing the prevalence of administrative evil (Young et al., 2019). We should no longer see the implementation of these systems as either a panacea to be universally embraced or a plague to be avoided.

Very little of the existing work, however, is forward looking. Yet the field of AI is experiencing investment, policy attention, and gains in capability on a scale that arguably surpasses the introduction of computers and the internet. It is important that we carefully analyse how best to govern this technology (Taeihagh, 2021). One of the most important questions is what public administration, public organisations, and public management will need to do to prepare for and shape the fast pace of these increasing AI capabilities (J. B. Bullock et al., 2022). The new paradigm of machine learning has already shown remarkable progress in generating language, images, audio, and video. In addition, experts generally agree that progress with current methods scales with the amount of data and computational power available, and that improvements in the learning algorithms themselves are likely to continue (Kaplan et al., 2020). These systems are already being applied to solve long-standing problems in other fields.
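
As a rough sketch of what this scaling means (stated here in simplified, generic notation rather than the precise form used in the source), Kaplan et al. (2020) find that a language model's test loss $L$ falls approximately as a power law in model size $N$, dataset size $D$, and training compute $C$:

$$ L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}, \qquad L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}, \qquad L(C) \approx \left(\frac{C_c}{C}\right)^{\alpha_C}, $$

where $N_c$, $D_c$, $C_c$ and the exponents $\alpha$ are empirically fitted constants. In other words, larger models trained on more data with more compute have so far improved in a broadly predictable way.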

This suggests that public service organisations are in the midst of another fundamental transition shaped by technological change. This transition requires creative, forward-looking thinking across three core areas within public administration: (1) agents, (2) organisations, and (3) governance. We briefly highlight outstanding questions that AI development presents to each of these core areas of public administration (PA) research and practice.

Agents

Within the PA literature there has been a focus on how AI changes bureaucrats' use of discretion (Alon-Barkat & Busuioc, 2023; Ranerup & Henriksen, 2022). There is an ongoing debate about whether AI enhances or curtails that discretion, and whether automation bias and overenthusiasm in the use of automation and augmentation tools might take hold (Buffat, 2015; De Boer & Raaphorst, 2023). A forward-looking and technologically sound approach conceptualises AI as agents. Cutting-edge work in the field of AI is creating AI agents: systems that can act as agents within an organisation, completing various tasks on demand. These agents can be given roles and responsibilities to carry out sets of complex tasks.

These AI agents are already being trialled in private organisations and will likely form the basis for a new form of bureaucrat, an artificial bureaucrat (J. B. Bullock & Kim, 2020; J. B. Bullock et al., 2022). This will pose a whole new set of questions about how to make best use of these agents within public agencies for the creation of public values. What roles should they be given? How will their behaviour be directed, constrained, and managed? How will they fit within current organisational hierarchies? What motivations should they be given?

Organisations

Generative AI has already presented new challenges and opportunities for how information is created, shared, used, and stored within organisations. Private and public organisations are already experimenting with how AI tools can deliver efficiency and effectiveness gains in organisational decision making (Dell'Acqua et al., 2023; Glaze et al., 2023). This is only the tip of the iceberg. Organisations themselves will need to evolve to accommodate these new ways of creating, sharing, using, and storing important information. Additionally, AI agents will begin to populate these organisations, and we will therefore need new organisational designs that accommodate, support, and make use of these new forms of agents. Moreover, public values need to guide the design of collaboration between human bureaucrats and artificial bureaucrats, and organisational structures and processes need to reflect the values and beliefs of organisational members (Simon, 2013).

The most effective public service organisations will also need their human bureaucrats to make more effective use of the broader set of AI tools. As with the adoption of personal computers, email, and the internet, organisations will have to evolve to make use of not only AI agents but all sorts of AI tools if they are to remain effective. These developments pose new and difficult questions for public organisations. How should organisations be designed and redesigned to seize the opportunities and meet the challenges AI presents for information sharing and communication? What new management structures are needed to properly hold AI agents accountable? How can we protect against loss of accountability and transparency as AI agents are given more tasks to complete?

Governance

Governance of emergent AI agents in public service organisations should strive to advance public values such as equity, democracy, transparency, and effectiveness. AI governance includes the policies, regulations, and institutions that prescribe, enable, and enforce organisational and individual behaviours (J. B. Bullock et al., 2024). Existing research treats artificial intelligence mainly as a productivity tool to enhance information processing, data analytics, communication, and decision-making. A forward-looking approach to AI governance can anticipate and monitor the next evolution of AI while addressing fundamental governance challenges. The first challenge concerns digital divides. For public administration, such divides include those between government and citizens, resource-rich and resource-poor governments, dominant and marginalised groups in society, and public entities and major technology corporations.

The second is democratic accountability, as more sophisticated AI introduces greater opacity into data analysis and decision-making at scale. Ordinary citizens do not possess the resources and knowledge to challenge AI-based decisions. Human public service agents are likewise poorly placed to independently audit and correct the potential errors and biases of their AI counterparts.

The final, long-term challenge is dependency on AI agents in administrative decision-making. Human bureaucrats may gradually defer more to AI agents as those agents' capabilities continue to grow. Without rigorous governance that strengthens humans' independent and critical evaluation and provides ethical frameworks for AI agents, such dependency will be detrimental.

Frontier and salient areas of research

This editorial identifies frontier and salient areas of research on AI for public administration scholars.

  • Agents

    • What roles within public organisations should AI agents be given?

    • How will their behaviour be directed, constrained, and managed?

    • How will they fit within current organisational hierarchies?

    • What motivations should they be given?

    • How can these “artificial bureaucrats” be held accountable?

  • Organisations

    • How should organisations be designed and redesigned to seize the opportunities and meet the challenges AI presents for information sharing and communication?

    • What new management structures are needed to properly hold AI agents accountable?

    • How can we protect against loss of accountability and transparency as AI agents are given more tasks to complete?

  • Governance

    • What governance mechanisms are needed to advance equity, democratic accountability, inclusivity, and human-centred capability (Kim & Lee, 2024)?

    • How can we ensure the effectiveness and adaptability of these governance mechanisms?

Conclusion

We are in an age of fast-paced technological evolution, exemplified by the rapid development of AI capabilities. Equally important, the evolution of AI is being spearheaded by the private sector. This requires that governments and public organisations be deliberately forward thinking about how AI is governed and integrated into their work. We call on public administration and public management scholars and practitioners to rise to the occasion and think ahead about how AI and AI agents can be used in the work of public administration and public organisations.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Notes on contributors

Justin B. Bullock

Dr. Justin B. Bullock is an accomplished scholar and expert in the fields of public administration, public policy, and AI governance. He currently serves as an Associate Professor Affiliate and Distinguished Scholar at the University of Washington’s Evans School of Public Policy and Governance, a Senior Researcher and Project Lead for Project AI Clarity at Convergence Analysis, and a Research Fellow for the Global Governance Institute. Dr. Bullock has led groundbreaking initiatives such as The Oxford Handbook of AI Governance and previously held a tenured faculty position at Texas A&M University’s Bush School of Government and Public Service. His research focuses on the intersection of artificial intelligence, public administration, and governance, with a particular emphasis on AI safety, human control, and the design of effective AI bureaucracies for the public good. Dr. Bullock’s work has been published in numerous top-tier academic journals, and he has been invited to present his research at prestigious institutions worldwide.

Yu-Che Chen

Yu-Che Chen, Ph.D., is Isaacson Professor at the University of Nebraska at Omaha and Professor in the School of Public Administration. Dr. Chen is the Director of the Digital Governance and Analytics Lab. Dr. Chen received his Master of Public Affairs and Ph.D. in Public Policy from Indiana University-Bloomington. His current research interests are public governance of artificial intelligence, cyberinfrastructure governance, and collaborative digital governance. He has served as PI or Co-PI of NSF and other external grants with a total award of over $3.5 million. Dr. Chen’s most recent co-edited book is The Oxford Handbook of AI Governance. He has published Managing Digital Governance with Routledge and served as lead editor for two other books. In addition, he has published over 50 peer-reviewed journal articles, book chapters, and conference proceedings papers in digital government and governance. He is Associate Editor of Government Information Quarterly and Digital Government: Research and Practice. He is a former chair and current executive committee member of the ASPA’s Section on Science and Technology in Government. He is a two-term former board member of the Digital Government Society. He is co-chair of the 2024 International Conference on Digital Government Research (dg.o 2024).

References

  • Alon-Barkat, S., & Busuioc, M. (2023). Human–AI interactions in public sector decision making: “Automation bias” and “selective adherence” to algorithmic advice. Journal of Public Administration Research and Theory, 33(1), 153–169. https://doi.org/10.1093/jopart/muac007
  • Bovens, M., & Zouridis, S. (2002). From street‐level to system‐level bureaucracies: How information and communication technology is transforming administrative discretion and constitutional control. Public Administration Review, 62(2), 174–184. https://doi.org/10.1111/0033-3352.00168
  • Buffat, A. (2015). Street-level bureaucracy and e-government. Public Management Review, 17(1), 149–161. https://doi.org/10.1080/14719037.2013.771699
  • Bullock, J. B., Chen, Y. C., Himmelreich, J., Hudson, V. M., Korinek, A., Young, M. M., & Zhang, B. (Eds.). (2024). The Oxford handbook of AI governance. Oxford University Press.
  • Bullock, J. B., Huang, H., & Kim, K. C. (2022). Machine intelligence, bureaucracy, and human control. Perspectives on Public Management and Governance, 5(2), 187–196. https://doi.org/10.1093/ppmgov/gvac006
  • Bullock, J. B., & Kim, K. C. (2020). Creation of artificial bureaucrats. Proceedings of European Conference on the Impact of Artificial Intelligence and Robotics, Online, October 22-23, 2020.
  • Bullock, J., Young, M. M., Wang, Y. F., Giest, S., & Grimmelikhuijsen, S. (2020). Artificial intelligence, bureaucratic form, and discretion in public service. Information Polity, 25(4), 491–506. https://doi.org/10.3233/IP-200223
  • Busch, P. A., & Henriksen, H. Z. (2018). Digital discretion: A systematic literature review of ICT and street-level discretion. Information Polity, 23(1), 3–28. https://doi.org/10.3233/IP-170050
  • Compton, M. E., Young, M. M., Bullock, J. B., & Greer, R. (2023, July). Administrative errors and race: Can technology mitigate inequitable administrative outcomes? Journal of Public Administration Research & Theory, 33(3), 512–528. https://doi.org/10.1093/jopart/muac036
  • De Boer, N., & Raaphorst, N. (2023). Automation and discretion: Explaining the effect of automation on how street-level bureaucrats enforce. Public Management Review, 25(1), 42–62. https://doi.org/10.1080/14719037.2021.1937684
  • Dell’Acqua, F., McFowland, E., Mollick, E. R., Lifshitz-Assaf, H., Kellogg, K., Rajendran, S., Krayer, L., Candelon, F., & Lakhani, K. R. (2023). Navigating the jagged technological frontier: Field experimental evidence of the effects of AI on knowledge worker productivity and quality. Harvard Business School Technology & Operations Mgt. Unit Working Paper, 24–013.
  • Glaze, K., Ho, D. E., Ray, G. K., & Tsang, C. (2023). Artificial intelligence for adjudication: The Social Security Administration and AI governance. In J. B. Bullock (Ed.), The Oxford handbook of AI governance (pp. 779–796). Oxford University Press.
  • Kaplan, J., McCandlish, S., Henighan, T., Brown, T. B., Chess, B., Child, R., Gray, S., Radford, A., Wu, J., & Amodei, D. (2020). Scaling laws for neural language models. arXiv preprint arXiv:2001.08361.
  • Kim, Y., & Lee, J. (2024). Digitally vulnerable populations’ use of e-government services: Inclusivity and access. Asia Pacific Journal of Public Administration, 1–25. https://doi.org/10.1080/23276665.2024.2321569
  • Peeters, R. (2023). Digital administrative burdens: An agenda for analyzing the citizen experience of digital bureaucratic encounters. Perspectives on Public Management and Governance, 6(1), 7–13. https://doi.org/10.1093/ppmgov/gvac024
  • Ranerup, A., & Henriksen, H. Z. (2022). Digital discretion: Unpacking human and technological agency in automated decision making in Sweden’s social services. Social Science Computer Review, 40(2), 445–461. https://doi.org/10.1177/0894439320980434
  • Simon, H. A. (2013). Administrative behavior. Simon and Schuster.
  • Taeihagh, A. (2021). Governance of artificial intelligence. Policy and Society, 40(2), 137–157. https://doi.org/10.1080/14494035.2021.1928377
  • Young, M. M., Himmelreich, J., Bullock, J. B., & Kim, K. C. (2019). Artificial intelligence and administrative evil. Perspectives on Public Management and Governance, 4(3), 244–258. https://doi.org/10.1093/ppmgov/gvab006
  • Zuiderwijk, A., Chen, Y. C., & Salem, F. (2021). Implications of the use of artificial intelligence in public governance: A systematic literature review and a research agenda. Government Information Quarterly, 38(3), 101577. https://doi.org/10.1016/j.giq.2021.101577
