Case Report

Leveraging ChatGPT in public sector human resource management education

ABSTRACT

The potential benefits and challenges of AI in the workplace are well documented, and public service faces a particular choice: whether to leverage these tools to better serve communities. As students prepare to enter public service, they need to gain the skills to manage both the advantages and the threats of AI. This article explores one attempt to prepare students for the intersection of public service and generative artificial intelligence (AI) through a human resource management course, examining how both instructors and students used ChatGPT, a generative AI tool, across the academic year. We describe the pedagogical strategies involved and share outcomes and lessons learned for future integration of AI in the classroom.

AI has the potential to radically change the efficiency and effectiveness of public goods and services delivery across multiple sectors, including transportation, taxes, healthcare, and education (World Economic Forum, n.d.). With the addition of new artificial intelligence (AI) tools, the practice of work is inherently changing (Berryhill et al., 2019). Students are facing a workplace that may integrate both basic and generative AI tools in a multitude of ways. In public service, AI has been suggested as a potential virtual assistant for public servants and community members seeking help (Berglind et al., 2022), as an evaluator for distributing publicly supported benefits (Martinho-Truswell, 2018), and as a risk-assessment tool for air cargo transportation (Berryhill et al., 2019). Educators in public service are charged with preparing the next generation of public leaders by introducing important theories and practices and providing a space to explore the nexus of the two areas. With this charge in mind, public service education and educators should consider how to incorporate the use of AI into their curriculum to better prepare students for the workplaces of tomorrow.

Generative AI and public service

AI tools have existed for some time, with research into this topic starting in the early 1950s (Anyoha, 2017). The promise of AI was first realized in the late 1990s with speech recognition software (Anyoha, 2017). These early AI tools, often referred to as traditional AI, are designed to perform tasks based on a specific set of inputs and within programmed boundaries (Heaslip, 2023). While the technology can learn from data and even make predictions, it invents no new strategies. For example, Grammarly, a traditional AI tool, uses the rules of the specified language to check for proper spelling, grammar, and tone within a provided text. Generative AI, on the other hand, which has recently become more publicly available, generates new information based on the data provided, including new strategies for solving problems (Heaslip, 2023). The introduction of generative AI tools like ChatGPT and Gemini shifted the nature of education and the expectations around workplaces of the future. Within education, universities began writing policies around the use of AI in the classroom. Collaboration between institutions of higher education and AI developers to expand faculty and student use and understanding of AI has also emerged, as demonstrated by the recent partnership between Arizona State University and OpenAI (abc15.com staff & Associated Press, 2024). Students and instructors are benefiting from AI tools, using the technology to assist in brainstorming, editing, problem-solving, and data analysis (Fui-Hoon Nah et al., 2023).

Additionally, researchers and practitioners are identifying the importance of understanding and leveraging AI in work and public service (National Academy of Public Administration, 2019). With the ability of AI to rapidly process data, sift through multiple contexts, and streamline repetitive and basic tasks, public servants may be able to focus more on the quality of services provided as the technology helps mitigate the growing quantity of tasks (Martinho-Truswell, 2018). However, AI inherently poses risks for public servants wanting to use the technology, ranging from deepening the digital divide to concerns about ethics, discrimination, misinformation or disinformation (i.e., hallucinations), privacy of information shared, and limits on an individual's agency to choose (Fui-Hoon Nah et al., 2023; National Academy of Public Administration, 2019).

Teaching students how to navigate the benefits of AI while addressing and avoiding its pitfalls should be a topic of discussion and exploration in public service education. As public affairs educators, we should provide students with the opportunity to experiment with AI in a controlled environment, help students construct their mental models for engaging with AI, and create opportunities to critically evaluate the inputs and outputs of AI technology within the context of public service. Our teaching should move beyond placing limits on students; instead, we should focus on teaching students how to responsibly and ethically use a tool that is becoming more ubiquitous over time (Chiu, 2023; Lambert & Stevens, 2023). One area where public service education programs may find success in introducing AI into their curriculum is human resource management.

In this article, we review how both instructors and students engaged with ChatGPT, the justification behind the decision to engage, and our pedagogical approach (which reviews the audience, the framing and emphasis, and the topics covered). We conclude by sharing both lessons learned and some outcomes for the students and instructors.

Implementing AI in an HRM course

This article describes the approach taken by instructors over two semesters to integrate AI into their own work and into the class work of students in a survey course on human resource management (HRM), in response to student questions and changing practices in public service. HRM professionals are expected to engage in multiple aspects of talent management, including recruitment and retention, workforce planning, and the development and implementation of workplace policies and procedures. HRM professionals are also expected to participate in and connect their work to the strategic vision and plan for their organization. Many of these elements have relied on some form of non-generative, or traditional, AI to support hiring, provide basic user support, analyze performance metrics, and visualize data (Afzal et al., 2023; Berryhill et al., 2019). With the introduction of generative AI to the public in the early 2020s, non-generative AI may eventually be subsumed into HR-focused generative AI tools (Afzal et al., 2023). As such, we decided to focus on generative AI tools to ensure students were prepared for future AI interactions in the workplace. We developed weekly AI demonstrations and student activities to model how to integrate AI into HRM work. For example, during one demonstration, instructors modeled how ChatGPT can serve as a coach for employees going through the onboarding process by providing both guidance and opportunities to check for understanding around organizational policies. Students had the opportunity to experiment with this technique in the following week's demonstration. This weekly practice let students see how AI could work within their HRM practices. Several students reported bringing AI into their workplaces during the semester to assist in writing job descriptions, updating organizational policies and practices, and brainstorming solutions to HRM-related challenges.

The context

This course, taught over two semesters by the lead author (with assistance in the first semester from the coauthor), provided 24 students enrolled in either a master's in public administration (MPA) or master's in health administration (MHA) program the opportunity to explore the application of AI in their workplaces and within the HRM field more generally. Students from both programs tended to be mid-career professionals working full-time while attending night classes as part of their graduate education. In terms of class make-up, most students were residents of the United States, with about 20% of students across both semesters coming from outside the U.S. The course was also majority female (just over 60%) and majority students of color (about 60%). Students entered the MPA and MHA programs with a variety of educational backgrounds in public administration and with work experience spanning state and local government, nonprofit and community work, and the healthcare sector. Additionally, the HRM course, required for all students in the MPA/MHA degree track, is taught in the first two semesters of the required sequence of core courses (ranging from data analysis to organizational behavior), so many students engage with this content while still familiarizing themselves with graduate school expectations. The course was developed under the guidance of the lead author using philosophies drawn from ungrading, project-based learning, and reflective teaching. During the first semester, the co-teacher, given his extensive HRM experience and professional background in the healthcare industry, helped develop the context for the semester-long simulation and provided weekly insights on the assigned topic of study.

During the first few weeks of the school year, multiple students asked many questions about the role of AI in relation to different elements of HRM being studied, from job design and analysis to strategic human resource management. From the questions being asked, it was clear that students were aware of the benefits and threats posed by AI in HRM and public service in general. It was determined that one way to support student learning was not only to provide opportunities for students to learn about the use of AI through preparation materials and weekly class lectures and discussions, but to demonstrate how students may leverage AI in future HRM-related positions. We also incorporated AI into our teaching preparation to better understand some of the issues that may arise when using AI, both technically and ethically.

In our class, we invited students to use the AI tool ChatGPT. The choice of ChatGPT was based on three elements: the user-friendly interface, the documented issues with other generative AI technologies at the start of the class (Metz, 2023), and the free version students could easily access (to avoid placing any additional financial or technological burdens on students). No additional materials (i.e., textbooks, podcasts, articles, etc.) were required reading, although a curated section of readings on AI and HRM was included on a stand-alone additional-resources page for students who wanted to pursue further learning. While we focused on ChatGPT due to its wide availability and affordability, we made sure to spend time reviewing other forms of AI as well, although students did not get instructor-directed or hands-on experience with those tools.

Pedagogical approach: reflective teaching and learning & inquiry-based learning

Reflection is a key to learning and provides an opportunity to reinforce knowledge and identify areas to improve (Moon, 2013). The course and the instructors' teaching philosophy draw on concepts of reflective teaching and learning as well as inquiry-guided learning practices to center four elements of learning: preparation, participation, application, and reflection. Designed as a survey of the fundamental elements of HRM, the course centers on concepts of strategic HRM and reviews different HRM policies and processes. The course is a core requirement for students in the Master of Public Administration and Master of Healthcare Administration degree programs. Students enrolled in the two programs tend to be working professionals; many currently work in public service or healthcare organizations and use the course as a sandbox to improve their skills and organizations. Additionally, the course emphasized accessibility, working to provide materials to students in accessible formats and at the lowest possible cost.

The course was structured to emphasize reflective practices for both the instructors and the students, as both are part of the teaching and learning process. Reflective learning and teaching focus on evaluating our behaviors and outcomes to improve the quality of both our teaching and learning over time (Ashwin et al., 2020). Reflective teaching and learning move beyond simply observing outcomes to seeking to understand and apply the lessons learned from observation to improve our behaviors and increase our knowledge (Ashwin et al., 2020). The principles of reflective teaching, including active engagement, appropriate assessment, scaffolded and systematic development, and application to professional experiences (Ashwin et al., 2020), are emphasized in the course design to cultivate a classroom culture of learning accountability and engagement. Reflection was encouraged for both the instructors and the students through reflective writing, feedback tools, and discussion.

In addition to the reflective teaching practices, the course was built around inquiry-based learning, using both problem-based and case study methods. Both methods center on student growth, whether learning how to identify problems and propose potential solutions (Nilson, 2016) or generating experiences through which students could identify areas for growth and personal application (Harrington & Simon, 2022). Per the suggestion of Ng et al. (2023), we designed AI interactions and learning experiences around project-based learning, where students could create authentic, meaningful artifacts and learn through collaboration.

Integrating ChatGPT into an HRM course

Before the semester, language was added to the course syllabus to encourage student use of ChatGPT and other AI tools in the course. The syllabus language (see Table 1) focused on students engaging with ChatGPT as a reference and editor, not as a creator of original content. ChatGPT was integrated as a tool for both students and course instructors, as explained below.

Table 1. Syllabus statement on AI in the classroom.

Student use and outcomes

In questions asked through class discussion and the written weekly reading logs, students indicated their desire to understand AI’s basic functionality and how to apply this technology within HRM. Two example questions (below) reflect the range of AI awareness and understanding by the students:

  1. “How can staff adapt to using artificial intelligence in the future?”

  2. “How will standard practices, such as merit-based principles and practices, change as technology increasingly replaces folks’ skills and knowledge?”

Table 2 lists the required student activities and desired outcomes, as well as how each activity incorporated AI.

Table 2. Student activities and outcomes.

Analysis assignments

Students selected an organization to analyze over the semester and were given bi-monthly prompts to evaluate specific parts of the organization's HRM processes and policies. Analysis assignments were short 800–900-word papers analyzing policies around recruitment and selection; diversity, equity, and inclusion; and work-life balance. Each paper also identified environmental challenges to the current HRM policies and/or processes and offered suggestions for future implementation.

The instructors provided two specific assignments for students to apply their learning and explore the potential of generative AI (a sample assignment is found in the appendix). For the first AI-based analysis assignment, students conducted a 30-minute job analysis interview with an employee in the organization they analyzed or with an employee in a partner nonprofit organization. After creating duty statements from the interview, students used ChatGPT to generate a sample job description from a provided prompt and the table of duty statements. The original generated output was part of the required submission materials. Students then edited the generated job description to reflect their understanding of the interviewee's job more accurately. Students also included a personal reflection on the assignment topic, the process, and the use of ChatGPT.

For the second analysis assignment using ChatGPT, students created a future policy for the organization they had analyzed throughout the semester (see appendix for the assignment description). Students had to summarize the policy they felt their organization needed to create, describe the environmental factors that could influence the policy's implementation, and identify other implementation barriers. Students then had to generate a prompt for ChatGPT, which they submitted as part of the assignment. Students provided the original output from ChatGPT, an edited version of the policy, and a personal reflection on the assignment overall.

Students reported that their engagement with both assignments was a positive experience. Students discussed how using ChatGPT helped them identify areas they had missed in their analysis and thinking. Other students discussed how ChatGPT helped them become more confident in their writing. Many students reported they could envision using ChatGPT in their future work, while some reported they had already begun using ChatGPT in their work as supervisors or managers, from editing and producing summary reports to developing workplace policies.

I was most proud of coming up with a policy that I felt would be useful for employees and help individuals achieve a happier work-life balance. After conducting my interview earlier this semester, I learned about all the little factors involved in maintaining an organization … I could potentially incorporate this type of practice into my work by using the analytical skills that I developed in this project to analyze not only my work but also my references when completing assignments. I think this assignment helped me dive deeper into the resources I was provided with and gain a better understanding, as it was necessary to analyze this information thoroughly to develop a policy that had not yet been implemented within the given organization. – Student enrolled in MHA program

Discussions

As instructors, we engaged in weekly reflection as well. We met weekly as a team to review the previous course session, reflect on student questions from the reading logs (see the next section on reflective writing), review other reflection responses from students, and prepare for the next class session. We used the weekly reading log questions to customize the prepared lesson content around student concerns and ideas. In particular, the co-teacher with healthcare industry experience provided a 10–20 minute perspective on specific questions, drawing on his professional background, and led large-group discussion activities.

Semester-long simulation

Students engaged in weekly simulations facilitated during class time. Students assumed the role of board members for Sunnyside, a fictional midsize nonprofit hospital (semester 1) or community services nonprofit (semester 2). Sunnyside's relevant details, challenges, and community information were drawn from public records for a similar hospital within the United States (semester 1) and from local nonprofit information (semester 2). Each week, the students, as a board, confronted specific challenges they needed to solve. Students were provided a scenario along with the needed information and materials, such as sample performance evaluation plans or job descriptions, and were asked to produce a work product as a small group or as a class. For example, in Week 7, students updated Sunnyside's basic evaluation plan into one that was more comprehensive and integrated across all employee groups. During Weeks 13 and 14, students developed contingency plans and policies around labor disputes and emergency management crises. Students also used the simulation to develop job interview questions and matrices for a new Sunnyside CEO and to create an onboarding plan for this new leader.

Reflective writing

Reflective writing encourages active learning and is a critical element in helping students enrolled in professional programs engage in deeper learning (McGuire et al., 2009). As part of the reading logs, students answered questions about the assigned preparation materials, connected those materials to previous classes and to the learning objectives of the class, and generated three to five questions that the preparation materials sparked. Students also had space to report on their progress and share any private questions or concerns. In addition, students had time at the beginning and end of each class to engage in reflective writing in some form, whether as a KWL chart or a topical summary. For example, at the end of each class, students could participate in an anonymous reflection on their experience in an activity called “Clear as Mud,” providing their name if they wanted a personal follow-up from the instructors. We found that students often used this time to share their questions, concerns, and applications around AI and HRM.

AI demonstrations

In addition to the specific analysis assignments, students were given the opportunity to practice using ChatGPT in the HRM context through specific exercises during class each week. These exercises lasted 15–20 minutes, with the first half devoted to student practice and the second half to an instructor demonstration of how to move beyond the basic outline of the task (see appendix for a sample demonstration brief). For example, in Weeks 8 and 10, students practiced engaging ChatGPT to generate parts of an employee handbook for Sunnyside, the simulated organization. After students had a chance to try different techniques, the instructor demonstrated one potential way to accomplish the activity. The instructor then highlighted how to leverage ChatGPT's capabilities beyond generating written text, including creating a short visual summary of a policy to display on bulletin boards, summarizing the changes between an old and a new policy, and highlighting potential conflicts to monitor where the policy may generate inequitable conditions within the workplace.

Instructor use

To support student learning, the instructors also experimented with ChatGPT as part of their own work process. The technology was used to generate content and materials for the weekly simulation: a prompt was entered into ChatGPT based on the desired learning goals and a generic real-life scenario experienced by one of the instructors, and the generated content was then edited to better suit the needs and capabilities of the students in the class before being used in the simulation. In the first semester, after the first week of the simulation, we identified that students were struggling to analyze the AI-generated content to determine what was most important within the situational context. Some students also struggled to see how they could potentially use the tool to support their work as public servants. Part of this struggle was most likely due to the newness of the technology. However, we found that students, just like the instructors, had questions about the flaws in the technology and were sometimes overwhelmed trying to contextualize how AI could work for them. As a result, we developed the weekly demonstrations, which evolved from instructors demonstrating alone to a format in which students tried first and instructors then showed how to navigate the software and analyze the results.

The emphasis of these demonstrations was two-fold. First, we used the demonstrations to help students begin to identify the pitfalls of relying on a powerful tool. We found that many students assumed the content was good enough simply because it was generated by AI. During the weekly demonstration, after the simulation, students and the instructor(s) together modeled how to use ChatGPT to generate the simulation materials and critique the output. As part of our class discussion, students spent time reflecting on the challenges of using AI, from basic questions about bias and privacy to deeper questions. We discussed how, while ChatGPT and other AI tools can be leveraged to solve problems and increase efficiency and performance, users must know (a) what questions to ask before engaging with the technology, (b) what prompts will provide the best output for the situation, (c) how to evaluate the output, and (d) how to balance ethical considerations when engaging with an evolving tool.

Second, we sought to provide an opportunity for students to see the wide range of uses AI can have within the HRM field. As Afzal et al. (2023) explain, the main elements of HRM, including recruitment, training, and performance evaluation, are strongly impacted by AI tools. Yet recent surveys by the Society for Human Resource Management report that practitioners are concerned about bias in AI and are unsure how best to implement AI in their work (Society for Human Resource Management, 2022a, 2022b). Through our demonstrations, we focused on helping students identify the areas where generative AI may be most useful in their work, including reviewing the qualifications of job applicants, creating templates that can be altered and customized for organization-wide training, developing project management plans to support employee development and performance, creating job descriptions, and crafting specific personnel policies. For example, in the demonstration discussed above, we also modeled how to use ChatGPT to generate a performance evaluation plan for a new hire, develop a time-bound project management plan based on that evaluation plan listing key elements (times for check-ins, when to conduct reviews, etc.), and identify the implementation supports needed (like calendar reminders). We also explored how ChatGPT identifies evaluation tools and key performance indicators (KPIs) for set professions (in this case, an emergency room nurse) and how to use ChatGPT as a starting point in monitoring KPIs when strategic goals change.

Discussion

Students reflected on some of the challenges of using ChatGPT. While students found the output was a good launching point for developing a product, they explained that they struggled to come up with the correct prompt. As a solution, students reported relying heavily on theoretical principles to edit ChatGPT's output into the required content. For example, in the first analysis assignment using AI, students were given a template on how to conduct interviews and generate material to submit to ChatGPT (like KSAs) with only basic instructions. Students found that with this untailored approach, their engagement with ChatGPT required much more editing on their end to refine and correct the output. As we reflected with students in class, and as students reported in their written reflections, foundational knowledge of what constitutes a job description was an important tool in analyzing ChatGPT's output. Students noted that inputting all the information without tailoring the prompt created a great deal of repetitive work.

I learned about how to construct a job posting overall. Although I had seen them online before, I never put much thought into the person creating the description. I wasn’t aware of the amount of thought and time that goes into this process. I could also see the overall layout and amount of detail in the posting playing a significant role because it’s important to attract the right candidates. I think this assignment created a great opportunity for me to go back and revise my work and see what I could potentially improve on. If I am ever tasked with this in the future, I will have a great deal of foundational knowledge. – Student enrolled in MHA program

If I could redo this assignment, I would probably provide ChatGPT with a more specific outline tailored to what I was looking for, potentially including a bulleted list. I think sometimes ChatGPT can fail to highlight specific information and instead provide somewhat irrelevant information. – Student enrolled in MHA program

However, students felt better prepared to engage with this type of activity in the future. In the next ChatGPT-centered assignments, students demonstrated their learning by generating more detailed and complex prompts, signaling the importance of reflecting on learning through the lens of theory.

In general, AI tends to produce information, rather than knowledge, for students to draw upon. This makes the traditional educational practice of reflecting on theoretical knowledge an important exercise for students seeking to understand their interactions with the technology and the subsequent positive and negative consequences of any generated output. We found it was important to help our students frame their interactions with AI through the lens of theory and learn how to analyze their engagement. During our classes, we found that the students who were most successful in engaging and leveraging ChatGPT as a tool for public service had a stronger understanding of the theoretical principles of HRM and spent time reflecting on their understanding of theory through their weekly reading logs and class participation.

Researchers have identified the importance of building a concrete understanding of theory in student performance (Kuhn & Dean, 2004; M. F. Pang & Ling, 2012; M.-F. Pang & Marton, 2005). As students develop a solid understanding of theory, they can develop the appropriate mental models to engage critically with the problem at hand and move beyond the “what” and “how” to the “why” (Larsson, 2017). As we provided students with a theoretical understanding of HRM policies, they could move beyond “what” an HRM policy is and “how” to develop one to discuss “why” a particular policy mattered. Then, when tasked with generating a new policy for their organization using ChatGPT (see appendix), students could identify the most important parts of the content created, focusing on the connection between the “why” of a policy and the “what” and “how” of the generated content.

This progression was important, as we identified that students need to understand several things to engage successfully with ChatGPT or any AI tool. These concepts include: (a) what problem the student is seeking to enlist ChatGPT to help address; (b) what processes go into solving the problem; (c) what the desired output should look like (i.e., did the student want policy text, some potential social media posts detailing their engagement, or a suggested script to help role-play a difficult conversation?); (d) what role ChatGPT is playing in the problem-solving process (i.e., creator, consultant, brainstorming partner, etc.); and (e) what prompts would maximize the desired output from ChatGPT.
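These five elements can be made concrete in the structure of a prompt itself. As a purely hypothetical sketch (the course used the ChatGPT web interface; the helper below, including its name and parameters, is our own illustration rather than course material), a structured prompt for a policy-generation task might assemble the problem, the role assigned to the AI, the organizational context, and the desired output form:

```python
# Hypothetical illustration: assembling a structured ChatGPT prompt from the
# elements students were asked to identify (the problem, the AI's role, the
# desired output, and the wording of the prompt). All names are our own.

def build_hrm_prompt(problem: str, role: str, output_form: str,
                     context: list[str]) -> str:
    """Compose a structured prompt covering the elements named above."""
    lines = [
        f"You are acting as a {role} for a public-sector HR team.",
        f"Problem to address: {problem}",
        "Relevant organizational context:",
    ]
    lines += [f"- {item}" for item in context]          # one bullet per fact
    lines.append(f"Produce the result as: {output_form}")
    lines.append("Flag any assumptions you make and any potential "
                 "equity or privacy concerns in the result.")
    return "\n".join(lines)

prompt = build_hrm_prompt(
    problem="Draft a remote-work policy for a midsize nonprofit hospital",
    role="consultant",
    output_form="a one-page policy draft with a short rationale section",
    context=["24/7 clinical operations", "mix of clinical and admin staff"],
)
print(prompt)
```

The point of the sketch is not the code but the discipline it encodes: each of the elements a student must identify becomes an explicit, reviewable part of the prompt rather than an unstated assumption.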

We found that, as instructors, we needed to spend time with students discussing the output and how to identify flaws and refine the material produced, including checking their own assumptions and biases. As ChatGPT's output is not always reproducible, we talked about the importance of understanding the entire process and created opportunities for students to analyze and edit AI output in two forms: (a) output created by another user (through materials used during the simulation) and (b) output created by themselves (through the analysis assignments). Students who were able to analyze their engagement through the lens of these concepts had more success editing AI outputs and better experiences overall with AI in the classroom.

Students needed to be trained to be critical users of a technological tool that can discriminate against the very people the technology is meant to support. This training was an area that, upon reflection, we were not able to cover substantively. While we emphasized the importance of understanding the black box of AI and asked students to be vigilant not to share any information that could be covered by HIPAA or FERPA as part of their work, we didn’t develop any protocols to help students assess both the technology’s and the users’ (i.e., the students’) biases and assumptions.

Recommendations

Through reflection on our experience, we identified practices that helped us integrate AI into our class, as well as changes we want to incorporate moving forward, which may be helpful for other public service instructors who want to use AI in other areas of public service education.

Recommendation #1: Help students distinguish between information and knowledge

Students should understand that AI generates information and content, but doesn’t necessarily result in mastery of a subject (i.e., knowledge). Students must first put in the work to understand the theoretical foundations before and while engaging with AI. Having this deep theoretical background will help students better analyze AI-generated content. The weekly AI demonstrations allowed for discussions around the information provided and the knowledge needed to accurately assess the output.

Recommendation #2: Push students to reflect on their work

Students should be pressed to think about how they use the AI tool, what the process before and after using the tool looked like, and any consequences that come from using AI in the stated task. This reflection can be oral or written, but should be revisited often to help the student identify patterns around their AI use including (a) techniques that lead to success or failure, (b) common errors emerging in the AI output, and (c) common questions surfaced by others when reviewing the AI output. Teaching students to be self-aware and reflective is not only a critical teaching practice, but a practice that can provide insights for organizations looking to develop policies around AI use within their organizational context. In particular, student reflections as part of their AI-related analysis assignments were helpful as we could see individual-level concerns and learning trends that could be addressed in future classes.

Recommendation #3: Have a plan to address any privacy and bias concerns before engaging with the tool

As research has demonstrated, AI struggles with questions of privacy and bias (Ntoutsi et al., Citation2020; Varona & Suárez, Citation2022). Students should be aware of the conflicts that can arise when any new technology or practice is adopted. Along with educating students about the murkiness surrounding the explainability of a tool (i.e., how an AI made the decision to generate a specific output), students should also consider questions of ethics, bias, and privacy. While we emphasized the importance of avoiding sharing any private information, particularly information governed by HIPAA and FERPA, we lacked a comprehensive plan to address such concerns throughout the course. Instructors should have a plan to help students protect themselves and the data they are using to engage with the system. The best plans are generated with the class itself and evolve over the semester as new ideas or problems are surfaced. Instructors should teach students how to have transparent practices and dialogs around their use of AI to help engage community members concerned about leveraging a powerful, yet potentially harmful, tool.

Recommendation #4: Generate multiple opportunities for students to engage with AI within a variety of settings

Students need multiple experiences to analyze both their interactions with the technology and the outputs from those interactions. Ask students to evaluate AI-generated content (either their own or someone else’s) and to generate content together as a learning experience to provide different pathways toward understanding best practices. Help students understand which opportunities are best suited to leveraging AI technologies and when the use of such technologies will cause further harm to the public. Providing AI-generated content within a set context, like our Sunnyside simulation, helped students learn how to objectively analyze AI-generated output.

Recommendation #5: Provide boundaries for student exploration

Students emphasized how easy it was to get lost in the content creation process and feel overwhelmed by the potential of ChatGPT in HRM. For all activities, students should have a clear understanding of what output may look like (either through a rubric or an example assignment). As students can get overwhelmed by the type of content created, the pace of content creation, and the additional variables (like bias, privacy, explainability, transparency, etc.) that need to be considered when generating AI output, instructors should consider providing a set of instructions that can be referred to throughout the process (see Appendix A for an example). While these instructions may vary by subject, we found it was more helpful to provide written briefs with a stated task, instead of asking students to explore using a particular style of prompt or follow detailed step-by-step instructions. With these boundaries, students were able to focus on exploring different ways to go about the task, including different prompt approaches, and better evaluate their work.

Conclusion

As public service education continues to evolve, we must prepare our future public servants to understand and harness the power of AI while preventing additional harm to vulnerable communities. While the sample is quite small, the experience across both semesters was consistent for all students, regardless of degree program, professional background, or previous experience. Throughout the semester, students became more aware of the benefits and threats of AI, especially within the context of human resource management. The class experience showed that ChatGPT was more successful with tasks that required some level of creativity (like writing job descriptions) or generating templates (like a project management plan for an employee on a performance improvement plan), while tasks requiring more analysis and detail (like developing specific policies) necessitated more supervision and involvement from the user. This experience mirrors findings from Dell’acqua et al. (Citation2023), which identified a skills distribution within ChatGPT. Dell’acqua et al. (Citation2023) also suggested that as AI continues to emerge as a workplace tool, employees will find different ways to engage with AI, including delegating specific sets of tasks to AI or humans or finding ways to integrate the use of AI within their work.

However, the reality is AI is a tool that is still being explored and poses questions about ethics, bias, and effectiveness. Questions remain about the explainability of AI (i.e., how the technology reaches the provided conclusions and outputs), the transparency of data use, the training models, and more. While these are all questions the authors tried to address as part of their teaching, there simply wasn’t enough time to provide a substantive examination of each of these topics. As such, further classes, including an “AI in Public Service” course are in development to offer students a chance to examine this technology more in-depth and allow the HRM course to remain more grounded within the context of field-related practice.

In the future, we suggest the integration of more AI tools, including non-generative AI tools like limited chatbots, to help students seeking to merge theory and practice in their education. Additionally, we encourage educators to consider conducting research within their classrooms to identify best practices in applying different types of AI (traditional and generative) in both HRM and public service as a whole. Regardless of the tools used, students will only benefit from learning how to manage and approach AI before joining the workforce, as AI is a technology that has staying power. Students must be trained to separate the concept of gaining knowledge and usable skills from the easy-to-access information and content generation AI can provide. If students learn to navigate the application of AI in their work, situated in the public context, they will be better prepared to address the implementation and applications of AI in public service in the future.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Notes on contributors

Michelle Allgood

Michelle Allgood is an assistant professor from the University of New Mexico’s School of Public Affairs. Her research focuses on public management, workplace coping and employee well-being, and equity and access issues, especially for members of the disability community. Her recent work has been published in Information Polity, Review of Public Personnel Administration, and Public Performance & Management Review.

Paul Musgrave

Paul Musgrave is a professor of practice from the University of New Mexico’s School of Public Affairs. He has over 40 years of experience as a senior leader in the healthcare sector. His areas of expertise include hospital operations management, human resources, strategic planning, strategic sourcing, hospital-wide process re-engineering and multi-hospital process integration.

Notes

1. For example, students participated in a pre- and post-survey about the larger class simulation and had the opportunity to participate in a weekly survey that reflected on the subject and the simulation design. At the end of the semester, both instructors met with the students one-on-one to review their learning, assign a grade, and reflect on ways to improve the course moving forward.

2. The IRB ethics committee waived the requirement for approval if students signed a consent form allowing the use of their words in an anonymized fashion. Students were presented with the opportunity to participate and share their words after the semester had ended to avoid any undue influence.

References

Appendix A

Sample AI analysis assignment

The purpose of this assignment is to draft a policy your chosen organization may need based on your analysis of the policy and your understanding of the future of HRM. You are expected to critically analyze the material and provide a well-written response with references. The analysis should be formatted (including references) according to APA guidelines (12-point font, Times New Roman, double-spaced) and submitted as a Word document to Canvas. Writing should be clear, well-organized, and contain minimal grammatical errors. There are two parts to this assignment: (a) justification of the policy and (b) drafting the policy language.

Part A: Justification of Policy

You should aim to provide a draft between 500 and 600 words (including references). This requirement helps you write with brevity and focus on the details most important to the analysis. There are several aspects you will be expected to examine as part of your draft. Please note that the prompts below only contain suggested sub-questions. You should be thinking critically about the policy and the major questions that need to be addressed. This requirement means you may be answering questions not listed below.

  • Explain the needed future policy.

    • What is the policy?

    • What is the purpose of the policy? What is the problem being addressed?

    • What is the process described by the policy? Who is in charge of implementing the policy? What issues are at stake?

  • Discuss the environmental factors involved in the creation/implementation of the policy.

    • What factors would influence this policy creation?

    • What factors would influence this policy’s implementation?

    • What could challenge this policy’s existence or implementation?

  • Discuss the policy’s implementation in the future.

    • How could the organization implement the adoption of this policy?

    • What resources are available to assist in the adoption and implementation of this policy?

    • How might the organization address challenges to the implementation of the policy?

    • How might the organization leverage environmental conditions to support this policy?

Part B: Policy Language

To help you draft the policy language, you will be using ChatGPT. You will need to submit the following:

  • The prompt you use to prompt the generation of your policy. Make sure you only use publicly available information to guide the ChatGPT results (for privacy reasons). Remember, good policies include information about the policy name, the person responsible for updating/implementing the policy, the purpose of the policy, definitions, the scope of the policy, and a policy statement.

  • The output generated by ChatGPT

  • Your edits to the ChatGPT policy output

Part C: Reflection

Reflect on your experience with this assignment. You must answer the following questions. Your reflection should be at least two (2) paragraphs.

  • What did you learn while completing this assignment?

  • Reflect on your thinking, learning, and work. What were you most proud of?

  • If you could do this assignment over, what would you do differently?

  • Were the strategies, skills, and procedures you used effective for this assignment?

  • How might you incorporate this type of practice into your work?

Learning Objectives

  • Learning Objective 1.1: Identify the components of HRM operations.

  • Learning Objective 1.2: Explain the environmental conditions that impact HRM practices.

  • Learning Objective 2.1: Explain how an HRM manager implements different HRM methods and functions (including, but not limited to job analysis, recruitment and selection, performance evaluation, compensation, training, etc.) to build healthy workplaces.

  • Learning Objective 2.2: Evaluate the factors (individual, team, organizational, and environmental) that influence healthy workplace behaviors.

  • Learning Objective 2.4: Appraise the need for and best practices related to workplace belonging and inclusion efforts.

  • Learning Objective 2.5: Determine the effectiveness of a real-world organization’s HRM methods and functions.

  • Learning Objective 3.1: Apply their understanding of HRM practices and operations to the public service context.

  • Learning Objective 3.2: Identify the relationship between HRM operational requirements, public policy, and social equity.

Sample AI demonstration activity

Instructions

  1. Open ChatGPT: https://chat.openai.com/

  2. Develop a prompt that produces an onboarding plan for the newly hired CEO that covers some of the concepts we have discussed so far in class: strategic planning, workforce and succession planning, etc.

  3. Work with the AI to refine the onboarding plan and address the following needs:

    1. Address the critical needs identified in the needs narrative found on Canvas

    2. Identify the role of the board members and HRM director in the on-boarding process

    3. Identify what an effective training would look like

  4. Edit the product so the onboarding instructions support a first-time CEO.

  5. Explore the different supporting products for the onboarding process the AI can generate. Some ideas include:

    1. Slide outlines for different training elements

    2. Orientation packet materials

    3. Reflection questions

    4. Calendar of trainings that need to occur throughout the first 90 days, etc.

Learning Objectives

  • Learning Objective 2.2: Evaluate the factors (individual, team, organizational, and environmental) that influence healthy workplace behaviors.

  • Learning Objective 3.1: Apply their understanding of HRM practices and operations to the public service context.