Editorial

Artificial Intelligence and Implications for the Australian Social Work Journal

Social work is a profession committed to integrity and social justice. The AASW Social Work Practice Standards (AASW, 2023) call on social workers to be critically reflective, ethical practitioners engaged in lifelong professional development and learning. Equally, social work education seeks to prepare students for research-informed, culturally responsive practice across a diverse range of contexts, and in this Issue we showcase critical social work education and practice diversity. However, a different ethical challenge to integrity and practice standards is the focus of this Editorial. Here, we highlight some of the concerns and implications of generative Artificial Intelligence (generative AI) for social work education, research, practice, and scholarly publishing.

In November 2022, OpenAI released ChatGPT, a generative AI Large Language Model (LLM) that could generate realistic and natural text outputs from simple prompts. This technology had been in development for some time but had not been released to the public for general use. Since then, there has been a proliferation of different AI models that can generate and augment text, images, video, and audio. Generative AI is being used to perform analytical and interpretive tasks such as language translation; responding to queries on specific data sources; writing and interpreting code; summarising documents and webpages; and creating case assessments and plans. This technology can also be used to construct legal documents, to power machine-learning facial recognition, and to undertake medical, mental health, and other diagnostic assessments. These are just some examples. In this fast-moving field, the uses and applications seem endless. The open-sourcing of generative AI models and their underlying architecture means developers are starting to create a myriad of practical applications and tools that rapidly increase the depth and scale of automation, potentially replacing or augmenting many everyday tasks normally performed by humans.

The implications for social work education, practice, research, and scholarship are extensive. As with any new technology, there is a range of stances, from enthusiastic early adoption to positions that have resonance with luddism. This adds to the complexities of responding to AI as a whole profession. Nevertheless, what is clear is that the rise and integration of generative AI systems, at scale, will yield a wide range of practical, ethical, and epistemological problems for many professions, including social work. It is to some of these problems we turn our attention below.

Beginning with social work education, generative AI will have profound effects on assessment and learning for higher education providers. It is likely to cause educators to re-evaluate their educational practices, assessments, and assumptions about what is core to a social work curriculum. Social work will need to refine and reappraise its ideas about critical thinking, ethical decision making, professional judgement, and reflective practice—all skills that are considered core to effective social work practice as outlined in the AASW Practice Standards (AASW, 2023). How will we ensure students have an educational environment that promotes the attainment of these capabilities? And what role can generative AI play in the educative process? It will be important to consider any limitations and maintain some scepticism in light of the hype surrounding the capabilities of generative AI.

With regard to social work practice, generative AI has already started to make an impact because it is integrated into many digital platforms and applications. The temptation to use generative AI in time-poor environments is real and likely to increase as workplace cultures tighten expectations and routines. Social workers will need to upskill in digital literacy to assist service users, and themselves, in navigating an increasingly AI-polluted information landscape. We will need to resist anthropomorphising generative AI by maintaining our critical reflective capabilities. Discerning accurate sources of information has become an even more critical skill. Clear guidelines on the adoption of generative AI seem necessary to enable and ensure its competent and ethical use by students, educators, social workers, researchers, and scholars. At the same time, we need to guard against the temptation to use AI-generated content in a manner that delegates knowledge creation to AI (Baron, 2023).

Social work scholarship is beginning to grapple with a generative AI future, with some authors highlighting the possibilities and benefits for social work education and research (e.g., Singer et al., 2023), and others adopting a more cautious stance (e.g., Hodgson et al., 2022). In the field of AI research, positions range from the exuberance and optimism of big tech companies such as Alphabet (parent company of Google and YouTube) and Meta (parent company of Facebook, Instagram, WhatsApp, and Threads) to concerns about life-ending existential threats that would arise if we ever created super-intelligent, general AI systems (Yudkowsky, 2023). More immediately, we need to comprehend and question the different end-user experimentations that students, educators, and researchers are grappling with: Will it lead to a violation of academic and professional integrity? Is it acceptable to use AI as a surrogate tutor to teach and learn critical thinking and practice concepts? Should AI be used for editing, proofing, and correcting citations and references? Is it acceptable to use AI to summarise articles and suggest critical questions? We might accept that it is not ethical or transparent to use generative AI to write content and pass it off as one's own work, but what about using something like perplexity.ai to generate novel research ideas, instantly sourced from internet documents, that propose areas for future research? A key concern is ensuring ethical integrity in authorship and professional communications. Thus, an important issue for educators, researchers, authors, and publishers is declaring who, or what, is generating the text, idea, or image.

Taylor & Francis (T&F), the publisher of Australian Social Work, recently released a statement acknowledging that “the use of generative AI tools in research and writing is an evolving practice” (online), including its use in academic research (https://newsroom.taylorandfrancisgroup.com/taylor-francis-clarifies-the-responsible-use-of-ai-tools-in-academic-content-creation/). They state that “such tools, where used appropriately and responsibly, have the potential to augment research and foster knowledge generation” (online). However, T&F caution that “authors are accountable for the originality, validity, and integrity of their submissions” (online, emphasis added). Authors are expected to use AI tools responsibly in accordance with publishing ethics. For now, T&F reiterate that uniquely human responsibilities such as integrity cannot be undertaken by AI tools. T&F confirm that “AI tools must not be listed as an author” and “[w]here… AI tools are used, such use must be acknowledged and documented appropriately” (online, original italics). Equally, the position statement of the international Committee on Publication Ethics (COPE) identifies that “AI tools cannot meet the requirements for authorship as they cannot take responsibility for the submitted work” (online). Australian Social Work affirms this position: authors must use caution, and they are accountable for the originality, validity, and integrity of submitted content.

Furthermore, the recently published Australian Research Council (2023) Policy on Use of Generative Artificial Intelligence in the ARC’s grants program advises researchers to “use caution in relation to the use of generative AI tools in developing their grant applications” (p. 2). Equally, reviewers must be transparent about the use of AI systems to perform or assist with reviews of manuscripts and grant applications. Taking a lead from the Australian Research Council’s (2023) Policy, the position of Australian Social Work is that reviewers should not use AI to perform or assist with reviews of articles or books. One key reason is that the integrity of an AI-assisted review cannot be assured, and pasting submitted manuscripts into any LLM would constitute a breach of confidentiality and copyright protections.

Generative AI is here to stay and, while it presents a challenge on a range of levels, many observers feel it is opening up imaginative possibilities. In coming months and years, as social work educators, practitioners, and researchers, we will need to engage in serious discussions about the use of generative AI tools within our own educative, practice, and research traditions.
