Research Article

More or less than human? Evaluating the role of AI-as-participant in online qualitative research


ABSTRACT

Artificial intelligence (AI) has an increasing presence in scholarship, posing new challenges and opportunities for qualitative researchers. Generative AI, such as Chat-GPT, can supposedly produce humanlike responses, with implications for online qualitative research, which relies on human participation. In this paper, we contribute to debates about AI as a research participant (‘AI-as-participant’) and the threat of imposter participation in qualitative research. We share our unexpected encounter with AI during our story completion study on mobile dating during the COVID-19 pandemic and discuss how we identified AI responses within our dataset. Central to our analysis was our theoretical grounding of feminist new materialism, which attuned us to the affective and discursive qualities of our participant data. Using our theoretical lens, in tandem with other strategies, we examined the affective forces that signalled stark differences between previous, human-generated data and that of the current study. Analysing the discursive construction of narratives further alerted us to the absence of humans within our data. We conclude that AI cannot sufficiently replicate affect or capture the richness of human experience that is central to qualitative research, and offer recommendations for future researchers to anticipate and check for AI as an unwelcome research participant.

Introduction: AI entering qualitative research

Much like the advent of the Internet, the potential of generative AI to contribute to and/or disrupt qualitative research is immense. There is an increasing number of AI-powered software tools that can summarise literature and produce annotated bibliographies (Jones Citation2023) or even purportedly analyse qualitative data (Chubb Citation2023; Towler et al. Citation2023). AI has been found to offer a way to rapidly analyse large qualitative datasets when information needs to be quickly synthesised (such as in the case of the COVID-19 pandemic) (Towler et al. Citation2023). Similarly, Anis and French (Citation2023) claim that AI can be instructed to analyse large amounts of data, using the qualitative researcher’s coding scheme, which can save researchers time, and free them to focus on the subjective parts of analysis, such as adjusting codes, (re)conceptualising, and theorising the data. As qualitative researchers in psychology, we are at a watershed moment in determining the implications for our role as researchers and the ways in which we conduct research.

In this paper, we specifically attend to the implications of AI fraudulently replacing human participants in qualitative research, otherwise known as ‘imposter participation’ (Drysdale et al. Citation2023a; Hewitt, Purcell, and Bundy Citation2022; Jones et al. Citation2021). We aim to contribute to emerging scholarship about the introduction of AI (Anis and French Citation2023; Chubb Citation2023; Hämäläinen, Tavast, and Kunnari Citation2023; Towler et al. Citation2023), offering our reflections and critical evaluation of the role of AI-as-participant in qualitative research, especially that which is conducted online. However, we hope to extend this ongoing conversation by situating our reflexive discussion within the theoretical framework of feminist new materialism (Lupton Citation2019a, Citation2019b). Broadly speaking, this theoretical approach emphasises the relational and affective encounters people can have with other people and more-than-human actors (e.g., generative AI) in a given situation (Lupton Citation2019b, Citation2020; Lupton and Maslen Citation2018). Attention is additionally given to the ‘agential capacities’ that all actors (human and non-human) have to act upon, and in relation to, each other (Lupton Citation2019b, 1999). We draw insights from this theoretical standpoint to explore and evaluate the role – and agential capacity – of generative AI as an actor within the research assemblage (of researchers, participants, and digital technologies) and consider the implications for qualitative researchers conducting research in a world where AI has now come to exist and operate in online spaces.

Background: the rise of imposter participants in qualitative research

Inauthentic or fraudulent participation in qualitative studies is a growing area of inquiry (Drysdale et al. Citation2023b; Ridge et al. Citation2023; Roehl and Harland Citation2022). ‘Imposter participants’ are individuals who intentionally misrepresent themselves or perform a certain identity for the purpose of meeting inclusion criteria (Ridge et al. Citation2023; Roehl and Harland Citation2022). This is notably different from ‘ineligible participants’, who are well-meaning persons who incorrectly believe they meet inclusion criteria (Jones et al. Citation2021). What draws imposter participants to research is the promise of financial compensation. Imposter participants can come in the form of humans and non-humans (e.g., bots), leading to the widespread use of bot detection software to identify and block their participation (Jones et al. Citation2021; Salinas Citation2022). In some instances, qualitative research has been positioned as less prone to fraudulent participation, owing to bots’ inability to convincingly answer qualitative questions that often require reflexive responses (Griffin et al. Citation2022). While qualitative researchers have long cautioned about the risks of fraudulent participation when using the Internet for recruitment purposes (Hamilton and Bowers Citation2006), the COVID-19 pandemic has raised the urgency of this issue.

Many qualitative researchers shifted to conducting their research online, or asynchronously (given the physical restrictions of COVID-19 lockdowns), and realised the benefits of online research, such as accessing hard-to-reach participants or saving time and costs (Dodds and Hess Citation2021; Ridge et al. Citation2023; Salinas Citation2022). However, online and/or asynchronous research is particularly vulnerable to fraudulent participation. Unlike synchronous research, where the researcher interacts with a participant in person or over the phone or video chat, in asynchronous research there is less direct contact between researcher and participant. Participants can respond at their leisure, away from the gaze of the researcher, but can also more easily misrepresent their identity (Roehl and Harland Citation2022; Salinas Citation2022). As such, there is an increased risk of imposter participants ‘who intentionally complete study tasks inappropriately’ (Jones et al. Citation2021, 2), which can compromise data quality. This is exemplified by a story completion study conducted by Jones et al. (Citation2021), who received a high number of non-sensical responses to their story prompts.

With the rise of fraudulent participation and more recent arrival of generative AI, researchers have started to draw on their own experiences, offering lessons to minimise the impact of imposter participants. Recommendations include: using bot detection software such as CAPTCHA to identify and bar more-than-human actors from participation (Jones et al. Citation2021; Salinas Citation2022); removing any mention of financial incentives from online advertisements to deter imposter participants (O’Donnell et al. Citation2023); and privately sending links directly to participants to ensure greater control over access to the study (Jones et al. Citation2021). To our knowledge, there do not appear to be any explicit investigations of the role of generative AI in fraudulent research participation, beyond the threat of bots and how to detect their presence (Jones et al. Citation2021; Salinas Citation2022).

AI as research participant

Our focus is the role of AI as a participant (‘AI-as-participant’) in qualitative research. We intend AI-as-participant to mean the use of AI to replace human participation, as opposed to the use of AI tools (e.g., Grammarly Go) to supplement human participation in research – a distinction that is not always clear and warrants further discussion. Aside from the promised cost savings to recruitment activities (Synthetic Users Citationn.d.), the discussion here is whether AI can stand in, more or less, for humans as ‘synthetic participants’ (Hämäläinen, Tavast, and Kunnari Citation2023) in a way that is meaningful for the aims of qualitative research. The prerequisite of ‘meaningful’ appears to come down to whether generative AI is capable of producing humanlike responses. Hämäläinen, Tavast, and Kunnari (Citation2023), for instance, utilised OpenAI’s Chat-GPT-3 model to generate synthetic responses to open-ended questionnaires and found human participants could not distinguish the synthetic responses from human responses. They recommend the use of generative AI to ideate and pilot qualitative studies, but not to replace human-generated data entirely (Hämäläinen, Tavast, and Kunnari Citation2023).

From a positivist stance, generative AI has been regarded as a powerful tool, with some quantitative studies supporting the claim that generative AI can produce ostensibly human responses. In one psychological study, Dillion et al. (Citation2023) investigated whether Chat-GPT is capable of producing humanlike judgement. Utilising over 400 moral scenarios, they compared human decisions with Chat-GPT’s and concluded that Chat-GPT replicated human moral judgement with ‘remarkable ability’ (Dillion et al. Citation2023, 598). Cognitive psychologists have likewise claimed that Chat-GPT can perform heuristic-based assessments and human-esque problem solving, even better than humans (Binz and Schulz Citation2023). In the computer sciences, Schramowski et al. (Citation2022) argue that large language models, such as Chat-GPT, can make judgements like humans, so long as the quality of the data used to train the AI is good enough.

Some researchers, however, caution that AI cannot entirely replace the human participant, as AI reproduces cultural and social biases that are present in its training data, which can perpetuate age-based, racial, gendered, and ableist discrimination (Stahl, Schroeder, and Rodrigues Citation2023). We offer another warning: that if generative AI can produce humanlike responses, there are serious ramifications for the integrity of qualitative research that relies on subjective human participation. This takes us to our own encounter with AI in our research.

Confronting the more-than-human in our research

Along with our larger research team,Footnote1 led by AG, we are in the process of exploring people’s experiences of navigating mobile dating during the COVID-19 pandemic in Aotearoa New Zealand. The first part of this programme of research has involved a three-stage online story completion study, occurring in early 2021 (referred to as Study A), early 2022 (Study B), and late 2023 (Study C), with ethical approval granted by Victoria University of Wellington Human Ethics Committee (ID: 0000029958). This consisted of creating story stems to prompt participants to develop stories in response to hypothetical scenarios involving characters navigating mobile dating in the context of the pandemic (Braun et al. Citation2019; Clarke et al. Citation2017). Our aim was to explore broader social responses to the affordances of, and hindrances to, mobile dating as the pandemic progressed. We were also interested in how people might imagine creating intimacy and connection with others, while negotiating possible hurdles of meeting up in person, against the backdrop of changing ‘alert levels’ and public health guidelines (New Zealand Government Citation2023) – not to mention navigating widely varied social responses to public health issues, such as vaccination and mask-wearing (Gray et al. Citation2020).

Through two initial rounds of data collection, we received stories written by 110 participants, all of whom were recruited through Facebook advertising and paid a koha (gift voucher worth NZD30) for their time. Over a series of six days, we (AG and AB) engaged in an intensive process of data coding and analysis, which we conducted together in person. Other qualitative researchers will be able to easily imagine the reality of this experience – the excitement over finding ‘gold’ quotes or extracts, the ideas and interpretations generated from coding when it is performed together, the headache induced by wading through less-than-articulate responses from participants, and so on.

We specifically conducted a thematic discourse analysis (Clarke Citation2005; Gibson, Lee, and Crabb Citation2015), which involved identifying patterns across the written stories that illustrated broader socially available constructions and ways of speaking (i.e., about living through the pandemic; about mobile dating). We then analysed what rhetorical purposes might be achieved through the execution of these discursive themes (i.e., positioning people in certain ways). This analysis was also informed by our theoretical standpoint of feminist new materialism (Lupton Citation2019a, Citation2019b). Through this theoretical lens, we examined how subjectivity, affect, risk, and intimacy could be constructed through the more-than-human collection – or ‘assemblage’ – (Lupton Citation2013, Citation2019b; Lupton and Maslen Citation2018) of people in relation to each other, mobile dating technologies, and the broader socio-historical context of Aotearoa New Zealand during the COVID-19 pandemic. Given our combined methodological and theoretical perspective, we were deeply focused on the affective, embodied, and discursive construction of our participants’ stories.

As the pandemic has since progressed, and all COVID-19 public health mandates have been removed in Aotearoa New Zealand, we wanted to get a sense of whether people’s sentiment had changed from earlier responses, prompting a third round of data collection. Again, we set up the study using Qualtrics and paid for Facebook advertising to recruit participants aged 18 years and older, who lived in Aotearoa New Zealand, and who had engaged in mobile dating since the start of the pandemic in 2020. Much like other story completion studies (Clarke et al. Citation2017; Jones et al. Citation2021), we took careful steps to ensure the integrity and rigour of the data generated from the study – such as requiring a minimum word limit for more in-depth stories and piloting story stems and instructions, to ensure we elicited data that could help to answer our research questions. Yet, a few days into data collection, the Qualtrics data report flagged a poor-quality response (a quality response score of 6% was detected in this study, vs. 74% and 81% in the first two studies respectively). For context, Qualtrics utilises its ‘ExpertReview technology’ to assess survey responses for ‘ambiguous text’, ‘potential bots’, ‘duplicate responses’, ‘unanswered questions’, and other fraudulent practices (see Qualtrics Citation2023 for a description). This AI technology aims to assist researchers with detecting fraudulent responses – whether generated by bots or by human participants – and calculates a quality response score. Individual responses are then flagged to be checked by researchers. This pattern continued each week, with only two responses being detected by Qualtrics as ‘possibly’ passing the quality checks. Notably, in contrast, bots that had been detected in Studies A and B had generally produced randomly generated text to fill in the word count, with text that either (nonsensically) referred to COVID-19 or to dating (but often not both, and not in story form). For example:

Extract 1

Story stem: Rangi had been on lots of dates and was using a dating app to hook up with people before COVID but never felt a real connection with anyone. During the latest COVID lockdown, Rangi is feeling connected to someone Rangi recently met on a dating app, but they haven’t yet met in person. What happens next?

Response: While many believe the virus will ease during the summer, the consensus is that no one is sure if this is the case. We obviously have no immunity to it, and a vaccine is probably 18 months or so away. In addition, this is a mutated virus with multiple strains, which makes it possible that any antiviral methods we develop will not be applicable to strains that spread when they are available. Finally, climate change increases the likelihood that we’ll see an increase in viruses like COVID-19…. [text continues]

In comparison, responses identified as bot-generated in Study C had provided word-perfect stories responding to the story stems (as we will show below). Our initial reaction was one of dismay, followed by the swift realisation that it was inevitable that this would happen, what with the recent proliferation of generative AI technology, such as Chat-GPT, in 2022. This presented our team with the challenge – or potential opportunity – to work through what role AI played in the research assemblage, of which we were a part, and what implications it had for our research.

Quite quickly we (AG and AB) were struck with the question of what these stories might offer us as a discrete dataset – wondering what potential they had to give us insight into broader socially available ways of narrating intimacy in relation to the pandemic. This led to discussions as a wider research team, as well as some rich conversations with colleagues in qualitative research (Virginia Braun), sociology (Annemarie Jutel), literary studies (Bonnie Etherington), and computer science (Simon McCallum). These colleagues, like us, had mixed responses about the possibilities and limitations that AI might offer qualitative research. For instance, despite our initial excitement about the philosophical possibilities of AI offering insights into patterns of human experience, one main concern we had was that in analysing AI responses, we did not have confidence about which AI tools were used, and therefore what datasets the AI drew from and generated meaning from to provide responses to our prompts. Increasingly, we started to question the methodological implications of using AI-generated data, and what problems it could introduce in terms of the integrity of the data generated (i.e., not knowing what datasets have informed the AI; who might benefit – or not – from knowledge or policies generated from research; and the decontextualised nature of AI dataFootnote2). But what finally helped us to determine our position on AI-as-participant was our analysis of the stories in relation to those that had been generated by our previous (human) participants.

Capturing the more-than-human actors in our dataset

Previous research has highlighted the use of software or recruitment practices (such as screening participants) to deter and detect fraudulent responses (Jones et al. Citation2021; O’Donnell et al. Citation2023); however, we believe that one’s theoretical framework in qualitative research has something valuable to offer in determining the role of AI in the research process and how to respond to it. In this instance, we use our theoretical approach of feminist new materialism to illustrate our point, by especially focusing on the way in which we could attend to the ‘affective intensities’ and ‘agential capacities’ that human and non-human actors possess and express (Lupton Citation2019b). Scholars have used feminist new materialism theory, for example, to describe the wide range of ‘affective intensities’ (e.g., boredom, enthralment) that scrolling social media can elicit in people (Paasonen Citation2016). Similarly, health apps have been critically assessed, as exemplars of digital technologies, for how they can be rewarding for users by increasing their agency to undertake health practices, while also creating normative, restrictive expectations of ideal body types (Lupton Citation2020). Engaging in feminist new materialism additionally involves a focus not just on language or discourse, but also on the materiality and agency of the physical world that shapes people’s experiences, bodies, and their being in the world (Jagger Citation2015). Having taken a feminist new materialist approach to analyse previous data from Study A and B, we were already attuned to our previous participants’ affective responses to participation. Upon reflection, we can now see how these responses shaped what data was generated, the implications for our analysis, and even our own reactions to our human and synthetic participants, as we will discuss below.

Affective forces: expressions of participation

When participating in research, people become involved within the broader research ‘assemblage’, acting and feeling in response, and in relation, to research questions, tasks of data collection (e.g., interviews, diaries, writing stories), and the researchers involved. This can even be said for participants involved in an anonymous online story completion study. For example, a few participants in our previous rounds of data collection (Study A and Study B) displayed strong responses of frustration:

Extract 2

Story stem: Aroha’s currently living with other family members during the pandemic and working from home. After chatting to a match on a dating app, the match has asked if they can video chat. What happens next?

Response: I don’t know. You’ve said nothing about who Aroha is, or what they might like to chat about. I really can’t understand what it is you expect me to imagine in this scenario. I don’t even do video chat, but I don’t expect I’d feel it as being any more of a real connection than text chatting, so it seems pointless to me. I need to meet someone in real life to know how I feel about them, so any form of online chatting is just a frustrating waste of time to me. Much like the frustrating waste of time of having to come up with 1000 characters just to move on from a scenario that’s even more stupid than the previous one. If this doesn’t become more interesting soon, I’m just going to give up on it, and you’ll miss out on whatever insights might be gained from my unique perspective. Did the designer of this survey even go through the experience of trying to complete it themselves? Or ask friends or family to have a go and give their feedback? If you want me to come up with 1000 characters in response, then give me more interesting scenarios! [Participant B13, cisgender male, ‘non-racist human’, 39Footnote3]

As demonstrated here, participants can have strong reactions to what is asked of them through participation, and some expressed clear frustration over having to meet a minimum character limit to finish the study and receive their gift voucher. The participant above also illustrates the idiosyncratic preferences of humans: regardless of piloting data collection materials (which we had done), there is always the risk that some people will not respond positively, or in the way we as researchers might hope.

The material conditions of the research process also became clear in our study, when participants displayed less-than-desirable responses to the story prompts:

Extract 3

Story stem: Kennedy and Sam recently matched on their dating app and have really connected. They make each other laugh, they share similar interests, and are looking for the same thing in a relationship. With a recent move to Orange in the COVID traffic light system, they’ve agreed to meet up for a date. It’s just as they hoped in person – the spark is there, and they’ve slipped into easy conversation. Later in the conversation, Sam mentions not wanting to get vaccinated. What happens next …

Response: Kennedy realises that personal connections take months, if not years, to start to full understand the other person. Wonders to himself what else this woman fails to comprehend about science and what other crazy thoughts are going on in her head.

Having had the debate with close friends and family he knows this is a polarising issue and no amount of evidence or intelligence can save some people, so he is dodging a bullet by finishing his drink and parting amicably.

1000 characters? yada yada yada

1000 characters? yada yada yada

1000 characters? yada yada yada

1000 characters? yada yada yada

1000 characters? yada yada yada [continues] [Participant B10, cisgender male, straight, New Zealander European/Pākehā, 54]

Extract 4

Response: … [irrelevant addendum to reach 1000 characters so I can move on…………………………………………………………] [Participant B13, cisgender male, ‘non-racist human’, 39]

For us, as researchers, it was equally frustrating being on the receiving end of participants’ complaints (albeit only a few) and having to honour our promise of financial compensation for their participation, even when at times it felt somewhat fraudulent or of poor quality (another example was a participant who somewhat amusingly finished the word count with emojis). We were acutely aware of these affective reactions and regularly talked them through during sessions of data analysis. Nevertheless, these responses still gave us some insight into possible affective responses to mobile dating during the pandemic for the parts participants did respond to (e.g., the awkwardness of virtual dating, or the difficulties of navigating alternative opinions on vaccinations). They also eventually proved to be a valuable source of data to later compare the AI-generated stories against.

When analysing the datasets from Studies A and B, we also read a range of emotions displayed, not only in response to participation, but also to the situation of dating during the pandemic – from humour, to boredom, to despair, to sarcasm. This rich character of storytelling is illustrated in the colourful story below:

Extract 5

Story stem: Kennedy and Sam have just matched and have quickly hit it off while chatting to each other on the app. Kennedy is an essential worker, based at a regional hospital, while Sam is working from home. Another COVID variant recently started spreading around the country, but the country is still at Orange in the COVID traffic light system. Kennedy has just asked Sam on a date … What happens next?

Response: Kennedy and Sam meet at a coffe shop decide to find an epic spot to go on there date they head to an awesome walking track and have an amazing picnic lunch a bottle of champagne and don’t really have to worry about the whole covid situation after conversing about life they get a bit tipsy where things start getting a bit heated the local kea [native birds] are very impressed as the two love birds court each other they decide to make sure that there really compatible the ravish each other and fuck like animals in the bush both relieved of the stress of the city and covid bullshit. They decide to spice things up a bit with the other attractive couple that caught them out it’s really just a large mixing of fluids and love they forgot about asking covid questions but that’s ok cause Google’s tracking everyone just fine. As they return back to there city’s covid is still happening so they bang more eventually leading to pregnancy and a happy life covid eventually gets ignored and a few die but we all still carry on. [Participant B7, cisgender male, straight, New Zealander European/Pākehā, 36]

The very method of story completion was inevitably going to gather stories from participants that would range from dramatic, to humorous, to wildly creative, what with instructions asking participants not to relate personal experiences and to be as creative as they liked. While being highly sarcastic, the story shown above illustrates the deeper frustrations and concerns that people felt during the height of the pandemic, particularly in relation to government mandates (Hunt Citation2021). This is further demonstrated here:

Extract 6

Story stem: Kennedy and Sam recently matched on their dating app and have really connected. They make each other laugh, they share similar interests, and are looking for the same thing in a relationship. With a recent move to Orange in the COVID traffic light system, they’ve agreed to meet up for a date. It’s just as they hoped in person – the spark is there and they’ve slipped into easy conversation. Later in the conversation, Sam mentions not wanting to get vaccinated. What happens next …

Response: Kennedy asks why Sam made that decision. Sam explains saying they don’t feel comfortable taking something that is pushed for free worldwide. If everyone’s health is important why isn’t cancer, hiv or any medications that people may need free? That we were promised it wouldn’t be forced vaccinations at the start. That if we go hard to stamp it out we would get all our freedoms back but all that happened was the government got more and more control in all areas of our lives. People including doctors, other medical professionals, teachers and supermarket workers who had to work without a vaccine during lockdown are now required to be fully vaccinated including boosters and are losing their jobs. The people who got our team of 5 million through the worst parts are just cast aside like rubbish. There isn’t enough research on this medication. We are the human guinea pigs in these trials. There are adverse life changing effects that sometimes leads to death. It just isn’t worth it. Kennedy is overwhelmed with all that information and is sorry they even asked. [Participant B5, non-binary, bisexual, New Zealand European/Pākehā, 31]

Navigating conflict and differing world views was a key discursive theme we identified in our two previous rounds of data analysis, particularly in response to story stems about daters’ vaccination status. We had specifically chosen this topic of vaccination to explore the complicated and often intense inter-relationships between people, health practices, and the technologies involved. Some stories included the affective intensity of a character providing a long-winded ‘rant’ about government conspiracies and lost freedoms (as illustrated in Extract 6 above). The dialogue between characters often captured the frustration of encountering someone with values and/or beliefs that contrasted to one’s own – the jarring experience of coming across someone who might not adhere to the ‘team of 5 million’ discourse that characterised early responses to COVID-19 in Aotearoa New Zealand (Beattie and Priestley Citation2021). Stories such as the one above drew us in as the analysts – the ease with which we could imagine this conversation playing out, as fellow New Zealanders, the sense of being overwhelmed by someone’s vitriolic response, and, of course, the excitement we felt at being presented with a story that so richly characterised public sentiment and tension during the pandemic.

Many other participants also followed the instructions and wrote imaginative stories involving characters with deeply familiar back-stories:

Extract 7

Story prompt: Aroha’s currently living with other family members during the pandemic and working from home. After chatting to a match on a dating app, the match has asked if they can video chat… What happens next?

Response: Aroha responded to the request Auare akeFootnote4’. he wants a vid chat with me. She hurriedly left the room to go and bother her sister for a top and skinny jeans to look her best for the video call. “Ngāpera! She yelled ‘where’s that dropped neckline Trelise Cooper top of yours?’ ‘What do you want it for Aroha?’ her sibling responded .“I got this vid chat happening in half an hour and I wanna look my best, this guy is such a hunk. […] “oh kia tere girl, it ‘ll only be for like 20 mins while I’m chatting with him.”.[…] Aroha hurriedly donned the clothing, fixed her hair and put makeup on to give her a radiant natural glow. Positioning herself in front of her laptop preparing for the call, she could feel her heart rate elevating and had a myriad of questions on her mind Would he like me?, can he see my pouch puku under my younger sister’s skinny jeans? I t’s ok, kua pai, breathe deeply and ring him.waea atu ia … [Participant B15, cisgender female, straight, Māori, 55]

This story poignantly illustrates the affective, embodied experience of being a young woman trying to look her best for a date, while also dealing with the reality of fitting her body into the confines of uncomfortable ‘skinny’ clothes, feeling the internal rush of nerves, and making certain choices to meet the expectations of a first date. In addition, the participant’s story is peppered with Te Reo Māori – the Indigenous language of Aotearoa New Zealand – words that bring through this participant’s cultural background and the context in which she’s writing.

In contrast, the AI-generated stories glaringly lacked any signs of human affect and embodiment. We invite readers to consider this story (from Study C) in comparison to the Kennedy and Sam ‘rant’ in Extract 6 previously:

Extract 8

Story stem: Greg and Dave are meeting in-person for the first time. Greg is wearing a mask, but Dave isn’t. Greg asks Dave to put on a mask, but Dave says: ‘I’m healthy, you can trust me’. What happens next?

Response: Greg was wearing a mask, but Dave wasn’t. They were meeting in a crowded cafe, and Greg felt a little apprehensive. He knows that in the current outbreak, wearing a mask is an important measure to protect yourself and others.

Greg decided to remind Dave to wear a mask. He whispered to Dave, ‘Hey Dave, I know you probably don’t feel the need to wear a mask, but at this point, it’s our responsibility to stay safe and healthy.’ Could you please put on a mask?”

Dave frowned at Greg’s words. He replied, ‘I’m healthy. You can trust me. I don’t feel the need to wear a mask.’

Greg understood Dave’s point of view, but he still stood his ground. He explained: ‘Wearing a mask is not only to protect yourself, but also to protect others. We don’t know if we have the virus, so wearing a mask is a responsible thing to do.’

Dave was silent for a moment, then took a deep breath. He realized that Greg’s concern was motivated by a concern for the health of both parties. Dave nodded and pulled a mask from his pocket and put it on his face.

Greg was relieved and thanked Dave for his cooperation. They sat down and began to chat happily. Even though they wore masks, their conversations and laughter were still full of friendliness and closeness.

This little episode helped Greg and Dave get to know each other better. They learn to respect and care for each other’s needs, while also realizing that it is each of our responsibility to stay safe and healthy during this special time.

Greg and Dave’s story tells us that while people may have different attitudes and perspectives on the pandemic, we can work through our differences with mutual understanding and respect. In the face of the epidemic, we should uphold a sense of responsibility and care for others, and work together to protect our own health and the health of others. [Participant R_3MEgHOSbJ1iYAhW]

Reading this story felt ‘off’ from the very beginning: descriptions of the characters struck us as slightly odd (e.g., a man whispering to another in a crowded café). Word ‘choices’ also did not ring true, in that they were often stilted or formal in register (e.g., ‘full of friendliness and closeness’), rather than the more colloquial words we had previously seen in participants’ stories. What did come through was a moralistic or didactic tone about what one ‘should’ do in such a situation. One recurring response we had when reading this extract (and many others like it) was that it sounded as if it had almost been taken from a Ministry of Health ‘protect yourself’ educational campaign (New Zealand Government Citation2023). We simply continued to come up against this stark difference between the varied ways our human participants had responded and the sanitised versions we received in Study C.

Discursive functions: absences of participation

It was the absence of mistakes, and not just of affect, that made us question the origin of the responses in Study C and their value for our research. Participants’ stories across the previous two studies were naturally varied in their choice of words and in how grammatically ‘correct’ they were. Some stories featured reported speech, others did not. Some included more typos than others. Some were structured into separate paragraphs, while one or two were entirely devoid of full stops (one of our more challenging reads). The only commonality between our two previous datasets was, paradoxically, the variation – the idiosyncratic nature of human writing. In comparison, responses from Study C were consistently word-perfect and ‘clean’ of any errors (except, strangely, for character names, which were sometimes mixed up or misspelled). This can be seen in the extracts already provided above, as well as here:

Extract 9

Story stem: Bryan and Kate greet each other at the start of their date. Neither is wearing a face mask. Bryan goes to hug Kate. What happens next?

Response: Brian and Kate greeted each other gleefully at the start of their date, neither of them wearing masks. Brian approached Kate passionately and tried to hug her.

Just before he touched her, however, Kate stopped and took a slight step back. Her expression became a little awkward, and she said softly, ‘Brian, I think we should keep some distance, at least for now.’

Feeling a little confused, Brian stopped and looked into Kate’s eyes, trying to understand what she meant. Kate explained: ‘I know we’re all excited, but given the current situation, we should follow social distancing and hygiene measures. It’s for our own health and the safety of others.’

Brian began to realize that his impulse might have been irresponsible, and he was grateful to Kate for reminding him. He nodded and apologized, saying, ‘I’m sorry, Kate, I didn’t think about that. You’re right, we should follow the rules and stay safe.’

Kate smiled and appreciated Brian’s understanding and cooperation. They decide to keep dating, but do it in a safer and more responsible way.

For the rest of their dates, Brian and Kate chose to be outdoors, observe social distancing, and wear masks. They walked, biked and enjoyed nature together. Even though they can’t touch as closely as before, they build a deeper emotional connection through deep dialogue and interaction.

Over time, Brian and Kate’s relationship deepens. They learn to connect on a deeper level through virtual communication and intimate conversations. They support and understand each other and face together the challenges brought by this extraordinary period.

Brian and Kate’s story shows us that even in the face of difficulties and limitations, we can still find creative ways to build and maintain relationships. Following the rules and staying safe is what we should do, and by communicating and understanding each other, we can find a true emotional connection during this special time.

The reported speech in the above extract is introduced with grammatically correct punctuation: commas that correctly introduce quotations and mark pauses, and no errant spaces between punctuation marks. One or two (human) participants might be sticklers for English grammar or use tools such as Grammarly Go (an AI grammar checker); however, it seems statistically unlikely that an entire sample of participants would be. As shown above, and in Extract 8, responses in Study C also regularly ended with ‘this story reminds us’ or ‘this story reflects’ and offered an explicit moral lesson about appropriate feelings or social behaviour during the pandemic. In analysing the stories across our three datasets, we realised that it was not just the tone of the story, or characters being seen to ‘do the right thing’, that mattered, but that these elements needed to be read in combination with others. This extract is taken from Study A:

Extract 10

Story stem: Two people match on a dating app during Level 4 lockdown and have been chatting online ever since. Both feel like they’ve really hit it off. The New Zealand Government has just announced that the country will be moving down to Level 3 tomorrow. One of them suggests finally meeting up. What happens next?

Response: Sarah was keen at first but then decided to bail as she had knots in her stomach. Tim was super keen and guttered that she had cancelled. Sarah then suggested to Tim they will meet at Level 2. Tim and Sarah both mutually agreed to meet at a level 2 lockdown. (Participant A12, female, heterosexual, NZ European/Pākehā, 22)

In this extract, the characters clearly choose to follow the government mandates at the time, much like the characters in Extract 9. However, the 22-year-old narrator’s voice is apparent in her word choices of ‘super keen’ and ‘guttered’, with their attendant orthographic spelling mistakes (i.e., words written the way they sound). It is by examining these narratives in relation to each other that the ‘synthetic’ responses of AI become obvious. In addition to tone and word choice, we noticed a recurring pattern across the responses – they always ended neatly, either in the traditional ‘causal arc’ (of beginning, middle, end; see Alison Citation2019) or with a moralistic summary, as already noted above. The story structures came across as formulaic. In comparison, while we did identify a tendency towards a ‘happily ever after’ narrative arc in the human-generated stories (especially in Study A, with participants yearning for a return to ‘normal’ after the pandemic), participants varied considerably in the length of their stories and in how they ended them, often finishing abruptly (presumably having realised they had reached the word limit). We believe that this formulaic neatness is a result of generative AI simply being unhindered by the material conditions of human research participation (e.g., frustration, fatigue, wanting to get paid, time available, etc.).

In this third study, we were presented with the question of whether generative AI could play a role in qualitative research, and we were open to the possibilities AI might offer our research. However, as we have discussed in this section, in analysing the data we became acutely aware of the disparity between our human-generated stories and those written by AI. We therefore concluded that AI could not offer us the rich, embodied, affective responses that stem from humans’ subjective experience, or shed light on what it might be like to navigate intimacy while living through the pandemic. Yet, our experience suggests that AI will continue to be an unwelcome actor in qualitative research, which we discuss in our next section.

Anticipating and checking for AI-as-participant

We now consider several recommendations for qualitative researchers on how to anticipate and check for AI as an unwelcome research participant (see Table 1). These recommendations not only draw from and extend our reflections in the previous sections, but also include several strategies that we would have undertaken with the benefit of hindsight. Our hope is that they offer practical steps to support other researchers in navigating and responding to AI-as-participant when conducting qualitative research. We do not believe that there is a single step that will enable the detection of AI-as-participant(s); rather, multiple steps need to be undertaken and considered iteratively, in tandem, and at different points in the research process. Furthermore, we argue that any steps researchers ultimately take will be determined by the overarching theoretical framework that shapes all elements of study design and conduct.

Table 1. Checklist for anticipating and managing AI in qualitative research.

Start with Theoretical Framework

The theoretical framework of a study determines what epistemological role researchers take up in the research process and shapes decisions regarding study design and what tools are used. Hence, in this instance, your theoretical framework acts as a guide for determining your stance towards AI-as-participant and then deciding which strategies you might take up in anticipating and checking for AI. Here, we have focused on the specific example of feminist new materialism, exploring the active role that AI had come to bear on our study, reflecting on the possibilities and challenges it offered us, and finally determining what we made of it. In comparison, as we noted earlier, AI is not considered unwelcome by all researchers. Positivist research that is focused on gathering summative information might not be as sensitive to – or as affected by – the presence or use of AI (see, for example, Dillion et al. Citation2023; Hämäläinen, Tavast, and Kunnari Citation2023; Schramowski et al. Citation2022), although that remains to be seen by researchers from these quarters. We imagine, however, that other qualitative researchers positioned in social constructionist, interpretivist, or critical realist traditions might respond similarly to us, given that qualitative researchers tend to be engaged with and open to the nuance and context of human experience. Our inclination, therefore, is that one’s theoretical framework acts as a central, guiding force in determining one’s stance towards the involvement of AI and can also offer researchers some ways of anticipating and/or checking for the presence of AI, in combination with other strategies across the research process (i.e., as listed in Table 1).

As we discussed above, in taking a feminist new materialist lens, we were also primed to explore the discursive, affective, and embodied aspects of how people might create and negotiate intimacy with each other, through mobile dating technologies, during the pandemic. We were especially interested in the relational challenges of dating during the pandemic, as well as the agential opportunities and foreclosures of mobile dating. In other words, we designed our story completion study to explore how stories of intimacy were written, what emotions were coming through these stories, and what experiences and ways of ‘doing’ mobile dating people imagined as possible (or not) during the pandemic. Our research questions were as follows:

  1. What discursive scripts of intimacy are made available or foreclosed through mobile dating apps and in the context of COVID-19, in how people can understand and express intimacy?

  2. What affective responses and relational connections are generated through dating technologies?

It was this theoretical interest in (and insight into) discourse, affect, and embodiment that helped us to analyse and finally identify the absence of a human perspective in our responses, which therefore failed to meaningfully answer our research questions. In much the same way as theory, research questions can serve as parameters for guiding what type of data will be eligible for inclusion in a study.

Design your research for humans, but anticipate AI

Our next recommendation is to acknowledge the potential of AI-as-participant when designing your research. One way to do so is to acknowledge its potential use or involvement in research materials (e.g., participant information sheet, recruitment communications, etc.). This practice is already occurring in other parts of universities. For example, when we were developing our research materials for Study C in late 2023, our university (Victoria University of Wellington – Te Herenga Waka) updated its academic policy on students’ use of AI in assignments (Victoria University of Wellington Citation2023). The policy offers a starting point on how AI can support student learning, as well as what constitutes appropriate and inappropriate use of AI, and thereby enables teachers and students to develop a shared understanding of usage that supports learning and avoids undermining academic integrity. We now wish that we had anticipated and acknowledged AI-as-participant in our participant information sheet and communications to participants. Doing so would have set expectations for participants about acceptable and non-acceptable use, consistent with our theoretical and methodological standpoint, as well as our method of data collection.

There are other benefits to anticipating AI in the design process. As our research materials were silent concerning AI, we did not feel that we could withhold koha (gift vouchers) for participation if we had identified human participants using AI. (In our case, we finally determined that all our ‘participants’ were bots using AI, based on the triangulation of Qualtrics reports with our own manual analysis of the responses; also see Godinho, Schell, and Cunningham Citation2020 for further suggestions.) While participants’ use of AI could be viewed as undermining the purpose of koha (which, in the case of our study, recognises the time and labour it takes to complete the stories), it would not strictly contravene the conditions of research participation, meaning that paying a gift voucher might still seem necessary. Providing participants with a disclaimer or some additional guidelines on the use of AI would help to alleviate this ethical dilemma for researchers.

Anticipating AI does not, however, mean designing for AI. It is critical that qualitative researchers still design their projects with the human participant in mind. Our key qualitative research materials – the story prompts – were designed for responses by humans, with rich affective and embodied experiences. Our story prompts were intentionally brief and open-ended, lacking specificity and references to demographics or pronouns (see the extracts provided above). This was to avoid leading participants, to elicit a wide range of interpretations of the scenarios in each prompt, and, critically, to enable participants to fill in the gaps with fictional accounts based on their imaginations, biographical experiences or understandings, or socially available ways of speaking or thinking about life during the pandemic. While we were not anticipating AI-as-participant at this point in our research, the generic framing and relative lack of detail in the prompts was important in being able to later detect the presence of AI. Natural language models, such as the current example of Chat-GPT, can supposedly produce more humanlike responses when the instructions given to them are specific and include step-by-step information, such as preferred writing style, tone, character aliases, and so on (Samant Citation2023). Our experience suggests that AI will produce plain or more generic responses when given less instruction (such as brief story stems or prompts). Researchers should therefore consider how their qualitative research materials (e.g., interview questions, focus group scripts, story completion prompts) can sufficiently instruct their human participants without providing excessive suggestions on how to respond. Detailed instructions, on the other hand, may inadvertently make it easier for AI-as-participant to produce humanlike responses.

Pilot your research ideas with humans to create a baseline

Another lesson we have learned is the importance of creating a human baseline for your dataset. As we described earlier, we collected data in previous rounds of the larger study (Study A and Study B). These rounds provided us with a range of human responses to our research materials or, in other words, a baseline of human participation in our research. As such, our ability to detect synthetic/imposter responses in Study C was predicated on having prior human-generated data with which to compare them. We therefore recommend researchers test their data collection materials (e.g., interview questions, focus group scripts, etc.) with colleagues, postgraduate students, or team members to develop an understanding and expectation of the range of possible human responses to their research.

It is notable that this recommendation starkly contrasts with other research on AI-as-participant. For instance, Hämäläinen, Tavast, and Kunnari (Citation2023) recommend researchers actually use AI-as-participant to generate and test potential responses for later human participant-driven research. Their recommendation partly reflects their focus on the potential benefits of AI in research. In contrast, our focus is less about the potential of AI and more about the reality of AI being used by human participants (or bots) to participate in fraudulent ways. As such, we recommend the opposite: that human-generated data is used to pilot the research and provide a baseline for researchers to learn what to expect from human participants.

Through this process, researchers can also use their original research questions as a guideline for determining the quality and integrity of responses (both initial pilot responses and then responses once the study has begun). Traditional piloting of data collection materials provides researchers with some assurance of credibility and reliability – that the materials will generate the type of responses needed to answer the research questions. But piloting can also help to check the integrity of responses: that the data generated is what researchers believe they are collecting (i.e., human experiences, not synthetic or AI-generated approximations) and can provide the richness and experiential insight that is sought in qualitative research.
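A human baseline could, for example, be operationalised with simple descriptive statistics computed over pilot responses. The sketch below is illustrative only – the function names and measures are our assumptions, not part of our study’s procedure – and contrasts length spread and lexical diversity between a baseline and a new dataset:

```python
from statistics import mean, stdev

# Illustrative only: two crude measures of the 'variation' described in this
# section as characteristic of human-generated story data.

def type_token_ratio(text: str) -> float:
    """Vocabulary diversity: unique words / total words."""
    words = text.lower().split()
    return len(set(words)) / len(words) if words else 0.0

def variation_profile(responses: list[str]) -> dict:
    """Summarise length spread and lexical diversity across a dataset."""
    lengths = [len(r.split()) for r in responses]
    return {
        "mean_length": mean(lengths),
        "length_sd": stdev(lengths) if len(lengths) > 1 else 0.0,
        "mean_ttr": mean(type_token_ratio(r) for r in responses),
    }

# A pilot (human) baseline can then be compared against incoming data: if a new
# dataset's length_sd sits far below the baseline's, that uniformity may warrant
# closer manual inspection of individual responses.
```

Such numbers would never replace analytic judgement; they merely make the comparison with a human baseline explicit and repeatable.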

Utilise tools at your discretion

Much like previous strategies for identifying bots (Teitcher et al. Citation2015), we recommend that, when appropriate, researchers use tools to detect AI-as-participant. We used the affordances of Qualtrics, which was the research platform for this study. Such tools can initially flag potentially fraudulent responses and streamline the process of assessing the presence of AI-as-participant. There are also AI detection tools that, as it currently stands, are free to use (e.g., ZeroGPT) and provide an estimate (as a percentage score) of how likely it is that a text was written using AI. However, we caution researchers not to rely too heavily on these tools and to continue exercising their own judgement, especially given that such tools remain open to considerable error and in need of further development (Kirchner et al. Citation2023). For example, tools such as ZeroGPT can incorrectly flag the non-standard English of neurodiverse and second-language English speakers as AI-generated (Corless Citation2023; Simon McCallum, personal communication, October 14, 2023).
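As a complement to commercial detectors, some of the textual signals described in this paper (moralistic closing summaries, an unusually error-free register, formulaic paragraphing) can be screened for with simple, transparent heuristics. The following sketch is purely illustrative and not part of our actual procedure; the phrase lists, patterns, and thresholds are assumptions, and any flagged response would still require human judgement:

```python
import re

# Illustrative heuristics only – not the authors' procedure. The phrase list
# and patterns are assumptions drawn from the signals described in this paper.
MORAL_CLOSERS = ["story tells us", "story reminds us", "story reflects", "story shows us"]

def flag_response(text: str) -> list[str]:
    """Return heuristic flags suggesting a story response may be AI-generated."""
    flags = []
    lowered = text.lower()
    # 1. Explicit moral-lesson ending, as in Extracts 8 and 9.
    if any(phrase in lowered for phrase in MORAL_CLOSERS):
        flags.append("moralistic-closer")
    # 2. Suspiciously 'clean' register: no errant spaces before punctuation,
    #    no repeated exclamation marks, no shouty all-caps words anywhere.
    if not re.search(r"\s[,.!?]|!{2,}|\b[A-Z]{3,}\b", text):
        flags.append("error-free-register")
    # 3. Formulaic structure: several paragraphs of very similar length.
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    if len(paragraphs) >= 4:
        word_counts = [len(p.split()) for p in paragraphs]
        if max(word_counts) - min(word_counts) < 30:
            flags.append("uniform-paragraphs")
    return flags
```

Like the percentage scores of AI detectors, such flags are prompts for closer reading rather than verdicts, and would need recalibrating against each study’s own human baseline.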

As an aside, we are also concerned about the commodification of AI detection tools. There could be a future implication for the equity of research if more accurate detection tools become premium models and access is restricted to those who can afford them. If AI detection tools are only available to institutions and/or academics that have the resources to access them, then human participant research could become more expensive and less accessible. Again, this points to opportunities for the continued development of open-access software, and to the role that institutions might need to play in widening availability to researchers.

Nevertheless, if used cautiously, AI (somewhat ironically) can assist researchers in catching instances of more-than-human participation. As we experienced, however, all bot- and AI-detection reports need to be carefully considered in combination with, and in relation to, an analysis of flagged responses, through the lens of one’s theoretical paradigm, and methodological design.

A final note

As alluded to earlier, the recommendations we have provided are offered not just to individual researchers, but also – or even primarily – to research institutions. This is because there is a deeper issue of equity in responding to the challenges of AI-as-participant. What is notably absent from most of the literature on imposter or fraudulent research participation (except see Ridge et al. Citation2023) is the additional labour that is placed on the researcher to spot imposter participants. Aside from the laborious process of trawling through hundreds of bot-generated participant responses (e.g., as Drysdale et al. Citation2023b experienced), not much has been said about the labour that the threat of imposter participants or synthetic responses places on the researcher. While we offer suggestions to other researchers for dealing with the practicalities of this dilemma, we are concerned that simply adding more steps for the researcher to undertake independently will exacerbate known issues and pressures in qualitative research, such as underfunding, unrecognised labour (that does not fit the scientific model of research), burnout, and so forth. Our hope is that these recommendations are also taken up by university institutions, research centres, and ethics committees, to start developing policies and practices (including access to any necessary tools) that reduce the burden on individual researchers of having to adjust to AI without institutional support.

Furthermore, we want to acknowledge our viewpoint and our own subjective decision-making and assumptions that we relied on to confront and make sense of AI-as-participant. Engaging in this work required using the theoretical and practical tools at our disposal, but also relying on our own subjective ‘sense’ of what counts as human participation or determining what might be more-than-human. We acknowledge that any attempt to determine what is human invites possible ableist and Eurocentric perspectives. We therefore welcome scholars in other critical paradigms (e.g., crip theory; critical race studies) to enter this conversation, if they have not already, and to consider other limitations or even opportunities for the role of AI in qualitative research, to explore whether there are possibilities of AI supplementing or assisting human participation (i.e., in the case of neurodiverse participants, or second language English speakers using AI to facilitate written responses).

Conclusion

In setting out to conduct this study, we never anticipated that we would be contending with AI as an agentic player within the research process. Yet, as qualitative researchers, we are continually reflecting on – and grappling with – the ethical and methodological implications of the choices we make in our pursuit of understanding human experience. AI-as-participant just so happens to be one of the most contemporary issues within research, one that requires careful thought and a considered approach.

We found AI to be not only an unwelcome presence in our project, but also an unconvincing participant. As such, our research departs from prior studies (Dillion et al. Citation2023; Hämäläinen, Tavast, and Kunnari Citation2023; Schramowski et al. Citation2022) that claim AI can produce convincing humanlike responses and therefore replace human participants in research in certain instances. What distinguishes these studies from ours is theoretical grounding. Prior empirical work appears to have interpreted human phenomena from a largely positivist lens: where moral reasoning or heuristic assessments can be measured or generalised into scorecards and yes/no statements, rendering the human perspective into a (reductive) framework that can be more easily replicated by computational or synthetic interpretation. In contrast, our theory of feminist new materialism guided our interest in the affective and embodied experience of dating. We were drawn to the thrills and disappointments of dating, the frustrations and challenges of trying dating apps, as well as the opportunities that they might create for more-than-human modes of intimacy. In general, we found that AI poorly simulated these experiences. While the human participant responses were brimming with expressions of affect – not only between characters in the stories, but also towards us as researchers – the AI responses were notably flat and lacked affect or embodiment. Furthermore, the neatness and the lack of variation in word choice, tone, and narrative structure in the AI responses flagged a possible absence of human participation. Ultimately, human-generated stories encapsulated the ‘messiness’ that is inevitably characteristic of human experience (and human social research).

Our aim in contributing this reflexive discussion is to show that, in identifying and analysing the more-than-human involvement of AI in our research, we in fact discovered it to be less than human. While there is no denying that AI has come to be a part of the assemblage of research, we contend that it simply cannot replace the materiality and affective quality of human experience that is the focus of qualitative research in psychology and related social sciences.

Acknowledgement

Many thanks to Antonia Lyons, Deborah Lupton, Virginia Braun, and Clive Aspin for generously reading and commenting on an earlier draft of this article. Thanks also to Simon McCallum, Annemarie Jutel, and Bonnie Etherington for sharing their thoughts and comments.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was supported by the Faculty of Health, Victoria University of Wellington [226148]; Royal Society Te Apārangi [MFP-20-VUW-048].

Notes on contributors

Alexandra F. Gibson

Dr. Alexandra F. Gibson is a Senior Lecturer and Acting Programme Director of Health Psychology at Te Herenga Waka – Victoria University of Wellington. Ally currently holds a Marsden Fast-Start Fellowship with the Royal Society - Te Apārangi, leading research on people’s experiences of mobile dating during the COVID-19 pandemic in Aotearoa New Zealand. She has over 10 years’ experience conducting a range of qualitative research projects relating to health, illness, and the practice of medicine. Her work is interdisciplinary, bridging health psychology, the sociology of health and illness, and public health.

Alexander Beattie

Dr. Alexander Beattie is a Lecturer in Science Communication at the School for Science in Society, Te Herenga Waka-Victoria University of Wellington. His work explores media resistance, digital wellbeing and the media and technology industries. He is currently researching news avoidance and political attitudes towards science communicators in Aotearoa New Zealand.

Notes

1 Our larger team also includes Antonia Lyons, Deborah Lupton, and Clive Aspin.

2 Disciplines tend to disagree whether AI can understand context. Broadly speaking, computer scientists argue that AI can understand context by processing meaning between words to generate knowledge about the range of conditions that undergirds language and surround people or events. In contrast, researchers in the humanities contend that context is beyond the computational and draws from subjectivity, the senses, and/or embodied experiences.

3 Demographic details were collected at the start of participation. Participants’ own descriptors have been used.

4 Glossary of Te Reo Māori terms.

auare ake = no way

kia tere = hurry

puku = stomach

kua pai = it’s okay

waea atu ki ia = ring him

References

  • Alison, J. 2019. Meander, spiral, explode: Design and pattern in narrative. Catapult.
  • Anis, S., and J. A. French. 2023. Efficient, explicatory, and equitable: Why qualitative researchers should embrace AI, but cautiously. Business & Society 62 (6):1139–1144. doi:10.1177/00076503231163286.
  • Beattie, A., and R. Priestley. 2021. Fighting COVID-19 with the team of 5 million: Aotearoa New Zealand government communication during the 2020 lockdown. Social Sciences & Humanities Open 4 (1):100209. doi:10.1016/j.ssaho.2021.100209.
  • Binz, M., and E. Schulz. 2023. Using cognitive psychology to understand GPT-3 [article]. Proceedings of the National Academy of Sciences of the United States of America 120 (6):e2218523120. doi:10.1073/pnas.2218523120.
  • Braun, V., V. Clarke, N. Hayfield, H. Frith, H. Malson, N. Moller, and I. Shah-Beckley. 2019. Qualitative story completion: Possibilities and potential pitfalls. Qualitative Research in Psychology 16 (1):136–155. doi:10.1080/14780887.2018.1536395.
  • Chubb, L. 2023. Me and the machines: Possibilities and pitfalls of using artificial intelligence for qualitative data analysis. International Journal of Qualitative Methods 22:1–16. doi:10.1177/16094069231193593.
  • Clarke, V. 2005. ‘We’re all very liberal in our views’: Students talk about lesbian and gay parenting. Lesbian & Gay Psychology Review 6 (1):2–15.
  • Clarke, V., N. Hayfield, N. Moller, and I. Tischner. 2017. Once upon a time… qualitative story completion methods. In Collecting qualitative data: A practical guide to textual, media and virtual techniques, ed. V. Braun, V. Clarke, and D. Gray, 15–44. Cambridge: Cambridge University Press.
  • Corless, V. (2023). AI detectors have a bias against non-native English speakers. Advanced Science News. https://www.advancedsciencenews.com/ai-detectors-have-a-bias-against-non-native-english-speakers/
  • Dillion, D., N. Tandon, Y. Gu, and K. Gray. 2023. Can AI language models replace human participants? Trends in Cognitive Sciences 27 (7):597–600. doi:10.1016/j.tics.2023.04.008.
  • Dodds, S., and A. C. Hess. 2021. Adapting research methodology during COVID-19: Lessons for transformative service research. Journal of Service Management 32 (2):203–217. doi:10.1108/JOSM-05-2020-0153.
  • Drysdale, K., N. Wells, A. K. J. Smith, N. Gunatillaka, E. A. Sturgiss, and T. Wark. 2023a. Beyond the challenge to research integrity: Imposter participation in incentivised qualitative research and its impact on community engagement. Health Sociology Review 1–9. doi:10.1080/14461242.2023.2261433.
  • Drysdale, K., N. Wells, A. K. J. Smith, N. Gunatillaka, E. A. Sturgiss, and T. Wark. 2023b. Beyond the challenge to research integrity: Imposter participation in incentivised qualitative research and its impact on community engagement. Health Sociology Review 32 (3):372–80. doi:10.1080/14461242.2023.2261433.
  • Gibson, A. F., C. Lee, and S. Crabb. 2015. ‘Take ownership of your condition’: Australian women’s health and risk talk in relation to their experiences of breast cancer. Health, Risk & Society 17 (2):132–148. doi:10.1080/13698575.2015.1032215.
  • Godinho, A., C. Schell, and J. A. Cunningham. 2020. Out damn bot, out: Recruiting real people into substance use studies on the Internet. Substance Abuse 41 (1):3–5. doi:10.1080/08897077.2019.1691131.
  • Gray, L., C. MacDonald, N. Tassell-Matamua, J. Stanley, A. Kvalsvig, J. Zhang, S. Murton, S. Wiles, V. Puloka, J. Becker, et al. 2020. Wearing one for the team: Views and attitudes to face covering in new Zealand/Aotearoa during COVID-19 alert level 4 lockdown. Journal of Primary Health Care 12 (3):199–206. doi:10.1071/HC20089.
  • Griffin, M., R. J. Martino, C. LoSchiavo, C. Comer-Carruthers, K. D. Krause, C. B. Stults, and P. N. Halkitis. 2022. Ensuring survey research data integrity in the era of internet bots. Quality & Quantity 56 (4):2841–2852. doi:10.1007/s11135-021-01252-1.
  • Hämäläinen, P., M. Tavast, and A. Kunnari (2023). Evaluating large language models in generating synthetic HCI research data: A case study. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, Hamburg, Germany. doi:10.1145/3544548.3580688.
  • Hamilton, R. J., and B. J. Bowers. 2006. Internet recruitment and e-mail interviews in qualitative studies. Qualitative Health Research 16 (6):821–835. doi:10.1177/1049732306287599.
  • Hewitt, R. M., C. Purcell, and C. Bundy. 2022. Safeguarding online research integrity: Concerns from recent experience. British Journal of Dermatology 187 (6):999–1000. doi:10.1111/bjd.21765.
  • Hunt, E. (2021). ’Everyone is angry’: Ardern under pressure over latest Auckland Covid lockdown. The Guardian. https://www.theguardian.com/world/2021/mar/01/ardern-covid-lockdown-pressure-auckland-new-zealand
  • Jagger, G. 2015. The new materialism and sexual difference. Signs: Journal of Women in Culture and Society 40 (2):321–342. doi:10.1086/678190.
  • Jones, B. (2023, September 28). How generative AI tools help transform academic research. Forbes. https://www.forbes.com/sites/beatajones/2023/09/28/how-generative-ai-tools-help-transform-academic-research/?sh=77fafd3234fc
  • Jones, A., L. Caes, T. Rugg, M. Noel, S. Bateman, and A. Jordan. 2021. Challenging issues of integrity and identity of participants in non-synchronous online qualitative methods. Methods in Psychology 5:100072. doi:10.1016/j.metip.2021.100072.
  • Kirchner, J. H., L. Ahmad, S. Aaronson, and J. Leike (2023, November 7). New AI classifier for indicating AI-written text. https://openai.com/blog/new-ai-classifier-for-indicating-ai-written-text
  • Lupton, D. 2013. Risk and emotion: Towards an alternative theoretical perspective. Health, Risk & Society 15 (8):634–647. doi:10.1080/13698575.2013.848847.
  • Lupton, D. 2019a. Australian women’s use of health and fitness apps and wearable devices: A feminist new materialism analysis. Feminist Media Studies 20 (7):983–98. doi:10.1080/14680777.2019.1637916.
  • Lupton, D. 2019b. Toward a more-than-human analysis of digital health: Inspirations from feminist new materialism. Qualitative Health Research 29 (14):1998–2009. doi:10.1177/1049732319833368.
  • Lupton, D. 2020. Australian women’s use of health and fitness apps and wearable devices: A feminist new materialism analysis. Feminist Media Studies 20 (7):983–98. doi:10.1080/14680777.2019.1637916.
  • Lupton, D., and S. Maslen. 2018. The more-than-human sensorium: Sensory engagements with digital self-tracking technologies. The Senses & Society 13 (2):190–202. doi:10.1080/17458927.2018.1480177.
  • New Zealand Government. (2023). Unite Against COVID-19. New Zealand Government. https://covid19.govt.nz/
  • O’Donnell, N., R.-M. Satherley, E. Davey, and G. Bryan. 2023. Fraudulent participants in qualitative child health research: Identifying and reducing bot activity. Archives of Disease in Childhood 108 (5):415–416. doi:10.1136/archdischild-2022-325049.
  • Paasonen, S. 2016. Fickle focus: Distraction, affect and the production of value in social media. First Monday 21 (10). doi:10.5210/fm.v21i10.6949.
  • Qualtrics. (2023). ExpertReview Functionality. https://www.qualtrics.com/support/survey-platform/survey-module/survey-checker/quality-iq-functionality/
  • Ridge, D., L. Bullock, H. Causer, T. Fisher, S. Hider, T. Kingstone, L. Gray, R. Riley, N. Smyth, V. Silverwood, et al. 2023. ‘Imposter participants’ in online qualitative research, a new and increasing threat to data integrity? Health Expectations 26 (3):941–44. doi:10.1111/hex.13724.
  • Roehl, J., and D. Harland. 2022. Imposter participants: Overcoming methodological challenges related to balancing participant privacy with data quality when using online recruitment and data collection. The Qualitative Report 27 (11):2469–2485. doi:10.46743/2160-3715/2022.5475.
  • Salinas, M. R. 2022. Are your participants real? Dealing with fraud in recruiting older adults online. Western Journal of Nursing Research 45 (1):93–99. doi:10.1177/01939459221098468.
  • Samant, N. 2023. Prompt engineering: Crafting effective prompts for chat GPT. Medium. https://medium.com/@nikhilsamant4/prompt-engineering-crafting-effective-prompts-for-chat-gpt-dbeeb3735136.
  • Schramowski, P., C. Turan, N. Andersen, C. A. Rothkopf, and K. Kersting. 2022. Large pre-trained language models contain human-like biases of what is right and wrong to do. Nature Machine Intelligence 4 (3):258–268. doi:10.1038/s42256-022-00458-8.
  • Stahl, B. C., D. Schroeder, and R. Rodrigues. 2023. Unfair and illegal discrimination. In Ethics of artificial intelligence: Case studies and options for addressing ethical challenges, 9–23. Springer International Publishing. doi:10.1007/978-3-031-17040-9_2.
  • Synthetic Users. (n.d.). User Research. https://www.syntheticusers.com/
  • Teitcher, J. E. F., W. O. Bockting, J. A. Bauermeister, C. J. Hoefer, M. H. Miner, and R. L. Klitzman. 2015. Detecting, preventing, and responding to “fraudsters” in internet research: Ethics and tradeoffs. Journal of Law, Medicine & Ethics 43 (1):116–33. doi:10.1111/jlme.12200.
  • Towler, L., P. Bondaronek, T. Papakonstantinou, R. Amlôt, T. Chadborn, B. Ainsworth, and L. Yardley. 2023. Applying machine-learning to rapidly analyze large qualitative text datasets to inform the COVID-19 pandemic response: Comparing human and machine-assisted topic analysis techniques. Frontiers in Public Health 11:11. doi:10.3389/fpubh.2023.1268223.
  • Victoria University of Wellington. 2023. Student use of artificial intelligence. Victoria University of Wellington. https://www.wgtn.ac.nz/students/study/exams/academic-integrity/student-use-of-artificial-intelligence.