Research Article

More or less than human? Evaluating the role of AI-as-participant in online qualitative research

