Research Article

Development and Validation of the Perceived Deepfake Trustworthiness Questionnaire (PDTQ) in Three Languages

Received 01 May 2024, Accepted 22 Jul 2024, Published online: 05 Aug 2024

References

  • Ahmed, S. (2021). Fooled by the fakes: Cognitive differences in perceived claim accuracy and sharing intention of non-political deepfakes. Personality and Individual Differences, 182, 111074. https://doi.org/10.1016/j.paid.2021.111074
  • Alhabash, S., McAlister, A. R., Lou, C., & Hagerstrom, A. (2015). From clicks to behaviors: The mediating effect of intentions to like, share, and comment on the relationship between message evaluations and offline behavioral intentions. Journal of Interactive Advertising, 15(2), 82–96. https://doi.org/10.1080/15252019.2015.1071677
  • Allcott, H., & Gentzkow, M. (2017). Social media and fake news in the 2016 election. Journal of Economic Perspectives, 31(2), 211–236. https://doi.org/10.1257/jep.31.2.211
  • Al-Qaysi, N., Al-Emran, M., Al-Sharafi, M. A., Iranmanesh, M., Ahmad, A., & Mahmoud, M. A. (2024). Determinants of ChatGPT use and its impact on learning performance: An integrated model of BRT and TPB. International Journal of Human–Computer Interaction, 1–13. https://doi.org/10.1080/10447318.2024.2361210
  • American Psychological Association. (2023). 11 emerging trends for 2023. https://www.apa.org/monitor/2023/01/trends-report
  • American Psychological Association. (2024). 12 emerging trends for 2024. https://www.apa.org/monitor/2024/01/trends-report
  • Bornstein, B. H., & Tomkins, A. J. (2015). Institutional trust: An introduction. In B. H. Bornstein & A. J. Tomkins (Eds.), Motivating cooperation and compliance with authority: The role of institutional trust (pp. 1–12). Springer.
  • Bovet, A., & Makse, H. A. (2019). Influence of fake news in Twitter during the 2016 US presidential election. Nature Communications, 10(1), 7. https://doi.org/10.1038/s41467-018-07761-2
  • Chandler, J., Rosenzweig, C., Moss, A. J., Robinson, J., & Litman, L. (2019). Online panels in social science research: Expanding sampling methods beyond Mechanical Turk. Behavior Research Methods, 51(5), 2022–2038. https://doi.org/10.3758/s13428-019-01273-7
  • Chen, F. F. (2007). Sensitivity of goodness of fit indexes to lack of measurement invariance. Structural Equation Modeling: A Multidisciplinary Journal, 14(3), 464–504. https://doi.org/10.1080/10705510701301834
  • Chen, S., Xiao, L., & Kumar, A. (2023). Spread of misinformation on social media: What contributes to it and how to combat it. Computers in Human Behavior, 141, 107643. https://doi.org/10.1016/j.chb.2022.107643
  • Chi, O. H., Jia, S., Li, Y., & Gursoy, D. (2021). Developing a formative scale to measure consumers’ trust toward interaction with artificially intelligent (AI) social robots in service delivery. Computers in Human Behavior, 118, 106700. https://doi.org/10.1016/j.chb.2021.106700
  • Doss, C., Mondschein, J., Shu, D., Wolfson, T., Kopecky, D., Fitton-Kane, V. A., Bush, L., & Tucker, C. (2023). Deepfakes and scientific knowledge dissemination. Scientific Reports, 13(1), 13429. https://doi.org/10.1038/s41598-023-39944-3
  • Eristi, B., & Erdem, C. (2017). Development of a media literacy skills scale. Contemporary Educational Technology, 8(3), 249–267. https://doi.org/10.30935/cedtech/6199
  • European Commission. (2018). Action plan on disinformation: Commission contribution to the European Council. https://commission.europa.eu/publications/action-plan-disinformation-commission-contribution-european-council-13-14-december-2018_en
  • Field, A. (2013). Discovering statistics using SPSS. SAGE.
  • Gaillard, S., Oláh, Z. A., Venmans, S., & Burke, M. (2021). Countering the cognitive, linguistic, and psychological underpinnings behind susceptibility to fake news: A review of current literature with special focus on the role of age and digital literacy. Frontiers in Communication, 6, 661801. https://doi.org/10.3389/fcomm.2021.661801
  • Goh, D. H.-L. (2024). “He looks very real”: Media, knowledge, and search-based strategies for deepfake identification. Journal of the Association for Information Science and Technology, 75(6), 643–654. https://doi.org/10.1002/asi.24867
  • Götz, F. M., Maertens, R., Loomba, S., & van der Linden, S. (2023). Let the algorithm speak: How to use neural networks for automatic item generation in psychological scale development. Psychological Methods. Advance online publication. https://doi.org/10.1037/met0000540
  • Hameleers, M., van der Meer, T. G., & Dobber, T. (2022). You won’t believe what they just said! The effects of political deepfakes embedded as vox populi on social media. Social Media + Society, 8(3), 20563051221116346. https://doi.org/10.1177/20563051221116346
  • Hameleers, M., van der Meer, T. G., & Dobber, T. (2023). They would never say anything like this! Reasons to doubt political deepfakes. European Journal of Communication, 39(1), 56–70. https://doi.org/10.1177/0267323123118470
  • Hameleers, M., van der Meer, T. G., & Dobber, T. (2024). Distorting the truth versus blatant lies: The effects of different degrees of deception in domestic and foreign political deepfakes. Computers in Human Behavior, 152, 108096. https://doi.org/10.1016/j.chb.2023.108096
  • Hooper, D., Coughlan, J., & Mullen, M. R. (2008). Structural equation modeling: Guidelines for determining model fit. Electronic Journal of Business Research Methods, 6(1), 53–60.
  • Hunsley, J., & Mash, E. J. (2008). A guide to assessments that work. Oxford University Press. https://doi.org/10.1093/med:psych/9780195310641.001.0001
  • Hwang, Y., Ryu, J. Y., & Jeong, S. H. (2021). Effects of disinformation using deepfake: The protective effect of media literacy education. Cyberpsychology, Behavior and Social Networking, 24(3), 188–193. https://doi.org/10.1089/cyber.2020.0174
  • Iacobucci, S., De Cicco, R., Michetti, F., Palumbo, R., & Pagliaro, S. (2021). Deepfakes unmasked: The effects of information priming and bullshit receptivity on deepfake recognition and sharing intention. Cyberpsychology, Behavior and Social Networking, 24(3), 194–202. https://doi.org/10.1089/cyber.2020.0149
  • Jin, X., Zhang, Z., Gao, B., Gao, S., Zhou, W., Yu, N., & Wang, G. (2023). Assessing the perceived credibility of deepfakes: The impact of system-generated cues and video characteristics. New Media & Society. Advance online publication. https://doi.org/10.1177/14614448231199664
  • Kiyonari, T., Yamagishi, T., Cook, K. S., & Cheshire, C. (2006). Does trust beget trustworthiness? Trust and trustworthiness in two games and two cultures: A research note. Social Psychology Quarterly, 69(3), 270–283. https://doi.org/10.1177/019027250606900304
  • Köbis, N. C., Doležalová, B., & Soraperra, I. (2021). Fooled twice: People cannot detect deepfakes but think they can. iScience, 24(11), 103364. https://doi.org/10.1016/j.isci.2021.103364
  • Lankton, N. K., McKnight, D. H., & Tripp, J. (2015). Technology, humanness, and trust: Rethinking trust in technology. Journal of the Association for Information Systems, 16(10), 880–918. https://doi.org/10.17705/1jais.00411
  • Lawshe, C. H. (1975). A quantitative approach to content validity. Personnel Psychology, 28(4), 563–575. https://doi.org/10.1111/j.1744-6570.1975.tb01393.x
  • Lee, J., & Shin, S. Y. (2022). Something that they never said: Multimodal disinformation and source vividness in understanding the power of AI-enabled deepfake news. Media Psychology, 25(4), 531–546. https://doi.org/10.1080/15213269.2021.2007489
  • Lewandowsky, S. (2021). Climate change disinformation and how to combat it. Annual Review of Public Health, 42(1), 1–21. https://doi.org/10.1146/annurev-publhealth-090419-102409
  • Li, Z., Zhang, W., Zhang, H., Gao, R., & Fang, X. (2024). Global digital compact: A mechanism for the governance of online discriminatory and misleading content generation. International Journal of Human–Computer Interaction, 1–16. https://doi.org/10.1080/10447318.2024.2314350
  • Maniaci, M. R., & Rogge, R. D. (2014). Caring about carelessness: Participant inattention and its effects on research. Journal of Research in Personality, 48, 61–83. https://doi.org/10.1016/j.jrp.2013.09.008
  • Rubio, D. M., Berg-Weger, M., Tebb, S. S., Lee, E. S., & Rauch, S. (2003). Objectifying content validity: Conducting a content validity study in social work research. Social Work Research, 27(2), 94–104. https://doi.org/10.1093/swr/27.2.94
  • Neylan, J., Biddlestone, M., Roozenbeek, J., & van der Linden, S. (2023). How to “inoculate” against multimodal misinformation: A conceptual replication of Roozenbeek and van der Linden (2020). Scientific Reports, 13(1), 18273. https://doi.org/10.1038/s41598-023-43885-2
  • Ng, Y. L. (2023). An error management approach to perceived fakeness of deepfakes: The moderating role of perceived deepfake targeted politicians’ personality characteristics. Current Psychology, 42(29), 25658–25669. https://doi.org/10.1007/s12144-022-03621-x
  • O’Connor, B. P. (2000). SPSS and SAS programs for determining the number of components using parallel analysis and Velicer’s MAP test. Behavior Research Methods, Instruments, & Computers, 32(3), 396–402. https://doi.org/10.3758/BF03200807
  • Park, J., Kang, H., & Kim, H. Y. (2023). Human, do you think this painting is the work of a real artist? International Journal of Human–Computer Interaction, 1–18. https://doi.org/10.1080/10447318.2023.2232978
  • Pennycook, G., Cheyne, J. A., Barr, N., Koehler, D. J., & Fugelsang, J. A. (2015). On the reception and detection of pseudo-profound bullshit. Judgment and Decision Making, 10(6), 549–563. https://doi.org/10.1017/S1930297500006999
  • Pennycook, G., & Rand, D. G. (2020). Who falls for fake news? The roles of bullshit receptivity, overclaiming, familiarity, and analytic thinking. Journal of Personality, 88(2), 185–200. https://doi.org/10.1111/jopy.12476
  • Pew Research Center. (2022). US adults under 30 now trust information from social media almost as much as from national news outlets. https://www.pewresearch.org/short-reads/2022/10/27/u-s-adults-under-30-now-trust-information-from-social-media-almost-as-much-as-from-national-news-outlets/
  • Pew Research Center. (2023). Social media and news fact sheet. https://www.pewresearch.org/journalism/fact-sheet/social-media-and-news-fact-sheet/
  • Plohl, N., & Musil, B. (2021). Modeling compliance with COVID-19 prevention guidelines: The critical role of trust in science. Psychology, Health & Medicine, 26(1), 1–12. https://doi.org/10.1080/13548506.2020.1772988
  • Roozenbeek, J., Schneider, C. R., Dryhurst, S., Kerr, J., Freeman, A. L., Recchia, G., van der Bles, A. M., & Van Der Linden, S. (2020). Susceptibility to misinformation about COVID-19 around the world. Royal Society Open Science, 7(10), 201199. https://doi.org/10.1098/rsos.201199
  • Rousseau, D. M., Sitkin, S. B., Burt, R. S., & Camerer, C. (1998). Not so different after all: A cross-discipline view of trust. Academy of Management Review, 23(3), 393–404. https://doi.org/10.5465/amr.1998.926617
  • Samuels, P. (2017). Advice on exploratory factor analysis. Centre for Academic Success, Birmingham City University.
  • Shin, S. Y., & Lee, J. (2022). The effect of deepfake video on news credibility and corrective influence of cost-based knowledge about deepfakes. Digital Journalism, 10(3), 412–432. https://doi.org/10.1080/21670811.2022.2026797
  • Sirota, M., & Juanchich, M. (2018). Effect of response format on cognitive reflection: Validating a two- and four-option multiple choice question version of the Cognitive Reflection Test. Behavior Research Methods, 50(6), 2511–2522. https://doi.org/10.3758/s13428-018-1029-4
  • Sison, A. J. G., Daza, M. T., Gozalo-Brizuela, R., & Garrido-Merchán, E. C. (2023). ChatGPT: More than a “weapon of mass deception”: Ethical challenges and responses from the human-centered artificial intelligence (HCAI) perspective. International Journal of Human–Computer Interaction, 1–20. https://doi.org/10.1080/10447318.2023.2225931
  • Somoray, K., & Miller, D. J. (2023). Providing detection strategies to improve human detection of deepfakes: An experimental study. Computers in Human Behavior, 149, 107917. https://doi.org/10.1016/j.chb.2023.107917
  • Stanescu, G. (2022). Ukraine conflict: The challenge of informational war. Social Sciences and Education Research Review, 9(1), 146–148. https://doi.org/10.5281/zenodo.6795674
  • Suen, H. Y., & Hung, K. E. (2023). Building trust in automatic video interviews using various AI interfaces: Tangibility, immediacy, and transparency. Computers in Human Behavior, 143, 107713. https://doi.org/10.1016/j.chb.2023.107713
  • Sütterlin, S., Ask, T. F., Mägerle, S., Glöckler, S., Wolf, L., Schray, J., Chandi, A., Bursac, T., Khodabakhsh, A., Knox, B. J., Canham, M., & Lugo, R. G. (2023). Individual deep fake recognition skills are affected by viewer’s political orientation, agreement with content and device used. In International Conference on Human-Computer Interaction (pp. 269–284). Springer Nature Switzerland.
  • Tabachnick, B. G., Fidell, L. S., & Ullman, J. B. (2013). Using multivariate statistics. Pearson.
  • Tahir, R., Batool, B., Jamshed, H., Jameel, M., Anwar, M., Ahmed, F., Zaffar, M. A., & Zaffar, M. F. (2021). Seeing is believing: Exploring perceptual differences in deepfake videos [Paper presentation]. Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (pp. 1–16). Association for Computing Machinery. https://doi.org/10.1145/3411764.3445699
  • Talhelm, T., Haidt, J., Oishi, S., Zhang, X., Miao, F. F., & Chen, S. (2015). Liberals think more analytically (more “WEIRD”) than conservatives. Personality & Social Psychology Bulletin, 41(2), 250–267. https://doi.org/10.1177/0146167214563672
  • Thaw, N. N., July, T., Wai, A. N., Goh, D. H. L., & Chua, A. Y. (2021). How are deepfake videos detected? An initial user study [Paper presentation]. Proceedings of the 23rd HCI International Conference, HCII 2021, Part 1 (pp. 631–636). Springer International Publishing. http://dx.doi.org/10.1007/978-3-030-78635-9_80
  • Vaccari, C., & Chadwick, A. (2020). Deepfakes and disinformation: Exploring the impact of synthetic political video on deception, uncertainty, and trust in news. Social Media + Society, 6(1), 205630512090340. https://doi.org/10.1177/2056305120903408
  • Valori, I., Jung, M. M., & Fairhurst, M. T. (2023). Social touch to build trust: A systematic review of technology-mediated and unmediated interactions. Computers in Human Behavior, 153, 108121. https://doi.org/10.1016/j.chb.2023.108121
  • van der Linden, S. (2022). Misinformation: Susceptibility, spread, and interventions to immunize the public. Nature Medicine, 28(3), 460–467. https://doi.org/10.1038/s41591-022-01713-6
  • van der Linden, S., Leiserowitz, A., Rosenthal, S., & Maibach, E. (2017). Inoculating the public against misinformation about climate change. Global Challenges (Hoboken, NJ), 1(2), 1600008. https://doi.org/10.1002/gch2.201600008
  • Wilson, F. R., Pan, W., & Schumsky, D. A. (2012). Recalculation of the critical values for Lawshe’s content validity ratio. Measurement and Evaluation in Counseling and Development, 45(3), 197–210. https://doi.org/10.1177/0748175612440286
  • Xu, X., Huang, Y., & Apuke, O. D. (2024). A quasi experiment on the effectiveness of social media literacy skills training for combating fake news proliferation. International Journal of Human–Computer Interaction, 1–11. https://doi.org/10.1080/10447318.2024.2311973
  • Zarocostas, J. (2020). How to fight an infodemic. The Lancet, 395(10225), 676. https://doi.org/10.1016/S0140-6736(20)30461-X