References
- Ali, C. (2021). Do people actually care about data privacy in messaging apps? Userlike. https://www.userlike.com/en/blog/messaging-data-privacy-survey
- Alwin, D. F. (2016). Survey data quality and measurement precision. In C. Wolf, D. Joye, T. Smith, & Y. Fu (Eds.), The SAGE handbook of survey methodology (pp. 527–557). SAGE Publications Ltd. https://doi.org/10.4135/9781473957893
- Araujo, T. (2018). Living up to the chatbot hype: The influence of anthropomorphic design cues and communicative agency framing on conversational agent and company perceptions. Computers in Human Behavior, 85, 183–189. https://doi.org/10.1016/j.chb.2018.03.051
- Araujo, T. (2020). Conversational agent research toolkit: An alternative for creating and managing chatbots for experimental research. Computational Communication Research, 2(1), 35–51. https://doi.org/10.5117/CCR2020.1.002.ARAU
- Barrios, M., Villarroya, A., Borrego, Á., & Ollé, C. (2011). Response rates and data quality in web and mail surveys administered to PhD holders. Social Science Computer Review, 29(2), 208–220. https://doi.org/10.1177/0894439310368031
- Baruch, Y., & Holtom, B. C. (2008). Survey response rate levels and trends in organizational research. Human Relations, 61(8), 1139–1160. https://doi.org/10.1177/0018726708094863
- Bell, S., Wood, C., & Sarkar, A. (2019). Perceptions of chatbots in therapy. Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems, 1–6. https://doi.org/10.1145/3290607.3313072
- Börger, T. (2016). Are fast responses more random? Testing the effect of response time on scale in an online choice experiment. Environmental and Resource Economics, 65(2), 389–413. https://doi.org/10.1007/s10640-015-9905-1
- Bosnjak, M., Metzger, G., & Gräf, L. (2010). Understanding the willingness to participate in mobile surveys: Exploring the role of utilitarian, affective, hedonic, social, self-expressive, and trust-related factors. Social Science Computer Review, 28(3), 350–370. https://doi.org/10.1177/0894439309353395
- Celino, I., & Re Calegari, G. (2020). Submitting surveys via a conversational interface: An evaluation of user acceptance and approach effectiveness. International Journal of Human-Computer Studies, 139, 102410. https://doi.org/10.1016/j.ijhcs.2020.102410
- Cella, D., Hahn, E. A., Jensen, S. E., Butt, Z., Nowinski, C. J., Rothrock, N., & Lohr, K. N. (2015). Patient-reported outcomes in performance measurement. RTI Press.
- Chau, P. Y. K. (1999). On the use of construct reliability in MIS research: A meta-analysis. Information & Management, 35(4), 217–227. https://doi.org/10.1016/S0378-7206(98)00089-5
- Crutzen, R., Peters, G. Y., Portugal, S. D., Fisser, E. M., & Grolleman, J. J. (2011). An artificially intelligent chat agent that answers adolescents' questions related to sex, drugs, and alcohol: An exploratory study. Journal of Adolescent Health, 48(5), 514–519. https://doi.org/10.1016/j.jadohealth.2010.09.002
- Curtin, R., Presser, S., & Singer, E. (2000). The effects of response rate changes on the Index of Consumer Sentiment. Public Opinion Quarterly, 64(4), 413–428. https://doi.org/10.1086/318638
- Daikeler, J., Bošnjak, M., & Lozar Manfreda, K. (2020). Web versus other survey modes: An updated and extended meta-analysis comparing response rates. Journal of Survey Statistics and Methodology, 8(3), 513–539. https://doi.org/10.1093/jssam/smz008
- de Leeuw, E. D. (2018). Internet surveys as part of a mixed-mode design. In M. Das, P. Ester, & L. Kaczmirek (Eds.), Social and behavioral research and the internet (1st ed., pp. 45–76). Routledge. https://doi.org/10.4324/9780203844922-3
- Denniston, M. M., Brener, N. D., Kann, L., Eaton, D. K., McManus, T., Kyle, T. M., Roberts, A. M., Flint, K. H., & Ross, J. G. (2010). Comparison of paper-and-pencil versus web administration of the Youth Risk Behavior Survey (YRBS): Participation, data quality, and perceived privacy and anonymity. Computers in Human Behavior, 26(5), 1054–1060. https://doi.org/10.1016/j.chb.2010.03.006
- Denscombe, M. (2008). The length of responses to open-ended questions: A comparison of online and paper questionnaires in terms of a mode effect. Social Science Computer Review, 26(3), 359–368. https://doi.org/10.1177/0894439307309671
- Denscombe, M. (2009). Item non‐response rates: A comparison of online and paper questionnaires. International Journal of Social Research Methodology, 12(4), 281–291. https://doi.org/10.1080/13645570802054706
- Diedenhofen, B., & Musch, J. (2016). cocron: A web interface and R package for the statistical comparison of Cronbach's alpha coefficients. International Journal of Internet Science, 11(1), 51–60.
- Dillman, D. A., Smyth, J. D., & Christian, L. M. (2014). Internet, phone, mail, and mixed-mode surveys: The tailored design method (4th ed.). John Wiley & Sons.
- Edwards, A., Edwards, C., Spence, P. R., Harris, C., & Gambino, A. (2016). Robots in the classroom: Differences in students' perceptions of credibility and learning between "teacher as robot" and "robot as teacher." Computers in Human Behavior, 65, 627–634. https://doi.org/10.1016/j.chb.2016.06.005
- Felderer, B., Kirchner, A., & Kreuter, F. (2019). The effect of survey mode on data quality: Disentangling nonresponse and measurement error bias. Journal of Official Statistics, 35(1), 93–115. http://dx.doi.org/10.2478/jos-2019-0005
- Flavián, C., & Guinalíu, M. (2006). Consumer trust, perceived security and privacy policy. Industrial Management & Data Systems, 106(5), 601–620. https://doi.org/10.1108/02635570610666403
- Fosnacht, K., Sarraf, S., Howe, E., & Peck, L. K. (2017). How important are high response rates for college surveys? The Review of Higher Education, 40(2), 245–265. https://doi.org/10.1353/rhe.2017.0003
- Fricker, S., Galesic, M., Tourangeau, R., & Yan, T. (2005). An experimental comparison of web and telephone surveys. Public Opinion Quarterly, 69(3), 370–392. https://doi.org/10.1093/poq/nfi027
- Galesic, M., & Bosnjak, M. (2009). Effects of questionnaire length on participation and indicators of response quality in a web survey. Public Opinion Quarterly, 73(2), 349–360. https://doi.org/10.1093/poq/nfp031
- Garton, S., & Copland, F. (2010). ‘I like this interview; I get cakes and cats!’: The effect of prior relationships on interview talk. Qualitative Research, 10(5), 533–551. https://doi.org/10.1177/1468794110375231
- Geisen, E. (2022). Improve data quality by using a commitment request instead of attention checks. Qualtrics. https://www.qualtrics.com/blog/attention-checks-and-data-quality/
- Go, E., & Sundar, S. S. (2019). Humanizing chatbots: The effects of visual, identity and conversational cues on humanness perceptions. Computers in Human Behavior, 97, 304–316. https://doi.org/10.1016/j.chb.2019.01.020
- Greenlaw, C., & Brown-Welty, S. (2009). A comparison of web-based and paper-based survey methods: Testing assumptions of survey mode and response cost. Evaluation Review, 33(5), 464–480. https://doi.org/10.1177/0193841X09340214
- Griffis, S. E., Goldsby, T. J., & Cooper, M. (2003). Web-based and mail surveys: A comparison of response, data, and cost. Journal of Business Logistics, 24(2), 237–258. https://doi.org/10.1002/j.2158-1592.2003.tb00053.x
- Griol, D., Carbo, J., & Molina, J. M. (2013). A statistical simulation technique to develop and evaluate conversational agents. AI Communications, 26(4), 355–371. https://doi.org/10.3233/AIC-130573
- Groves, R. M., & Peytcheva, E. (2008). The impact of nonresponse rates on nonresponse bias: A meta-analysis. Public Opinion Quarterly, 72(2), 167–189. https://doi.org/10.1093/poq/nfn011
- Ha, L. S., & Zhang, C. (2019). Are computers better than smartphones for web survey responses? Online Information Review, 43(3), 350–368. https://doi.org/10.1108/OIR-11-2017-0322
- Hanna, R. C., Weinberg, B., Dant, R. P., & Berger, P. D. (2005). Do internet-based surveys increase personal self-disclosure? Journal of Database Marketing & Customer Strategy Management, 12(4), 342–356. https://doi.org/10.1057/palgrave.dbm.3240270
- Harris, M. A. (2010). Invited commentary: Evaluating epidemiologic research methods—the importance of response rate calculation. American Journal of Epidemiology, 172(6), 645–647. https://doi.org/10.1093/aje/kwq219
- Hartono, E., Holsapple, C. W., Kim, K., Na, K., & Simpson, J. T. (2014). Measuring perceived security in B2C electronic commerce website usage: A respecification and validation. Decision Support Systems, 62, 11–21. https://doi.org/10.1016/j.dss.2014.02.006
- Heerwegh, D. (2009). Mode differences between face-to-face and web surveys: An experimental investigation of data quality and social desirability effects. International Journal of Public Opinion Research, 21(1), 111–121. https://doi.org/10.1093/ijpor/edn054
- Henson, R. K. (2001). Understanding internal consistency reliability estimates: A conceptual primer on coefficient alpha. Measurement and Evaluation in Counseling and Development, 34(3), 177–189. https://doi.org/10.1080/07481756.2002.12069034
- Hill, J., Randolph Ford, W., & Farreras, I. G. (2015). Real conversations with artificial intelligence: A comparison between human–human online conversations and human–chatbot conversations. Computers in Human Behavior, 49, 245–250. https://doi.org/10.1016/j.chb.2015.02.026
- Hox, J. J., Moerbeek, M., & van de Schoot, R. (2017). Multilevel analysis: Techniques and applications (3rd ed.). Routledge.
- Ischen, C., Araujo, T., van Noort, G., Voorveld, H., & Smit, E. (2020). "I am here to assist you today": The role of entity, interactivity and experiential perceptions in chatbot persuasion. Journal of Broadcasting & Electronic Media, 64(4), 615–639. https://doi.org/10.1080/08838151.2020.1834297
- Israel, G. D. (2010). Effects of answer space size on responses to open-ended questions in mail surveys. Journal of Official Statistics, 26(2), 271–285.
- Jain, M., Kumar, P., Kota, R., & Patel, S. N. (2018). Evaluating and informing the design of chatbots. Proceedings of the 2018 Designing Interactive Systems Conference, 895–906. https://doi.org/10.1145/3196709.3196735
- Jones, B., & Jones, R. (2019). Public service chatbots: Automating conversation with BBC News. Digital Journalism, 7(8), 1032–1053. https://doi.org/10.1080/21670811.2019.1609371
- Kang, L., Wang, X., Tan, C.-H., & Zhao, J. L. (2014). Understanding the antecedents and consequences of live-chat use in e-commerce context. In F. F.-H. Nah (Ed.), HCI in Business (pp. 504–515). Springer International Publishing. https://doi.org/10.1007/978-3-319-07293-7_49
- Kim, S., Lee, J., & Gweon, G. (2019). Comparing data from chatbot and web surveys: Effects of platform and conversational style on survey response quality. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 1–12. https://doi.org/10.1145/3290605.3300316
- Kimberlin, C. L., & Winterstein, A. G. (2008). Validity and reliability of measurement instruments used in research. American Journal of Health-System Pharmacy, 65(23), 2276–2284. https://doi.org/10.2146/ajhp070364
- Krishnamoorthy, K., & Lee, M. (2014). Improved tests for the equality of normal coefficients of variation. Computational Statistics, 29(1), 215–232. https://doi.org/10.1007/s00180-013-0445-2
- Krosnick, J. A. (1991). Response strategies for coping with the cognitive demands of attitude measures in surveys. Applied Cognitive Psychology, 5(3), 213–236. https://doi.org/10.1002/acp.2350050305
- Kung, F. Y. H., Kwok, N., & Brown, D. J. (2018). Are attention check questions a threat to scale validity? Applied Psychology, 67(2), 264–283. https://doi.org/10.1111/apps.12108
- Lee, Y.-C., Yamashita, N., Huang, Y., & Fu, W. (2020). "I hear you, I feel you": Encouraging deep self-disclosure through a chatbot. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, 1–12. https://doi.org/10.1145/3313831.3376175
- Malhotra, N. (2008). Completion time and response order effects in web surveys. Public Opinion Quarterly, 72(5), 914–934. https://doi.org/10.1093/poq/nfn050
- Manfreda, K. L., Bosnjak, M., Berzelak, J., Haas, I., & Vehovar, V. (2008). Web surveys versus other survey modes: A meta-analysis comparing response rates. International Journal of Market Research, 50(1), 79–104. https://doi.org/10.1177/147078530805000107
- Marwick, B., & Krishnamoorthy, K. (2019). cvequality: Tests for the Equality of Coefficients of Variation from Multiple Groups (0.2.0). https://CRAN.R-project.org/package=cvequality
- Nass, C., & Steuer, J. (1993). Voices, boxes, and sources of messages. Human Communication Research, 19(4), 504–527. https://doi.org/10.1111/j.1468-2958.1993.tb00311.x
- Ohme, J., Araujo, T., Zarouali, B., & de Vreese, C. H. (2022). Frequencies, drivers, and solutions to news non-attendance: Investigating differences between low news usage and news (topic) avoidance with conversational agents. Journalism Studies, 1–21. https://doi.org/10.1080/1461670X.2022.2102533
- Owton, H., & Allen-Collinson, J. (2014). Close but not too close: Friendship as method(ology) in ethnographic research encounters. Journal of Contemporary Ethnography, 43(3), 283–305. https://doi.org/10.1177/0891241613495410
- Paas, F. G. W. C., & Van Merriënboer, J. J. G. (1994). Variability of worked examples and transfer of geometrical problem-solving skills: A cognitive-load approach. Journal of Educational Psychology, 86(1), 122–133. https://doi.org/10.1037/0022-0663.86.1.122
- Park, S. (2015). The effects of social cue principles on cognitive load, situational interest, motivation, and achievement in pedagogical agent multimedia learning. Journal of Educational Technology & Society, 18(4), 211–229. https://www.jstor.org/stable/jeductechsoci.18.4.211
- Parnham, C. (2021). Using Chatbots for better customer engagement – benefits, use cases and examples. Cognition. https://cognition.certussolutions.com/blog/using-chatbots-for-better-customer-engagement
- Pedersen, M. J., & Nielsen, C. V. (2016). Improving Survey Response Rates in Online Panels. Social Science Computer Review, 34(2), 229–243. https://doi.org/10.1177/0894439314563916
- Rogelberg, S. G., Fisher, G. G., Maynard, D. C., Hakel, M. D., & Horvath, M. (2001). Attitudes toward surveys: Development of a measure and its relationship to respondent behavior. Organizational Research Methods, 4(1), 3–25. https://doi.org/10.1177/109442810141001
- Rosseel, Y. (2012). Lavaan: An R package for structural equation modeling. Journal of Statistical Software, 48(2). https://doi.org/10.18637/jss.v048.i02
- Sammut, R., Griscti, O., & Norman, I. J. (2021). Strategies to improve response rates to web surveys: A literature review. International Journal of Nursing Studies, 123, 104058. https://doi.org/10.1016/j.ijnurstu.2021.104058
- Santesso, N., Barbara, A. M., Kamran, R., Akkinepally, S., Cairney, J., Akl, E. A., & Schünemann, H. J. (2020). Conclusions from surveys may not consider important biases: A systematic survey of surveys. Journal of Clinical Epidemiology, 122, 108–114. https://doi.org/10.1016/j.jclinepi.2020.01.019
- Sax, L. J., Gilmartin, S. K., & Bryant, A. N. (2003). Assessing response rates and nonresponse bias in web and paper surveys. Research in Higher Education, 44(4), 409–432. https://doi.org/10.1023/A:1024232915870
- Schmitt, N. (1996). Uses and abuses of coefficient alpha. Psychological Assessment, 8(4), 350–353. https://doi.org/10.1037/1040-3590.8.4.350
- Schuetzler, R. M., Giboney, J. S., Grimes, G. M., & Nunamaker, J. F. (2018). The influence of conversational agent embodiment and conversational relevance on socially desirable responding. Decision Support Systems, 114, 94–102. https://doi.org/10.1016/j.dss.2018.08.011
- Shamon, H., & Berning, C. (2019). Attention check items and instructions in online surveys with incentivized and non-incentivized samples: Boon or bane for data quality? (SSRN Scholarly Paper ID 3549789), Social Science Research Network. https://doi.org/10.2139/ssrn.3549789
- Shen, L., & Dillard, J. P. (2005). Psychometric properties of the Hong Psychological Reactance Scale. Journal of Personality Assessment, 85(1), 74–81. https://doi.org/10.1207/s15327752jpa8501_07
- Shin, E., Johnson, T. P., & Rao, K. (2012). Survey mode effects on data quality: Comparison of web and mail modes in a U.S. national panel survey. Social Science Computer Review, 30(2), 212–228. https://doi.org/10.1177/0894439311404508
- Smyth, J. D., Dillman, D. A., Christian, L. M., & Mcbride, M. (2009). Open-ended questions in web surveys: Can increasing the size of answer boxes and providing extra verbal instructions improve response quality? Public Opinion Quarterly, 73(2), 325–337. https://doi.org/10.1093/poq/nfp029
- Stanton, J. M. (1998). An empirical assessment of data collection using the internet. Personnel Psychology, 51(3), 709–725. https://doi.org/10.1111/j.1744-6570.1998.tb00259.x
- Sundar, S. S., & Kim, J. (2019). Machine heuristic: When we trust computers more than humans with our personal information. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 1–9. https://doi.org/10.1145/3290605.3300768
- Van den Broeck, E., Zarouali, B., & Poels, K. (2019). Chatbot advertising effectiveness: When does the message get through? Computers in Human Behavior, 98, 150–157. https://doi.org/10.1016/j.chb.2019.04.009
- van der Goot, M. J., & Pilgrim, T. (2020). Exploring age differences in motivations for and acceptance of chatbot communication in a customer service context. In A. Følstad, T. Araujo, S. Papadopoulos, E. L.-C. Law, O.-C. Granmo, E. Luger, & P. B. Brandtzaeg (Eds.), Chatbot research and design (Vol. 11970, pp. 173–186). Springer International Publishing. https://doi.org/10.1007/978-3-030-39540-7_12
- Wambsganss, T., Winkler, R., Söllner, M., & Leimeister, J. M. (2020). A conversational agent to improve response quality in course evaluations. Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems, 1–9. https://doi.org/10.1145/3334480.3382805
- Weizenbaum, J. (1966). ELIZA—a computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1), 36–45. https://doi.org/10.1145/365153.365168
- Wickham, H., Averick, M., Bryan, J., Chang, W., McGowan, L., François, R., Grolemund, G., Hayes, A., Henry, L., Hester, J., Kuhn, M., Pedersen, T., Miller, E., Bache, S., Müller, K., Ooms, J., Robinson, D., Seidel, D., Spinu, V., & Yutani, H. (2019). Welcome to the Tidyverse. Journal of Open Source Software, 4(43), 1686. https://doi.org/10.21105/joss.01686
- Wijenayake, S., van Berkel, N., & Goncalves, J. (2020). Bots for research: Minimising the experimenter effect. Adjunct Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, 1–9. https://vbn.aau.dk/en/publications/bots-for-research-minimising-the-experimenter-effect
- Wikman, A., & Wärneryd, B. (1990). Measurement errors in survey questions: Explaining response variability. Social Indicators Research, 22(2), 199–212. https://doi.org/10.1007/BF00354840
- Xiao, Z., Zhou, M. X., Liao, Q. V., Mark, G., Chi, C., Chen, W., & Yang, H. (2020). Tell me about yourself: Using an AI-powered chatbot to conduct conversational surveys with open-ended questions. ACM Transactions on Computer-Human Interaction, 27(3), 1–37. https://doi.org/10.1145/3381804
- Zarouali, B., Brosius, A., Helberger, N., & de Vreese, C. H. (2021). WhatsApp marketing: A study on WhatsApp brand communication and the role of trust in self-disclosure. International Journal of Communication, 15, 252–276.
- Zarouali, B., Makhortykh, M., Bastian, M., & Araujo, T. (2020). Overcoming polarization with chatbot news? Investigating the impact of news content containing opposing views on agreement and credibility. European Journal of Communication. https://doi.org/10.1177/0267323120940908
- Zarouali, B., Van den Broeck, E., Walrave, M., & Poels, K. (2018). Predicting consumer responses to a chatbot on Facebook. Cyberpsychology, Behavior and Social Networking, 21(8), 491–497. https://doi.org/10.1089/cyber.2017.0518