
Ask a Further Question or Give a List? How Should Conversational Agents Reply to Users’ Uncertain Queries

Pages 1087-1101 | Received 05 May 2022, Accepted 29 Sep 2022, Published online: 14 Oct 2022

