ABSTRACT
Chatbots provide functional and social support in various contexts and are often designed with humanlike features. This study examines how chatbots’ assigned names (humanlike vs. neutral vs. machinelike) and communication contexts (functional vs. social) influence users’ willingness to disclose personal information. We conducted a 3 × 2 between-subjects online experiment with random assignment of 299 participants. The results showed that a functional communication context elicited greater willingness to disclose personal information, whereas the effect of chatbot names was not significant. These findings extend the Computers Are Social Actors paradigm and may inspire the exploration of conditional effects in privacy research. Practical implications for context-aware design are discussed.
Disclosure statement
No potential conflict of interest was reported by the authors.
Additional information
Notes on contributors
Weizi Liu
Weizi Liu is a Ph.D. candidate in Informatics at the University of Illinois at Urbana-Champaign. She studies trust, acceptance, and social dynamics in human-machine communication.
Kun Xu
Kun Xu (Ph.D., Temple University, 2019) is an Assistant Professor in Emerging Media at the University of Florida. His research broadly focuses on human-computer interaction, human-robot interaction, computer-mediated communication, and the psychological processing of emerging technologies.
Mike Z. Yao
Mike Z. Yao (Ph.D., University of California, Santa Barbara, 2006) is a Professor in Digital Media at the University of Illinois at Urbana-Champaign. His research focuses on the social and psychological impacts of interactive digital media.