
The AI empathy effect: a mechanism of emotional contagion

ABSTRACT

While advances in technology have led to the widespread use of artificial intelligence (AI) in frontline services, the AI empathy effect is often overlooked. Based on emotional contagion theory, this research uses experimental designs to explore the impacts of AI emotional mimicry and empathic concern in service success and service failure contexts, respectively. Results indicate that AI with high emotional mimicry and empathic concern elicited higher arousal and pleasure, increasing tourists' continuous usage intention. Arousal and pleasure played significant mediating roles in service success. However, while empathic concern significantly affected continuous usage intention through pleasure, arousal did not have a similar effect. The joint effects of extrinsic anthropomorphism (appearance) and intrinsic anthropomorphism (emotional mimicry) were significant in a successful service context, but not in service failure. These results have important implications for the development and promotion of AI and enrich the theoretical applications of emotional contagion theory and empathy.

Introduction

Artificial intelligence (AI) and machine learning have gradually become popular in the service industry, with tourism benefitting the most (Filieri et al., 2021; Shan et al., 2023). Encompassing smart and self-service systems, chatbots, and service robots, AI devices have found extensive application in the provision of frontline services (Chi et al., 2022). For instance, the service robot named "Pepper" actively interacts with customers as it navigates the Mandarin Oriental lobby in Las Vegas, addressing guests' requests (Choi et al., 2021). Artificial intelligence comprises four types of intelligence: mechanical, analytical, intuitive, and empathic. Empathic intelligence represents the highest level and holds particular significance in service interactions (Huang & Rust, 2018). Although researchers recognize that AI applications can reshape service interactions and, consequently, affect customers' experiences and behaviors in technology-based service encounters (Li et al., 2021; Yang et al., 2024), there has been little investigation of the role of empathy in AI service robots during the service interaction process.

Human emotion is intrinsic, rendering emotional understanding a crucial element of AI striving for human-like capabilities. Hortensius et al. (2018) suggest that future AI agents should provide not just intellectual value, but also affective benefit. Accordingly, the recognition of emotions in conversational contexts is emerging as a progressively popular research topic in natural language processing, and the enhancement of AI robots' emotional performance is gradually becoming a reality. Compared to the inherent spontaneity of human emotion, AI emotion is an emotional response based on human emotion; it remains, in essence, the recognition of and feedback on human emotion. This recognition is not limited to textual language, but can include facial expression, voice, and body language, among other signals (Poria et al., 2017). The emotional capabilities of AI have been applied and tested in practice. For example, a hotel greeting bot can detect a guest's emotions through their body movements and adapt the interaction accordingly (Liu-Thompkins et al., 2022). Companies such as Beyond Verbal and Cogito have used AI to analyze emotions, such as anger, anxiety, happiness, and contentment, by recognizing changes in vocal register. This enables voice hardware and software applications that allow the AI to interact with the user on an emotional level, similarly to humans (Feldman, 2016). Given the dominance of customers' social needs in initiating and facilitating relationships with AI chatbots, the concept of AI empathy deserves further exploration (Pentina et al., 2023).

Moro et al. (2019) underlined that emotional expression in human-robot interactions (HRIs) is closely linked to the interactional context. It follows that robots should exhibit emotions precisely at the junctures when humans would naturally anticipate an emotional reaction. The theory of emotional contagion posits that, in human interactions, emotions transfer from one person to another in an automatic, unconscious, and uncontrollable manner (Hatfield et al., 1993). This phenomenon also extends to HRIs (Chuah & Yu, 2021). However, there is limited information on how service robot interactions effectively transmit the robot's emotions to the consumer and subsequently impact consumer decision-making (Liao & Huang, 2024). In addition, how service robots select the appropriate empathic response for a particular service situation is a crucial question (Paiva et al., 2014). If a robot's response to a human includes emotional disclosure (Ho et al., 2018) or emojis (Beattie et al., 2020), chatbots can emotionally act like human staff (Pentina et al., 2023). Moreover, it has been determined that AI empathy can compensate for service failure and positively affect people's willingness to continue using the service (Lv et al., 2022). Although several studies have explored the positive effects of AI empathy (Kim & Hur, 2023; Pelau et al., 2021; Schepers et al., 2022; Wirtz et al., 2018), the contexts studied are quite limited. Tourists' continuing intention to use AI is the key to assessing the success or failure of AI services, and it is also the premise upon which managers can decide whether to use AI service robots to replace human labor. Therefore, how an AI's various empathic responses in different contexts affect people's continuous usage intention (CUI) is worth exploring in depth, so that AI empathy can be used wisely in service provision.

With AI empathy playing a pivotal role in the service interaction process, anthropomorphic appearance is the most intuitive cue that leads consumers to perceive AI as anthropomorphic (Blut et al., 2021). Research has shown that AI extrinsic anthropomorphism can facilitate HRI, but the effect of intrinsic anthropomorphism (AI empathy) co-occurring with anthropomorphic appearance has not been investigated. To address these gaps in the knowledge base, we used the theory of emotional contagion to address the following research questions (RQs) through five pretests, one field study, and six formal studies:

RQ1.

How does AI empathy influence tourists' continuous usage intention in different service contexts? Do tourists' emotions play a mediating role in this relationship?

RQ2.

Could the anthropomorphic appearance and empathic responses of an AI jointly positively affect tourists’ emotions?

In this study, we categorized AI empathic responses into two types: emotional mimicry and empathic concern. We propose that AI emotional mimicry exerts a significant influence during service success, while empathic concern plays a role in service failure. By applying the theory of emotional contagion to HRIs, we demonstrated that tourists' emotions are affected by AI empathic responses. Furthermore, we investigated the mediating role of emotions in the influence of AI empathy on consumers' willingness to continue using AI. The moderating role of an anthropomorphic appearance in the influence of AI empathy on tourists' emotions was also explored. Our findings expand the boundaries of emotional contagion theory and enrich the currently sparse literature on AI empathy and anthropomorphism in the service industry. The results can inform managerial recommendations regarding the use of AI, as well as guide AI designers in crafting AI empathy responses for various scenarios.

Literature review

AI empathy in the service industry

The development of AI provides the technical basis for service robots, which are currently taking over the front line in the service industry (Schepers et al., 2022). In contrast to industrial robots, service robots engage in extensive customer contact and interaction while performing tasks in diverse and dynamic everyday environments (Qiu et al., 2020), conducting both cognitive-analytical and emotional-social tasks (Wirtz et al., 2018). Providing emotional reactions in social interactions is therefore important. Huang and Rust (2018) emphasized that the success of future AI applications will rely on their ability to perform empathic tasks. AI empathy has been defined as the ability of AI agents to recognize and respond to users' cognitive needs and emotional states (Asada, 2015), manifested as emotional mimicry, empathic concern, and emotional contagion (Hu et al., 2022; Jiang et al., 2022; Liu-Thompkins et al., 2022; Pentina et al., 2023). This process involves computers comprehending individuals' emotional states and responding in a caring and emotionally attuned manner. Consequently, artificial empathy assumes a pivotal role in closing the gap between humans and AI in terms of affective and social customer experiences.

AI empathy should be a crucial factor in shaping forthcoming AI-facilitated interactions between firms and customers. Emotional interactions in HRIs are gradually becoming real: robots can personalize interactions by analyzing users' responses to their various affective expressions and adapting their emotional behavior to each user (Paiva et al., 2014). Empathic robots are thus essential because they can maintain an affective loop with the user, closing that loop by delivering more suitable and adaptable responses during the interaction. Empathic skills are therefore increasingly important for both service employees and AI. This implies that, as AI gradually assumes control of analytical tasks, the significance of analytical skills will diminish, placing even greater importance on the "softer" empathic skills for service employees.

Unfortunately, current research remains focused on employees' or customers' acceptance of AI, or on the multiple sequential consequences of using AI, such as satisfaction and loyalty (Pelau et al., 2021; Schepers et al., 2022; Wirtz et al., 2018). The role of AI empathy in the service sector has not been extensively studied. The extant literature chiefly addresses the impact of AI empathy on the efficacy of AI services, particularly its influence on AI acceptance and trust (Kim & Hur, 2023; Pelau et al., 2021). Furthermore, studies have indicated that appropriate expressions of empathy significantly affect user interactions with social robots, enhancing relational satisfaction and well-being and thereby augmenting customer loyalty and equity (Leite et al., 2012; Liu-Thompkins et al., 2022). However, the specific role of AI empathy in these applications is not clear. When confronted with AI-generated responses, users' emotional reactions are pivotal in shaping their subsequent behaviors (Liao & Huang, 2024). Further investigation is warranted to understand whether AI empathy can infect users emotionally, as human counterparts do.

In the tourism field, researchers have investigated AI empathy's effect on value co-creation (Solakis et al., 2022), customer satisfaction enhancement (Zhang et al., 2024), and continued service usage intention (De Kervenoael et al., 2020). Lv et al. (2022) found that AI's empathic ability can compensate for service failures and positively influence consumers' continuous AI usage. However, AI empathy is not only useful in HRIs when the service fails; it also plays a key role when the service succeeds. While AI's empathic capabilities are becoming a reality and are of significant importance for service robots, investigation into how service robots' emotions influence customers' CUI has been lacking. Therefore, this work focused on the mechanisms inherent in AI empathy that influence tourists' continued use of the service, whether the service fails or succeeds.

Emotional contagion theory

Emotional contagion theory describes a form of social influence in which a person or group, through the conscious or unconscious induction of emotional states and behavioral attitudes, influences the emotions or behaviors of another individual or group (Schoenewolf, 1990). It is used to explain how emotions are transmitted in social interactions. Studies have shown that emotional contagion is mainly transmitted through nonverbal signals, such as facial expressions, body language, and tone of voice, rather than verbal signals (Cheshin et al., 2011; Mehrabian, 1972). Additionally, most cases of emotional contagion are subconscious and can present as automatic transfers between people, such as people spontaneously imitating each other's facial expressions.

In tourism studies, whether in the real or virtual world, emotional contagion theory explores how emotions are transmitted and gradually converge between people, such as between employees and tourists or among tourists (Woo & Chan, 2020; Xu et al., 2020). However, as AI is gradually endowed with emotional attributes, service robots are increasingly able to understand tourists' emotions and respond to them in a timely manner, thus affecting tourists' emotions and forming a closed-loop emotional contagion interaction, which influences the customer experience. In addition, as the level of AI sophistication increases, the AI's influence in eliciting positive emotions becomes more potent (Schepers et al., 2022). Recent studies have revealed the impact of the various emotional expressions displayed by robots, such as surprise, happiness, fear, and disgust, on human emotional responses during interactions (Chuah & Yu, 2021; Liao & Huang, 2024; Yu, 2020). Moreover, a few researchers have explored the emotional states of tourists who have used service robots from the perspective of AI empathy and emotional contagion (e.g., Liao & Huang, 2024). Liu-Thompkins et al. (2022) defined artificial emotional contagion as the transmission of an artificial sense, or the illusion, of experiencing the same emotions as the interacting individual through emotional mirroring and mimicry. The choice between emotional mimicry and empathic concern may depend on the specific emotions involved in a given scenario. Furthermore, the service contexts in which service robots recognize emotions and react to them, as well as how these reactions in turn influence tourists' emotions and subsequent behaviors (e.g., CUI), are questions that deserve further discussion.

Effects of AI empathy on consumers’ continuous usage intention

Companies place great importance on their employees having empathy, especially in the service industry (Wieseke et al., 2012). Indeed, empathy is a key variable and a crucial factor affecting the success of employee-customer interactions (Markovic et al., 2018). Empathic service employees typically facilitate more interactive communication with customers and a heightened focus on consumers' needs and desires (Homburg et al., 2009), which in turn results in positive and lasting impressions of a product or service. Similarly, AI services exhibit a form of empathy in their capability to concentrate on consumers' specific needs (Pelau et al., 2021; Wang, 2017). Studies have determined that robots exhibiting animacy can significantly influence customers' emotions (Ho & MacDorman, 2010), consequently exerting a greater impact on service outcomes. For example, customers may become curt and purchase less once they become aware that their conversational partner is not a human, because they perceive the bot as having lower levels of knowledge and empathy (Luo et al., 2019). Nonetheless, Lv et al. (2022) proposed that AI empathy could mitigate the impact of AI service failures, and the literature suggests that AI with human-like features evokes emotions in consumers similar to those felt toward humans (Schmetkamp, 2020). Moreover, where AI has the ability to show empathy and interact with human consumers, its acceptance rates increase (Pelau et al., 2021). Therefore, AI empathy could promote consumers' CUI.

Service robots are not likely to possess genuine emotions or self-determination in the imminent future (Picard, 2013), but they can mimic emotional responses through facial expressions and body language, and they can make relevant empathic responses based on the emotional signals they receive (Wirtz et al., 2018). Emotional mimicry and empathic concern are two important manifestations of empathic responses, and they apply in different scenarios. Emotional mimicry refers to imitating the emotional expressions displayed by others (Hess & Fischer, 2014). It may not be suitable for negatively valenced situations, because a negative attitude toward the target tends to inhibit mimicry. Instead, negative emotions can be alleviated by an empathic AI agent through the empathic concern component of artificial empathy (Hutchings & Haddock, 2008; Liu-Thompkins et al., 2022). Researchers have also indicated that employees' empathic concern leads to exceptionally positive service encounters (Collier et al., 2018). The empathic concern of AI entails the algorithmic detection of an individual's distress and the presentation of the impression that the AI cares for that individual, allowing the AI to demonstrate empathic concern by proactively adapting the message or communication interface (Liu-Thompkins et al., 2022). Thus, emotional mimicry plays a key role in positive situations (e.g., service success), while empathic concern can improve tourists' emotions in negative situations (e.g., service failure). The following hypotheses were thus posited:

H1a:

In service success contexts, emotional mimicry has a positive impact on tourists’ CUI.

H1b:

In service failure contexts, empathic concern positively affects tourists’ CUI.

The mediating effect of pleasure and arousal

In marketing interactions, the occurrence of emotional contagion is likely to initiate a chain reaction that affects follow-up interactions (Liu et al., 2019). When robots show empathy, tourists (i.e., consumers) can transfer the emotional attachment originally reserved for humans to the AI. Meanwhile, consumers are further influenced by AI empathy to generate more positive emotions, especially with service robots that have interactive and emotion-intensive attributes. It can therefore be predicted that AI empathy elicits emotional responses in consumers. For example, an AI's emotional capability can positively influence consumers' deep emotions (e.g., intimacy and passion) owing to the humanized features of intelligent assistants (Song et al., 2022). Consumers with disabilities perceive robots as capable of stimulating and regulating emotions in long-term care by imitating cognitive and behavioral empathy (Kipnis et al., 2022), which also suggests that, even though consumers do not perceive AI as having mental states, they can still experience empathy toward AI applications that exhibit human-like traits (Bretan et al., 2015; Li et al., 2023).

Pleasure and arousal are typically used as two vital dimensions for mirroring people's emotions, especially in marketing and consumer studies (Richins, 1997; Ruan & Mezei, 2022). Pleasure-arousal theory indicates that pleasure and arousal determine the quality and intensity of consumers' emotions, and different levels of pleasure and arousal can even be combined into 69 emotional states (Reisenzein, 1994). Furthermore, emotional contagion theory posits that emotions are composed of valence (e.g., pleasure vs. hostility) and expression intensity (e.g., hostility vs. depression), and the outcome of emotional contagion varies depending on the interaction between these two dimensions (Barsade, 2002). The presence of an expressive robot has resulted in participants experiencing a positive affective response and an increased liking for the robot (Cameron et al., 2018; Riedel et al., 2022), with robots that mimic the emotional expression of their counterpart also being perceived as more pleasing (Tielman et al., 2014; Wirtz et al., 2018). The stimulus-organism-response (SOR) theory, introduced by Mehrabian and Russell in 1974, links environmental stimuli with individual emotions and behaviors (Jiang, 2022). With AI constituting a new type of service scenario that positively affects consumers' arousal and pleasure levels (Ruan & Mezei, 2022), consumers' spending and loyalty intentions can be increased (Schepers et al., 2022). Thus, there is sufficient evidence that AI empathy positively influences tourists' emotions (i.e., pleasure and arousal) and further affects their behavioral intentions (e.g., CUI). The following hypotheses are therefore proposed:

H2a/b:

Pleasure (/arousal) positively mediates the influences of emotional mimicry on tourists’ CUI.

H3a/b:

Pleasure (/arousal) positively mediates the influences of empathic concern on tourists’ CUI.

The moderating effect of anthropomorphic appearance

In the realm of services and marketing, anthropomorphism is considered to be the human inclination to attribute human capabilities (e.g., rational thought and emotion) to inanimate objects (e.g., robots) (Pelau et al., 2021; Waytz et al., 2014). It is also a gauge of how extensively service robots can imitate the characteristics, behaviors, and/or appearance of humans (Qiu et al., 2020). Currently, anthropomorphism research centers on three dimensions: appearance, internal attributes, and social factors. The appearance, movements, expressions, and voices of robots are all anthropomorphic representations. Existing research has demonstrated that anthropomorphic attributes play a significant role in shaping the connection between consumers and AI devices, influencing consumers' willingness to engage with such devices (Husain et al., 2023; McLean & Osei-Frimpong, 2019; Xiao & Kumar, 2021). Qiu et al. (2020) found that anthropomorphic robots could facilitate human-robot relationships and service experiences. Robot anthropomorphism can also positively contribute to customers' acceptance of robots, which in turn affects customer satisfaction and emotions (Xiao & Kumar, 2021).

A robot's appearance creates its first impression, which significantly influences consumers' behavioral intention (Zhang & Wang, 2023). Because the interactions between consumers and AI often involve relatively short messages (Hill et al., 2015), they tend not to be extroverted or revealing (Mou & Xu, 2017). By contrast, humanoid AI devices tend to facilitate more extended interactions with consumers (Pelau et al., 2021) and foster stronger involvement and bonds (Husain et al., 2023). The degree to which humans spontaneously mimic AI robots is closely linked to the prominence of the AI's anthropomorphic appearance: the more the appearance resembles a human, the greater the level of mimicry, and the closer the user's emotions come to those expressed by the AI (Hofree et al., 2014). Indeed, Moro et al. (2019) found that users treat humanoid robots more like companions, are more willing to use a humanoid robot for services, and are more susceptible to emotion at the time (Chiang et al., 2022). Therefore, exterior anthropomorphism may modulate the AI empathy effect. Specifically, the following hypotheses are proposed:

H4a/b:

An anthropomorphic appearance positively moderates the impact of emotional mimicry on tourists’ CUI through stimulation of their pleasure (/arousal).

H4c/d:

An anthropomorphic appearance positively moderates the impact of empathic concern on tourists’ CUI through stimulation of their pleasure (/arousal).

The proposed model of this paper is shown in Figure 1.

Figure 1. Conceptual model.

Method

Five pretests (see Appendix A) and six studies were conducted to examine Hypotheses H1–H4. All of the pretests were conducted to verify the effectiveness of the stimulus materials. The presence of the two main effects was tested in the field study and Study 1; the latter took the form of Study 1A and Study 1B, which were used to verify H1a and H1b, respectively. Study 2 was carried out to examine the mediating role of arousal and pleasure, with the main effect being assessed again. Specifically, Study 2A examined arousal and pleasure as mediators in a service success context (H2a and H2b), while Study 2B focused on tourists' emotions as mediators in a service failure context (H3a and H3b). Study 3 was conducted to validate the moderating role of anthropomorphic appearance: Study 3A examined its moderating role in the effects of emotional mimicry on arousal and pleasure (H4a and H4b), while Study 3B examined its moderating role in the effects of empathic concern on tourists' emotions (H4c and H4d). To enhance the external validity of this study, different scenarios were used in the studies, as indicated in Table 1.

Table 1. Research framework.

Table 2. Stimulus material of field study.

Table 3. Stimulus material of Study 1.

Table 4. Experimental material of Study 2A.

Table 5. Experimental material of Study 2B.

Table 6. Experimental material of Study 3A.

Table 7. Experimental material of Study 3B.

Field study

Participants and procedures

This study commenced with a field experiment aimed at investigating the main effects of AI empathy. The experiment was conducted in a nationally recognized historical and cultural district known for its unique local cuisine; a total of 114 tourists (63.2% female, 50.9% in the age range 21–30 years) were recruited and randomly assigned to four experimental groups. The participants were asked to use an online AI tour guide to carry out a series of fixed interactions (inquiring about food in the district), with each group encountering varying degrees of empathic responses from the AI (see Table 2). Field study 1 compared high (n = 30) vs. low (n = 30) emotional mimicry in successful service scenarios, while Field study 2 contrasted high (n = 26) vs. low (n = 28) empathic concern in service failure contexts.

Post-interaction, participants were asked to assess their perceptions of AI emotional mimicry using four seven-point items (Fischer et al., 2012) and empathic concern using three seven-point items (De Kervenoael et al., 2020; Lv et al., 2022), in a manner similar to the pretest. Additionally, three items were adopted to measure the customers' CUI: "I intend to continue using AI rather than discontinue its use," "My intentions are to continue using AI rather than calling the services offered by humans," and "Even if it was possible to stop using this AI, I wouldn't do so" (Bhattacherjee, 2001; Lv et al., 2022). Finally, the participants were asked to provide their demographic information (see Table 8).

Table 8. Measure items (7-point scale, 1 = strongly disagree, 7 = strongly agree).

Manipulation check and results

A one-way analysis of variance (ANOVA) indicated that the AI emotional mimicry (Cronbach's α = 0.949, Mhigh = 5.43, Mlow = 4.17, F(1, 58) = 16.545, p < 0.001) and empathic concern (Cronbach's α = 0.846, Mhigh = 4.91, Mlow = 4.05, F(1, 52) = 7.750, p = 0.007) manipulations were successful. Table 9 shows the results of the regression analysis: emotional mimicry (Field study 1: β = 0.674, p < 0.001) and empathic concern (Field study 2: β = 0.608, p < 0.001) both significantly affected CUI. Multicollinearity was not an issue, as indicated by variance inflation factors (VIFs) below 2.5. Therefore, H1a and H1b were supported.
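For readers who wish to reproduce this style of analysis, the sketch below shows the two steps in Python: a one-way ANOVA as the manipulation check, followed by a regression of CUI on the perceived-empathy score. This is a minimal illustration, not the authors' code; the data file and column names ("group", "mimicry", "cui") are hypothetical.

```python
# Minimal sketch of the manipulation check and main-effect regression.
# File and column names are hypothetical placeholders.
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

df = pd.read_csv("field_study1.csv")  # hypothetical data file

# Manipulation check: perceived mimicry should differ between conditions.
high = df.loc[df["group"] == "high", "mimicry"]
low = df.loc[df["group"] == "low", "mimicry"]
f_stat, p_val = stats.f_oneway(high, low)
print(f"Manipulation check: F = {f_stat:.3f}, p = {p_val:.4f}")

# Main effect: z-score both variables so the slope is a standardized
# beta comparable to the coefficients reported in Table 9.
dfz = df[["cui", "mimicry"]].apply(stats.zscore)
model = smf.ols("cui ~ mimicry", data=dfz).fit()
print(model.summary())
```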

Table 9. Regression analysis.

Brief discussion of the field study and introduction to study 1

The results of the field experiment demonstrated significant positive effects of emotional mimicry and empathic concern on CUI. However, factors such as the difficulty of controlling external variables in a field experiment may have threatened the internal reliability and validity of the data. We therefore conducted an online experiment (Study 1) to re-validate the main effects. Specifically, in Study 1A, 120 participants were randomly allocated to one of two hotel AI service scenarios in a service success context: an AI robot with high emotional mimicry capabilities or an AI robot with low emotional mimicry capabilities. Study 1B similarly randomized 120 participants to one of two hotel AI service failure scenarios, one with high empathic concern and the other with low empathic concern.

Study 1

For the experimental materials used in Study 1, see Table 3. It has been shown that emotional mimicry is usually performed through tone of voice, posture, and facial expressions (Hess & Fischer, 2014). Given the limitations of prevailing AI service robot technology, tone-of-voice mimicry is easy to achieve, whereas facial expressions and posture are difficult to mimic correctly. Therefore, speech was chosen as the stimulus material for Study 1A. Specifically, the study induced subjects' perceptions of emotional mimicry by manipulating the tone of voice and content of the robot's responses, which was followed by a manipulation check. The service failure scenarios in Study 1B were adapted from Lv et al. (2021). To test the validity of the experimental stimulus material, the study was pretested prior to the formal experiment (see Appendix A).

Participants and procedures

Study 1 comprised a two-group (Study 1A, emotional mimicry: high vs. low; Study 1B, empathic concern: high vs. low) between-subjects design used to examine the impact of AI emotional mimicry and empathic concern on CUI. We recruited 120 participants for each study from Credamo (https://www.credamo.com/), offering a monetary reward as an incentive (Study 1A: 47.5% male, Mage = 31.86; Study 1B: 50.8% male, Mage = 34.29). To ensure the validity of the data, we excluded questionnaires with straight-line answers or unusually short completion times. To prevent potential disruption from the participants' pre-experiment emotional states, they were first required to complete four semantic-differential items (sad–happy, bad mood–good mood, irritable–pleased, depressed–cheerful) adopted from Townsend and Sood (2012). After this, the participants in each study were randomly assigned one of the two research scenarios to read. Post-reading, they filled out questionnaires measuring their perceptions of AI emotional mimicry or empathic concern, the CUI scales, and demographic details, similarly to the field study.

Manipulation check and results

Chi-squared tests were utilized to compare demographic characteristics between the field and online samples. Whether comparing Field study 1 with Study 1A or Field study 2 with Study 1B, the differences in gender, age, occupation, education, and income did not reach statistical significance (all p-values > 0.05), indicating consistent demographic attributes across the experimental settings. Moreover, one-way ANOVAs indicated no statistically significant differences among the groups in the participants' pre-experiment emotional states in either Study 1A or 1B (Study 1A: Mhigh = 6.1, Mlow = 5.97, F(1, 118) = 1.167, p = 0.282; Study 1B: Mhigh = 5.64, Mlow = 5.88, F(1, 118) = 2.613, p = 0.078). The successful manipulation of AI emotional mimicry and empathic concern was confirmed by one-way ANOVA (Study 1A: Mhigh = 5.85, Mlow = 5.23, F(1, 118) = 11.738, p < 0.001; Study 1B: Mhigh = 4.52, Mlow = 3.33, F(1, 118) = 14.713, p < 0.001). Regression analysis revealed significant effects of emotional mimicry (Study 1A: β = 0.189, p < 0.001) and empathic concern (Study 1B: β = 0.653, p < 0.001) on CUI, with all VIFs below 2.5. Therefore, H1a and H1b were supported.
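As an illustration of the sample-comparability check, the following sketch runs the same kind of chi-squared homogeneity tests in Python. It is a hedged analog only; the file and column names are hypothetical.

```python
# Minimal sketch of the demographic-comparability check.
# File and column names are hypothetical placeholders.
import pandas as pd
from scipy.stats import chi2_contingency

field = pd.read_csv("field_study1.csv")
online = pd.read_csv("study_1a.csv")
combined = pd.concat(
    [field.assign(sample="field"), online.assign(sample="online")],
    ignore_index=True,
)

for var in ["gender", "age_band", "occupation", "education", "income"]:
    table = pd.crosstab(combined["sample"], combined[var])
    chi2, p, dof, _ = chi2_contingency(table)
    print(f"{var}: chi2({dof}) = {chi2:.2f}, p = {p:.3f}")  # expect p > 0.05
```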

Brief discussion of study 1 and introduction to study 2

Study 1 provided preliminary evidence that emotional mimicry has a significant effect on consumers' willingness to continue using AI in service success contexts, while empathic concern plays a significant role in service failure contexts. While H1a and H1b were validated in Study 1 using a hotel check-in context, the intrinsic mechanism through which AI empathy affects CUI was not determined. Therefore, Study 2 tested the mediation of tourists' emotions (arousal and pleasure) in another important tourism service context: scenic spots. A two-group between-subjects design was again employed to test the mediating effects (H2a, H2b, H3a, and H3b) and to retest the main effects (H1a and H1b).

Study 2A

Participants and procedures

In Study 2A, 120 subjects were recruited from Credamo (47.5% male, Mage = 31.42). They were randomly assigned to one of the two contexts (high vs. low emotional mimicry) (see Table 4). Similarly to Study 1, before perusing the stimulus material, the participants were instructed to answer questions about their current emotional states so as to avoid interference with the experimental results. After this, two seven-point scales adapted from Du et al. (2014) were employed to assess the participants' familiarity with, and the realism of, the scenario: "I am familiar with the situations in the above textual description" and "I can imagine myself as the customer in the scenario" (1 = strongly disagree to 7 = strongly agree). In addition, an attention check was included, the participants being told that it was an attention test question and that the answer needed to be Choice 6; all retained participants passed this check.

Following this, the participants were questioned about their perceptions of AI emotional mimicry, and their CUI was measured using the same three items as in Study 1. The participants' arousal and pleasure were also measured using items adapted from Ruan and Mezei (2022) and Li et al. (2021). The participants' degrees of excitement, stimulation, and attraction were used to measure arousal (Cronbach's α = 0.772), and their degrees of pleasure, happiness, and satisfaction were used to measure pleasure (Cronbach's α = 0.810).
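For reference, the scale reliabilities reported here can be computed directly from the item responses. The sketch below shows the standard Cronbach's alpha formula in Python; the item column names are hypothetical, since the paper reports the alphas but not its analysis code.

```python
# Minimal sketch of the Cronbach's alpha reliability computation.
# Item column names are hypothetical placeholders.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    # alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
    k = items.shape[1]
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - items.var(ddof=1).sum() / total_var)

df = pd.read_csv("study_2a.csv")  # hypothetical data file
print(cronbach_alpha(df[["arousal_1", "arousal_2", "arousal_3"]]))    # ~0.772
print(cronbach_alpha(df[["pleasure_1", "pleasure_2", "pleasure_3"]]))  # ~0.810
```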

Results

Manipulation checks

Based on a one-way ANOVA, there was no statistically significant variance among the groups in the participants' emotional states (Mhigh = 6.04, Mlow = 5.85, F(1, 118) = 1.506, p = 0.222), familiarity with the scenario (Mhigh = 5.83, Mlow = 5.70, F(1, 118) = 0.481, p = 0.489), or realism of the scenario (Mhigh = 6.53, Mlow = 6.32, F(1, 118) = 2.684, p = 0.104). The results from one-sample t-tests further supported the findings on scenario familiarity (M = 5.77, SD = 1.051, t(136) = 18.412, p < 0.001) and scenario realism (M = 6.43, SD = 0.729, t(119) = 36.416, p < 0.001). The successful manipulation of emotional mimicry (Mhigh = 5.91, Mlow = 5.15, F(1, 118) = 15.011, p < 0.001) was confirmed by a one-way ANOVA.

Hypothesis test

For Study 2A, a regression analysis indicated that the effect of emotional mimicry on consumers' CUI in service success contexts was significant (β = 0.499, p < 0.001). Therefore, H1a was supported. Additionally, the mediation of arousal and pleasure in the influence of emotional mimicry on consumers' CUI was examined using a mediation analysis model (PROCESS, Model 4, 5,000 bootstrap samples), based on Hayes (2018). Firstly, the 95% bootstrap confidence interval of arousal's indirect effect did not contain 0, indicating a significant mediating effect (effect = 0.287, CI = [0.135, 0.460]). Secondly, the same analysis was conducted for pleasure, and its indirect effect similarly did not contain 0 (effect = 0.221, CI = [0.088, 0.371]). Finally, following Hayes's (2018) approach for multiple parallel mediators (PROCESS, Model 4), a bootstrap mediation test was conducted with both mediating variables operating jointly (total indirect effect = 0.300, CI = [0.144, 0.496]); the effect through arousal was 0.172 (LLCI = 0.050, ULCI = 0.355) and that through pleasure was 0.128 (LLCI = 0.009, ULCI = 0.287). Therefore, the partial mediating effects of arousal and pleasure were demonstrated, and H2a and H2b were supported (see Figure 2).
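PROCESS is an SPSS/SAS macro, so the exact analysis cannot be reproduced verbatim here. As a hedged Python analog of the parallel-mediation bootstrap, the sketch below estimates the a-paths (each mediator on the predictor), the b-paths (outcome on predictor plus both mediators), and the bootstrap confidence intervals of the two indirect effects. The file and column names are hypothetical.

```python
# Minimal analog of a two-mediator parallel mediation bootstrap
# (PROCESS Model 4 style); NOT the authors' code. File and column
# names ("mimicry", "arousal", "pleasure", "cui") are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("study_2a.csv")  # hypothetical data file
rng = np.random.default_rng(42)

def indirect_effects(d):
    # a-paths: each mediator regressed on the predictor.
    a1 = smf.ols("arousal ~ mimicry", data=d).fit().params["mimicry"]
    a2 = smf.ols("pleasure ~ mimicry", data=d).fit().params["mimicry"]
    # b-paths: outcome regressed on predictor and both mediators jointly.
    b = smf.ols("cui ~ mimicry + arousal + pleasure", data=d).fit().params
    return a1 * b["arousal"], a2 * b["pleasure"]

boot = np.array([
    indirect_effects(df.sample(n=len(df), replace=True, random_state=rng))
    for _ in range(5000)
])
for name, draws in zip(["arousal", "pleasure"], boot.T):
    lo, hi = np.percentile(draws, [2.5, 97.5])
    print(f"indirect via {name}: {draws.mean():.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

An indirect effect is deemed significant when its percentile bootstrap interval excludes 0, which is the criterion applied throughout the paper.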

Figure 2. The mediating effects of arousal and pleasure in Study 2A.

Study 2B

Participants and procedures

In Study 2B, 120 participants were engaged (46.7% male, Mage = 29.96). They were randomly divided between two contexts (high vs. low empathic concern) (see Table 5). Similarly to Study 2A, the participants initially responded to questions about their current emotional states prior to engaging with the stimulus material. Subsequently, their familiarity with, and the realism of, the scenario were evaluated using two seven-point scales. The participants were then asked about their perceptions of the AI's empathic concern, and their CUI was gauged using the same three items used in Study 2A. Additionally, their arousal (Cronbach's α = 0.825) and pleasure (Cronbach's α = 0.880) levels were measured.

Results

Manipulation checks

Using a one-way ANOVA for Study 2B, there was no statistically significant variance among the groups in emotional states (Mhigh = 5.74, Mlow = 5.98, F(1, 118) = 2.483, p = 0.118), familiarity with the scenario (Mhigh = 5.75, Mlow = 5.88, F(1, 118) = 0.608, p = 0.437), or realism of the scenario (Mhigh = 6.30, Mlow = 6.27, F(1, 118) = 0.042, p = 0.839). One-sample t-tests further supported the findings on scenario familiarity (M = 5.82, SD = 0.935, t(119) = 21.291, p < 0.001) and scenario realism (M = 6.28, SD = 0.891, t(119) = 28.088, p < 0.001). The manipulation of empathic concern (Mhigh = 5.94, Mlow = 4.76, F(1, 118) = 29.035, p < 0.001) was found to be effective, as indicated by a one-way ANOVA.

Hypothesis test

For Study 2B, the results revealed that empathic concern in service failure contexts had a significant impact on users' CUI (β = 0.786, p < 0.001). Similarly to Study 2A, a mediation analysis model (PROCESS, Model 4, 5,000 bootstrap samples) was employed to examine the mediating roles of arousal and pleasure. When tested separately, arousal (effect = 0.288, CI = [0.129, 0.441]) and pleasure (effect = 0.408, CI = [0.252, 0.562]) each showed a significant indirect effect. When the two mediating variables were entered jointly (total indirect effect = 0.410, CI = [0.251, 0.574]), the effect through pleasure was 0.367 (CI = [0.190, 0.560]); however, the indirect effect through arousal was not statistically significant (effect = 0.043, CI = [−0.098, 0.185]). Hence, H3a was supported, while H3b was not (see Figure 3).

Figure 3. The mediating effects of arousal and pleasure in Study 2B.

Brief discussion of study 2 and introduction to study 3

Study 2 explored the mediating roles of arousal and pleasure. The results indicated that tourists' emotions were influenced by AI empathy and mediated the relationship between AI empathy and tourists' CUI. Specifically, tourists' arousal and pleasure were positively influenced by emotional mimicry in service success situations (Study 2A) and by empathic concern in service failure contexts (Study 2B). Arousal and pleasure played significant partial mediating roles in service success contexts, but arousal did not play a mediating role in service failure situations.

As an intrinsic anthropomorphic feature of AI, AI empathy raises the question of whether it can interact with an anthropomorphic appearance to positively affect user emotions. Answering this question would enrich the literature in the field of AI anthropomorphism and provide valuable insights for the design of AI-powered robots. Accordingly, Study 3 employed two between-group experiments to further examine the moderating role of anthropomorphic appearance. Specifically set in an airport context, Study 3A explored a service success scenario using a two (emotional mimicry: high vs. low) by two (anthropomorphic appearance: high vs. low) experimental design, while Study 3B investigated a service failure context with a two (empathic concern: high vs. low) by two (anthropomorphic appearance: high vs. low) experimental setup.

Study 3A

Design and participants

A between-subjects design was used in Study 3A (AI emotional mimicry: high vs. low × anthropomorphic appearance: high vs. low). A total of 205 subjects (43.6% male, Mage = 33.05) were recruited through Credamo and randomly distributed to one of four groups: AI with high emotional mimicry capabilities and high levels of anthropomorphism (n = 53), AI with low emotional mimicry capabilities and high levels of anthropomorphism (n = 48), AI with high emotional mimicry capabilities and low levels of anthropomorphism (n = 56), and AI with low emotional mimicry capabilities and low levels of anthropomorphism (n = 48) (see Table 6). The AI images in the experimental stimulus materials were adopted from previous studies (Han et al., 2023; Lin et al., 2022).

Procedure and measures

Before reading the stimulus materials, the participants were first asked to answer four questions about their emotional states. Afterward, their familiarity with, and the realism of, the scenario were separately measured. The uncanny valley hypothesis suggests that a highly anthropomorphic appearance in robots is linked to eeriness (Appel et al., 2020): while increasing anthropomorphism initially improves acceptance and appeal at low to moderate levels, further enhancement can reverse this effect due to heightened eeriness (Singh et al., 2021). Therefore, to check for the uncanny valley effect, the participants were asked about perceived eeriness through the question "Does the video give you a feeling of eeriness?", adopted from Yang et al. (2022) (seven-point Likert scale). The subjects were then asked about their perceptions of AI emotional mimicry and the level of AI anthropomorphism. The AI's anthropomorphic appearance was measured using two items from prior studies: "The robot looks like a human" and "The robot looks too much like a human" (Xie & Lei, 2022; Yang et al., 2022). Similarly to Studies 1 and 2, the participants' arousal, pleasure, and CUI were measured. Finally, the participants' demographic information was elicited (see Table 8).

Results

Manipulation checks

A two-way ANOVA showed that the effects on the participants' emotional states of AI emotional mimicry (F(1, 201) = 0.166, p = 0.684), of anthropomorphism (F(1, 201) = 0.004, p = 0.953), and of their interaction (F(1, 201) = 0.178, p = 0.674) were all insignificant. With eeriness examined as the outcome, the effects of the emotional mimicry and anthropomorphic appearance manipulations and their interaction were likewise insignificant, suggesting the uncanny valley effect was avoided. The participants' familiarity with (M = 5.878, SD = 0.852, t(204) = 31.575, p < 0.001) and the realism of (M = 6.454, SD = 0.696, t(204) = 50.478, p < 0.001) the scenario were effectively controlled for, as demonstrated by one-sample t-tests. Furthermore, a one-way ANOVA revealed the effectiveness of the emotional mimicry manipulation (Mhigh = 5.404, Mlow = 4.971, F(1, 203) = 5.887, p = 0.016) and the anthropomorphism manipulation (Mhigh = 4.688, Mlow = 3.369, F(1, 203) = 29.733, p < 0.001). Consequently, these results validated the success of the manipulations.

Hypothesis test

As outlined by Hayes (2018), we employed Model 7 in the PROCESS procedure to examine the moderated mediating effect using bootstrapping with 5,000 samples. The interaction between emotional mimicry and anthropomorphic appearance was significant when arousal (F(1, 201) = 12.028, p < 0.001) and pleasure (F(1, 201) = 5.412, p = 0.021) were used as dependent variables, confirming the moderating effect of anthropomorphic appearance on the relationships between emotional mimicry and arousal and between emotional mimicry and pleasure. Therefore, H4a and H4b were supported (see Figure 4). When arousal and pleasure were used as mediating variables, the indices of moderated mediation were 0.023 (CI = [0.004, 0.048]) and 0.033 (CI = [0.001, 0.066]), respectively, with 0 being excluded. In the low anthropomorphic appearance condition, the mediating effects of arousal (effect = 0.077, CI = [0.019, 0.151]) and pleasure (effect = 0.122, CI = [0.038, 0.231]) on the influence of emotional mimicry on the participants' CUI were significant. In the high anthropomorphic appearance condition, the mediating effects of arousal (effect = 0.169, CI = [0.004, 0.048]) and pleasure (effect = 0.253, CI = [0.127, 0.406]) were also significant. Moreover, emotional mimicry did not directly affect the participants' CUI (effect = 0.065, CI = [−0.001, 0.131]), showing the full mediating effect of arousal and pleasure.
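In PROCESS Model 7 the moderator enters only the a-path, and the index of moderated mediation for each mediator is the product of the interaction coefficient and that mediator's b-path. The sketch below is a hedged Python analog of that computation, again with hypothetical file and column names rather than the authors' actual code.

```python
# Minimal analog of a first-stage moderated mediation bootstrap
# (PROCESS Model 7 style); NOT the authors' code. File and column
# names ("mimicry", "appearance", "arousal", "pleasure", "cui")
# are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("study_3a.csv")  # hypothetical data file
rng = np.random.default_rng(42)

def mod_med_index(d, mediator):
    # a-path moderated by appearance; b-path from the joint outcome model.
    a = smf.ols(f"{mediator} ~ mimicry * appearance", data=d).fit().params
    b = smf.ols("cui ~ mimicry + arousal + pleasure", data=d).fit().params
    return a["mimicry:appearance"] * b[mediator]

for mediator in ["arousal", "pleasure"]:
    boot = [mod_med_index(df.sample(n=len(df), replace=True, random_state=rng),
                          mediator)
            for _ in range(5000)]
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"index of moderated mediation via {mediator}: "
          f"{np.mean(boot):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

A moderated mediation index whose bootstrap interval excludes 0, as reported above for both mediators, indicates that the indirect effect differs across levels of anthropomorphic appearance.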

Figure 4. The moderated mediating effects of anthropomorphic appearance.

Study 3B

Design and procedures

A two (empathic concern: high vs. low) by two (anthropomorphic appearance: high vs. low) between-subjects design was used on a sample of 198 participants. Of these, 182 were deemed valid (52.2% female, Mage = 27.8) and were randomly distributed to one of four groups: AI with high empathic concern capabilities and high levels of anthropomorphism (n = 50), AI with low empathic concern capabilities and high levels of anthropomorphism (n = 46), AI with high empathic concern capabilities and low levels of anthropomorphism (n = 46), and AI with low empathic concern capabilities and low levels of anthropomorphism (n = 43) (see Table 7). The AI images in the experimental stimulus materials were adopted from previous studies (Han et al., 2023; Lin et al., 2022). While the emotional mimicry scale was replaced with the empathic concern scale (the same as used in Study 2), the other procedures and measurement items were the same as in Study 3A.

Manipulation checks and results

A two-way ANOVA demonstrated that the participants' emotional states were not significantly influenced by AI empathic concern (F(1, 178) = 0.303, p = 0.583), anthropomorphic appearance (F(1, 178) = 2.777, p = 0.097), or the interaction between these two variables (F(1, 178) = 0.847, p = 0.356). One-sample t-tests indicated successful control over the participants' familiarity with the scenario (M = 5.896, SD = 0.740, t(181) = 34.576, p < 0.001), the realism of the scenario (M = 6.280, SD = 0.753, t(181) = 40.836, p < 0.001), and the eeriness levels of the AI (M = 2.434, SD = 1.499, t(181) = −14.093, p < 0.001). Furthermore, the manipulations of empathic concern (Mhigh = 6.014, Mlow = 4.730, F(1, 181) = 46.121, p < 0.001) and anthropomorphism (Mhigh = 5.172, Mlow = 2.767, F(1, 181) = 90.909, p < 0.001) were confirmed as effective by one-way ANOVAs.

According to Model 4 in the PROCESS procedure, pleasure significantly mediated the effect of empathic concern on the participants' CUI (effect = 0.507, CI = [0.312, 0.685]), but the mediating impact of arousal was not significant (effect = 0.044, CI = [−0.100, 0.222], with 0 being included). The direct effect of empathic concern on the participants' CUI was also significant (effect = 0.293, CI = [0.169, 0.417]). Thus, pleasure played a partial mediating role, consistent with the findings of Study 2B, thereby reverifying H3a. Using the PROCESS procedure (Model 7) to test the moderating role of anthropomorphic appearance, the moderating effects on the paths to arousal (F(1, 178) = 0.039, p = 0.843) and pleasure (F(1, 178) = 0.109, p = 0.742) were insignificant. Therefore, H4c and H4d were not supported. The mediating effect of pleasure was significant at both low (effect = 0.507, CI = [0.274, 0.699]) and high (effect = 0.486, CI = [0.274, 0.751]) levels of anthropomorphic appearance, but the mediating effect of arousal remained insignificant at both levels. Furthermore, the index of moderated mediation was also insignificant (CI = [−0.044, 0.037], with 0 being included).

Brief discussion of study 3

Study 3 explored the moderating effect of anthropomorphic appearance on the influence of AI empathy (intrinsic anthropomorphism) on the participants' emotions (arousal and pleasure). The results showed that anthropomorphic appearance played a significant moderating role in the service success context, thereby supporting H4a and H4b. However, when the AI service failed, the moderating effect of anthropomorphic appearance was insignificant.

Conclusions and implications

Conclusions

As AI intelligence levels increase, AI empathy is gradually becoming a reality. Notably, the influence of AI's empathic ability on tourists' behaviors has been overlooked, as only a few scholars have explored AI's empathic ability to compensate for the loss caused by AI service failure (Lv et al., 2022). In the age of intelligence, the traditional interaction patterns between humans have changed, and significant attention has been paid to this transformation (Tussyadiah, 2020). During HRIs, AI empathy affects tourists' behavior both when the service fails and when it succeeds, but these effects manifest differently. The findings of this study suggest that AI emotional mimicry positively affects tourists' CUI when the service is successful, and that empathic concern similarly positively affects tourists' continued willingness to use the AI when the service fails.

The existing literature has emphasized that people are influenced by others' emotions through emotional contagion. The findings of this study suggest that, as AI becomes an indispensable partner to humans, the emotions displayed by AI will also affect tourists' emotions, thereby influencing their behaviors. This conclusion validates Bretan et al.'s (2015) and Schmetkamp's (2020) claims that people can empathize with AI applications that feature human-like traits. Furthermore, it is worth noting that arousal has no mediating effect on the relationship between AI empathic concern and consumers' CUI, while pleasure plays a significant mediating role. A plausible explanation is that heightened arousal does not compensate for the loss caused by a service failure, whereas enhanced pleasure can.

Researchers have frequently investigated how an anthropomorphic appearance can influence customers' willingness to use service robots, and the findings are inconsistent, revealing positive (Stroessner & Benitez, 2019), neutral (Goudey & Bonnin, 2016), negative (Blut et al., 2021; Choi et al., 2021), and even concurrent positive and negative impacts (Alsaad, 2023). Our findings highlight the significance of the service scenario (success vs. failure) for the effect of anthropomorphic appearance. Extrinsic anthropomorphism can enhance tourists' perceived psychological closeness, which can have a positive impact (Han et al., 2023; Lv et al., 2022). AI empathy, in contrast, is a type of anthropomorphism known as intrinsic anthropomorphism. The findings suggest that, when service is successful, the positive effects of AI's intrinsic anthropomorphism are prominent under high levels of extrinsic anthropomorphism (vs. low extrinsic anthropomorphism). This corroborates studies indicating that AI anthropomorphism has a positive impact (Han et al., 2023). However, no significant moderating effect of AI anthropomorphism was found in service failure contexts. This may be because, when people see a humanoid AI, they expect it to have more capabilities; based on expectancy theory, when the AI fails in its service duties, tourists are disappointed, resulting in a negative effect (Choi et al., 2021; Husain et al., 2023). Therefore, AI's extrinsic anthropomorphism does not modulate the effect of the AI's empathic concern on tourists' positive emotions in service failure contexts.

Theoretical implications

Based on emotional contagion theory, we explored the positive effects of two types of AI empathy (emotional mimicry and empathic concern) on tourists' emotions and their CUI in service success and failure contexts. This is theoretically significant in several ways. Firstly, it expands the theoretical boundaries of emotional contagion theory in tourism scenarios. Emotional contagion theory describes the process of emotional transmission and imitation between people, which gradually results in emotional convergence. With the development of the internet, emotional contagion has gradually expanded from offline, face-to-face scenarios to online connections. However, whether online or offline, emotional contagion has still involved emotional transmission between people (Woo & Chan, 2020). This study expands the application of the theory to the realm of HRIs, recognizing AI's empathy as an important factor that affects human emotions, which in turn influence tourists' CUI toward the AI. This finding provides a theoretical basis for developing empathy capabilities in AI service robots.

Secondly, although AI empathy has been mentioned in service studies (Kim & Hur, 2023; Leite et al., 2012; Pelau et al., 2021), prior work has focused mainly on AI empathy in service failure scenarios, suggesting that it affects tourists' psychological distance and trust, and ultimately their CUI (Lv et al., 2022). Limited attention has been paid to the complex mechanism through which AI empathy shapes customer emotion and behavior. AI empathy, however, is not limited to empathic concern; it also includes emotional mimicry (Liu-Thompkins et al., 2022). In this study, empathy in AI service robots was innovatively subdivided, revealing how the two types of empathic response align with different service scenarios, thereby enriching the literature on AI empathy. Specifically, AI's emotional mimicry positively affected tourists' emotions and usage intentions during successful service interactions, while empathic concern came into play during service failures. This finding addresses how AI empathy should be manifested in HRIs.

Thirdly, anthropomorphism has been explored in the field of service robotics, with several existing research models differentiating only between high and low anthropomorphism levels, without delving into whether various forms of anthropomorphism (such as physical or emotional) could maximize outcomes (Alabed et al., 2022; Liu et al., 2022; Xie & Lei, 2022). Intrinsic anthropomorphism (AI empathy) and anthropomorphic appearance frequently coexist. Therefore, we examined the effects of these two types of anthropomorphism in AI service robots on consumers' CUI in different service contexts. The results indicated that the joint effect of appearance and intrinsic anthropomorphism was significant in a service success context, but not in a failure context. This study has therefore provided valuable insights into how AI internal anthropomorphism (AI empathy) and external anthropomorphism enhance CUI, enriching the research on AI anthropomorphism.

Practical contributions

This study makes three practical contributions. Firstly, while improving AI intelligence, it is just as important to develop AI empathy. Although AI intelligence has improved by leaps and bounds, empathy is the prerequisite for making that intelligence more humanized, especially in the service field, where humanized service is the key to successful provision. Whether AI services succeed or fail, AI empathy has been shown to have significant positive effects. Therefore, in the future development of AI, enhancing empathic capability is essential. For instance, in their initial interactions with tourists, AI service robots should first mimic and express positive emotions, thereby conveying positive sentiments to the tourists. Companies should then anticipate potential negative emotional responses in scenarios of service failure and design the AI to provide timely responses infused with empathic concern. Additionally, when programming default replies, it is important to ensure that these responses are also imbued with empathic concern, as sketched below. This applies not only to the travel industry; it could be generalized to other industries in which AI social interactions are the norm.
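
To illustrate the last point, the sketch below shows one way default (fallback) replies might be imbued with empathic concern. It is a minimal Python illustration, not the system or stimuli used in this study; the reply templates and the keyword-based looks_negative helper are hypothetical stand-ins for a production sentiment model.

```python
# A minimal sketch of empathically phrased default replies for an AI
# service robot. The templates and the keyword check are hypothetical
# illustrations, not the stimuli used in this study.
import random

# Default replies acknowledge the guest's feelings first (empathic
# concern) before offering a task-oriented next step.
EMPATHIC_DEFAULTS = [
    "I'm sorry, I may not have understood you correctly. "
    "I can see this matters to you, so let me try again.",
    "I apologize for the confusion. I understand how frustrating "
    "this can be, and I want to get it right for you.",
]

NEGATIVE_CUES = {"angry", "terrible", "useless", "disappointed", "awful"}


def looks_negative(utterance: str) -> bool:
    """Crude keyword check standing in for a real sentiment model."""
    return any(cue in utterance.lower() for cue in NEGATIVE_CUES)


def default_reply(utterance: str) -> str:
    """Return a fallback reply, escalating the empathic framing when
    the guest's message carries negative emotion."""
    reply = random.choice(EMPATHIC_DEFAULTS)
    if looks_negative(utterance):
        reply += " A member of our staff can also step in right away if you prefer."
    return reply


if __name__ == "__main__":
    print(default_reply("This check-in process is terrible."))
```

The design point is simply that the fallback path, which fires precisely when the robot is most likely to frustrate a guest, acknowledges the guest’s feelings before redirecting to a solution.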

Secondly, in terms of the design of AI service robots’ appearance, the degree of humanoid design should be matched to the service context. In successful service contexts, the combination of extrinsic and intrinsic anthropomorphism has a stronger positive impact on tourists’ emotions. Therefore, AI robots deployed for simple tasks, such as check-in procedures in hotel lobbies, should feature a higher degree of anthropomorphic appearance. If the AI must perform difficult tasks, its appearance should be less human in order to lower tourists’ expectations; an example would be a smart butler robot in hotel rooms, catering to complex guest needs with a less anthropomorphized appearance. Managers should decide whether to humanize AI service robots based on the difficulty of the tasks in their service scenarios, thus optimizing the anthropomorphism effect. The most crucial immediate priority, however, is enhancing AI empathy, both to promote the positive impact of successful service provision and to compensate for the negative impact of service failures.

Thirdly, AI systems should be developed with the ability to recognize normal service contexts as well as to recognize and imitate tourists’ emotions. This emotional mimicry can be used to enhance users’ positive emotions in service success contexts. Moreover, when the AI recognizes a context of service failure or negative emotion, it should pivot from emotional mimicry to empathic concern so as to positively influence tourists’ emotions through appropriate responses, thereby positively affecting their CUI.
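
As a concrete illustration of this pivot, the following minimal Python sketch selects an empathy style from a detected service context and guest emotion. The Emotion and ServiceContext enums and the classify_emotion stub are hypothetical; a deployed robot would rely on a trained multimodal classifier over text, voice, and facial cues.

```python
# A minimal sketch of the response-style switch described above: mirror
# the guest's positive emotion while service is going well, and pivot to
# empathic concern once the context signals failure or negative emotion.
from enum import Enum, auto


class Emotion(Enum):
    POSITIVE = auto()
    NEGATIVE = auto()
    NEUTRAL = auto()


class ServiceContext(Enum):
    SUCCESS = auto()
    FAILURE = auto()


def classify_emotion(utterance: str) -> Emotion:
    """Stub for an emotion classifier (text, voice, or facial input)."""
    lowered = utterance.lower()
    if any(w in lowered for w in ("great", "thanks", "wonderful")):
        return Emotion.POSITIVE
    if any(w in lowered for w in ("wrong", "angry", "not working")):
        return Emotion.NEGATIVE
    return Emotion.NEUTRAL


def choose_style(context: ServiceContext, emotion: Emotion) -> str:
    """Select the empathy style per the pattern suggested by our findings."""
    if context is ServiceContext.FAILURE or emotion is Emotion.NEGATIVE:
        return "empathic_concern"   # acknowledge and address distress
    if emotion is Emotion.POSITIVE:
        return "emotional_mimicry"  # mirror the guest's positive affect
    return "neutral_courtesy"       # default polite register


if __name__ == "__main__":
    print(choose_style(ServiceContext.SUCCESS, classify_emotion("This is wonderful!")))
    print(choose_style(ServiceContext.FAILURE, classify_emotion("My key card is not working.")))
```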

Limitations and directions for future research

In this study, we explored the effects of two AI empathy styles on tourists’ CUI in service failure and service success scenarios. However, we did not investigate whether a shift in the AI’s empathy style affects tourists’ behavior differently across consecutive service failures and successes. In terms of emotional signaling, this study could not explore the impact of the AI’s facial expressions on tourists’ emotions due to technical limitations. Tourists’ emotions were measured through self-reports; future work could use physiological data to detect users’ emotions in HRIs. In addition, we did not consider boundary conditions for the role of AI empathy, such as different levels of time pressure or service failure severity, and we measured only usage intention rather than actual continued-use behavior. Future research could therefore examine how AI empathy works in complex contexts of alternating service failures and successes, test its capacity to affect actual user behavior, and explore the boundary conditions that govern its impact.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Supplementary material

Supplemental data for this article can be accessed online at https://doi.org/10.1080/19368623.2024.2315954

Additional information

Funding

This work was supported by the Hainan Province Science and Technology Special Fund under Grants ZDYF2022SHFZ031 and ZDYF2022GXJS012, and by the Innovative Research Projects for Graduate Students in Hainan Province under Grant Qhyb2022-28.

References

  • Alabed, A., Javornik, A., & Gregory-Smith, D. (2022). AI anthropomorphism and its effect on users’ self-congruence and self–AI integration: A theoretical framework and research agenda. Technological Forecasting and Social Change, 182, 121786. https://doi.org/10.1016/j.techfore.2022.121786
  • Alsaad, A. (2023). The dual effect of anthropomorphism on customers’ decisions to use artificial intelligence devices in hotel services. Journal of Hospitality Marketing & Management, 32(8), 1048–1076. https://doi.org/10.1080/19368623.2023.2223584
  • Appel, M., Izydorczyk, D., Weber, S., Mara, M., & Lischetzke, T. (2020). The uncanny of mind in a machine: Humanoid robots as tools, agents, and experiencers. Computers in Human Behavior, 102, 274–286. https://doi.org/10.1016/j.chb.2019.07.031
  • Asada, M. (2015). Development of artificial empathy. Neuroscience Research, 90, 41–50. https://doi.org/10.1016/j.neures.2014.12.002
  • Barsade, S. G. (2002). The ripple effect: Emotional contagion and its influence on group behavior. Administrative Science Quarterly, 47(4), 644–675. https://doi.org/10.2307/3094912
  • Beattie, A., Edwards, A. P., & Edwards, C. (2020). A bot and a smile: Interpersonal impressions of chatbots and humans using emoji in computer‐mediated communication. Communication Studies, 71(3), 409–427. https://doi.org/10.1080/10510974.2020.1725082
  • Bhattacherjee, A. (2001). Understanding information systems continuance: An expectation-confirmation model. MIS Quarterly, 25(3), 351–370. https://doi.org/10.2307/3250921
  • Blut, M., Wang, C., Wünderlich, N. V., & Brock, C. (2021). Understanding anthropomorphism in service provision: A meta-analysis of physical robots, chatbots, and other AI. Journal of the Academy of Marketing Science, 49(4), 632–658. https://doi.org/10.1007/s11747-020-00762-y
  • Bretan, M., Hoffman, G., & Weinberg, G. (2015). Emotionally expressive dynamic physical behaviors in robots. International Journal of Human-Computer Studies, 78, 1–16. https://doi.org/10.1016/j.ijhcs.2015.01.006
  • Cameron, D., Millings, A., Fernando, S., Collins, E. C., Moore, R., Sharkey, A., Evers, V., & Prescott, T. (2018). The effects of robot facial emotional expressions and gender on child–robot interaction in a field study. Connection Science, 30(4), 343–361. https://doi.org/10.1080/09540091.2018.1454889
  • Cheshin, A., Rafaeli, A., & Bos, N. (2011). Anger and happiness in virtual teams: Emotional influences of text and behavior on others’ affect in the absence of non-verbal cues. Organizational Behavior and Human Decision Processes, 116(1), 2–16. https://doi.org/10.1016/j.obhdp.2011.06.002
  • Chi, O. H., Gursoy, D., & Chi, C. G. (2022). Tourists’ attitudes toward the use of artificially intelligent (AI) devices in tourism service delivery: Moderating role of service value seeking. Journal of Travel Research, 61(1), 170–185. https://doi.org/10.1177/0047287520971054
  • Chiang, A. H., Trimi, S., & Lo, Y. J. (2022). Emotion and service quality of anthropomorphic robots. Technological Forecasting and Social Change, 177, 121550. https://doi.org/10.1016/j.techfore.2022.121550
  • Choi, S., Mattila, A. S., & Bolton, L. E. (2021). To err is human(-oid): How do consumers react to robot service failure and recovery? Journal of Service Research, 24(3), 354–371. https://doi.org/10.1177/1094670520978798
  • Chuah, S. H. W., & Yu, J. (2021). The future of service: The power of emotion in human-robot interaction. Journal of Retailing and Consumer Services, 61, 102551. https://doi.org/10.1016/j.jretconser.2021.102551
  • Collier, J. E., Barnes, D. C., Abney, A. K., & Pelletier, M. J. (2018). Idiosyncratic service experiences: When customers desire the extraordinary in a service encounter. Journal of Business Research, 84, 150–161. https://doi.org/10.1016/j.jbusres.2017.11.016
  • De Kervenoael, R., Hasan, R., Schwob, A., & Goh, E. (2020). Leveraging human-robot interaction in hospitality services: Incorporating the role of perceived value, empathy, and information sharing into visitors’ intentions to use social robots. Tourism Management, 78, 104042. https://doi.org/10.1016/j.tourman.2019.104042
  • Du, J., Fan, X., & Feng, T. (2014). Group emotional contagion and complaint intentions in group service failure: The role of group size and group familiarity. Journal of Service Research, 17(3), 326–338. https://doi.org/10.1177/1094670513519290
  • Feldman, J. (2016). The problem of the adjective. Transposition. Musique et Sciences Sociales, (6). https://doi.org/10.4000/transposition.1640
  • Filieri, R., D’Amico, E., Destefanis, A., Paolucci, E., & Raguseo, E. (2021). Artificial intelligence (AI) for tourism: An European-based study on successful AI tourism start-ups. International Journal of Contemporary Hospitality Management, 33(11), 4099–4125. https://doi.org/10.1108/IJCHM-02-2021-0220
  • Fischer, A. H., Becker, D., & Veenstra, L. (2012). Emotional mimicry in social context: The case of disgust and pride. Frontiers in Psychology, 3. https://doi.org/10.3389/fpsyg.2012.00475
  • Goudey, A., & Bonnin, G. (2016). Must smart objects look human? Study of the impact of anthropomorphism on the acceptance of companion robots. Recherche Et Applications En Marketing (English Edition), 31(2), 2–20. https://doi.org/10.1177/2051570716643961
  • Han, B., Deng, X., & Fan, H. (2023). Partners or opponents? How mindset shapes consumers’ attitude toward anthropomorphic artificial intelligence service robots. Journal of Service Research, 26(3), 441–458. https://doi.org/10.1177/10946705231169674
  • Hatfield, E., Cacioppo, J. T., & Rapson, R. L. (1993). Emotional contagion. Current Directions in Psychological Science, 2(3), 96–100. https://doi.org/10.1111/1467-8721.ep10770953
  • Hayes, A. F. (2018). Introduction to mediation, moderation, and conditional process analysis. A regression-based approach (2nd ed.). The Guilford Press.
  • Hess, U., & Fischer, A. (2014). Emotional mimicry: Why and when we mimic emotions: Emotional mimicry. Social and Personality Psychology Compass, 8(2), 45–57. https://doi.org/10.1111/spc3.12083
  • Hill, J., Ford, W. R., & Farreras, I. G. (2015). Real conversations with artificial intelligence: A comparison between human–human online conversations and human–chatbot conversations. Computers in Human Behavior, 49, 245–250. https://doi.org/10.1016/j.chb.2015.02.026
  • Ho, A., Hancock, J., & Miner, A. S. (2018). Psychological, relational, and emotional effects of self-disclosure after conversations with a chatbot. Journal of Communication, 68(4), 712–733. https://doi.org/10.1093/joc/jqy026
  • Ho, C. C., & MacDorman, K. F. (2010). Revisiting the uncanny valley theory: Developing and validating an alternative to the godspeed indices. Computers in Human Behavior, 26(6), 1508–1518. https://doi.org/10.1016/j.chb.2010.05.015
  • Hofree, G., Ruvolo, P., Bartlett, M. S., Winkielman, P., & Federici, S. (2014). Bridging the mechanical and the human mind: Spontaneous mimicry of a physically present android. Public Library of Science ONE, 9(7), e99934. https://doi.org/10.1371/journal.pone.0099934
  • Homburg, C., Wieseke, J., & Bornemann, T. (2009). Implementing the marketing concept at the employee-customer interface: The role of customer need knowledge. Journal of Marketing, 73(4), 64–81. https://doi.org/10.1509/jmkg.73.4.64
  • Hortensius, R., Hekele, F., & Cross, E. S. (2018). The perception of emotion in artificial agents. IEEE Transactions on Cognitive and Developmental Systems, 10(4), 852–864. https://doi.org/10.1109/TCDS.2018.2826921
  • Hu, Q., Lu, Y., Pan, Z., & Wang, B. (2022). How does AI use drive individual digital resilience? A conservation of resources (COR) theory perspective. Behaviour & Information Technology, 42(15), 2654–2673. https://doi.org/10.1080/0144929X.2022.2137698
  • Huang, M.-H., & Rust, R. T. (2018). Artificial intelligence in service. Journal of Service Research, 21(2), 155–172. https://doi.org/10.1177/1094670517752459
  • Husain, R., Ratna, V. V., & Saxena, A. (2023). Past, present, and future of anthropomorphism in hospitality & tourism: Conceptualization and systematic review. Journal of Hospitality Marketing & Management, 32(8), 1077–1125. https://doi.org/10.1080/19368623.2023.2232382
  • Hutchings, P. B., & Haddock, G. (2008). Look Black in anger: The role of implicit prejudice in the categorization and perceived emotional intensity of racially ambiguous faces. Journal of Experimental Social Psychology, 44(5), 1418–1420. https://doi.org/10.1016/j.jesp.2008.05.002
  • Jiang, J. (2022). The role of natural soundscape in nature-based tourism experience: An extension of the stimulus–organism–response model. Current Issues in Tourism, 25(5), 707–726. https://doi.org/10.1080/13683500.2020.1859995
  • Jiang, Q., Zhang, Y., & Pian, W. (2022). Chatbot as an emergency exist: Mediated empathy for resilience via human-AI interaction during the COVID-19 pandemic. Information Processing & Management, 59(6), 103074. https://doi.org/10.1016/j.ipm.2022.103074
  • Kim, W. B., & Hur, H. J. (2023). What makes people feel empathy for AI chatbots? Assessing the role of competence and warmth. International Journal of Human–Computer Interaction, 1–14. https://doi.org/10.1080/10447318.2023.2219961
  • Kipnis, E., McLeay, F., Grimes, A., De Saille, S., & Potter, S. (2022). Service robots in long-term care: A consumer-centric view. Journal of Service Research, 25(4), 667–685. https://doi.org/10.1177/10946705221110849
  • Leite, I., Castellano, G., Pereira, A., Martinho, C., & Paiva, A. (2012). Long-term interactions with empathic robots: Evaluating perceived support in children. Proceedings of the 4th International Conference on Social Robotics, Springer LNCS, Chengdu, China, 298–299.
  • Li, M., Yin, D., Qiu, H., & Bai, B. (2021). A systematic review of AI technology-based service encounters: Implications for hospitality and tourism operations. International Journal of Hospitality Management, 95, 102930. https://doi.org/10.1016/j.ijhm.2021.102930
  • Li, S., Peluso, A. M., & Duan, J. (2023). Why do we prefer humans to artificial intelligence in telemarketing? A mind perception explanation. Journal of Retailing and Consumer Services, 70, 103139. https://doi.org/10.1016/j.jretconser.2022.103139
  • Liao, J., & Huang, J. (2024). Think like a robot: How interactions with humanoid service robots affect consumers’ decision strategies. Journal of Retailing and Consumer Services, 76, 103575. https://doi.org/10.1016/j.jretconser.2023.103575
  • Lin, M., Cui, X., Wang, J., Wu, G., & Lin, J. (2022). Promotors or inhibitors? Role of task type on the effect of humanoid service robots on consumers’ use intention. Journal of Hospitality Marketing & Management, 31(6), 710–729. https://doi.org/10.1080/19368623.2022.2062693
  • Liu, X., Chi, N., & Gremler, D. (2019). Emotion cycles in services: Emotional contagion and emotional labor effects. Journal of Service Research, 22(3), 285–300. https://doi.org/10.1177/1094670519835309
  • Liu, X., Yi, X., & Wan, L. C. (2022). Friendly or competent? The effects of perception of robot appearance and service context on usage intention. Annals of Tourism Research, 92, 103324. https://doi.org/10.1016/j.annals.2021.103324
  • Liu-Thompkins, Y., Okazaki, S., & Li, H. (2022). Artificial empathy in marketing interactions: Bridging the human-AI gap in affective and social customer experience. Journal of the Academy of Marketing Science, 50(6), 1198–1218. https://doi.org/10.1007/s11747-022-00892-5
  • Luo, X., Tong, S., Fang, Z., & Qu, Z. (2019). Frontiers: Machines vs. humans: The impact of artificial intelligence chatbot disclosure on customer purchases. Marketing Science, 38(6), 937–947. https://doi.org/10.1287/mksc.2019.1192
  • Lv, X., Liu, Y., Luo, J., Liu, Y., & Li, C. (2021). Does a cute artificial intelligence assistant soften the blow? The impact of cuteness on customer tolerance of assistant service failure. Annals of Tourism Research, 87, 103114. https://doi.org/10.1016/j.annals.2020.103114
  • Lv, X., Yang, Y., Qin, D., Cao, X., & Xu, H. (2022). Artificial intelligence service recovery: The role of empathic response in hospitality customers’ continuous usage intention. Computers in Human Behavior, 126, 106993. https://doi.org/10.1016/j.chb.2021.106993
  • Markovic, S., Iglesias, O., Singh, J. J., & Sierra, V. (2018). How does the perceived ethicality of corporate services brands influence loyalty and positive word-of-mouth? Analyzing the roles of empathy, affective commitment, and perceived quality. Journal of Business Ethics, 148(4), 721–740. https://doi.org/10.1007/s10551-015-2985-6
  • McLean, G., & Osei-Frimpong, K. (2019). Hey Alexa … examine the variables influencing the use of artificial intelligent in-home voice assistants. Computers in Human Behavior, 99, 28–37. https://doi.org/10.1016/j.chb.2019.05.009
  • Mehrabian, A. (1972). Nonverbal communication. Aldine-Atherton.
  • Moro, C., Lin, S., Nejat, G., & Mihailidis, A. (2019). Social robots and seniors: A comparative study on the influence of dynamic social features on human–robot interaction. International Journal of Social Robotics, 11(1), 5–24. https://doi.org/10.1007/s12369-018-0488-1
  • Mou, Y., & Xu, K. (2017). The media inequality: Comparing the initial human-human and human-AI social interactions. Computers in Human Behavior, 72, 432–440. https://doi.org/10.1016/j.chb.2017.02.067
  • Paiva, A., Leite, I., & Ribeiro, T. (2014). Emotion modelling for social robots. In The Oxford handbook of affective computing (p. 296). Oxford University Press.
  • Pelau, C., Dabija, D. C., & Ene, I. (2021). What makes an AI device human-like? The role of interaction quality, empathy and perceived psychological anthropomorphic characteristics in the acceptance of artificial intelligence in the service industry. Computers in Human Behavior, 122, 106855. https://doi.org/10.1016/j.chb.2021.106855
  • Pentina, I., Xie, T., Hancock, T., & Bailey, A. (2023). Consumer–machine relationships in the age of artificial intelligence: Systematic literature review and research directions. Psychology & Marketing, 40(8), 1593–1614. https://doi.org/10.1002/mar.21853
  • Picard, R. (2013). Rosalind Picard teaching robots how to feel, interview with rdigitalife. http://rdigitalife.com/rosalind-picard-blog/.
  • Poria, S., Cambria, E., Bajpai, R., & Hussain, A. (2017). A review of affective computing: From unimodal analysis to multimodal fusion. Information Fusion, 37, 98–125. https://doi.org/10.1016/j.inffus.2017.02.003
  • Qiu, H., Li, M., Shu, B., & Bai, B. (2020). Enhancing hospitality experience with service robots: The mediating role of rapport building. Journal of Hospitality Marketing & Management, 29(3), 247–268. https://doi.org/10.1080/19368623.2019.1645073
  • Reisenzein, R. (1994). Pleasure-arousal theory and the intensity of emotions. Journal of Personality and Social Psychology, 67(3), 525–539. https://doi.org/10.1037/0022-3514.67.3.525
  • Richins, M. L. (1997). Measuring emotions in the consumption experience. Journal of Consumer Research, 24(2), 127–146. https://doi.org/10.1086/209499
  • Riedel, A., Mulcahy, R., & Northey, G. (2022). Feeling the love? How consumer’s political ideology shapes responses to AI financial service delivery. International Journal of Bank Marketing, 40(6), 1102–1132. https://doi.org/10.1108/IJBM-09-2021-0438
  • Ruan, Y., & Mezei, J. (2022). When do AI chatbots lead to higher customer satisfaction than human frontline employees in online shopping assistance? Considering product attribute type. Journal of Retailing and Consumer Services, 68, 103059. https://doi.org/10.1016/j.jretconser.2022.103059
  • Schepers, J., Belanche, D., Casaló, L. V., & Flavián, C. (2022). How smart should a service robot be? Journal of Service Research, 25(4), 565–582. https://doi.org/10.1177/10946705221107704
  • Schmetkamp, S. (2020). Understanding AI—can and should we empathize with robots? Review of Philosophy and Psychology, 11(4), 881–897. https://doi.org/10.1007/s13164-020-00473-x
  • Schoenewolf, G. (1990). Turning points in analytic therapy: The classic cases. Jason Aronson Press.
  • Shan, M., Zhu, Z., Chen, H., & Sun, S. (2023). Service robot’s responses in service recovery and service evaluation: The moderating role of robots’ social perception. Journal of Hospitality Marketing & Management, 33(2), 1–24. https://doi.org/10.1080/19368623.2023.2246456
  • Singh, S., Olson, E. D., & Tsai, C. H. K. (2021). Use of service robots in an event setting: Understanding the role of social presence, eeriness, and identity threat. Journal of Hospitality & Tourism Management, 49, 528–537. https://doi.org/10.1016/j.jhtm.2021.10.014
  • Solakis, K., Katsoni, V., Mahmoud, A. B., & Grigoriou, N. (2022). Factors affecting value co-creation through artificial intelligence in tourism: A general literature review. Journal of Tourism Futures. https://doi.org/10.1108/JTF-06-2021-0157
  • Song, X., Xu, B., & Zhao, Z. (2022). Can people experience romantic love for artificial intelligence? An empirical study of intelligent assistants. Information & Management, 59(2), 103595. https://doi.org/10.1016/j.im.2022.103595
  • Stroessner, S. J., & Benitez, J. (2019). The social perception of humanoid and non-humanoid robots: Effects of gendered and machinelike features. International Journal of Social Robotics, 11(2), 305–315. https://doi.org/10.1007/s12369-018-0502-7
  • Tielman, M., Neerincx, M., Meyer, J.-J., & Looije, R. (2014). Adaptive emotional expression in robot-child interaction. Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction, Bielefeld, Germany, 407–414.
  • Townsend, C., & Sood, S. (2012). Self-affirmation through the choice of highly aesthetic products. Journal of Consumer Research, 39(2), 415–428. https://doi.org/10.1086/663775
  • Tussyadiah, I. (2020). A review of research into automation in tourism: Launching the annals of tourism research curated collection on artificial intelligence and Robotics in tourism. Annals of Tourism Research, 81, 102883. https://doi.org/10.1016/j.annals.2020.102883
  • Wang, W. (2017). Smartphones as social actors? Social dispositional factors in assessing anthropomorphism. Computers in Human Behavior, 68, 334–344. https://doi.org/10.1016/j.chb.2016.11.022
  • Waytz, A., Heafner, J., & Epley, N. (2014). The mind in the machine: Anthropomorphism increases trust in an autonomous vehicle. Journal of Experimental Social Psychology, 52(3), 113–117. https://doi.org/10.1016/j.jesp.2014.01.005
  • Wieseke, J., Geigenmüller, A., & Kraus, F. (2012). On the role of empathy in customer-employee interactions. Journal of Service Research, 15(3), 316–331. https://doi.org/10.1177/1094670512439743
  • Wirtz, J., Patterson, P. G., Kunz, W. H., Gruber, T., Lu, V. N., Paluch, S., & Martins, A. (2018). Brave new world: Service robots in the frontline. Journal of Service Management, 29(5), 907–931. https://doi.org/10.1108/JOSM-04-2018-0119
  • Woo, K. S., & Chan, B. (2020). “Service with a smile” and emotional contagion: A replication and extension study. Annals of Tourism Research, 80, 102850. https://doi.org/10.1016/j.annals.2019.102850
  • Xiao, L., & Kumar, V. (2021). Robotics for customer service: a useful complement or an ultimate substitute? Journal of Service Research, 24(1), 9–29. https://doi.org/10.1177/1094670519878881
  • Xie, L., & Lei, S. (2022). The nonlinear effect of service robot anthropomorphism on customers’ usage intention: A privacy calculus perspective. International Journal of Hospitality Management, 107, 103312. https://doi.org/10.1016/j.ijhm.2022.103312
  • Xu, S. T., Cao, Z. C., & Huo, Y. (2020). Antecedents and outcomes of emotional labour in hospitality and tourism: A meta-analysis. Tourism Management, 79, 104099. https://doi.org/10.1016/j.tourman.2020.104099
  • Yang, H., Song, H., Xia, L., & Yang, A. (2024). Understanding the consequences of robotic interaction quality and outcome quality: A three-phased affordance theory-based approach. Journal of Hospitality Marketing & Management, 1–21. https://doi.org/10.1080/19368623.2024.2308531
  • Yang, Y., Liu, Y., Lv, X., Ai, J., & Li, Y. (2022). Anthropomorphism and customers’ willingness to use artificial intelligence service agents. Journal of Hospitality Marketing & Management, 31(1), 1–23. https://doi.org/10.1080/19368623.2021.1926037
  • Yu, C. E. (2020). Emotional contagion in human-robot interaction. E-Review of Tourism Research, 17(5), 793–798. https://ertr-ojs-tamu.tdl.org/ertr/article/view/561
  • Zhang, J., Chen, Q., Lu, J., Wang, X., Liu, L., & Feng, Y. (2024). Emotional expression by artificial intelligence chatbots to improve customer satisfaction: Underlying mechanism and boundary conditions. Tourism Management, 100, 104835. https://doi.org/10.1016/j.tourman.2023.104835
  • Zhang, Y., & Wang, S. (2023). The influence of anthropomorphic appearance of artificial intelligence products on consumer behavior and brand evaluation under different product types. Journal of Retailing and Consumer Services, 74, 103432. https://doi.org/10.1016/j.jretconser.2023.103432
