Research Article

Examining the Use of Nonverbal Communication in Virtual Agents


References

  • Alibali, M. W. (2005, December). Gesture in spatial cognition: Expressing, communicating, and thinking about spatial information. Spatial Cognition and Computation, 5(4), 307–331. https://doi.org/10.1207/s15427633scc0504_2
  • Allbeck, J. M., & Badler, N. I. (2001). Towards behavioral consistency in animated agents. In N. Magnenat-Thalmann & D. Thalmann (Eds.), Deformable avatars (pp. 191–205). Springer US. https://doi.org/10.1007/978-0-306-47002-8_17
  • Allwood, J., Kopp, S., Grammer, K., Ahlsén, E., Oberzaucher, E., & Koppensteiner, M. (2007, December). The analysis of embodied communicative feedback in multimodal corpora: A prerequisite for behavior simulation. Language Resources and Evaluation, 41(3), 255–272. https://doi.org/10.1007/s10579-007-9056-2
  • Anderson, K., André, E., Baur, T., Bernardini, S., Chollet, M., Chryssafidou, E., Damian, I., Ennis, C., Egges, A., Gebhard, P., Jones, H., Ochs, M., Pelachaud, C., Porayska-Pomsta, K., Rizzo, P., & Sabouret, N. (2013). The TARDIS framework: Intelligent virtual agents for social coaching in job interviews. In D. Reidsma, H. Katayose, & A. Nijholt (Eds.), Advances in computer entertainment (pp. 476–491). Springer International Publishing.
  • André, E., & Pelachaud, C. (2010). Interacting with embodied conversational agents. In F. Chen (Ed.), Speech technology: Theory and applications (pp. 123–149). Springer US. https://doi.org/10.1007/978-0-387-73819-2_8
  • Andrist, S. (2013). Controllable models of gaze behavior for virtual agents and humanlike robots. Proceedings of the 15th ACM on International Conference on Multimodal Interaction (pp. 333–336). ACM. https://doi.org/10.1145/2522848.2532194
  • Andrist, S., Mutlu, B., & Gleicher, M. (2013). Conversational gaze aversion for virtual agents. In R. Aylett, B. Krenn, C. Pelachaud, & H. Shimodaira (Eds.), Intelligent virtual agents (pp. 249–262). Springer.
  • Andrist, S., Gleicher, M., & Mutlu, B. (2017). Looking coordinated: Bidirectional gaze mechanisms for collaborative interaction with virtual characters. Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (pp. 2571–2582). ACM. https://doi.org/10.1145/3025453.3026033
  • Andrist, S., Leite, I., & Lehman, J. (2013). Fun and fair: Influencing turn-taking in a multi-party game with a virtual agent. Proceedings of the 12th International Conference on Interaction Design and Children (pp. 352–355). ACM. https://doi.org/10.1145/2485760.2485800
  • Andrist, S., Pejsa, T., Mutlu, B., & Gleicher, M. (2012a). Designing effective gaze mechanisms for virtual agents. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 705–714). ACM. https://doi.org/10.1145/2207676.2207777
  • Andrist, S., Pejsa, T., Mutlu, B., & Gleicher, M. (2012b). A head-eye coordination model for animating gaze shifts of virtual characters. Proceedings of the 4th Workshop on Eye Gaze in Intelligent Human Machine Interaction (pp. 4:1–4:6). Santa Monica, California: ACM. https://doi.org/10.1145/2401836.2401840
  • Argyle, M. (1988). Bodily communication. Methuen.
  • Argyle, M., & Dean, J. (1965). Eye-contact, distance and affiliation. Sociometry, 28(3), 289–304. https://doi.org/10.2307/2786027
  • Badler, N. I., Bindiganavale, R., Allbeck, J., Schuler, W., Zhao, L., & Palmer, M. (2000). Parameterized action representation for virtual human agents. Embodied Conversational Agents (pp. 256–284). MIT Press. http://dl.acm.org/citation.cfm?id=371552.371567
  • Badler, N. I., Chi, D. M., & Chopra-Khullar, S. (1999, May). Virtual human animation based on movement observation and cognitive behavior models. Proceedings Computer Animation 1999 (pp. 128–137).
  • Bailenson, J. N., Beall, A. C., & Blascovich, J. (2002, December). Gaze and task performance in shared virtual environments. The Journal of Visualization and Computer Animation, 13(5), 313–320. https://doi.org/10.1002/vis.297
  • Bailenson, J. N., Swinth, K., Hoyt, C., Persky, S., Dimov, A., & Blascovich, J. (2005, August). The independent and interactive effects of embodied-agent appearance and behavior on self- report, cognitive, and behavioral markers of copresence in immersive virtual environments. Presence: Teleoperators and Virtual Environments, 14(4), 379–393. https://doi.org/10.1162/105474605774785235
  • Bailenson, J. N., & Yee, N. (2005, October). Digital chameleons: Automatic assimilation of nonverbal gestures in immersive virtual environments. Psychological Science, 16(10), 814–819. https://doi.org/10.1111/j.1467-9280.2005.01619.x
  • Bailenson, J. N., Yee, N., Merget, D., & Schroeder, R. (2006, August). The effect of behavioral realism and form realism of real-time avatar faces on verbal disclosure, nonverbal disclosure, emotion recognition, and copresence in dyadic interaction. Presence: Teleoperators and Virtual Environments, 15(4), 359–372. https://doi.org/10.1162/pres.15.4.359
  • Bailenson, J. N., Yee, N., Patel, K., & Beall, A. C. (2008, January). Detecting digital chameleons. Computers in Human Behavior, 24(1), 66–87. https://doi.org/10.1016/j.chb.2007.01.015
  • Bartlett, M. S., Hager, J. C., Ekman, P., & Sejnowski, T. J. (1999, March). Measuring facial expressions by computer image analysis. Psychophysiology, 36(2), 253–263. https://doi.org/10.1017/S0048577299971664
  • Baur, T., Damian, I., Lingenfelser, F., Wagner, J., & André, E. (2013). NovA: Automated analysis of nonverbal signals in social interactions. In A. A. Salah, H. Hung, O. Aran, & H. Gunes (Eds.), Human behavior understanding (pp. 160–171). Springer International Publishing.
  • Bavelas, J. B., & Chovil, N. (2006). Nonverbal and verbal communication: Hand gestures and facial displays as part of language use in face-to-face dialogue. In V. Manusov & M. L. Patterson (Eds.), The SAGE handbook of nonverbal communication (pp. 97–116). SAGE Publications, Inc. http://sk.sagepub.com/reference/hdbk_nonverbalcomm/n6.xml
  • Bavelas, J. B., Coates, L., & Johnson, T. (2002). Listener responses as a collaborative process: The role of gaze. Journal of Communication, 52(3), 566–580. https://doi.org/10.1111/j.1460-2466.2002.tb02562.x
  • Baylor, A., Ryu, J., & Shen, E. (2003). The effects of pedagogical agent voice and animation on learning, motivation and perceived persona. Association for the Advancement of Computing in Education (AACE), 452–458.
  • Baylor, A. L., & Kim, S. (2008). The effects of agent nonverbal communication on procedural and attitudinal learning outcomes. In H. Prendinger, J. Lester, & M. Ishizuka (Eds.), Intelligent virtual agents (pp. 208–214). Springer Berlin Heidelberg.
  • Baylor, A. L. (2009, December). Promoting motivation with virtual agents and avatars: Role of visual presence and appearance. Philosophical Transactions of the Royal Society B: Biological Sciences, 364(1535), 3559–3565. https://doi.org/10.1098/rstb.2009.0148
  • Baylor, A. L. (2011, April). The design of motivational agents and avatars. Educational Technology Research and Development, 59(2), 291–300. https://doi.org/10.1007/s11423-011-9196-3
  • Baylor, A. L., & Kim, S. (2009, March). Designing nonverbal communication for pedagogical agents: When less is more. Computers in Human Behavior, 25(2), 450–457. https://doi.org/10.1016/j.chb.2008.10.008
  • Becker, C., Kopp, S., & Wachsmuth, I. (2004). Simulating the emotion dynamics of a multi- modal conversational agent. In E. André, L. Dybkjær, W. Minker, & P. Heisterkamp (Eds.), Affective dialogue systems (pp. 154–165). Springer Berlin Heidelberg.
  • Becker, C., Kopp, S., & Wachsmuth, I. (2007). Why emotions should be integrated into conversational agents. Conversational Informatics (pp. 49–67). John Wiley & Sons, Ltd. https://onlinelibrary.wiley.com/doi/abs/10.1002/9780470512470.ch3
  • Beebe, S. A. (1976). Effects of eye contact, posture and vocal inflection upon credibility and comprehension. Australian SCAN: Journal of Human Communication, 7(8), 57–70.
  • Bergmann, K. (2006). Verbal or visual? How information is distributed across speech and gesture in spatial dialog. Proceedings of Brandial 2006, the 10th Workshop on the Semantics and Pragmatics of Dialogue (pp. 90–97).
  • Bergmann, K., Eyssel, F., & Kopp, S. (2012). A second chance to make a first impression? How appearance and nonverbal behavior affect perceived warmth and competence of virtual agents over time. In Y. Nakano, M. Neff, A. Paiva, & M. Walker (Eds.), Intelligent virtual agents (pp. 126–138). Springer Berlin Heidelberg.
  • Bergmann, K., & Macedonia, M. (2013). A virtual agent as vocabulary trainer: Iconic gestures help to improve learners’ memory performance. In R. Aylett, B. Krenn, C. Pelachaud, & H. Shimodaira (Eds.), Intelligent virtual agents (pp. 139–148). Springer Berlin Heidelberg.
  • Bergmann, K., & Kopp, S. (2009). Increasing the expressiveness of virtual agents: Autonomous generation of speech and gesture for spatial description tasks. Proceedings of the 8th International Conference on Autonomous Agents and Multiagent Systems - Volume 1 (pp. 361–368). International Foundation for Autonomous Agents and Multiagent Systems. http://dl.acm.org/citation.cfm?id=1558013.1558062
  • Berry, D. C., Butler, L. T., & De Rosis, F. (2005, September). Evaluating a realistic agent in an advice-giving task. International Journal of Human-Computer Studies, 63(3), 304–327. https://doi.org/10.1016/j.ijhcs.2005.03.006
  • Bevacqua, E., Raouzaiou, A., Peters, C., Caridakis, G., Karpouzis, K., Pelachaud, C., & Mancini, M. (2006). Multimodal sensing, interpretation and copying of movements by a virtual agent. In E. André, L. Dybkjær, W. Minker, H. Neumann, & M. Weber (Eds.), Perception and interactive technologies (pp. 164–174). Springer Berlin Heidelberg.
  • Bevacqua, E., Pammi, S., Hyniewska, S. J., Schröder, M., & Pelachaud, C. (2010). Multimodal backchannels for embodied conversational agents. In J. Allbeck, N. Badler, T. Bickmore, C. Pelachaud, & A. Safonova (Eds.), Intelligent virtual agents (pp. 194–200). Springer Berlin Heidelberg.
  • Biancardi, B., Cafaro, A., & Pelachaud, C. (2017a). Analyzing first impressions of warmth and competence from observable nonverbal cues in expert-novice interactions. Proceedings of the 19th ACM International Conference on Multimodal Interaction (pp. 341–349). ACM. https://doi.org/10.1145/3136755.3136779
  • Biancardi, B., Cafaro, A., & Pelachaud, C. (2017b). Could a virtual agent be warm and competent? Investigating user’s impressions of agent’s non-verbal behaviours. Proceedings of the 1st ACM SIGCHI International Workshop on Investigating Social Interactions with Artificial Agents (pp. 22–24). ACM. https://doi.org/10.1145/3139491.3139498
  • Bilakhia, S., Petridis, S., & Pantic, M. (2013, September). Audiovisual detection of behavioural mimicry. 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction (pp. 123–128). IEEE.
  • Brennan, S. E., Chen, X., Dickinson, C. A., Neider, M. B., & Zelinsky, G. J. (2008, March). Coordinating cognition: The costs and benefits of shared gaze during collaborative search. Cognition, 106(3), 1465–1477. https://doi.org/10.1016/j.cognition.2007.05.012
  • Buisine, S., Abrilian, S., & Martin, J.-C. (2004). Evaluation of multimodal behaviour of embodied agents. In Z. Ruttkay & C. Pelachaud (Eds.), From brows to trust: Evaluating embodied conversational agents (pp. 217–238). Springer Netherlands. https://doi.org/10.1007/1-4020-2730-3_8
  • Buisine, S., & Martin, J.-C. (2007, July). The effects of speech–gesture cooperation in animated agents’ behavior in multimedia presentations. Interacting with Computers, 19(4), 484–493. https://doi.org/10.1016/j.intcom.2007.04.002
  • Burleson, W., Picard, R. W., Perlin, K., & Lippincott, J. (2004). A platform for affective agent research. Workshop on Empathetic Agents, International Conference on Autonomous Agents and Multiagent Systems, Columbia University, New York, NY (Vol. 2). Citeseer.
  • Buschmeier, H., & Kopp, S. (2018). Communicative listener feedback in human-agent interaction: Artificial speakers need to be attentive and adaptive. Proceedings of the 17th International Conference on Autonomous Agents and Multiagent Systems (pp. 1213–1221). International Foundation for Autonomous Agents and Multiagent Systems. http://dl.acm.org/citation.cfm?id=3237383.3237880
  • Cafaro, A., Vilhjálmsson, H. H., Bickmore, T., Heylen, D., Jóhannsdóttir, K. R., & Valgarðsson, G. S. (2012). First impressions: Users’ judgments of virtual agents’ personality and interpersonal attitude in first encounters. In Y. Nakano, M. Neff, A. Paiva, & M. Walker (Eds.), Intelligent virtual agents (pp. 67–80). Springer Berlin Heidelberg.
  • Cafaro, A., Vilhjálmsson, H. H., & Bickmore, T. (2016, August). First impressions in human–agent virtual encounters. ACM Transactions on Computer-Human Interaction, 23(4), 24:1–24:40. https://doi.org/10.1145/2940325
  • Caridakis, G., Raouzaiou, A., Bevacqua, E., Mancini, M., Karpouzis, K., Malatesta, L., & Pelachaud, C. (2007, December). Virtual agent multimodal mimicry of humans. Language Resources and Evaluation, 41(3), 367–388. https://doi.org/10.1007/s10579-007-9057-1
  • Cassell, J. (2001, December). Nudge nudge wink wink: Elements of face-to-face conversation for embodied conversational agents. Embodied Conversational Agents (pp. 1–27). MIT Press.
  • Cassell, J., Vilhjálmsson, H. H., & Bickmore, T. (2004). BEAT: The behavior expression animation toolkit. In H. Prendinger & M. Ishizuka (Eds.), Life-like characters: Tools, affective functions, and applications (pp. 163–185). Springer. https://doi.org/10.1007/978-3-662-08373-4_8
  • Cassell, J., Bickmore, T., Billinghurst, M., Campbell, L., Chang, K., Vilhjálmsson, H., & Yan, H. (1999). Embodiment in conversational interfaces: Rea. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 520–527). ACM. https://doi.org/10.1145/302979.303150
  • Cassell, J., Pelachaud, C., Badler, N., Steedman, M., Achorn, B., Becket, T., … Stone, M. (1994). Animated conversation: Rule-based generation of facial expression, gesture & spoken intonation for multiple conversational agents. Proceedings of the 21st Annual Conference on Computer Graphics and Interactive Techniques (pp. 413–420). ACM. https://doi.org/10.1145/192161.192272
  • Cassell, J., & Thorisson, K. R. (1999, May). The power of a nod and a glance: Envelope vs. emotional feedback in animated conversational agents. Applied Artificial Intelligence, 13(4–5), 519–538. https://doi.org/10.1080/088395199117360
  • Cassell, J., & Vilhjálmsson, H. (1999, March). Fully embodied conversational avatars: Making communicative behaviors autonomous. Autonomous Agents and Multi-Agent Systems, 2(1), 45–64. https://doi.org/10.1023/A:1010027123541
  • Castellano, G., Mancini, M., Peters, C., & McOwan, P. W. (2012, May). Expressive copying behavior for social agents: A perceptual analysis. IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans, 42(3), 776–783. https://doi.org/10.1109/TSMCA.2011.2172415
  • Chandler, P., & Sweller, J. (1991, December). Cognitive load theory and the format of instruction. Cognition and Instruction, 8(4), 293–332. https://doi.org/10.1207/s1532690xci0804_2
  • Chartrand, T. L., & Bargh, J. A. (1999). The chameleon effect: The perception–behavior link and social interaction. Journal of Personality and Social Psychology, 76(6), 893. https://doi.org/10.1037/0022-3514.76.6.893
  • Chollet, M., Ochs, M., & Pelachaud, C. (2014). From non-verbal signals sequence mining to Bayesian networks for interpersonal attitudes expression. In T. Bickmore, S. Marsella, & C. Sidner (Eds.), Intelligent virtual agents (pp. 120–133). Springer International Publishing.
  • Clarebout, G., Elen, J., Johnson, W. L., & Shaw, E. (2002). Animated pedagogical agents: An opportunity to be grasped? Journal of Educational Multimedia and Hypermedia, 11(3), 267–286.
  • Clark, H. H., & Brennan, S. E. (1991). Grounding in communication. In L. B. Resnick, J. M. Levine, & S. D. Teasley (Eds.), Perspectives on socially shared cognition (pp. 127–149). American Psychological Association.
  • Clark, H. H., & Wilkes-Gibbs, D. (1986, February). Referring as a collaborative process. Cognition, 22(1), 1–39. https://doi.org/10.1016/0010-0277(86)90010-7
  • Clavel, C., Plessier, J., Martin, J.-C., Ach, L., & Morel, B. (2009). Combining facial and postural expressions of emotions in a virtual character. In Z. Ruttkay, M. Kipp, A. Nijholt, & H. H. Vilhjálmsson (Eds.), Intelligent virtual agents (pp. 287–300). Springer Berlin Heidelberg.
  • Cook, M. (1977). Gaze and mutual gaze in social encounters: How long–and when–we look others “in the eye” is one of the main signals in nonverbal communication. American Scientist, 65(3), 328–333.
  • Cook, S. W., & Goldin-Meadow, S. (2006, April). The role of gesture in learning: Do children use their hands to change their minds? Journal of Cognition and Development, 7(2), 211–232. https://doi.org/10.1207/s15327647jcd0702_4
  • Cowell, A. J., & Stanney, K. M. (2003). Embodiment and interaction guidelines for designing credible, trustworthy embodied conversational agents. In T. Rist, R. S. Aylett, D. Ballin, & J. Rickel (Eds.), Intelligent virtual agents (pp. 301–309). Springer Berlin Heidelberg.
  • Dael, N., Mortillaro, M., & Scherer, K. R. (2012). Emotion expression in body action and posture. Emotion, 12(5), 1085–1101. https://doi.org/10.1037/a0025737
  • De Carolis, B., Pelachaud, C., Poggi, I., & Steedman, M. (2004). APML, a markup language for believable behavior generation. In H. Prendinger & M. Ishizuka (Eds.), Life-like characters: Tools, affective functions, and applications (pp. 65–85). Springer. https://doi.org/10.1007/978-3-662-08373-4_4
  • Dermouche, S., & Pelachaud, C. (2018). Attitude modeling for virtual character based on temporal sequence mining: Extraction and evaluation. Proceedings of the 5th International Conference on Movement and Computing (pp. 23:1–23:8). ACM. https://doi.org/10.1145/3212721.3212806
  • Dermouche, S., & Pelachaud, C. (2019). Engagement modeling in dyadic interaction. 2019 International Conference on Multimodal Interaction (pp. 440–445). New York, NY: Association for Computing Machinery. https://doi.org/10.1145/3340555.3353765
  • DeVault, D., Artstein, R., Benn, G., Dey, T., Fast, E., Gainer, A., Georgila, K., Gratch, J., Hartholt, A., Lhommet, M., Lucas, G., Marsella, S., Morbini, F., Nazarian, A., Scherer, S., Stratou, G., Suri, A., Traum, D., Wood, R., Xu, Y., Rizzo, A., & Morency, L.-P. (2014). SimSensei Kiosk: A virtual human interviewer for healthcare decision support. Proceedings of the 2014 International Conference on Autonomous Agents and Multi-agent Systems (pp. 1061–1068). International Foundation for Autonomous Agents and Multiagent Systems. http://dl.acm.org/citation.cfm?id=2617388.2617415
  • Dillenbourg, P., & Traum, D. (2006, January). Sharing solutions: Persistence and grounding in multimodal collaborative problem solving. Journal of the Learning Sciences, 15(1), 121–151. https://doi.org/10.1207/s15327809jls1501_9
  • Duffy, K. A., & Chartrand, T. L. (2015, June). Mimicry: Causes and consequences. Current Opinion in Behavioral Sciences, 3(6), 112–116. https://doi.org/10.1016/j.cobeha.2015.03.002
  • Duncan, S. (1972). Some signals and rules for taking speaking turns in conversations. Journal of Personality and Social Psychology, 23(2), 283–292. https://doi.org/10.1037/h0033031
  • Duncan, S., & Niederehe, G. (1974, May). On signalling that it’s your turn to speak. Journal of Experimental Social Psychology, 10(3), 234–247. https://doi.org/10.1016/0022-1031(74)90070-5
  • Ekman, P. (1992, December). Are there basic emotions? Psychological Review, 99(3), 550. https://doi.org/10.1037/0033-295X.99.3.550
  • Ekman, P. (1993). Facial expression and emotion. American Psychologist, 48(4), 384–392. https://doi.org/10.1037/0003-066X.48.4.384
  • Ekman, P., Friesen, W. V., O’Sullivan, M., Chan, A., Diacoyanni-Tarlatzis, I., Heider, K., Krause, R., LeCompte, W. A., Pitcairn, T., Ricci-Bitti, P. E., Scherer, K., Tomita, M., & Tzavaras, A. (1987). Universals and cultural differences in the judgments of facial expressions of emotion. Journal of Personality and Social Psychology, 53(4), 712–717. https://doi.org/10.1037/0022-3514.53.4.712
  • Feese, S., Arnrich, B., Tröster, G., Meyer, B., & Jonas, K. (2012, September). Quantifying behavioral mimicry by automatic detection of nonverbal cues from body motion. 2012 International Conference on Privacy, Security, Risk and Trust and 2012 International Conference on Social Computing (pp. 520–525).
  • Foster, M. E., & Oberlander, J. (2007, December). Corpus-based generation of head and eyebrow motion for an embodied conversational agent. Language Resources and Evaluation, 41(3), 305–323. https://doi.org/10.1007/s10579-007-9055-3
  • Frechette, C., & Moreno, R. (2010). The roles of animated pedagogical agents’ presence and nonverbal communication in multimedia learning environments. Journal of Media Psychology: Theories, Methods, and Applications, 22(2), 61–72. https://doi.org/10.1027/1864-1105/a000009
  • Frischen, A., Bayliss, A. P., & Tipper, S. P. (2007, June). Gaze cueing of attention: Visual attention, social cognition, and individual differences. Psychological Bulletin, 133(4), 694. https://doi.org/10.1037/0033-2909.133.4.694
  • Fussell, S. R., Kraut, R. E., & Siegel, J. (2000). Coordination of communication: Effects of shared visual context on collaborative work. In Proceedings of the 2000 ACM Conference on Computer Supported Cooperative Work (pp. 21–30). ACM. https://doi.org/10.1145/358916.358947
  • Fussell, S. R., Setlock, L. D., Yang, J., Ou, J., Mauer, E., & Kramer, A. D. I. (2004, September). Gestures over video streams to support remote collaboration on physical tasks. Human–Computer Interaction, 19(3), 273–309. https://doi.org/10.1207/s15327051hci1903_3
  • Garau, M., Slater, M., Pertaub, D.-P., & Razzaque, S. (2005, February). The responses of people to virtual humans in an immersive virtual environment. Presence: Teleoperators and Virtual Environments, 14(1), 104–116. https://doi.org/10.1162/1054746053890242
  • Gergle, D., Kraut, R. E., & Fussell, S. R. (2004). Action as language in a shared visual space. Proceedings of the 2004 ACM Conference on Computer Supported Cooperative Work (pp. 487–496). ACM. https://doi.org/10.1145/1031607.1031687
  • Goldin-Meadow, S., & Alibali, M. W. (2013). Gesture’s role in speaking, learning, and creating language. Annual Review of Psychology, 64(1), 257–283. https://doi.org/10.1146/annurev-psych-113011-143802
  • Gorham, J. (1988, January). The relationship between verbal teacher immediacy behaviors and student learning. Communication Education, 37(1), 40–53. https://doi.org/10.1080/03634528809378702
  • Grandhi, S. A., Joue, G., & Mittelberg, I. (2011). Understanding naturalness and intuitiveness in gesture production: Insights for touchless gestural interfaces. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 821–824). ACM. https://doi.org/10.1145/1978942.1979061
  • Gratch, J., Okhmatovskaia, A., Lamothe, F., Marsella, S., Morales, M., Van Der Werf, R. J., & Morency, L.-P. (2006). Virtual rapport. In J. Gratch, M. Young, R. Aylett, D. Ballin, & P. Olivier (Eds.), Intelligent virtual agents (pp. 14–27). Springer.
  • Gratch, J., Wang, N., Okhmatovskaia, A., Lamothe, F., Morales, M., Van Der Werf, R. J., & Morency, L.-P. (2007). Can virtual humans be more engaging than real ones? In J. A. Jacko (Ed.), Human-computer interaction: HCI intelligent multimodal interaction environments (pp. 286–297). Springer Berlin Heidelberg.
  • Gratch, J., Wang, N., Gerten, J., Fast, E., & Duffy, R. (2007). Creating rapport with virtual agents. In C. Pelachaud, J.-C. Martin, E. André, G. Chollet, K. Karpouzis, & D. Pelé (Eds.), Intelligent virtual agents (pp. 125–138). Springer Berlin Heidelberg.
  • Gratch, J., Morency, L.-P., Scherer, S., Stratou, G., Boberg, J., Koenig, S., Adamson, T., & Rizzo, A. (2013). User-state sensing for virtual health agents and telehealth applications. Studies in Health Technology and Informatics, 184(1), 151–157.
  • Gratch, J., Rickel, J., André, E., Cassell, J., Petajan, E., & Badler, N. (2002, July). Creating interactive virtual humans: Some assembly required. IEEE Intelligent Systems, 17(4), 54–63. https://doi.org/10.1109/MIS.2002.1024753
  • Grolleman, J., Van Dijk, B., Nijholt, A., & Van Emst, A. (2006). Break the habit! Designing an e-therapy intervention using a virtual coach in aid of smoking cessation. In W. A. IJsselsteijn, Y. A. W. De Kort, C. Midden, B. Eggen, & E. Van Den Hoven (Eds.), Persuasive technology (pp. 133–141). Springer Berlin Heidelberg.
  • Guadagno, R. E., Blascovich, J., Bailenson, J. N., & McCall, C. (2007, June). Virtual humans and persuasion: The effects of agency and behavioral realism. Media Psychology, 10(1), 1–22. https://doi.org/10.1080/15213260701300865
  • Hall, E. T. (1966). The hidden dimension. Doubleday.
  • Hartholt, A., Traum, D., Marsella, S. C., Shapiro, A., Stratou, G., Leuski, A., … Gratch, J. (2013). All together now: Introducing the virtual human toolkit. In R. Aylett, B. Krenn, C. Pelachaud, & H. Shimodaira (Eds.), Intelligent virtual agents (pp. 368–381). Springer.
  • Hartmann, B., Mancini, M., & Pelachaud, C. (2006). Implementing expressive gesture synthesis for embodied conversational agents. In S. Gibet, N. Courty, & J.-F. Kamp (Eds.), Proceedings of the 6th international conference on gesture in human-computer interaction and simulation (pp. 188–199). Springer Berlin Heidelberg. https://doi.org/10.1007/11678816_22
  • Hartmann, B., Mancini, M., Buisine, S., & Pelachaud, C. (2005). Design and evaluation of expressive gesture synthesis for embodied conversational agents. Proceedings of the Fourth International Joint Conference on Autonomous Agents and Multiagent Systems (pp. 1095–1096). New York, NY: ACM. https://doi.org/10.1145/1082473.1082640
  • Heidig, S., & Clarebout, G. (2011, January). Do pedagogical agents make a difference to student motivation and learning? Educational Research Review, 6(1), 27–54. https://doi.org/10.1016/j.edurev.2010.07.004
  • Heloir, A., & Kipp, M. (2009). EMBR – A realtime animation engine for interactive embodied agents. In Z. Ruttkay, M. Kipp, A. Nijholt, & H. H. Vilhjálmsson (Eds.), Intelligent virtual agents (pp. 393–404). Springer Berlin Heidelberg.
  • Heylen, D., Kopp, S., Marsella, S. C., Pelachaud, C., & Vilhjálmsson, H. (2008). The next step towards a function markup language. In H. Prendinger, J. Lester, & M. Ishizuka (Eds.), Intelligent virtual agents (pp. 270–280). Springer.
  • Hirsh, A. T., George, S. Z., & Robinson, M. E. (2009, May). Pain assessment and treatment disparities: A virtual human technology investigation. Pain, 143(1), 106–113. https://doi.org/10.1016/j.pain.2009.02.005
  • Hofstede, G., Hofstede, G. J., & Minkov, M. (2010). Cultures and organizations: Software of the mind (3rd ed.). McGraw-Hill Education.
  • Horstmann, G. (2003). What do facial expressions convey: Feeling states, behavioral intentions, or actions requests? Emotion, 3(2), 150–166. https://doi.org/10.1037/1528-3542.3.2.150
  • Huang, L., Morency, L.-P., & Gratch, J. (2011). Virtual rapport 2.0. In H. H. Vilhjálmsson, S. Kopp, S. Marsella, & K. R. Thórisson (Eds.), Intelligent virtual agents (pp. 68–79). Springer Berlin Heidelberg.
  • Huang, L., Morency, L.-P., & Gratch, J. (2010). Parasocial consensus sampling: Combining multiple perspectives to learn virtual human behavior. Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems: Volume 1 (pp. 1265–1272). International Foundation for Autonomous Agents and Multiagent Systems. http://dl.acm.org/citation.cfm?id=1838206.1838371
  • Isbister, K., Nakanishi, H., Ishida, T., & Nass, C. (2000). Helper agent: Designing an assistant for human-human interaction in a virtual meeting space. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 57–64). ACM. https://doi.org/10.1145/332040.332407
  • Jacob, C., Guéguen, N., Martin, A., & Boulbry, G. (2011, September). Retail salespeople’s mimicry of customers: Effects on consumer behavior. Journal of Retailing and Consumer Services, 18(5), 381–388. https://doi.org/10.1016/j.jretconser.2010.11.006
  • Janssoone, T., Clavel, C., Bailly, K., & Richard, G. (2016). Using temporal association rules for the synthesis of embodied conversational agents with a specific stance. In D. Traum, W. Swartout, P. Khooshabeh, S. Kopp, S. Scherer, & A. Leuski (Eds.), Intelligent virtual agents (pp. 175–189). Springer International Publishing.
  • Johnson, W. L., Rickel, J. W., & Lester, J. C. (2000). Animated pedagogical agents: Face-to-face interaction in interactive learning environments. International Journal of Artificial Intelligence in Education, 11(1), 47–78.
  • Jurafsky, D., & Martin, J. H. (2000). Speech and language processing: An introduction to natural language processing, computational linguistics, and speech recognition (1st ed.). Prentice Hall PTR.
  • Kang, S.-H., Gratch, J., Sidner, C., Artstein, R., Huang, L., & Morency, L.-P. (2012). Towards building a virtual counselor: Modeling nonverbal behavior during intimate self-disclosure. Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems - Volume 1 (pp. 63–70). International Foundation for Autonomous Agents and Multiagent Systems. http://dl.acm.org/citation.cfm?id=2343576.2343585
  • Kang, S.-H., Gratch, J., Wang, N., & Watt, J. H. (2008). Does the contingency of agents’ nonverbal feedback affect users’ social anxiety? Proceedings of the 7th International Joint Conference on Autonomous Agents and Multiagent Systems - Volume 1 (pp. 120–127). International Foundation for Autonomous Agents and Multiagent Systems. http://dl.acm.org/citation.cfm?id=1402383.1402405
  • Kendon, A. (1970, January). Movement coordination in social interaction: Some examples described. Acta Psychologica, 32(2), 101–125. https://doi.org/10.1016/0001-6918(70)90094-6
  • Kendon, A. (1988). How gestures can become like words. In Cross-cultural Perspectives in Nonverbal Communication (pp. 131–141). Hogrefe.
  • Kendon, A. (2004). Gesture: Visible action as utterance. Cambridge University Press.
  • Kenny, P., Hartholt, A., Gratch, J., Swartout, W., Traum, D., Marsella, S., & Piepol, D. (2007). Building interactive virtual humans for training environments. Proceedings of I/ITSEC (Vol. 174, pp. 911–916). NTSA.
  • Kipp, M. (2001). Anvil – A generic annotation tool for multimodal dialogue. Proceedings of the 7th European Conference on Speech Communication and Technology (Eurospeech) (pp. 1367–1370). ISCA.
  • Kipp, M., Heloir, A., Schröder, M., & Gebhard, P. (2010). Realizing multimodal behavior. In J. Allbeck, N. Badler, T. Bickmore, C. Pelachaud, & A. Safonova (Eds.), Intelligent virtual agents (pp. 57–63). Springer Berlin Heidelberg.
  • Kistler, F., Endrass, B., Damian, I., Dang, C. T., & André, E. (2012, July). Natural interaction with culturally adaptive virtual characters. Journal on Multimodal User Interfaces, 6(1), 39–47. https://doi.org/10.1007/s12193-011-0087-z
  • Kitchenham, B., Pearl Brereton, O., Budgen, D., Turner, M., Bailey, J., & Linkman, S. (2009, January). Systematic literature reviews in software engineering – A systematic literature review. Information and Software Technology, 51(1), 7–15. https://doi.org/10.1016/j.infsof.2008.09.009
  • Koda, T., & Maes, P. (1996, November). Agents with faces: The effect of personification. Proceedings of the 5th IEEE International Workshop on Robot and Human Communication (RO-MAN’96 Tsukuba) (pp. 189–194). IEEE.
  • Kopp, S., Krenn, B., Marsella, S., Marshall, A. N., Pelachaud, C., Pirker, H., … Vilhjálmsson, H. (2006). Towards a common framework for multimodal generation: The behavior markup language. In J. Gratch, M. Young, R. Aylett, D. Ballin, & P. Olivier (Eds.), Intelligent virtual agents (pp. 205–217). Springer Berlin Heidelberg.
  • Kopp, S., Tepper, P., & Cassell, J. (2004). Towards integrated microplanning of language and iconic gesture for multimodal output. In Proceedings of the 6th International Conference on Multimodal Interfaces (pp. 97–104). ACM. https://doi.org/10.1145/1027933.1027952
  • Kopp, S., & Wachsmuth, I. (2004). Synthesizing multimodal utterances for conversational agents. Computer Animation and Virtual Worlds, 15(1), 39–52. https://doi.org/10.1002/cav.6
  • Krämer, N., Kopp, S., Becker-Asano, C., & Sommer, N. (2013, March). Smile and the world will smile with you–the effects of a virtual agent’s smile on users’ evaluation and behavior. International Journal of Human-Computer Studies, 71(3), 335–349. https://doi.org/10.1016/j.ijhcs.2012.09.006
  • Krämer, N. C., Tietz, B., & Bente, G. (2003). Effects of embodied interface agents and their gestural activity. In T. Rist, R. S. Aylett, D. Ballin, & J. Rickel (Eds.), Intelligent virtual agents (pp. 292–300). Springer Berlin Heidelberg.
  • Krämer, N. C., Simons, N., & Kopp, S. (2007). The effects of an embodied conversational agent’s nonverbal behavior on user’s evaluation and behavioral mimicry. In C. Pelachaud, J.-C. Martin, E. André, G. Chollet, K. Karpouzis, & D. Pelé (Eds.), Intelligent virtual agents (pp. 238–251). Springer Berlin Heidelberg.
  • Krämer, N. C., & Bente, G. (2010, March). Personalizing e-learning. The social effects of pedagogical agents. Educational Psychology Review, 22(1), 71–87. https://doi.org/10.1007/s10648-010-9123-x
  • Kranstedt, A., Kopp, S., & Wachsmuth, I. (2002). MURML: A multimodal utterance representation markup language for conversational agents. AAMAS’02 Workshop Embodied Conversational Agents - Let’s Specify and Evaluate Them!. ACM. https://pub.uni-bielefeld.de/publication/1857788
  • Kraut, R. E., Fussell, S. R., & Siegel, J. (2003, June). Visual information as a conversational resource in collaborative physical tasks. Human–Computer Interaction, 18(1), 13–49. https://doi.org/10.1207/S15327051HCI1812_2
  • LaFrance, M. (1985, June). Postural mirroring and intergroup relations. Personality & Social Psychology Bulletin, 11(2), 207–217. https://doi.org/10.1177/0146167285112008
  • LaFrance, M., & Broadbent, M. (1976, September). Group rapport: Posture sharing as a nonverbal indicator. Group & Organization Studies, 1(3), 328–333. https://doi.org/10.1177/105960117600100307
  • Lakin, J. L., & Chartrand, T. L. (2003, July). Using nonconscious behavioral mimicry to create affiliation and rapport. Psychological Science, 14(4), 334–339. https://doi.org/10.1111/1467-9280.14481
  • Lance, B., & Marsella, S. (2009, May). Glances, glares, and glowering: How should a virtual human express emotion through gaze? Autonomous Agents and Multi-Agent Systems, 20(1), 50. https://doi.org/10.1007/s10458-009-9097-6
  • Lance, B., & Marsella, S. (2010, July). The expressive gaze model: Using gaze to express emotion. IEEE Computer Graphics and Applications, 30(4), 62–73. https://doi.org/10.1109/MCG.2010.43
  • Lance, B. J., & Marsella, S. C. (2008). A model of gaze for the purpose of emotional expression in virtual embodied agents. Proceedings of the 7th International Joint Conference on Autonomous Agents and Multiagent Systems - Volume 1 (pp. 199–206). International Foundation for Autonomous Agents and Multiagent Systems. http://dl.acm.org/citation.cfm?id=1402383.1402415
  • Lee, J., & Marsella, S. (2006). Nonverbal behavior generator for embodied conversational agents. In J. Gratch, M. Young, R. Aylett, D. Ballin, & P. Olivier (Eds.), Intelligent virtual agents (pp. 243–255). Springer Berlin Heidelberg.
  • Lee, J., Marsella, S., Traum, D., Gratch, J., & Lance, B. (2007). The Rickel gaze model: A window on the mind of a virtual human. In C. Pelachaud, J.-C. Martin, E. André, G. Chollet, K. Karpouzis, & D. Pelé (Eds.), Intelligent virtual agents (pp. 296–303). Springer Berlin Heidelberg.
  • Mancini, M., & Pelachaud, C. (2008). The FML-APML language. Proceedings of the Workshop on FML at AAMAS (Vol. 8).
  • Matsuyama, Y., Bhardwaj, A., Zhao, R., Romeo, O., Akoju, S., & Cassell, J. (2016). Socially-aware animated intelligent personal assistant agent. In Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue (pp. 224–227). Association for Computational Linguistics. http://aclweb.org/anthology/W16-3628
  • Mayer, R. E., & DaPra, C. S. (2012). An embodiment effect in computer-based learning with animated pedagogical agents. Journal of Experimental Psychology. Applied, 18(3), 239–252. https://doi.org/10.1037/a0028616
  • McDuff, D., Mahmoud, A., Mavadati, M., Amr, M., Turcot, J., & Kaliouby, R. E. (2016). AFFDEX SDK: A cross-platform real-time multi-face expression recognition toolkit. Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems (pp. 3723–3726). New York, NY: Association for Computing Machinery. https://doi.org/10.1145/2851581.2890247
  • McNeill, D. (1992). Hand and mind: What gestures reveal about thought. University of Chicago Press.
  • McRorie, M., Sneddon, I., McKeown, G., Bevacqua, E., De Sevin, E., & Pelachaud, C. (2012, July). Evaluation of four designed virtual agent personalities. IEEE Transactions on Affective Computing, 3(3), 311–322. https://doi.org/10.1109/T-AFFC.2011.38
  • Mehrabian, A. (1969). Significance of posture and position in the communication of attitude and status relationships. Psychological Bulletin, 71(5), 359–372. https://doi.org/10.1037/h0027349
  • Mehrabian, A. (1972). Nonverbal communication. Transaction Publishers.
  • Morency, L.-P., Christoudias, C. M., & Darrell, T. (2006). Recognizing gaze aversion gestures in embodied conversational discourse. Proceedings of the 8th International Conference on Multimodal Interfaces (pp. 287–294). ACM. https://doi.org/10.1145/1180995.1181051
  • Neff, M., Wang, Y., Abbott, R., & Walker, M. (2010). Evaluating the effect of gesture and language on personality perception in conversational agents. In J. Allbeck, N. Badler, T. Bickmore, C. Pelachaud, & A. Safonova (Eds.), Intelligent virtual agents (pp. 222–235). Springer Berlin Heidelberg.
  • Nguyen, H., & Masthoff, J. (2009). Designing empathic computers: The effect of multimodal empathic feedback using animated agent. In Proceedings of the 4th international conference on persuasive technology (pp. 7:1–7:9). ACM. https://doi.org/10.1145/1541948.1541958
  • Niewiadomski, R., Ochs, M., & Pelachaud, C. (2008). Expressions of empathy in ECAs. In H. Prendinger, J. Lester, & M. Ishizuka (Eds.), Intelligent virtual agents (pp. 37–44). Springer Berlin Heidelberg.
  • Niewiadomski, R., Demeure, V., & Pelachaud, C. (2010). Warmth, competence, believability and virtual agents. In J. Allbeck, N. Badler, T. Bickmore, C. Pelachaud, & A. Safonova (Eds.), Intelligent virtual agents (pp. 272–285). Springer.
  • Niewiadomski, R., Bevacqua, E., Mancini, M., & Pelachaud, C. (2009, May). Greta: An interactive expressive ECA system. In Proceedings of the 8th International Conference on Autonomous Agents and Multiagent Systems - Volume 2 (pp. 1399–1400). International Foundation for Autonomous Agents and Multiagent Systems.
  • Nijholt, A. (2004, August). Where computers disappear, virtual humans appear. Computers & Graphics, 28(4), 467–476. https://doi.org/10.1016/j.cag.2004.04.002
  • Noma, T., Zhao, L., & Badler, N. I. (2000, July). Design of a virtual human presenter. IEEE Computer Graphics and Applications, 20(4), 79–85. https://doi.org/10.1109/38.851755
  • Novack, M., & Goldin-Meadow, S. (2015, September). Learning from gesture: How our hands change our minds. Educational Psychology Review, 27(3), 405–412. https://doi.org/10.1007/s10648-015-9325-3
  • Nowak, K. L., & Biocca, F. (2003, October). The effect of the agency and anthropomorphism on users’ sense of telepresence, copresence, and social presence in virtual environments. Presence: Teleoperators and Virtual Environments, 12(5), 481–494. https://doi.org/10.1162/105474603322761289
  • Ochs, M., Libermann, N., Boidin, A., & Chaminade, T. (2017). Do you speak to a human or a virtual agent? Automatic analysis of user’s social cues during mediated communication. In Proceedings of the 19th ACM International Conference on Multimodal Interaction (pp. 197–205). ACM. https://doi.org/10.1145/3136755.3136807
  • Ochs, M., Pelachaud, C., & McKeown, G. (2017, January). A user perception–based approach to create smiling embodied conversational agents. ACM Transactions on Interactive Intelligent Systems, 7(1), 4:1–4:33. https://doi.org/10.1145/2925993
  • Parise, S., Kiesler, S., Sproull, L., & Waters, K. (1999). Cooperating with life-like interface agents. Computers in Human Behavior, 15(2), 123–142. https://doi.org/10.1016/S0747-5632(98)00035-1
  • Patterson, M. L., & Sechrest, L. B. (1970). Interpersonal distance and impression formation. Journal of Personality, 38(2), 161–166. https://doi.org/10.1111/j.1467-6494.1970.tb00001.x
  • Pejsa, T., Gleicher, M., & Mutlu, B. (2017). Who, me? How virtual agents can shape conversational footing in virtual reality. In J. Beskow, C. Peters, G. Castellano, C. O’Sullivan, I. Leite, & S. Kopp (Eds.), Intelligent virtual agents (pp. 347–359). Springer International Publishing.
  • Pejsa, T., Andrist, S., Gleicher, M., & Mutlu, B. (2015, March). Gaze and attention management for embodied conversational agents. ACM Transactions on Interactive Intelligent Systems, 5(1), 3:1–3:34. https://doi.org/10.1145/2724731
  • Pelachaud, C. (2005). Multimodal expressive embodied conversational agents. In Proceedings of the 13th Annual ACM International Conference on Multimedia (pp. 683–689). ACM. https://doi.org/10.1145/1101149.1101301
  • Pelachaud, C. (2009a, December). Modelling multimodal expression of emotion in a virtual agent. Philosophical Transactions of the Royal Society B: Biological Sciences, 364(1535), 3539–3548. https://doi.org/10.1098/rstb.2009.0186
  • Pelachaud, C. (2009b, July). Studies on gesture expressivity for a virtual agent. Speech Communication, 51(7), 630–639. https://doi.org/10.1016/j.specom.2008.04.009
  • Pelachaud, C. (2017). Greta: A conversing socio-emotional agent. Proceedings of the 1st ACM SIGCHI International Workshop on Investigating Social Interactions with Artificial Agents (pp. 9–10). ACM. https://doi.org/10.1145/3139491.3139902
  • Pelachaud, C., Carofiglio, V., De Carolis, B., De Rosis, F., & Poggi, I. (2002). Embodied contextual agent in information delivering application. In Proceedings of the First International Joint Conference on Autonomous Agents and Multiagent Systems: Part 2 (pp. 758–765). ACM. https://doi.org/10.1145/544862.544921
  • Pelachaud, C., & Poggi, I. (2002, December). Subtleties of facial expressions in embodied agents. The Journal of Visualization and Computer Animation, 13(5), 301–312. https://doi.org/10.1002/vis.299
  • Poggi, I., Pelachaud, C., De Rosis, F., Carofiglio, V., & De Carolis, B. (2005). Greta: A believable embodied conversational agent. In O. Stock & M. Zancanaro (Eds.), Multimodal intelligent information presentation (pp. 3–25). Springer Netherlands. https://doi.org/10.1007/1-4020-3051-7_1
  • Poggi, I., Pelachaud, C., & De Rosis, F. (2000, November). Eye communication in a conversational 3D synthetic agent. AI Communications, 13(3), 169–181.
  • Quek, F., McNeill, D., Bryll, R., Duncan, S., Ma, X.-F., Kirbas, C., McCullough, K. E., & Ansari, R. (2002, September). Multimodal human discourse: Gesture and speech. ACM Transactions on Computer-Human Interaction, 9(3), 171–193. https://doi.org/10.1145/568513.568514
  • Ravenet, B., Ochs, M., & Pelachaud, C. (2013). From a user-created corpus of virtual agent’s non-verbal behavior to a computational model of interpersonal attitudes. In R. Aylett, B. Krenn, C. Pelachaud, & H. Shimodaira (Eds.), Intelligent virtual agents (pp. 263–274). Springer Berlin Heidelberg.
  • Ravenet, B., Clavel, C., & Pelachaud, C. (2018). Automatic nonverbal behavior generation from image schemas. In Proceedings of the 17th International Conference on Autonomous Agents and Multiagent Systems (pp. 1667–1674). International Foundation for Autonomous Agents and Multiagent Systems. http://dl.acm.org/citation.cfm?id=3237383.3237947
  • Reeves, B., & Nass, C. I. (1996). The media equation: How people treat computers, television, and new media like real people and places. Cambridge University Press.
  • Rehm, M., & André, E. (2008). From annotated multimodal corpora to simulated human-like behaviors. In I. Wachsmuth & G. Knoblich (Eds.), Modeling communication with robots and virtual humans (pp. 1–17). Springer Berlin Heidelberg.
  • Rehm, M., André, E., Bee, N., Endrass, B., Wissner, M., Nakano, Y., … Huang, H. (2007). The CUBE-G approach – Coaching culture-specific nonverbal behavior by virtual agents. Organizing and Learning through Gaming and Simulation: Proceedings of ISAGA (p. 313).
  • Rickel, J., & Johnson, W. L. (1999, May). Animated agents for procedural training in virtual reality: Perception, cognition, and motor control. Applied Artificial Intelligence, 13(4–5), 343–382. https://doi.org/10.1080/088395199117315
  • Rickel, J., & Johnson, W. L. (2000). Task-oriented collaboration with embodied agents in virtual worlds. In Embodied Conversational Agents (p. 29). MIT Press.
  • De Rosis, F., Pelachaud, C., Poggi, I., Carofiglio, V., & De Carolis, B. (2003, July). From Greta’s mind to her face: Modelling the dynamics of affective states in a conversational embodied agent. International Journal of Human-Computer Studies, 59(1), 81–118. https://doi.org/10.1016/S1071-5819(03)00020-X
  • Salem, B., & Earle, N. (2000). Designing a non-verbal language for expressive avatars. In Proceedings of the Third International Conference on Collaborative Virtual Environments (pp. 93–101). ACM. https://doi.org/10.1145/351006.351019
  • Scherer, S., Marsella, S., Stratou, G., Xu, Y., Morbini, F., Egan, A., … Morency, L.-P. (2012). Perception markup language: Towards a standardized representation of perceived nonverbal behaviors. In Y. Nakano, M. Neff, A. Paiva, & M. Walker (Eds.), Intelligent virtual agents (pp. 455–463). Springer Berlin Heidelberg.
  • Schröder, M. (2010, January). The SEMAINE API: Towards a standards-based framework for building emotion-oriented systems. Advances in Human-Computer Interaction, 2010(2), 1. https://doi.org/10.1155/2010/319406
  • Schröder, M., Baggia, P., Burkhardt, F., Pelachaud, C., Peter, C., & Zovato, E. (2011). EmotionML – An upcoming standard for representing emotions and related states. In S. D’Mello, A. Graesser, B. Schuller, & J.-C. Martin (Eds.), Affective computing and intelligent interaction (pp. 316–325). Springer.
  • Short, J., Williams, E., & Christie, B. (1976). The social psychology of telecommunications. Wiley.
  • Sproull, L., Subramani, M., Kiesler, S., Walker, J. H., & Waters, K. (1996, June). When the interface is a face. Human–Computer Interaction, 11(2), 97–124. https://doi.org/10.1207/s15327051hci1102_1
  • Stevens, C. J., Pinchbeck, B., Lewis, T., Luerssen, M., Pfitzner, D., Powers, D. M. W., Abrahamyan, A., Leung, Y., & Gibert, G. (2016, June). Mimicry and expressiveness of an ECA in human-agent interaction: Familiarity breeds content! Computational Cognitive Science, 2(1), 1. https://doi.org/10.1186/s40469-016-0008-2
  • Terven, J. R., Raducanu, B., Meza-de Luna, M. E., & Salas, J. (2016, January). Head-gestures mirroring detection in dyadic social interactions with computer vision-based wearable devices. Neurocomputing, 175(2), 866–876. https://doi.org/10.1016/j.neucom.2015.05.131
  • Theune, M. (2001, December). ANGELICA: Choice of output modality in an embodied agent. In International Workshop on Information Presentation and Natural Multimodal Dialogue (IPNMD-2001) (pp. 89–93). ITC-IRST.
  • Thórisson, K. R. (1997). Gandalf: An embodied humanoid capable of real-time multimodal dialogue with people. Proceedings of the First International Conference on Autonomous Agents (pp. 536–537). ACM.
  • Tickle-Degnen, L., & Rosenthal, R. (1990, October). The nature of rapport and its nonverbal correlates. Psychological Inquiry, 1(4), 285–293. https://doi.org/10.1207/s15327965pli0104_1
  • Traum, D., Marsella, S. C., Gratch, J., Lee, J., & Hartholt, A. (2008). Multi-party, multi-issue, multi-strategy negotiation for multi-modal virtual agents. In H. Prendinger, J. Lester, & M. Ishizuka (Eds.), Intelligent virtual agents (pp. 117–130). Springer Berlin Heidelberg.
  • Van Baaren, R. B., Holland, R. W., Kawakami, K., & Van Knippenberg, A. (2004, January). Mimicry and prosocial behavior. Psychological Science, 15(1), 71–74. https://doi.org/10.1111/j.0963-7214.2004.01501012.x
  • Van Swol, L. M. (2003, August). The effects of nonverbal mirroring on perceived persuasiveness, agreement with an imitator, and reciprocity in a group discussion. Communication Research, 30(4), 461–480. https://doi.org/10.1177/0093650203253318
  • Verberne, F. M. F., Ham, J., Ponnada, A., & Midden, C. J. H. (2013). Trusting digital chameleons: The effect of mimicry by a virtual social agent on user trust. In S. Berkovsky & J. Freyne (Eds.), Persuasive technology (pp. 234–245). Springer Berlin Heidelberg.
  • Wagner, D., Billinghurst, M., & Schmalstieg, D. (2006). How real should virtual characters be? In Proceedings of the 2006 ACM SIGCHI International Conference on Advances in Computer Entertainment Technology. ACM. https://doi.org/10.1145/1178823.1178891
  • Wagner, H. L., MacDonald, C. J., & Manstead, A. S. (1986, September). Communication of individual emotions by spontaneous facial expressions. Journal of Personality and Social Psychology, 50(4), 737. https://doi.org/10.1037/0022-3514.50.4.737
  • Walker, J. H., Sproull, L., & Subramani, R. (1994, April). Using a human face in an interface. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 85–91). Association for Computing Machinery. https://doi.org/10.1145/191666.191708
  • Wallbott, H. G. (1998). Bodily expression of emotion. European Journal of Social Psychology, 28(6), 879–896. https://doi.org/10.1002/(SICI)1099-0992(1998110)28:6<879::AID-EJSP901>3.0.CO;2-W
  • Wang, C., Biancardi, B., Mancini, M., Cafaro, A., Pelachaud, C., Pun, T., & Chanel, G. (2020). Impression detection and management using an embodied conversational agent. In M. Kurosu (Ed.), Human-computer interaction: Multimodal and natural interaction (pp. 260–278). Springer International Publishing.
  • Wang, I., Narayana, P., Smith, J., Draper, B., Beveridge, R., & Ruiz, J. (2018). EASEL: Easy automatic segmentation event labeler. In 23rd International Conference on Intelligent User Interfaces (pp. 595–599). ACM. https://doi.org/10.1145/3172944.3173003
  • Wang, N., & Gratch, J. (2009, July). Can virtual human build rapport and promote learning? In Proceedings of the 2009 Conference on Artificial Intelligence In Education: Building Learning Systems that Care: From Knowledge Representation to Affective Modelling (pp. 737–739). IOS Press.
  • Willis, F. N. (1966, June). Initial speaking distance as a function of the speakers’ relationship. Psychonomic Science, 5(6), 221–222. https://doi.org/10.3758/BF03328362
  • Wittenburg, P., Brugman, H., Russel, A., Klassmann, A., & Sloetjes, H. (2006). ELAN: A professional framework for multimodality research. In Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC 2006) (pp. 1556–1559). http://tla.mpi.nl/tools/tla-tools/elan/
  • Young, R. F., & Lee, J. (2004, August). Identifying units in interaction: Reactive tokens in Korean and English conversations. Journal of Sociolinguistics, 8(3), 380–407. https://doi.org/10.1111/j.1467-9841.2004.00266.x