Editorial

Learning with virtual humans: Introduction to the special issue


Abstract

Virtual humans are embodied agents with a human-like appearance. In educational contexts, virtual humans are often designed to help people learn. This special issue presents six articles that represent the field from different perspectives, including system analysis and evaluation, value-added studies, and reviews of the literature. This diversity of research exemplifies the exciting field of virtual humans. In this introduction to the special issue, we discuss virtual humans in educational contexts, highlight the articles in the issue, and outline productive directions for future research.

Innovations in educational technology continue to help instructors find ways to improve the online learning experience. Virtual humans are one such innovation, with over a quarter century of research that continues to open new and exciting areas. Over the years, virtual humans have gone by several names in the research literature depending on how they are used, such as avatars for human-controlled virtual humans and agents (including conversational, embodied, or pedagogical agents) when controlled by a computer system (Craig & Schroeder, 2018). Two studies in this special issue (Bondie et al., 2021; Howell & Mikeska, 2021) discuss the use of avatars within teacher training. It has been argued that avatars can be very important for learning by facilitating collaboration within virtual environments (Bailenson, 2008) and adding social presence (Hodge et al., 2008; Lowenthal, 2010). The other four articles in this special issue focus on the use of agents. All of the articles in this special issue focus on virtual humans used within educational environments, which have traditionally been referred to broadly as pedagogical agents.

Pedagogical agents are embodied representations of software agents that appear within the user interface and are designed to help people learn (Moreno, 2005). Since their inception, pedagogical agents have been researched in a variety of learning environments, where they help learners through a variety of instructional roles. Early studies illustrate the variance in how pedagogical agents have been used. For example, Johnson and Rickel (1997) presented a virtual human named Steve that could teach procedural tasks, and Lester et al. (1997) presented Herman the Bug, who delivered different types of feedback and guidance. Through the years, one of the essential questions around the use of pedagogical agents has been how they should appear to the learner, or what type of physical embodiment they should have. While pedagogical agent researchers conducted many studies with non-human-like characters (e.g. Atkinson, 2002; Moreno et al., 2001), this trend has subsided, and virtual humans now feature in recent work across many disciplines.

Virtual humans can be more than simple human-like representations on a computer screen, and there are many ways in which virtual humans and other pedagogical agents can aid learning. For example, Clarebout et al. (2002) proposed that they could play six different roles in the learning environment: demonstrating, scaffolding, modeling, testing, supplanting, or coaching. Virtual humans have been implemented in a variety of learning environments, spanning narrated videos (Davis et al., 2019; Schroeder et al., 2021), intelligent tutoring systems (Graesser, 2016, 2017; Johnson et al., 2017), and educational games (Rowe et al., 2009), to name a few. As creating virtual humans and the environments they appear in becomes more approachable for those without programming and animation expertise, it seems plausible that the number of potential use cases will continue to increase. This is demonstrated by the increased variance in virtual human use outside of strictly education and learning domains, such as medical and mental health care. Research surrounding virtual humans has included topics such as older adult self-care (Morrow et al., 2020), health behavior change (Olafsson et al., 2020), genetic risk communication (Zhou & Bickmore, 2019), and virtual patient interactions (Hirumi et al., 2016).

Given the broad range of virtual human research in the literature, it is natural to question their efficacy for facilitating learning. While specific, synthesized insights about virtual human efficacy are challenging to find in the literature, within the last decade work in the broader field of pedagogical agents has been systematically synthesized. Heidig and Clarebout's (2011) systematic review found that, in general, no significant differences emerged in either learning or motivational outcomes between pedagogical agent and non-agent conditions. This is notable, as pedagogical agents can be time-consuming and potentially expensive to create. Meanwhile, Schroeder et al.'s (2013) meta-analysis found that pedagogical agents produced a small, positive effect on learning outcomes. However, this effect was significantly moderated by features of the research, such as the age of the participants. We view the conclusions of Heidig and Clarebout and Schroeder et al. as largely complementary: for pedagogical agents to be effective, we must design them thoughtfully and implement them where their strengths can be highlighted and their weaknesses minimized (Craig & Schroeder, 2018). Yet research is still needed to understand what these strengths are and when other interventions may be more effective.

This special issue

The pace at which virtual humans are being implemented in learning technologies is rapidly increasing, and virtual humans may be more commonly implemented than ever before. While this infusion of technology is exciting, there is a lack of research to support the wide implementation of virtual humans. This special issue focuses specifically on the implementation and design of virtual humans, rather than other types of pedagogical agents, and identifies the limitations of current work. We are pleased to include the following six articles, of which two are system analyses and evaluations, two are value-added research studies, and two are reviews of the existing literature.

System analysis and evaluations

A critical step in developing any educational technology is the analysis and evaluation of the system. Two studies in this special issue examine different approaches to analyzing and evaluating a virtual human-based system.

Virtual humans have successfully functioned as conversational partners for many years (Graesser et al., 2017; Johnson & Lester, 2016), and recent work has started to examine the impact of dialogues with multiple virtual humans (see Millis et al., 2018). Howell and Mikeska (2021) provide an example of how this can work with the Go Discuss project, which provided teachers with a virtual classroom in which they had to teach and monitor virtual students. The authors use this example to explore the role of environmental authenticity when teaching with virtual humans, investigating what participants understand about the goals and mechanisms of the simulation and when the authenticity of the virtual humans matters. Their article specifically highlights the need for greater attention to representation, decomposition, and approximations of practice. This paper poses interesting new directions for virtual human research, highlighting the need to better understand learners' individual differences in order to fully understand learning with virtual humans.

In their evaluation of MentorPal, Nye et al. (2021) provide an example of virtual mentors, one type of conversational virtual human. This paper demonstrates usability evaluation during a rapid development process. This type of evaluation is unfortunately underutilized in the intelligent tutoring system and conversational agent communities (Chughtai et al., 2015), yet it has great potential for establishing a system's feasibility and for finding implementation problems that will impact adoption (Roscoe et al., 2018). Their evaluation indicated an overall usable and effective system, but it also surfaced a drawback that could impact later adoption and that can be addressed during the rapid development process.

Value-added research

While analyzing and evaluating virtual human-based educational technologies is a critical step, in many cases these systems are used to answer research questions that can further inform the design of the technology. Mayer (2019) refers to these types of studies, which compare the same system with different features, as value-added studies. Two studies in this special issue present value-added research.

Much of the research literature around interactive virtual humans falls within the domain of intelligent tutoring systems. AutoTutor is a long-standing intelligent tutoring system that can communicate with learners through natural dialogue (Li & Graesser, 2021). The design of such interactive systems matters for both research and practice, and one aspect of virtual human design is how the agent communicates with the learner. Li and Graesser (2021) explored the impact of AutoTutor's communication style during a three-hour interaction with learners using either a formal, an informal, or a mixed language style. Their article highlights the complexity of language and various ways to measure learners' writing quality.

From Betty’s Brain (Biswas et al., 2005; Roscoe et al., 2013) to SimStudent (Matsuda et al., 2013, 2018), teachable agents have a long history within the virtual human and learning literature. Silvervarg et al. (2021) revisited this area to investigate how these agents can mitigate critical constructive feedback. Their study indicated that when feedback is mitigated through teachable agents, there is a positive effect on students' responses. According to their findings, teachable agents decrease the direct negative impact of feedback, which leads learners to be more responsive to it, increasing posttest performance.

Reviews of the literature

Finally, it is critical to conduct reviews of the literature in order to synthesize what the results of individual studies collectively mean for the field. Two studies in this special issue present reviews that can inform research and practice.

Historically, how a learner perceives a virtual human has been viewed as an important question, and the field has long investigated the so-called persona effect (Lester et al., 1997). One way of measuring the persona of a virtual human is the Agent Persona Instrument (API; Ryu & Baylor, 2005). Researchers have continued to use the API as a measure of agent persona and have recently begun investigating whether the API is related to learning outcomes (Schroeder et al., 2017, 2018). Davis et al. (2021) systematically reviewed the literature to examine whether ratings of agent persona, as measured by the API, were associated with measures of learning. They did not find compelling evidence that persona effects are related to significant differences in learning outcomes; however, they specifically noted the importance of considering context in virtual human research. In particular, they distinguish claims from interactive systems from those based on one-way information delivery (e.g. the virtual human communicates with the learner, but the learner does not communicate with the virtual human).

Virtual humans have been used within teacher training simulations to provide guided practice and skill-building (Bradley & Kendall, 2014). Bondie et al. (2021) conducted a literature review to identify best practices for designing teacher training simulations. While the body of reviewed literature was modest, the authors identified five important design considerations that should be reported for teacher training simulations with virtual humans. First, the learning design should explicitly describe all elements within the simulation. Second, avatar and interactor learning should be investigated by analyzing the learning that avatars demonstrate through their responses. Third, the simulation will only be as effective as the interactor training, so detailed descriptions of the training used must be provided. Fourth, time and distance can be suspended, compressed, and extended within the learning session, and these features should be specifically addressed. Finally, complex practices occur in a sociocultural context, so learner characteristics such as language, race, culture, and power must be explicitly considered.

The future is bright for virtual human research

Each year tends to bring new technologies that make creating virtual humans more approachable or that improve the programmer’s ability to create a lifelike virtual human. As software engines (e.g. Unity (Unity Technologies, 2020), Unreal Engine 4 (Epic Games Inc., 2020)) and animation databases (e.g. Mixamo (Adobe Systems Incorporated, 2020)) continue to improve and expand, we anticipate that the use of virtual humans will continue to increase, as will research focusing on and incorporating virtual humans.

We note that while the field is moving forward at an exciting pace, it has long-standing limitations that still need to be addressed, such as a lack of strong measurement techniques, underreporting of cost-effectiveness, and a lack of longitudinal studies (Clark & Choi, 2005; Schroeder & Gotch, 2015). Longitudinal studies are particularly warranted to better understand how learners interact with virtual humans over time (Veletsianos & Russell, 2014), especially as virtual humans are adopted into learning systems designed for more than a single use. Additionally, we encourage researchers to leverage previous work in the area and conduct systematic lines of research rather than ‘one-shot’ studies that bear little relation to existing research, and to document their research and development methodologies well. With regard to the design and development of virtual humans, researchers may consider designing with respect to established design frameworks, such as those presented by Domagk (2010) and Heidig and Clarebout (2011). Only through well-documented, systematic lines of research will the field move forward to answer the question of the situations in which virtual humans may be the most effective intervention.

The diversity of the articles in this special issue highlights various research streams in the field of virtual humans. The empirical work spans much of the lifecycle of developing an educational technology, from usability testing (Nye et al., 2021) through longer-term interventions (Li & Graesser, 2021) and classroom-based implementations (Silvervarg et al., 2021). Similarly, the review articles in this special issue highlight the maturation of a field, from novel ways of thinking about existing systems (Howell & Mikeska, 2021) through analyses of extant empirical findings that guide future work (Bondie et al., 2021; Davis et al., 2021). Together, these studies provide insights into the design of virtual human-based systems.

The studies in this special issue also stand in contrast to much of the work in the field of pedagogical agents. While some early studies in the field revolved around interactive agents that would respond to student input (e.g. Johnson et al., 2000; Moreno et al., 2001), there are many examples of non-interactive systems in which the agent primarily lectures to the learner (e.g. Baylor, 2005; Craig et al., 2002; Moreno & Flowerday, 2006; Veletsianos, 2010). The articles in this special issue show how the field of virtual humans is transitioning away from non-interactive agents that simply deliver content and toward virtual humans that can respond to learner input and interact appropriately. It seems plausible that these lines of work around interactive virtual humans will grow with time, given the effectiveness of intelligent tutoring systems (see Kulik & Fletcher, 2016; Ma et al., 2014; Steenbergen-Hu & Cooper, 2014) and other conversational agent systems (Graesser & McNamara, 2010; Graesser et al., 2017). The types of research highlighted in this special issue (system analysis and evaluation, value-added research, and reviews of the literature) will be essential in continuing to move the field of interactive virtual humans forward.

References

  • Adobe Systems Incorporated. (2020). Mixamo. https://www.mixamo.com/
  • Atkinson, R. K. (2002). Optimizing learning from examples using animated pedagogical agents. Journal of Educational Psychology, 94(2), 416–427. https://doi.org/10.1037/0022-0663.94.2.416
  • Bailenson, J. (2008). Why digital avatars make the best teachers. The Chronicle of Higher Education, 54(30), B27. https://www.chronicle.com/article/why-digital-avatars-make-the-best-teachers/
  • Baylor, A. L. (2005, January). The impact of pedagogical agent image on affective outcomes. In International conference on intelligent user interfaces (p. 29).
  • Biswas, G., Leelawong, K., Schwartz, D., & Vye, N. (2005). Learning by teaching: A new agent paradigm for educational software. Applied Artificial Intelligence, 19(3–4), 363–392.
  • Bondie, R., Mancenido, Z., & Dede, C. (2021). Interaction principles for digital puppeteering to promote teacher learning. Journal of Research on Technology in Education. https://doi.org/10.1080/15391523.2020.1823284
  • Bradley, E. G., & Kendall, B. (2014). A review of computer simulations in teacher education. Journal of Educational Technology Systems, 43(1), 3–12. https://doi.org/10.2190/ET.43.1.b
  • Chughtai, R., Zhang, S., Craig, S. D. (2015, September). Usability evaluation of intelligent tutoring system: ITS from a usability perspective. In Proceedings of the human factors and ergonomics society annual meeting (Vol. 59, No. 1, pp. 367–371). SAGE Publications. https://doi.org/10.1177/1541931215591076
  • Clarebout, G., Elen, J., Johnson, W. L., & Shaw, E. (2002). Animated pedagogical agents: An opportunity to be grasped? Journal of Educational Multimedia and Hypermedia, 11(3), 267–286.
  • Clark, R. E., & Choi, S. (2005). Five design principles for experiments on the effects of animated pedagogical agents. Journal of Educational Computing Research, 32(3), 209–225. https://doi.org/10.2190/7LRM-3BR2-44GW-9QQY
  • Craig, S. D., Gholson, B., & Driscoll, D. M. (2002). Animated pedagogical agents in multimedia educational environments: Effects of agent properties, picture features and redundancy. Journal of Educational Psychology, 94(2), 428–434. https://doi.org/10.1037/0022-0663.94.2.428
  • Craig, S. D., & Schroeder, N. L. (2018). Design principles for virtual humans in educational technology environments. In K. Millis, D. Long, J. Magliano, & K. Wiemer (Eds.), Deep learning: Multi-disciplinary approaches (pp. 128–139). Routledge/Taylor Francis.
  • Davis, R. O., Park, T., & Vincent, J. (2021). A systematic narrative review of agent persona on learning outcomes and design variables to increase personification. Journal of Research on Technology in Education. https://doi.org/10.1080/15391523.2020.1830894
  • Davis, R. O., Vincent, J., & Park, T. (2019). Reconsidering the voice principle with non-native language speakers. Computers & Education, 140, 103605.
  • Domagk, S. (2010). Do pedagogical agents facilitate learner motivation and learning outcomes?: The role of the appeal of agent’s appearance and voice. Journal of Media Psychology: Theories, Methods, and Applications, 22(2), 84–97. https://doi.org/10.1027/1864-1105/a000011
  • Epic Games Inc. (2020). Unreal engine. https://www.unrealengine.com/en-US/
  • Graesser, A. C. (2016). Conversations with AutoTutor help students learn. International Journal of Artificial Intelligence in Education, 26(1), 124–132. https://doi.org/10.1007/s40593-015-0086-4
  • Graesser, A. C., Lippert, A. M., & Hampton, A. J. (2017). Successes and failures in building learning environments to promote deep learning: The value of conversational agents. In Informational environments (pp. 273–298). Springer.
  • Graesser, A., & McNamara, D. (2010). Self-regulated learning in learning environments with pedagogical agents that interact in natural language. Educational Psychologist, 45(4), 234–244. https://doi.org/10.1080/00461520.2010.515933
  • Heidig, S., & Clarebout, G. (2011). Do pedagogical agents make a difference to student motivation and learning? Educational Research Review, 6(1), 27–54. https://doi.org/10.1016/j.edurev.2010.07.004
  • Hirumi, A., Kleinsmith, A., Johnsen, K., Kubovec, S., Eakins, M., Bogert, K., Rivera-Gutierrez, D. J., Reyes, R. J., Lok, B., & Cendan, J. (2016). Advancing virtual patient simulations through design research and interPLAY: Part I: Design and development. Educational Technology Research and Development, 64(4), 763–785. https://doi.org/10.1007/s11423-016-9429-6
  • Hodge, E. M., Tabrizi, M. H. N., Farwell, M. A., & Wuensch, K. L. (2008). Virtual reality classrooms: Strategies for creating a social presence. International Journal of Social Sciences, 2(2), 105–109.
  • Howell, H., & Mikeska, J. N. (2021). Approximations of practice as a framework for understanding authenticity in simulations of teaching. Journal of Research on Technology in Education. https://doi.org/10.1080/15391523.2020.1809033
  • Johnson, A. M., Guerrero, T. A., Tighe, E. L., & McNamara, D. S. (2017, June). iSTART-ALL: Confronting adult low literacy with intelligent tutoring for reading comprehension. In International conference on artificial intelligence in education (pp. 125–136). Springer.
  • Johnson, W. L., & Lester, J. C. (2016). Face-to-face interaction with pedagogical agents, twenty years later. International Journal of Artificial Intelligence in Education, 26(1), 25–36. https://doi.org/10.1007/s40593-015-0065-9
  • Johnson, W. L., & Rickel, J. (1997). Steve: An animated pedagogical agent for procedural training in virtual environments. ACM SIGART Bulletin, 8(1–4), 16–21. https://doi.org/10.1145/272874.272877
  • Johnson, W. L., Rickel, J. W., & Lester, J. C. (2000). Animated pedagogical agents: Face-to-face interaction in interactive learning environments. International Journal of Artificial Intelligence in Education, 11(1), 47–78.
  • Kulik, J. A., & Fletcher, J. D. (2016). Effectiveness of intelligent tutoring systems: A meta-analytic review. Review of Educational Research, 86(1), 42–78. https://doi.org/10.3102/0034654315581420
  • Lester, J. C., Converse, S. A., Kahler, S. E., Barlow, S. T., Stone, B. A., & Bhogal, R. S. (1997, March). The persona effect: Affective impact of animated pedagogical agents. In Proceedings of the ACM SIGCHI conference on human factors in computing systems (pp. 359–366).
  • Li, H., & Graesser, A. C. (2021). The impact of conversational agents’ language on summary writing. Journal of Research on Technology in Education. https://doi.org/10.1080/15391523.2020.1826022
  • Lowenthal, P. R. (2010). The evolution and influence of social presence theory on online learning. In Social computing: Concepts, methodologies, tools, and applications (pp. 113–128). IGI Global.
  • Ma, W., Adesope, O. O., Nesbit, J. C., & Liu, Q. (2014). Intelligent tutoring systems and learning outcomes: A meta-analysis. Journal of Educational Psychology, 106(4), 901–918. https://doi.org/10.1037/a0037123
  • Matsuda, N., Sekar, V. P. C., & Wall, N. (2018, June). Metacognitive scaffolding amplifies the effect of learning by teaching a teachable agent. In International conference on artificial intelligence in education (pp. 311–323). Springer.
  • Matsuda, N., Yarzebinski, E., Keiser, V., Raizada, R., Cohen, W. W., Stylianides, G. J., & Koedinger, K. R. (2013). Cognitive anatomy of tutor learning: Lessons learned with SimStudent. Journal of Educational Psychology, 105(4), 1152–1163. https://doi.org/10.1037/a0031955
  • Mayer, R. E. (2019). Computer games in education. Annual Review of Psychology, 70, 531–549. https://doi.org/10.1146/annurev-psych-010418-102744
  • Millis, K., Forsyth, C., Wiemer, K., Wallace, P., & Steciuch, C. (2018). Learning scientific inquiry from a serious game that uses autotutor. In Deep comprehension (pp. 180–193). Routledge.
  • Moreno, R. (2005). Multimedia learning with animated pedagogical agents. In R. Mayer (Ed.), The Cambridge handbook of multimedia learning (pp. 507–523). Cambridge University Press.
  • Moreno, R., & Flowerday, T. (2006). Students’ choice of animated pedagogical agents in science learning: A test of the similarity-attraction hypothesis on gender and ethnicity. Contemporary Educational Psychology, 31(2), 186–207. https://doi.org/10.1016/j.cedpsych.2005.05.002
  • Moreno, R., Mayer, R. E., Spires, H. A., & Lester, J. C. (2001). The case for social agency in computer-based teaching: Do students learn more deeply when they interact with animated pedagogical agents? Cognition and Instruction, 19(2), 177–213. https://doi.org/10.1207/S1532690XCI1902_02
  • Morrow, D. G., Lane, H. C., & Rogers, W. A. (2020). A framework for design of conversational agents to support health self-care for older adults. Human Factors. https://doi.org/10.1177/0018720820964085
  • Nye, B. D., Davis, D. M., Rizvi, S. Z., Carr, K., Swartout, W., Thacker, R., & Shaw, K. (2021). Feasibility and usability of MentorPal, a framework for rapid development of virtual mentors. Journal of Research on Technology in Education. https://doi.org/10.1080/15391523.2020.1771640
  • Olafsson, S., O'Leary, T. K., & Bickmore, T. W. (2020, October). Motivating health behavior change with humorous virtual agents. In Proceedings of the 20th ACM international conference on intelligent virtual agents (pp. 1–8).
  • Roscoe, R. D., Branaghan, R. J., Cooke, N. J., & Craig, S. D. (2018). Human systems engineering and educational technology. In End-user considerations in educational technology design (pp. 1–34). IGI Global.
  • Roscoe, R. D., Segedy, J. R., Sulcer, B., Jeong, H., & Biswas, G. (2013). Shallow strategy development in a teachable agent environment designed to support self-regulated learning. Computers & Education, 62, 286–297.
  • Rowe, J., Mott, B., McQuiggan, S., Robison, J., Lee, S., & Lester, J. (2009). Crystal island: A narrative-centered learning environment for eighth grade microbiology. In Workshop on intelligent educational games at the 14th international conference on artificial intelligence in education, Brighton, UK (pp. 11–20).
  • Ryu, J., & Baylor, A. L. (2005). The psychometric structure of pedagogical agent persona. Technology Instruction Cognition and Learning, 2(4), 291.
  • Schroeder, N. L., Adesope, O. O., & Gilbert, R. B. (2013). How effective are pedagogical agents for learning? A meta-analytic review. Journal of Educational Computing Research, 49(1), 1–39. https://doi.org/10.2190/EC.49.1.a
  • Schroeder, N. L., Chiou, E. K., & Craig, S. D. (2021). Trust influences perceptions of virtual humans, but not necessarily learning. Computers & Education, 160, 104039. https://doi.org/10.1016/j.compedu.2020.104039
  • Schroeder, N. L., & Gotch, C. M. (2015). Persisting issues in pedagogical agent research. Journal of Educational Computing Research, 53(2), 183–204. https://doi.org/10.1177/0735633115597625
  • Schroeder, N. L., Romine, W. L., & Craig, S. D. (2017). Measuring pedagogical agent persona and the influence of agent persona on learning. Computers & Education, 109, 176–186.
  • Schroeder, N. L., Yang, F., Banerjee, T., Romine, W. L., & Craig, S. D. (2018). The influence of learners’ perceptions of a virtual human on learning transfer. Computers & Education, 126, 170–182.
  • Silvervarg, A., Wolf, R., Blair, K. P., Haake, M., & Gulz, A. (2021). How teachable agents influence students’ responses to critical constructive feedback. Journal of Research on Technology in Education. https://doi.org/10.1080/15391523.2020.1784812
  • Steenbergen-Hu, S., & Cooper, H. (2014). A meta-analysis of the effectiveness of intelligent tutoring systems on college students’ academic learning. Journal of Educational Psychology, 106(2), 331–347. https://doi.org/10.1037/a0034752
  • Unity Technologies. (2020). Unity for all. https://unity.com/
  • Veletsianos, G. (2010). Contextually relevant pedagogical agents: Visual appearance, stereotypes, and first impressions and their impact on learning. Computers & Education, 55(2), 576–585.
  • Veletsianos, G., & Russell, G. (2014). Pedagogical agents. In M. Spector, D. Merrill, J. Elen, & M. J. Bishop (Eds.), Handbook of research on educational communications and technology (4th ed., pp. 759–769). Springer Academic.
  • Zhou, S., & Bickmore, T. (2019, June). A virtual counselor for genetic risk communication. In International conference on artificial intelligence in education (pp. 374–378). Springer.
