
Towards multimodal emotion recognition in e-learning environments

Kiavash Bahreini, Rob Nadolski & Wim Westera
Pages 590-605 | Received 01 Aug 2013, Accepted 24 Mar 2014, Published online: 12 May 2014

References

  • Alepis, E., & Virvou, M. (2011). Automatic generation of emotions in tutoring agents for affective e-learning in medical education. Expert Systems with Applications, 38(8), 9840–9847. doi: 10.1016/j.eswa.2011.02.021
  • Anaraki, F. (2004). Developing an effective and efficient elearning platform. International Journal of the Computer, the Internet and Management, 12(2), 57–63.
  • Bachiller, C., Hernandez, C., & Sastre, J. (2010, July 18–22). Collaborative learning research and science promotion in a multidisciplinary scenario: Information and communications technology and music. Proceedings of the International Conference on Engineering Education, pp. 1–8, Gliwice, Poland.
  • Bahreini, K., Nadolski, R., Qi, W., & Westera, W. (2012, October 4–5). FILTWAM - A framework for online game-based communication skills training - Using webcams and microphones for enhancing learner support. In P. Felicia (Ed.), The 6th European conference on games based learning (ECGBL) (pp. 39–48), Cork, Ireland. Reading: Academic Publishing International Limited.
  • Bahreini, K., Nadolski, R., & Westera, W. (2012, October 29–31). FILTWAM - A framework for online affective computing in serious games. In A. De Gloria & S. de Freitas (Eds.), The 4th International Conference on Games and Virtual Worlds for Serious Applications (VS-GAMES'12). Procedia Computer Science (Vol. 15, pp. 45–52), Genoa, Italy. Amsterdam: Curran Associates.
  • Ben Ammar, M., Neji, M., Alimi, A. M., & Gouardères, G. (2010). The affective tutoring system. Expert Systems with Applications, 37(4), 3013–3023. doi: 10.1016/j.eswa.2009.09.031
  • Bettadapura, V. (2012). Face expression recognition and analysis: The state of the art. CoRR, abs/1203.6722.
  • Chen, L. S. (2000). Joint processing of audio-visual information for the recognition of emotional expressions in human-computer interaction (PhD thesis). University of Illinois at Urbana-Champaign.
  • Cheng, Z., Sun, S., Kansen, M., Huang, T., & He, A. (2005, March 28–30). A personalized ubiquitous education support environment by comparing learning instructional requirement with learner's behavior. 19th International Conference on Advanced Information Networking and Applications (AINA), Vol. 2, pp. 567–573, Taipei, Taiwan.
  • Chibelushi, C. C., & Bourel, F. (2003). Facial expression recognition: A brief tutorial overview. Compendium of Computer Vision. Retrieved from http://lyle.smu.edu/~mhd/8331f06/CCC.pdf
  • Ebner, M. (2007, April 10–13). E-learning 2.0 = e-learning 1.0 + Web 2.0? The Second International Conference on Availability, Reliability and Security (ARES), pp. 1235–1239, Vienna, Austria.
  • Ekman, P., & Friesen, W. V. (1978). Facial action coding system: Investigator's guide. Douglas, AZ: A Human Face.
  • Feidakis, M., Daradoumis, T., & Caballe, S. (2011). Emotion measurement in intelligent tutoring systems: What, when and how to measure. Third International Conference on Intelligent Networking and Collaborative Systems, pp. 807–812, Fukuoka, Japan.
  • Greller, W., & Drachsler, H. (2012). Translating learning into numbers: A generic framework for learning analytics. Educational Technology & Society, 15(3), 42–57.
  • Hager, P. J., Hager, P., & Halliday, J. (2006). Recovering informal learning: Wisdom, judgment and community. Lifelong Learning Book Series. Dordrecht: Springer.
  • Hühnel, I., Fölster, M., Werheid, K., & Hess, U. (2014). Empathic reactions of younger and older adults: No age-related decline in affective responding. Journal of Experimental Social Psychology, 50, 136–143. doi: 10.1016/j.jesp.2013.09.011
  • Tao, J., Tan, T., & Picard, R. W. (2005). Affective computing: A review. In Affective Computing and Intelligent Interaction (Vol. 3784, pp. 981–995). Berlin: Springer.
  • Krahmer, E., & Swerts, M. (2011). Audiovisual expression of emotions in communication. In Philips Research Book Series (Vol. 12, pp. 85–106). Dordrecht: Springer.
  • Landis, J. R., & Koch, G. G. (1977). The measurement of observer agreement for categorical data. Biometrics, 33, 159–174. doi: 10.2307/2529310
  • Lang, G., & van der Molen, H. T. (2008). Psychologische gespreksvoering [Psychological communication]. Heerlen: Open University of the Netherlands.
  • Murthy, G. R. S., & Jadon, R. S. (2009). Effectiveness of eigenspaces for facial expression recognition. International Journal of Computer Theory and Engineering, 1(5), 638–642. doi: 10.7763/IJCTE.2009.V1.103
  • Pantic, M., Sebe, N., Cohn, J. F., & Huang, T. (2005, November). Affective multimodal human-computer interaction. Proceedings of the 13th Annual ACM International Conference on Multimedia, Vol. 5, pp. 669–676, Hilton, Singapore.
  • Pekrun, R. (1992). The impact of emotions on learning and achievement: Towards a theory of cognitive/motivational mediators. Applied Psychology: An International Review, 41, 359–376. doi: 10.1111/j.1464-0597.1992.tb00712.x
  • Saragih, J., Lucey, S., & Cohn, J. (2010). Deformable model fitting by regularized landmark mean-shift. International Journal of Computer Vision (IJCV), 91(2), 200–215. doi: 10.1007/s11263-010-0380-4
  • Sarrafzadeh, A., Alexander, S., Dadgostar, F., Fan, C., & Bigdeli, A. (2008). How do you know that I don't understand? A look at the future of intelligent tutoring systems. Computers in Human Behavior, 24(4), 1342–1363. doi: 10.1016/j.chb.2007.07.008
  • Sebe, N. (2009). Multimodal interfaces: Challenges and perspectives. Journal of Ambient Intelligence and Smart Environments, 1(1), 23–30.
  • Van der Molen, H. T., & Gramsbergen-Hoogland, Y. H. (2005). Communication in organizations: Basic skills and conversation models. New York, NY: Psychology Press.
  • Zeng, Z., Pantic, M., Roisman, G. I., & Huang, T. S. (2009). A survey of affect recognition methods: Audio, visual, and spontaneous expressions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(1), 39–58. doi: 10.1109/TPAMI.2008.52
  • Zhang, Z. (1999). Feature-based facial expression recognition: Sensitivity analysis and experiment with a multi-layer perceptron. International Journal of Pattern Recognition and Artificial Intelligence, 13(6), 893–911. doi: 10.1142/S0218001499000495
