Emorec: a new approach for detecting and improving the emotional state of learners in an e-learning environment

Pages 6223–6241 | Received 07 Jun 2021, Accepted 10 Jan 2022, Published online: 18 Feb 2022

References

  • Adil, B., Nadjib, K. M., & Yacine, L. (2019). A novel approach for facial expression recognition. In 2019 International Conference on Networking and Advanced Systems (ICNAS).
  • Alonso, J. B., Cabrera, J., Medina, M., & Travieso, C. M. (2015). New approach in quantification of emotional intensity from the speech signal: Emotional temperature. Expert Systems with Applications, 42(24), 9554–9564. https://doi.org/10.1016/j.eswa.2015.07.062
  • Anagnostopoulos, C.-N., Iliou, T., & Giannoukos, I. (2015). Features and classifiers for emotion recognition from speech: A survey from 2000 to 2011. Artificial Intelligence Review, 43(2), 155–177. https://doi.org/10.1007/s10462-012-9368-5
  • Arguel, A., Lockyer, L., Lipp, O. V., Lodge, J. M., & Kennedy, G. (2017). Inside out: Detecting learners’ confusion to improve interactive digital learning environments. Journal of Educational Computing Research, 55(4), 526–551. https://doi.org/10.1177/0735633116674732
  • Azzi, I., Jeghal, A., Radouane, A., Yahyaouy, A., & Tairi, H. (2020). A robust classification to predict learning styles in adaptive E-learning systems. Education and Information Technologies, 25(1), 437–448. https://doi.org/10.1007/s10639-019-09956-6
  • Bahreini, K., Nadolski, R., & Westera, W. (2016). Towards real-time speech emotion recognition for affective e-learning. Education and Information Technologies, 21(5), 1367–1386. https://doi.org/10.1007/s10639-015-9388-2
  • Bahreini, K., Van der Vegt, W., & Westera, W. (2019). A fuzzy logic approach to reliable real-time recognition of facial emotions. Multimedia Tools and Applications, 78(14), 18943–18966.
  • Beauné, A. (2011). Quelles utilisations des TICE pour l’apprentissage du français langue étrangère au niveau A1.1? [What uses of ICT for learning French as a foreign language at level A1.1?]
  • Boughida, A., Kouahla, M. N., & Lafifi, Y. (2021). A novel approach for facial expression recognition based on Gabor filters and genetic algorithm. Evolving Systems, 1–15.
  • Boutefara, T., & Mahdaoui, L. (2015). Emoticon-based feedback tool for e-learning platforms. In The 16th International Arab Conference on Information Technology.
  • Chaffar, S., Chalfoun, P., & Frasson, C. (2006). La prédiction de la réaction émotionnelle dans un environnement d’apprentissage à distance [Predicting the emotional reaction in a distance learning environment]. In Colloque international TICE'2006.
  • Cooper, B., & Brna, P. (2002). Supporting high quality interaction and motivation in the classroom using ICT: The social and emotional learning and engagement in the NIMIS project. Education, Communication & Information, 2(2–3), 113–138. https://doi.org/10.1080/1463631021000025321
  • Daouas, T., & Lejmi, H. (2018). Emotions recognition in an intelligent elearning environment. Interactive Learning Environments, 26(8), 991–1009. https://doi.org/10.1080/10494820.2018.1427114
  • D’Mello, S., & Graesser, A. (2009). Automatic detection of learner’s affect from gross body language. Applied Artificial Intelligence, 23(2), 123–150. https://doi.org/10.1080/08839510802631745
  • Drissi, S., & Amirat, A. (2016). An adaptive E-learning system based on student’s learning styles: An empirical study. International Journal of Distance Education Technologies, 14(3), 34–51. https://doi.org/10.4018/IJDET.2016070103
  • Ekman, P. (1993). Facial expression and emotion. American Psychologist, 48(4), 384. https://doi.org/10.1037/0003-066X.48.4.384
  • Esteve-Gibert, N., & Guellaï, B. (2018). Prosody in the auditory and visual domains: A developmental perspective. Frontiers in Psychology, 9, 338. https://doi.org/10.3389/fpsyg.2018.00338
  • Fatahi, S. (2019). An experimental study on an adaptive e-learning environment based on learner’s personality and emotion. Education and Information Technologies, 24(4), 2225–2241. https://doi.org/10.1007/s10639-019-09868-5
  • Fayek, H. M., Lech, M., & Cavedon, L. (2017). Evaluating deep learning architectures for speech emotion recognition. Neural Networks, 92, 60–68. https://doi.org/10.1016/j.neunet.2017.02.013
  • Giannoulis, P., & Potamianos, G. (2012). A hierarchical approach with feature selection for emotion recognition from speech. In LREC.
  • Gonçalves, V. P., Costa, E. P., Valejo, A., Geraldo Filho, P., Johnson, T. M., Pessin, G., & Ueyama, J. (2017). Enhancing intelligence in multimodal emotion assessments. Applied Intelligence, 46(2), 470–486. https://doi.org/10.1007/s10489-016-0842-7
  • Immordino-Yang, M. H., & Damasio, A. (2007). We feel, therefore we learn: The relevance of affective and social neuroscience to education. Mind, Brain, and Education, 1(1), 3–10. https://doi.org/10.1111/j.1751-228X.2007.00004.x
  • Karampiperis, P., Koukourikos, A., & Stoitsis, G. (2014). Collaborative filtering recommendation of educational content in social environments utilizing sentiment analysis techniques. In Recommender systems for technology enhanced learning (pp. 3–23). Springer.
  • Kerkeni, L., Serrestou, Y., Mbarki, M., Raoof, K., & Mahjoub, M. A. (2017). A review on speech emotion recognition: Case of pedagogical interaction in classroom. In 2017 International Conference on Advanced Technologies for Signal and Image Processing (ATSIP).
  • Kleinginna, P. R., & Kleinginna, A. M. (1981). A categorized list of emotion definitions, with suggestions for a consensual definition. Motivation and Emotion, 5(4), 345–379. https://doi.org/10.1007/BF00992553
  • Leony, D., Parada Gélvez, H. A., Munoz-Merino, P. J., Pardo Sánchez, A., & Delgado Kloos, C. (2013). A generic architecture for emotion-based recommender systems in cloud learning environments.
  • Lim, Y. M., Ayesh, A., & Stacey, M. (2014). Detecting emotional stress during typing task with time pressure. In 2014 Science and Information Conference.
  • Liu, W., Zhang, L., Tao, D., & Cheng, J. (2018). Reinforcement online learning for emotion prediction by using physiological signals. Pattern Recognition Letters, 107, 123–130. https://doi.org/10.1016/j.patrec.2017.06.004
  • Liu, Z.-T., Wu, M., Cao, W.-H., Mao, J.-W., Xu, J.-P., & Tan, G.-Z. (2018). Speech emotion recognition based on feature selection and extreme learning machine decision tree. Neurocomputing, 273, 271–280. https://doi.org/10.1016/j.neucom.2017.07.050
  • Lotfian, R., & Busso, C. (2018). Predicting categorical emotions by jointly learning primary and secondary emotions through multitask learning. In Interspeech.
  • Lu, C.-C., Li, J.-L., & Lee, C.-C. (2018). Learning an arousal-valence speech front-end network using media data in-the-wild for emotion recognition. In Proceedings of the 2018 on Audio/Visual Emotion Challenge and Workshop.
  • Lyons, M. J., Akamatsu, S., Kamachi, M., Gyoba, J., & Budynek, J. (1998). The Japanese female facial expression (JAFFE) database. In Proceedings of the Third International Conference on Automatic Face and Gesture Recognition.
  • Neji, M., Ammar, M. B., & Alimi, A. M. (2011). Real-time affective learner profile analysis using an EMASPEL framework. In 2011 IEEE Global Engineering Education Conference (EDUCON).
  • Neumann, M., & Vu, N. T. (2017). Attentive convolutional neural network based speech emotion recognition: A study on the impact of input features, signal length, and acted speech.
  • Niu, Y., Zou, D., Niu, Y., He, Z., & Tan, H. (2017). A breakthrough in speech emotion recognition using deep retinal convolution neural networks. arXiv preprint arXiv:.09917.
  • Odo, C. (2018). Adapting learning activities selection in an intelligent tutoring system to affect. In International Conference on Artificial Intelligence in Education.
  • O’Regan, K. (2003). Emotion and e-learning. Journal of Asynchronous Learning Networks, 7(3), 78–92.
  • Pekrun, R., Frenzel, A. C., Goetz, T., & Perry, R. P. (2007). The control-value theory of achievement emotions: An integrative approach to emotions in education. In Emotion in education (pp. 13–36). Elsevier.
  • Poria, S., Cambria, E., Hussain, A., & Huang, G.-B. (2015). Towards an intelligent framework for multimodal affective data analysis. Neural Networks, 63, 104–116. https://doi.org/10.1016/j.neunet.2014.10.005
  • Pour, P. A., Hussain, M. S., AlZoubi, O., D’Mello, S., & Calvo, R. A. (2010). The impact of system feedback on learners’ affective and physiological states. In International Conference on Intelligent Tutoring Systems.
  • Revina, I. M., & Emmanuel, W. S. (2019). Face expression recognition with the optimization based multi-SVNN classifier and the modified LDP features. Journal of Visual Communication and Image Representation, 62, 43–55. https://doi.org/10.1016/j.jvcir.2019.04.013
  • Sadeghi, H., & Raie, A.-A. (2019). Human vision inspired feature extraction for facial expression recognition. Multimedia Tools and Applications, 78(21), 30335–30353. https://doi.org/10.1007/s11042-019-07863-z
  • Salazar, C., Aguilar, J., Monsalve-Pulido, J., & Montoya, E. (2021). Affective recommender systems in the educational field: A systematic literature review. Computer Science Review, 40, 100377.
  • Salmeron-Majadas, S., Arevalillo-Herráez, M., Santos, O. C., Saneiro, M., Cabestrero, R., Quirós, P., Arnau, D., & Boticario, J. G. (2015). Filtering of spontaneous and low intensity emotions in educational contexts. In International Conference on Artificial Intelligence in Education.
  • Saneiro, M., Santos, O. C., Salmeron-Majadas, S., & Boticario, J. G. (2014). Towards emotion detection in educational scenarios from facial expressions and body movements through multimodal approaches. The Scientific World Journal, 2014, Article 484873. https://doi.org/10.1155/2014/484873
  • Santos, O. C., & Boticario, J. (2012). Affective issues in Semantic Educational Recommender Systems. In RecSysTEL@EC-TEL.
  • Santos, O. C., & Boticario, J. G. (2015). User-centred design and educational data mining support during the recommendations elicitation process in social online learning environments. Expert Systems, 32(2), 293–311. https://doi.org/10.1111/exsy.12041
  • Satpute, A. B., Shu, J., Weber, J., Roy, M., & Ochsner, K. N. (2013). The functional neural architecture of self-reports of affective experience. Biological Psychiatry, 73(7), 631–638. https://doi.org/10.1016/j.biopsych.2012.10.001
  • Sebe, N. (2009). Multimodal interfaces: Challenges and perspectives. Journal of Ambient Intelligence and Smart Environments, 1(1), 23–30. https://doi.org/10.3233/AIS-2009-0003
  • Sharma, A. K., Kumar, U., Gupta, S. K., Sharma, U., & LakshmiAgrwal, S. (2018). A survey on feature extraction technique for facial expression recognition system. In 2018 4th International Conference on Computing Communication and Automation (ICCCA).
  • Shen, L., Wang, M., & Shen, R. (2009). Affective e-learning: Using “emotional” data to improve learning in pervasive learning environment. Journal of Educational Technology & Society, 12.
  • Shi, L., & Cristea, A. I. (2016). Motivational gamification strategies rooted in self-determination theory for social adaptive e-learning. In International Conference on Intelligent Tutoring Systems.
  • Thayamani, N. E., Fathima, M. P., & Mohan, S. (2013). Role of emotion in learning process. International Journal of Scientific Research, 2(7), 119–121. https://doi.org/10.15373/22778179/JULY2013/41
  • Vedantham, R., Settipalli, L., & Reddy, E. S. (2018). Real time facial expression recognition in video using nearest neighbor classifier. International Journal of Pure and Applied Mathematics, 118(9), 849–854.
  • Wall, W. D. (1956). Education and mental health.
  • Zhou, J., Zhang, S., Mei, H., & Wang, D. (2016). A method of facial expression recognition based on Gabor and NMF. Pattern Recognition and Image Analysis, 26(1), 119–124. https://doi.org/10.1134/S1054661815040070
