Abstract
Detecting and analyzing emotions from human facial movement is a long-studied problem because of the practical benefits it offers. Facial expression recognition is central to the human-computer interaction (HCI) research area, with applications being explored in security, medical science, and the analysis of individual or community behavior. In this paper, we propose a deep learning framework that uses transfer learning for facial expression recognition. The approach builds on the existing VGG16 model, which is pre-trained on ImageNet (1000 classes), and concatenates additional trainable layers on top of it. The model is then evaluated on two popular benchmark facial expression datasets: Extended Cohn-Kanade (CK+) and Japanese Female Facial Expression (JAFFE). The proposed model achieves 94.8% accuracy on CK+ and 93.7% on JAFFE, outperforming existing techniques. We implemented the proposed technique on a Google Colab GPU, which helped us process these data.
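A minimal Keras sketch of the transfer-learning setup described above. The specific head layers, input size, and class count are illustrative assumptions, not the paper's exact architecture; `weights=None` is used here to avoid a download, whereas the paper uses the ImageNet weights.

```python
# Transfer-learning sketch: frozen VGG16 base + new classification head.
# Assumptions: 48x48 RGB inputs, 7 expression classes, illustrative head.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 7  # typical basic-expression count in CK+/JAFFE-style setups

# Load VGG16 without its 1000-class ImageNet top.
base = tf.keras.applications.VGG16(
    weights=None,            # paper's setting would be weights="imagenet"
    include_top=False,
    input_shape=(48, 48, 3),
)
base.trainable = False       # freeze the pre-trained convolutional base

# Concatenate additional trainable layers on top of the frozen base.
model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# One forward pass on a dummy batch to verify the output shape.
probs = model.predict(np.zeros((1, 48, 48, 3), dtype="float32"), verbose=0)
```

Freezing the base keeps the ImageNet features fixed while only the new head is trained, which is what makes the approach feasible on small datasets like CK+ and JAFFE.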
Subject Classification:
Keywords: