An Automatic Facial Expression Recognition System Employing Convolutional Neural Network with Multi-strategy Gravitational Search Algorithm


References

  • R. Abiantun, F. Juefei-Xu, U. Prabhu, and M. Savvides, “SSR2: Sparse signal recovery for single-image super-resolution on faces with extreme low resolution,” Pattern Recognit., Vol. 90, pp. 308–324, 2019. doi: 10.1016/j.patcog.2019.01.032
  • R. A. Khan, A. Meyer, H. Konik, and S. Bouakaz, “Framework for reliable, real-time facial expression recognition for low resolution images,” Pattern Recognit. Lett., Vol. 34, no. 10, pp. 1159–1168, 2013. doi: 10.1016/j.patrec.2013.03.022
  • C. Turan, and K.-M. Lam, “Histogram-based local descriptors for facial expression recognition (FER): A comprehensive study,” J. Vis. Commun. Image Represent., Vol. 55, pp. 331–341, 2018. doi: 10.1016/j.jvcir.2018.05.024
  • Y. Yan, Z. Zhang, S. Chen, and H. Wang, “Low-resolution facial expression recognition: A filter learning perspective,” Signal Process., Vol. 169, p. 107370, 2020. doi: 10.1016/j.sigpro.2019.107370
  • W. Xie, X. Jia, L. Shen, and M. Yang, “Sparse deep feature learning for facial expression recognition,” Pattern Recognit., Vol. 96, p. 106966, 2019. doi: 10.1016/j.patcog.2019.106966
  • H. Wu, Y. Liu, Y. Liu, and S. Liu, “Efficient facial expression recognition via convolution neural network and infrared imaging technology,” Infrared Phys. Technol., Vol. 102, p. 103031, 2019. doi: 10.1016/j.infrared.2019.103031
  • O. Dan, “Recognition of emotional facial expressions in adolescents with attention deficit/hyperactivity disorder,” J. Adolesc., Vol. 82, pp. 1–10, 2020. doi: 10.1016/j.adolescence.2020.04.010
  • M. R. Rejeesh, “Interest point based face recognition using adaptive neuro fuzzy inference system,” Multimed. Tools Appl., Vol. 78, pp. 22691–22710, 2019.
  • R. Ma, and J. Wang, “Automatic facial expression recognition using linear and nonlinear holistic spatial analysis,” in International Conference on Affective Computing and Intelligent Interaction, Springer, Berlin, Heidelberg, pp. 144–151, October 2005.
  • M. R. González-Rodríguez, M. C. Díaz-Fernández, and C. P. Gómez, “Facial-expression recognition: An emergent approach to the measurement of tourist satisfaction through emotions,” Telemat. Inform., Vol. 51, p. 101404, 2020. doi: 10.1016/j.tele.2020.101404
  • B. Yang, J. M. Cao, D. P. Jiang, and J. D. Lv, “Facial expression recognition based on dual-feature fusion and improved random forest classifier,” Multimed. Tools Appl., Vol. 77, no. 16, pp. 20477–20499, 2018. doi: 10.1007/s11042-017-5489-9
  • Y. Wang, and C. Zhang, “Facial expression recognition from image based on hybrid features understanding,” J. Vis. Commun. Image Represent., Vol. 59, pp. 84–88, 2019. doi: 10.1016/j.jvcir.2018.11.010
  • W. M. Alenazy, and A. S. Alqahtani, “Gravitational search algorithm based optimized deep learning model with diverse set of features for facial expression recognition,” J. Ambient Intell. Humaniz. Comput., pp. 1–16, 2020.
  • G. V. Reddy, C. V. R. D. Savarni, and S. Mukherjee, “Facial expression recognition in the wild, by fusion of deep learnt and hand-crafted features,” Cogn. Syst. Res., Vol. 62, pp. 23–34, 2020.
  • X. Fan, and T. Tjahjadi, “Fusing dynamic deep learned features and handcrafted features for facial expression recognition,” J. Vis. Commun. Image Represent., Vol. 65, p. 102659, 2019. doi: 10.1016/j.jvcir.2019.102659
  • S. A. Khan, S. Hussain, S. Xiaoming, and S. Yang, “An effective framework for driver fatigue recognition based on intelligent facial expressions analysis,” IEEE Access, Vol. 6, pp. 67459–67468, 2018. doi: 10.1109/ACCESS.2018.2878601
  • B. Yang, J. Cao, R. Ni, and Y. Zhang, “Facial expression recognition using weighted mixture deep neural network based on double-channel facial images,” IEEE Access, Vol. 6, pp. 4630–4640, 2017. doi: 10.1109/ACCESS.2017.2784096
  • Z. Chen, D. Huang, Y. Wang, and L. Chen, “Fast and light manifold CNN based 3D facial expression recognition across pose variations,” in Proceedings of the 26th ACM International Conference on Multimedia, pp. 229–238, October 2018.
  • S. Lin, M. Bai, F. Liu, L. Shen, and Y. Zhou, “Orthogonalization-guided feature fusion network for multimodal 2D+3D facial expression recognition,” IEEE Transactions on Multimedia, 2020.
  • M. A. Takalkar, M. Xu, and Z. Chaczko, “Manifold feature integration for micro-expression recognition,” Multimedia Syst., Vol. 26, no. 5, pp. 535–551, 2020. doi: 10.1007/s00530-020-00663-8
  • V. Sundararaj, “An efficient threshold prediction scheme for wavelet based ECG signal noise reduction using variable step size firefly algorithm,” Int. J. Intell. Eng. Syst., Vol. 9, no. 3, pp. 117–126, 2016.
  • S. Vinu, “Optimal task assignment in mobile cloud computing by queue based Ant-Bee algorithm,” Wirel. Pers. Commun., Vol. 104, no. 1, pp. 173–197, 2019. doi: 10.1007/s11277-018-6014-9
  • V. Sundararaj, “Optimised denoising scheme via opposition-based self-adaptive learning PSO algorithm for wavelet-based ECG signal noise reduction,” Int. J. Biomed. Eng. Technol., Vol. 31, no. 4, pp. 325–345, 2019. doi: 10.1504/IJBET.2019.103242
  • V. Sundararaj, S. Muthukumar, and R. S. Kumar, “An optimal cluster formation based energy efficient dynamic scheduling hybrid MAC protocol for heavy traffic load in wireless sensor networks,” Comput. Secur., Vol. 77, pp. 277–288, 2018. doi: 10.1016/j.cose.2018.04.009
  • V. Sundararaj, V. Anoop, P. Dixit, A. Arjaria, U. Chourasia, P. Bhambri, M. R. Rejeesh, and R. Sundararaj, “CCGPA-MPPT: Cauchy preferential crossover-based global pollination algorithm for MPPT in photovoltaic system,” Progress in Photovoltaics: Research and Applications, 2020.
  • P. M. Ferreira, F. Marques, J. S. Cardoso, and A. Rebelo, “Physiological inspired deep neural networks for emotion recognition,” IEEE Access, Vol. 6, pp. 53930–53943, 2018. doi: 10.1109/ACCESS.2018.2870063
  • K. R. Scherer, H. Ellgring, and A. Dieckmann, “Dynamic facial expression of emotion and observer inference,” Front. Psychol., Vol. 10, p. 508, 2019. doi: 10.3389/fpsyg.2019.00508
  • O. Vinyals, S. Bengio, and M. Kudlur, “Order matters: Sequence to sequence for sets,” arXiv preprint arXiv:1511.06391, 2015.
  • M. K. Alsmadi, “Facial expression recognition,” U.S. Patent 10,417,483, assignee: Imam Abdulrahman Bin Faisal University, 2019.
  • W.-j. Niu, Z.-k. Feng, M. Zeng, B.-f. Feng, Y.-w. Min, C.-t. Cheng, and J.-Z. Zhou, “Forecasting reservoir monthly runoff via ensemble empirical mode decomposition and extreme learning machine optimized by an improved gravitational search algorithm,” Appl. Soft Comput., Vol. 82, p. 105589, 2019. doi: 10.1016/j.asoc.2019.105589
  • X. Zhang, Y. Huang, Q. Zou, Y. Pei, and R. Zhang, “A hybrid convolutional neural network for sketch recognition,” Pattern Recognit. Lett., Vol. 130, pp. 73–82, 2020. doi: 10.1016/j.patrec.2019.01.006
  • Y. Tian, “Evaluation of face resolution for expression analysis,” in Computer Vision and Pattern Recognition Workshop, 2004.
  • R. A. Khan, A. Meyer, H. Konik, and S. Bouakaz, “Exploring human visual system: Study to aid the development of automatic facial expression recognition framework,” in 2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, pp. 49–54, 2012.
  • R. A. Khan, A. Meyer, H. Konik, and S. Bouakaz, “Human vision inspired framework for facial expressions recognition,” in IEEE International Conference on Image Processing, 2012.
  • C. Shan, S. Gong, and P. W. McOwan, “Facial expression recognition based on local binary patterns: A comprehensive study,” Image Vis. Comput., Vol. 27, pp. 803–816, 2009. doi: 10.1016/j.imavis.2008.08.005
  • S. Chew, S. Lucey, P. Lucey, and S. Sridharan, “Improved facial expression recognition via uni-hyperplane classification,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 2554–2561, 2012.
  • P. N. Belhumeur, J. P. Hespanha, and D. Kriegman, “Eigenfaces vs. Fisherfaces: Recognition using class specific linear projection,” IEEE Trans. Pattern Anal. Mach. Intell., Vol. 19, no. 7, pp. 711–720, 1997. doi: 10.1109/34.598228
  • L. Zhong, Q. Liu, P. Yang, B. Liu, J. Huang, and D. N. Metaxas, “Learning active facial patches for expression analysis,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 2562–2569, 2012.
  • A. J. Calder, A. M. Burton, P. Miller, A. W. Young, and S. Akamatsu, “A principal component analysis of facial expressions,” Vision Res., Vol. 41, no. 9, pp. 1179–1208, 2001. doi: 10.1016/S0042-6989(01)00002-5
  • K. Zhang, Y. Huang, Y. Du, and L. Wang, “Facial expression recognition based on deep evolutional spatial-temporal networks,” IEEE Trans. Image Process., Vol. 26, no. 9, pp. 4193–4203, 2017. doi: 10.1109/TIP.2017.2689999
