Review Article

A Review on Facial Expression Recognition: Feature Extraction and Classification

References

  • A. Mehrabian, “Communication without words,” Psychol. Today, Vol. 2, pp. 53–5, 1968.
  • G. Sandbach, S. Zafeiriou, M. Pantic, and L. Yin, “Static and dynamic 3D facial expression recognition: A comprehensive survey,” Image Vis. Comput., Vol. 30, no. 10, pp. 683–97, Oct. 2012.
  • Y. Tian, T. Kanade, and J. Cohn, “Facial expression analysis,” in Handbook of Face Recognition. Springer, 2005, pp. 247–75.
  • C.-D. Caleanu, “Face expression recognition: A brief overview of the last decade,” in IEEE 8th International Symposium on Applied Computational Intelligence and Informatics (SACI), Timisoara, 2013, pp. 157–61.
  • T. F. Cootes, C. J. Taylor, D. H. Cooper, and J. Graham, “Active shape models-their training and application,” Comput. Vis. Image Underst., Vol. 61, no. 1, pp. 38–59, Jan. 1995.
  • Y. Chang, C. Hu, R. Feris, and M. Turk, “Manifold based analysis of facial expression,” Image Vis. Comput., Vol. 24, no. 6, pp. 605–14, Jun. 2006.
  • R. Shbib, and S. Zhou, “Facial expression analysis using active shape model,” Int. J. Signal Process. Image Process. Pattern Recognit., Vol. 8, no. 1, pp. 9–22, 2015.
  • L. A. Cament, F. J. Galdames, K. W. Bowyer, and C. A. Perez, “Face recognition under pose variation with local Gabor features enhanced by active shape and statistical models,” Pattern Recognit., Vol. 48, no. 11, pp. 3371–84, Nov. 2015.
  • T. F. Cootes, G. J. Edwards, and C. J. Taylor, “Active appearance models,” IEEE Trans. Pattern Anal. Mach. Intell., Vol. 23, no. 6, pp. 681–5, Jun. 2001.
  • Y. Cheon, and D. Kim, “Natural facial expression recognition using differential-AAM and manifold learning,” Pattern Recognit., Vol. 42, no. 7, pp. 1340–50, Jul. 2009.
  • E. Antonakos, J. Alabort-i-Medina, G. Tzimiropoulos, and S. Zafeiriou, “Hog active appearance models,” in 2014 IEEE International Conference on Image Processing (ICIP), Paris, 2014, pp. 224–8.
  • R. Anderson, B. Stenger, and R. Cipolla, “Using bounded diameter minimum spanning trees to build dense active appearance models,” Int. J. Comput. Vis., Vol. 110, no. 1, pp. 48–57, Oct. 2014.
  • Y. Chen, C. Hua, and R. Bai, “Regression-based active appearance model initialization for facial feature tracking with missing frames,” Pattern Recognit. Lett., Vol. 38, pp. 113–9, Mar. 2014.
  • D. G. Lowe, “Object recognition from local scale-invariant features,” in Seventh IEEE International Conference on Computer Vision, Kerkyra, 1999, pp. 1150–7.
  • D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” Int. J. Comput. Vis., Vol. 60, no. 2, pp. 91–110, Nov. 2004.
  • S. Berretti, A. Del Bimbo, P. Pala, B. B. Amor, and M. Daoudi, “A set of selected SIFT features for 3D facial expression recognition,” in 20th International Conference on Pattern Recognition, Istanbul, Turkey, 2010, pp. 4125–8.
  • H. Soyel, and H. Demirel, “Facial expression recognition based on discriminative scale invariant feature transform,” Electron. Lett., Vol. 46, no. 5, pp. 343–5, Mar. 2010.
  • Y. Li, W. Liu, X. Li, Q. Huang, and X. Li, “GA-SIFT: A new scale invariant feature transform for multispectral image using geometric algebra,” Inform. Sci., Vol. 281, pp. 559–72, Oct. 2014.
  • T. Ojala, M. Pietikäinen, and D. Harwood, “A comparative study of texture measures with classification based on featured distributions,” Pattern Recognit., Vol. 29, no. 1, pp. 51–9, Jan. 1996.
  • C. Shan, S. Gong, and P. W. McOwan, “Facial expression recognition based on Local Binary Patterns: A comprehensive study,” Image Vis. Comput., Vol. 27, no. 6, pp. 803–16, May 2009.
  • S. Zhang, X. Zhao, and B. Lei, “Facial expression recognition based on local binary patterns and local Fisher discriminant analysis,” WSEAS Trans. Signal Process., Vol. 8, no. 1, pp. 21–31, 2012.
  • X. Zhao, and S. Zhang, “Facial expression recognition using local binary patterns and discriminant kernel locally linear embedding,” EURASIP J. Adv. Signal Process., Vol. 2012, no. 1, pp. 20, Dec. 2012.
  • D. Huang, C. Shan, M. Ardabilian, Y. Wang, and L. Chen, “Local binary patterns and its application to facial image analysis: a survey,” IEEE Trans. Syst. Man Cybern. Part C: Appl. Rev., Vol. 41, no. 6, pp. 765–81, Nov. 2011.
  • G. Zhao, and M. Pietikainen, “Dynamic texture recognition using local binary patterns with an application to facial expressions,” IEEE Trans. Pattern Anal. Mach. Intell., Vol. 29, no. 6, pp. 915–28, Jun. 2007.
  • T. Jabid, M. H. Kabir, and O. Chae, “Robust facial expression recognition based on local directional pattern,” ETRI J., Vol. 32, no. 5, pp. 784–94, Oct. 2010.
  • T. Ahsan, T. Jabid, and U.-P. Chong, “Facial expression recognition using local transitional pattern on Gabor filtered facial images,” IETE Tech. Rev., Vol. 30, no. 1, pp. 47–52, Jan.–Feb. 2013.
  • X. Li, Q. Ruan, Y. Jin, G. An, and R. Zhao, “Fully automatic 3D facial expression recognition using polytypic multi-block local binary patterns,” Signal Process., Vol. 108, pp. 297–308, Mar. 2015.
  • Z. Zhang, M. Lyons, M. Schuster, and S. Akamatsu, “Comparison between geometry-based and Gabor-wavelets-based facial expression recognition using multi-layer perceptron,” in Third IEEE International Conference on Automatic Face and Gesture Recognition, Nara, 1998, pp. 454–9.
  • S.-s. Liu, and Y.-t. Tian, “Facial expression recognition method based on Gabor wavelet features and fractional power polynomial kernel PCA,” Adv. Neural Netw. – ISNN 2010, Vol. 6064, Part 2, pp. 144–51, 2010.
  • W. Gu, C. Xiang, Y. Venkatesh, D. Huang, and H. Lin, “Facial expression recognition using radial encoding of local Gabor features and classifier synthesis,” Pattern Recognit., Vol. 45, no. 1, pp. 80–91, Jan. 2012.
  • E. Owusu, Y. Zhan, and Q. R. Mao, “A neural-AdaBoost based facial expression recognition system,” Expert Syst. Appl., Vol. 41, no. 7, pp. 3383–90, Jun. 2014.
  • S. Negahdaripour, “Revised definition of optical flow: Integration of radiometric and geometric cues for dynamic scene analysis,” IEEE Trans. Pattern Anal. Mach. Intell., Vol. 20, no. 9, pp. 961–79, Sep. 1998.
  • J.-J. Lien, T. Kanade, J. F. Cohn, and C.-C. Li, “Subtly different facial expression recognition and expression intensity estimation,” in IEEE Conference on Computer Vision and Pattern Recognition, Santa Barbara, CA, 1998, pp. 853–9.
  • Y. Yacoob, and L. S. Davis, “Recognizing human facial expressions from long image sequences using optical flow,” IEEE Trans. Pattern Anal. Mach. Intell., Vol. 18, no. 6, pp. 636–42, Jun. 1996.
  • A. Sánchez, J. V. Ruiz, A. B. Moreno, A. S. Montemayor, J. Hernández, and J. J. Pantrigo, “Differential optical flow applied to automatic facial expression recognition,” Neurocomputing, Vol. 74, no. 8, pp. 1272–82, Mar. 2011.
  • M. Pantic, and I. Patras, “Dynamics of facial expression: recognition of facial actions and their temporal segments from face profile image sequences,” IEEE Trans. Syst. Man Cybern. Part B: Cybern., Vol. 36, no. 2, pp. 433–49, Apr. 2006.
  • Y. Tie, and L. Guan, “A deformable 3D facial expression model for dynamic human emotional state recognition,” IEEE Trans. Circuits Syst. Video Technol., Vol. 23, no. 1, pp. 142–57, Jan. 2013.
  • H. Fang, N. Mac Parthaláin, A. J. Aubrey, G. K. Tam, R. Borgo, P. L. Rosin, P. W. Grant, D. Marshall, and M. Chen, “Facial expression recognition in dynamic sequences: An integrated approach,” Pattern Recognit., Vol. 47, no. 3, pp. 1271–81, Mar. 2014.
  • P. S. Aleksic, and A. K. Katsaggelos, “Automatic facial expression recognition using facial animation parameters and multistream HMMs,” IEEE Trans. Inf. Forensics Secur., Vol. 1, no. 1, pp. 3–11, Mar. 2006.
  • Y. Sun, and A. Akansu, “Facial expression recognition with regional hidden Markov models,” Electron. Lett., Vol. 50, no. 9, pp. 671–3, Apr. 2014.
  • L. Ma, and K. Khorasani, “Facial expression recognition using constructive feedforward neural networks,” IEEE Trans. Syst. Man Cybern. Part B: Cybern., Vol. 34, no. 3, pp. 1588–95, Jun. 2004.
  • C. R. De Silva, S. Ranganath, and L. C. De Silva, “Cloud basis function neural network: a modified RBF network architecture for holistic facial expression recognition,” Pattern Recognit., Vol. 41, no. 4, pp. 1241–53, Apr. 2008.
  • V. G. Kaburlasos, S. E. Papadakis, and G. A. Papakostas, “Lattice computing extension of the FAM neural classifier for human facial expression recognition,” IEEE Trans. Neural Netw. Learn. Syst., Vol. 24, no. 10, pp. 1526–38, Oct. 2013.
  • I. Cohen, N. Sebe, A. Garg, L. S. Chen, and T. S. Huang, “Facial expression recognition from video sequences: temporal and static modeling,” Comput. Vis. Image Underst., Vol. 91, no. 1, pp. 160–87, Jul.–Aug. 2003.
  • X. Zhao, E. Dellandréa, J. Zou, and L. Chen, “A unified probabilistic framework for automatic 3D facial expression analysis based on a Bayesian belief inference and statistical feature models,” Image Vis. Comput., pp. 231–45, Mar. 2013.
  • N. Sebe, M. S. Lew, Y. Sun, I. Cohen, T. Gevers, and T. S. Huang, “Authentic facial expression analysis,” Image Vis. Comput., Vol. 25, no. 12, pp. 1856–63, Dec. 2007.
  • K. Yurtkan, and H. Demirel, “Feature selection for improved 3D facial expression recognition,” Pattern Recognit. Lett., Vol. 38, pp. 26–33, Mar. 2014.
  • D. Ghimire, and J. Lee, “Geometric feature-based facial expression recognition in image sequences using multi-class adaboost and support vector machines,” Sensors, Vol. 13, no. 6, pp. 7714–34, Jun. 2013.
  • J. Wright, A. Y. Yang, A. Ganesh, S. S. Sastry, and Y. Ma, “Robust face recognition via sparse representation,” IEEE Trans. Pattern Anal. Mach. Intell., Vol. 31, no. 2, pp. 210–27, Feb. 2009.
  • D. L. Donoho, “Compressed sensing,” IEEE Trans. Inform. Theory, Vol. 52, no. 4, pp. 1289–306, Apr. 2006.
  • S. Zhang, X. Zhao, and B. Lei, “Robust facial expression recognition via compressive sensing,” Sensors, Vol. 12, no. 3, pp. 3747–61, Mar. 2012.
  • S. Zhang, X. Zhao, and B. Lei, “Facial expression recognition using sparse representation,” WSEAS Trans. Syst., Vol. 11, no. 8, pp. 440–52, 2012.
  • M. Mohammadi, E. Fatemizadeh, and M. Mahoor, “PCA-Based dictionary building for accurate facial expression recognition via sparse representation,” J. Vis. Commun. Image Represent., Vol. 25, no. 5, pp. 1082–92, Jul. 2014.
  • Y. Ouyang, N. Sang, and R. Huang, “Accurate and robust facial expressions recognition by fusing multiple sparse representation based classifiers,” Neurocomputing, Vol. 149, pp. 71–8, Feb. 2015.
  • M. J. Lyons, J. Budynek, and S. Akamatsu, “Automatic classification of single facial images,” IEEE Trans. Pattern Anal. Mach. Intell., Vol. 21, no. 12, pp. 1357–62, Dec. 1999.
  • T. Kanade, Y. Tian, and J. Cohn, “Comprehensive database for facial expression analysis,” in International Conference on Face and Gesture Recognition, Grenoble, France, 2000, pp. 46–53.
  • Y. Tong, J. Chen, and Q. Ji, “A unified probabilistic framework for spontaneous facial action modeling and understanding,” IEEE Trans. Pattern Anal. Mach. Intell., Vol. 32, no. 2, pp. 258–73, Feb. 2010.
  • M. H. Siddiqi, R. Ali, A. Sattar, A. M. Khan, and S. Lee, “Depth camera-based facial expression recognition system using multilayer scheme,” IETE Tech. Rev., Vol. 31, no. 4, pp. 277–86, Aug. 2014.
  • G. E. Hinton, and R. R. Salakhutdinov, “Reducing the dimensionality of data with neural networks,” Science, Vol. 313, no. 5786, pp. 504–07, Jul. 2006.
  • D. Yu, and L. Deng, “Deep learning and its applications to signal and information processing,” IEEE Signal Process. Mag., Vol. 28, no. 1, pp. 145–54, Jan. 2011.
  • Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, Vol. 521, pp. 436–44, May 2015.
  • X. Zhao, X. Shi, and S. Zhang, “Facial expression recognition via deep learning,” IETE Tech. Rev., Vol. 32, no. 5, pp. 347–55, Sep.–Oct. 2015. doi:10.1080/02564602.2015.1017542.
  • S. Zhang, X. Wang, G. Zhang, and X. Zhao, “Multimodal emotion recognition integrating affective speech with facial expression,” WSEAS Trans. Signal Process., no. 10, pp. 526–37, 2014.
  • J. Kim, and M. Clements, “Multimodal affect classification at various temporal lengths,” IEEE Trans. Affective Comput., Vol. 6, no. 4, pp. 371–84, Oct.–Dec. 2015. doi:10.1109/TAFFC.2015.2411273.
  • F. Ringeval, F. Eyben, E. Kroupi, A. Yuce, J.-P. Thiran, T. Ebrahimi, D. Lalanne, and B. Schuller, “Prediction of asynchronous dimensional emotion ratings from audiovisual and physiological data,” Pattern Recognit. Lett., Vol. 66, pp. 22–30, Nov. 2015. doi:10.1016/j.patrec.2014.11.007.
  • A. Savran, H. Cao, A. Nenkova, and R. Verma, “Temporal Bayesian fusion for affect sensing: combining video, audio, and lexical modalities,” IEEE Trans. Cybernet., Vol. 45, no. 9, pp. 1927–41, Sep. 2015. doi:10.1109/TCYB.2014.2362101.
