Computers & Computing

Micro Expression Recognition Using Delaunay Triangulation and Voronoi Tessellation
