
Manifold-enhanced CycleGAN for facial expression synthesis

Pages 181-193 | Received 20 Jan 2022, Accepted 12 Dec 2022, Published online: 20 Feb 2023

References

  • Torres L. Is there any hope for face recognition? Proceedings of the International Workshop on Image Analysis for Multimedia Interactive Services, vol. 21; Lisboa, Portugal; 2004. p. 23.
  • Shan C, Gong S, McOwan PW. Facial expression recognition based on local binary patterns: a comprehensive study. Image Vis Comput. 2009;27(6):803–816.
  • Yan Y, Huang Y, Chen S, et al. Joint deep learning of facial expression synthesis and recognition. IEEE Trans Multimed. 2019;22(11):2792–2807.
  • Ekman P, Rolls E, Perrett D, et al. Facial expressions of emotion: an old controversy and new findings. Philos Trans R Soc B. 1992;335(1273):63–69.
  • Ekman P, Friesen WV. Facial action coding system. Environ Psychol Nonverbal Behav. 1978.
  • Liu Z, Shan Y, Zhang Z. Expressive expression mapping with ratio images. Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques. 2001.
  • Zhang Q, Liu Z, Guo B, et al. Geometry-driven photorealistic facial expression synthesis. IEEE Trans Vis Comput Graph. 2006;12(1):48–60.
  • Huang D, De la Torre F. Bilinear kernel reduced rank regression for facial expression synthesis. Proceedings of the European Conference Computer Vision; Crete, Greece; 2010. p. 364–377.
  • Peng Y, Yin H. Facial expression analysis and expression-invariant face recognition by manifold-based synthesis. Mach Vis Appl. 2018;29:263–284.
  • Goodfellow I, Pouget-Abadie J, Mirza M, et al. Generative adversarial nets. Proceedings of the International Conference on Neural Information Processing Systems; Montreal, Canada; 2014. p. 2672–2680.
  • Hajarolasvadi N, Ramírez MA, Beccaro W, et al. Generative adversarial networks in human emotion synthesis: a review. IEEE Access. 2020;8:218499–218529.
  • Radford A, Metz L, Chintala S. Unsupervised representation learning with deep convolutional generative adversarial networks. Proceedings of the International Conference Learning Representations; San Juan, Puerto Rico; 2016.
  • Chen M, Li C, Li K, et al. Double encoder conditional GAN for facial expression synthesis. 2018 37th Chinese Control Conference (CCC); IEEE; 2018.
  • Zhu J-Y, Park T, Isola P, et al. Unpaired image-to-image translation using cycle-consistent adversarial networks. Proceedings of the IEEE International Conference Computer Vision; Venice, Italy; 2017. p. 2242–2251.
  • Choi Y, Choi M, Kim M, et al. StarGAN: unified generative adversarial networks for multi-domain image-to-image translation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; Salt Lake City, USA; 2018. p. 8789–8797.
  • Peng Y, Yin H. ApprGAN: appearance-based GAN for facial expression synthesis. IET Image Proc. 2019;13(14):2706–2715.
  • Wu R, Zhang G, Lu S. Cascade EF-GAN: progressive facial expression editing with local focuses. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020.
  • Ding H, Sricharan K, Chellappa R. ExprGAN: facial expression editing with controllable expression intensity. Proceedings of the AAAI Conference on Artificial Intelligence; 2018. p. 6781–6788.
  • Pumarola A, Agudo A, Martinez AM. GANimation: anatomically-aware facial animation from a single image. Proceedings of the European Conference on Computer Vision (ECCV). 2018.
  • Ling J, Xue H, Song L. Toward fine-grained facial expression manipulation. European Conference on Computer Vision; Springer, Cham; 2020.
  • Wu R, Lu S. LEED: label-free expression editing via disentanglement. European Conference on Computer Vision; Springer, Cham; 2020.
  • Xia Y, Zheng W, Wang Y. Local and global perception generative adversarial network for facial expression synthesis. IEEE Trans Circuits Syst Video Technol. 2021;32(3):1443–1452.
  • Cootes TF, Edwards GJ, Taylor CJ. Active appearance models. IEEE Trans Pattern Anal Mach Intell. 2001;23(6):681–685.
  • Golub GH, Reinsch C. Singular value decomposition and least squares solutions. Numer Math. 1970;14(5):403–420.
  • Abboud B, Davoine F. Bilinear factorisation for facial expression analysis and synthesis. IEEE Proc Vis Image Signal Process. 2005;152(3):327–333.
  • Sandbach G, Zafeiriou S, Pantic M, et al. A dynamic approach to the recognition of 3D facial expressions and their temporal models. Proceedings IEEE International Conference on Automatic Face and Gesture Recognition; Santa Barbara, USA; 2011. p. 406–413.
  • Lucey P, Cohn JF, Kanade T, et al. The extended Cohn-Kanade dataset (CK+): a complete dataset for action unit and emotion-specified expression. Proceedings IEEE International Workshop on Computer Vision and Pattern Recognition; San Francisco, USA; 2010. p. 94–101.
  • Lyons M, Akamatsu S, Kamachi M, et al. Coding facial expressions with Gabor wavelets. Proceedings Third IEEE International Conference on Automatic Face and Gesture Recognition; IEEE; 1998.
  • Schroff F, Kalenichenko D, Philbin J. FaceNet: a unified embedding for face recognition and clustering. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015.
  • Abdal R, Qin Y, Wonka P. Image2StyleGAN: how to embed images into the StyleGAN latent space? Proceedings IEEE/CVF International Conference on Computer Vision. 2019.
