Volume 16, Issue 2 (9-2019)                   JSDP 2019, 16(2): 91-104


Merrikhi H, Ebrahimnezhad H. Synthesis of human facial expressions based on the distribution of elastic force applied by control points. JSDP 2019; 16(2): 91-104
URL: http://jsdp.rcisp.ac.ir/article-1-726-en.html
Sahand University of Technology
Abstract:
Facial expressions play an essential role in conveying emotions; thus, facial expression synthesis has gained interest in fields such as computer vision and computer graphics. Facial actions are generated by contraction and relaxation of the muscles innervated by the facial nerves. The possible combinations of these muscle motions are numerous, so facial expressions are often person-specific. In general, however, facial expressions can be divided into six groups: anger, disgust, fear, happiness, sadness, and surprise. Facial expression variations include both global facial feature motions (e.g., opening or closing of the eyes or mouth) and local appearance deformations (e.g., facial wrinkles and furrows).
Ghent and McDonald introduced the Facial Expression Shape Model and the Facial Expression Texture Model for synthesizing global and local changes, respectively. Zhang et al. published an elastic model to balance local and global warping; they then added suitable illumination details to the warped face image with a muscle-distribution-based model.
The goal of facial expression synthesis is to create an expressional face image of a subject given only a neutral face image of that subject.
This paper proposes a new method for the synthesis of human facial expressions, in which an elastic force is defined to simulate the displacement of facial points in various emotional expressions. The basis of this force is a set of control points with specific coordinates and directions on the face image. In other words, each control point applies an elastic force to the points of the face and moves them in a certain direction. The force applied to each point is inversely proportional to the distance between that point and the control point. With several control points, the force applied to each point of the face is the resultant of the forces associated with all control points. To synthesize a specific expression, the locations of the control points and the parameters of the force are adjusted to achieve the desired expressional face. Facial detail is then extracted with a Laplacian pyramid and added to the synthesized image.
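Since the abstract does not give the exact force equation, the following is a minimal sketch of the idea in Python, assuming an inverse-distance weighting of per-control-point displacement vectors and a backward-mapped warp; the function names and the parameters alpha (distance falloff) and eps (division guard) are illustrative assumptions, not the authors' published formulation.

import cv2
import numpy as np

def elastic_warp_field(shape, control_pts, control_vecs, alpha=1.0, eps=1e-6):
    # Displacement field from control points: each control point pushes every
    # pixel along its own direction vector, with strength inversely
    # proportional to the pixel-to-control-point distance; the contributions
    # of all control points are summed (the resultant force).
    # control_pts and control_vecs are (K, 2) arrays in (x, y) order.
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    grid = np.stack([xs, ys], axis=-1).astype(np.float64)   # (H, W, 2) pixel coords
    field = np.zeros((h, w, 2))
    for p, v in zip(np.asarray(control_pts, float), np.asarray(control_vecs, float)):
        d = np.linalg.norm(grid - p, axis=-1)                # distance to control point
        field += (1.0 / (d**alpha + eps))[..., None] * v     # inverse-distance force
    return field

def warp_image(img, field):
    # Backward mapping: resample the neutral image at positions displaced by
    # the summed field (an approximation to inverting the forward motion).
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    return cv2.remap(img,
                     (xs - field[..., 0]).astype(np.float32),
                     (ys - field[..., 1]).astype(np.float32),
                     interpolation=cv2.INTER_LINEAR)

In this sketch, each control point contributes its vector to every pixel with weight 1/(d^alpha + eps), so points near a control point move almost the full vector while distant points barely move, which matches the qualitative behavior described above.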
The proposed method was implemented on the KDEF and Cohn-Kanade (CK+) databases and the results were used for comparison. Two expressions, happy and sad, were selected for synthesis. The proper locations of the control points and the elastic force parameters were determined on the neutral image of the target person, based on the expressional images in the database. The neutral image of the person was then warped with the elastic forces, and facial expression details were added to the warped image using the Laplacian pyramid method. Finally, the experimental results were compared with the photo-realistic and facial expression cloning methods, demonstrating the high visual quality and low computational complexity of the proposed method in synthesizing the face image.
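As a rough illustration of the detail-addition step, the sketch below builds a standard Laplacian pyramid with OpenCV and swaps the finest bands of the warped face for those of an expressive example image; blending only the top detail_levels bands, and the helper names themselves, are assumptions made for illustration rather than the authors' exact procedure.

import cv2
import numpy as np

def laplacian_pyramid(img, levels=4):
    # Standard Laplacian pyramid: each level stores the band-pass detail
    # lost in one pyrDown/pyrUp round trip; the last entry is the residual.
    pyr, cur = [], img.astype(np.float32)
    for _ in range(levels):
        down = cv2.pyrDown(cur)
        up = cv2.pyrUp(down, dstsize=(cur.shape[1], cur.shape[0]))
        pyr.append(cur - up)
        cur = down
    pyr.append(cur)
    return pyr

def add_expression_details(warped, expressive, detail_levels=2):
    # Replace the finest (wrinkle/furrow) bands of the warped face with those
    # of an expressive example of the same size, then collapse the pyramid.
    pw = laplacian_pyramid(warped)
    pe = laplacian_pyramid(expressive)
    pw[:detail_levels] = pe[:detail_levels]
    out = pw[-1]
    for lap in reversed(pw[:-1]):
        out = cv2.pyrUp(out, dstsize=(lap.shape[1], lap.shape[0])) + lap
    return np.clip(out, 0, 255).astype(np.uint8)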
Full-Text [PDF 4504 kb]
Type of Study: Research | Subject: Paper
Received: 2017/08/01 | Accepted: 2019/01/14 | Published: 2019/09/17 | ePublished: 2019/09/17

References
[1] L. Asgharian and H. Ebrahimnezhad, "Animating of Cartoon Characters by Skeleton based Articular Motion Transferring of Other Objects," Journal of Signal and Data Processing, vol. 13, no. 2, pp. 71-89, 2016.
[2] F. Pighin, J. Hecker, D. Lischinski, R. Szeliski, and D. H. Salesin, "Synthesizing realistic facial expressions from photographs," in Proceedings of the 25th annual conference on Computer graphics and interactive techniques - SIGGRAPH '98, 1998, vol. 2, no. 3, pp. 75-84. [DOI:10.1145/280814.280825]
[3] Y. Zhang and W. Wei, "A realistic dynamic facial expression transfer method," Neurocomputing, vol. 89, pp. 21-29, 2012. [DOI:10.1016/j.neucom.2012.01.019]
[4] K. Yu, Z. Wang, L. Zhuo, J. Wang, Z. Chi, and D. Feng, "Learning realistic facial expressions from web images," Pattern Recognit., vol. 46, no. 8, pp. 2144-2155, 2013. [DOI:10.1016/j.patcog.2013.01.032]
[5] D. Huang and F. De La Torre, "Bilinear kernel reduced rank regression for facial expression synthesis," in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2010, vol. 6312 LNCS, no. PART 2, pp. 364-377. [DOI:10.1007/978-3-642-15552-9_27]
[6] E. Keeve, S. Girod, R. Kikinis, and B. Girod, "Deformable modeling of facial tissue for craniofacial surgery simulation," Comput. Aided Surg., vol. 3, no. 5, pp. 228-238, 1998. [DOI:10.1002/(SICI)1097-0150(1998)3:5<228::AID-IGS2>3.0.CO;2-I]
[7] J. Noh and U. Neumann, "Expression cloning," in Proceedings of the 28th annual conference on Computer graphics and interactive techniques - SIGGRAPH '01, 2001, pp. 277-288. [DOI:10.1145/383259.383290]
[8] K. Chung and H. M. Chung, Gross Anatomy (Board Review). 2005.
[9] W. Wei, C. Tian, S. J. Maybank, and Y. Zhang, "Facial expression transfer method based on frequency analysis," Pattern Recognit., vol. 49, pp. 115-128, 2016. [DOI:10.1016/j.patcog.2015.08.004]
[10] P. Ekman and W. V. Friesen, "Constants across cultures in the face and emotion," J. Pers. Soc. Psychol., vol. 17, no. 2, pp. 124-129, 1971. [DOI:10.1037/h0030377]
[11] S. M. Platt and N. I. Badler, "Animating facial expressions," in ACM SIGGRAPH Computer Graphics, 1981, vol. 15, no. 3, pp. 245-252. [DOI:10.1145/965161.806812]
[12] Y. Lee, D. Terzopoulos, and K. Walters, "Realistic modeling for facial animation," in Proceedings of the 22nd annual conference on Computer graphics and interactive techniques - SIGGRAPH '95, 1995, pp. 55-62. [DOI:10.1145/218380.218407]
[13] J. Ghent and J. McDonald, "Photo-realistic facial expression synthesis," Image Vis. Comput., vol. 23, no. 12, pp. 1041-1050, 2005. [DOI:10.1016/j.imavis.2005.06.011]
[14] T. F. Cootes, C. J. Taylor, D. H. Cooper, and J. Graham, "Active Shape Models - Their Training and Application," Comput. Vis. Image Underst., vol. 61, no. 1, pp. 38-59, 1995. [DOI:10.1006/cviu.1995.1004]
[15] T. F. Cootes, G. J. Edwards, and C. J. Taylor, "Active Appearance Models," in Proc. European Conference on Computer Vision, 1998, vol. 2, pp. 484-498. [DOI:10.1007/BFb0054760]
[16] L. Huang and C. Su, "Facial expression synthesis using manifold learning and belief propagation," Soft Comput., vol. 10, no. 12, pp. 1193-1200, 2006. [DOI:10.1007/s00500-005-0041-7]
[17] Q. Zhang, Z. Liu, B. Guo, D. Terzopoulos, and H. Y. Shum, "Geometry-driven photorealistic facial expression synthesis," IEEE Trans. Vis. Comput. Graph., vol. 12, no. 1, pp. 48-60, 2006. [DOI:10.1109/TVCG.2006.9]
[18] J. Jia, S. Zhang, and L. Cai, "Facial expression synthesis based on motion patterns learned from face database," in Proceedings - International Conference on Image Processing, ICIP, 2010, pp. 3973-3976. [DOI:10.1109/ICIP.2010.5653914]
[19] L. Xiong, N. Zheng, S. Du, and L. Wu, "Extended facial expression synthesis using statistical appearance model," in Industrial Electronics and …, 2009, pp. 1582-1587. [DOI:10.1109/ICIEA.2009.5138461]
[20] Y. Yang, N. Zheng, Y. Liu, S. Du, Y. Su, and Y. Nishio, "Expression transfer for facial sketch animation," Signal Processing, vol. 91, no. 11, pp. 2465-2477, 2011. [DOI:10.1016/j.sigpro.2011.04.020]
[21] Y. Zhang, W. Lin, B. Zhou, Z. Chen, B. Sheng, and J. Wu, "Facial expression cloning with elastic and muscle models," J. Vis. Commun. Image Represent., vol. 25, no. 5, pp. 916-927, 2014. [DOI:10.1016/j.jvcir.2014.02.010]
[22] K. Li, Q. Dai, R. Wang, Y. Liu, F. Xu, and J. Wang, "A data-driven approach for facial expression retargeting in video," IEEE Trans. Multimed., vol. 16, no. 2, pp. 299-310, 2014. [DOI:10.1109/TMM.2013.2293064]
[23] P. Garrido, L. Valgaerts, O. Rehmsen, T. Thormahlen, P. Perez, and C. Theobalt, "Automatic face reenactment," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 4217-4224. [DOI:10.1109/CVPR.2014.537]
[24] J. Thies, M. Zollhöfer, M. Nießner, L. Valgaerts, M. Stamminger, and C. Theobalt, "Real-time Expression Transfer for Facial Reenactment," SIGGRAPH Asia 2015, vol. 34, no. 6, pp. 1-14, 2015. [DOI:10.1145/2816795.2818056]
[25] C. Tian, H. Li, and X. Gao, "Photo-realistic 2D expression transfer based on FFT and modified Poisson image editing," Neurocomputing, vol. 309, pp. 1-10, 2018. [DOI:10.1016/j.neucom.2018.03.045]
[26] W. Xie, L. Shen, M. Yang, and J. Jiang, "Facial expression synthesis with direction field preservation based mesh deformation and lighting fitting based wrinkle mapping," Multimed. Tools Appl., vol. 77, no. 6, pp. 7565-7593, 2018. [DOI:10.1007/s11042-017-4661-6]
[27] P. Lucey, J. F. Cohn, T. Kanade, J. Saragih, Z. Ambadar, and I. Matthews, "The Extended Cohn-Kanade Dataset (CK+): A complete facial expression dataset for action unit and emotion-specified expression," in CVPR Workshops (CVPRW), 2010, pp. 94-101. [DOI:10.1109/CVPRW.2010.5543262]
[28] D. Lundqvist, A. Flykt, and A. Ohman, "The Karolinska directed emotional faces (KDEF)," CD ROM from Dep. Clin. Neurosci. Psychol. Sect. Karolinska Institutet, pp. 91-630, 1998. [DOI:10.1037/t27732-000]


Rights and permissions
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
