Real-Time Facial Expression Cloning and Animation
This work generates real-time facial expression animation on an
individualized wireframe model avatar. The tracking and head pose
estimation (developed by Dr. Qiang Ji) and the animation (developed by
Dr. Lijun Yin) in 3D space run in real time. The facial animation is
driven by only 22 fiducial points tracked on the performer's face.
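Driving a dense wireframe model from a sparse set of tracked fiducial points is commonly done with scattered-data interpolation. The sketch below is not the authors' implementation; it is a minimal illustration of one standard choice, Gaussian radial-basis-function (RBF) interpolation, where the displacements of the 22 control points are smoothly propagated to all mesh vertices. The function names and the kernel width `sigma` are assumptions for illustration.

```python
import numpy as np

def rbf_weights(ctrl_rest, ctrl_disp, sigma=1.0):
    """Solve for RBF weights that map control-point displacements
    to a smooth deformation field (Gaussian kernel)."""
    d = np.linalg.norm(ctrl_rest[:, None, :] - ctrl_rest[None, :, :], axis=-1)
    phi = np.exp(-(d / sigma) ** 2)          # N x N kernel matrix
    return np.linalg.solve(phi, ctrl_disp)   # one weight column per axis

def deform(vertices, ctrl_rest, weights, sigma=1.0):
    """Displace all mesh vertices by the interpolated field."""
    d = np.linalg.norm(vertices[:, None, :] - ctrl_rest[None, :, :], axis=-1)
    phi = np.exp(-(d / sigma) ** 2)          # V x N evaluation matrix
    return vertices + phi @ weights
```

By construction the interpolant reproduces the control-point displacements exactly, so the tracked fiducial points on the avatar follow the performer precisely while the rest of the mesh deforms smoothly.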
This is collaborative work with Dr. Qiang Ji of the Intelligent
Systems Lab, Rensselaer Polytechnic Institute.
Face Modeling and Expression Analysis and Synthesis
A realistic face modeling and facial expression analysis and synthesis
system has been developed. The system consists of face topographic-feature
labeling, facial model adaptation, texture-of-interest detection, and
facial expression reconstruction.
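The four stages above form a sequential pipeline: each stage consumes the output of the previous one. A minimal sketch of that flow is shown below; the stage functions, the shared state dictionary, and the placeholder data shapes are all assumptions for illustration, not the system's actual interfaces.

```python
import numpy as np

# Hypothetical stubs for the four pipeline stages; each reads and
# extends a dict carrying the evolving face-model state.
def label_topo_features(state):
    # Label topographic features (e.g., peaks and ridges) on the input face.
    state["landmarks"] = np.zeros((22, 3))   # placeholder landmark set
    return state

def adapt_model(state):
    # Warp a generic wireframe model to fit the labeled landmarks.
    state["mesh"] = state["landmarks"].copy()
    return state

def detect_texture_of_interest(state):
    # Extract the texture regions needed for expression synthesis.
    state["texture"] = np.ones((4, 4))       # placeholder texture patch
    return state

def reconstruct_expression(state):
    # Re-render the adapted, textured model with the target expression.
    return state

PIPELINE = [label_topo_features, adapt_model,
            detect_texture_of_interest, reconstruct_expression]

def run(frame):
    state = {"frame": frame}
    for stage in PIPELINE:
        state = stage(state)
    return state
```

Keeping the stages as independent callables makes it easy to swap in a different model-adaptation or reconstruction method without touching the rest of the pipeline.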
Demo videos:
- Head TP+6: right view
- Head TP-6: left view
- Fit TP+6: right view
- Fit TP-6: left view
- Topographic labeling: front view
The material is based upon work supported by the NSF under grant 0414029. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the National Science Foundation.
lijun@cs.binghamton.edu