Developing a High-Definition Face Modeling System for Recognition and Generation of Face and Face Expressions

Project Description
Human faces consist of the same facial features in roughly the same geometric configuration: they are similar, yet each is different. This research seeks to identify the features embedded in 2D face images and to generate 3D face models from them. The objective of this project is to explore a novel technique for realistic facial analysis and modeling, with the ultimate goal of developing a high-definition face modeling system that facilitates research on human face understanding and recognition, with applications in HCI, security, telecommunication, entertainment, and medical and psychological research. We proposed to develop a novel approach to face modeling by exploring the topographic primal feature theory, which we refer to as topographic based face analysis and topographic based face modeling. The algorithms developed are applicable to a number of applications, including face and facial expression recognition for HCI, security, and related areas.
Project Development

· Topographic based face analysis
We developed a novel technique for analyzing and synthesizing the human face at a detailed level. A so-called topographic representation is proposed for labeling facial images. Tracing the behavior of features across multiple scales can reveal valuable information about the nature of the underlying physical process, and thus leads to an intrinsic relationship between face features and surface properties. Through the topographic analysis of face images, each pixel is labeled as one of the primal features embedded in the image (e.g., peak, valley). The resulting topographic feature distribution (i.e., the topographic context) is a unique signature of each individual face or expression. The generated topographic map shows that the topographic features change along with changes in facial appearance. This finding leads to a number of significant applications, including facial expression recognition, face color transfer, and human eye detection and tracking. The idea of analyzing the geometric structure of the face based on the principal curvatures and their classification also extends to face recognition and face range data classification. To provide more topographic detail in facial images, we further developed image resolution enhancement through a proposed hyper-resolution algorithm. The results have been published in technical conferences and journals (as listed in the publication section below).
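
As a concrete illustration of the pixel-level labeling described above, the sketch below assigns each pixel a topographic label from the signs of the principal curvatures (the eigenvalues of the intensity Hessian) and the gradient magnitude. The single smoothing scale, the reduced label set, and the thresholds are simplifying assumptions for illustration only and do not reproduce the project's exact multi-scale procedure.

```python
# Minimal sketch of topographic labeling of a grayscale face image.
# Assumptions: one Gaussian scale, a reduced label set, and illustrative
# thresholds (grad_eps, curv_eps); not the project's exact algorithm.
import numpy as np
from scipy.ndimage import gaussian_filter

def topographic_labels(image, sigma=2.0, grad_eps=1e-2, curv_eps=1e-3):
    I = image.astype(float)
    # First- and second-order Gaussian derivatives of the intensity surface.
    Ix  = gaussian_filter(I, sigma, order=(0, 1))
    Iy  = gaussian_filter(I, sigma, order=(1, 0))
    Ixx = gaussian_filter(I, sigma, order=(0, 2))
    Iyy = gaussian_filter(I, sigma, order=(2, 0))
    Ixy = gaussian_filter(I, sigma, order=(1, 1))

    grad_mag = np.hypot(Ix, Iy)

    # Eigenvalues of the 2x2 Hessian, i.e., the principal curvatures.
    mean = (Ixx + Iyy) / 2.0
    diff = np.sqrt(((Ixx - Iyy) / 2.0) ** 2 + Ixy ** 2)
    k1, k2 = mean + diff, mean - diff   # k1 >= k2

    labels = np.full(I.shape, 'hillside', dtype=object)
    flat = grad_mag < grad_eps
    labels[flat & (k1 < -curv_eps) & (k2 < -curv_eps)] = 'peak'
    labels[flat & (k1 >  curv_eps) & (k2 >  curv_eps)] = 'pit'
    labels[flat & (k1 >  curv_eps) & (k2 < -curv_eps)] = 'saddle'
    labels[flat & (np.abs(k1) <= curv_eps) & (k2 < -curv_eps)] = 'ridge'
    labels[flat & (k1 > curv_eps) & (np.abs(k2) <= curv_eps)] = 'valley'
    return labels
```

The resulting label map can then be summarized per region into the topographic context mentioned above.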

Video demonstrations (original; topographic map; face models).
· Topographic based face model generation
Based on the topographic analysis and labeling, we developed a method for creating an individual face model using an adaptive mesh in the topographic domain. The adaptive mesh (also called a dynamic mesh) is adjusted from a generic model according to the topographic features. The model deformation is driven by an external force determined by the topographic gradients and topographic curvatures. A comparison of the generated models against range models captured by a 3D scanner shows that the resulting individualized models represent individual face shapes with sufficient accuracy.
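
The sketch below illustrates, under simplifying assumptions, how a generic mesh could be deformed by an external force sampled from a topographic feature map and balanced against an internal smoothing force. The scalar feature_map input, the neighbor-averaging internal force, and the step weights are illustrative choices, not the project's exact deformation model.

```python
# Minimal sketch of adapting a generic 2D mesh to a topographic feature map.
# Assumptions: feature_map is a 2D array whose large values mark topographic
# features; alpha/beta and the neighbor-centroid internal force are
# illustrative, not the project's exact formulation.
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def adapt_mesh(vertices, neighbors, feature_map, steps=100,
               alpha=0.3, beta=0.7, sigma=3.0):
    """vertices: (N, 2) array of (row, col); neighbors: list of index lists."""
    # Smooth the feature map so its gradient acts as a long-range external force.
    potential = gaussian_filter(feature_map.astype(float), sigma)
    gy, gx = np.gradient(potential)

    v = vertices.astype(float).copy()
    for _ in range(steps):
        # External force: potential gradient sampled at each vertex position.
        fy = map_coordinates(gy, [v[:, 0], v[:, 1]], order=1)
        fx = map_coordinates(gx, [v[:, 0], v[:, 1]], order=1)
        external = np.stack([fy, fx], axis=1)

        # Internal force: pull each vertex toward its neighbors' centroid,
        # keeping the generic mesh smooth while it deforms.
        internal = np.array([v[nbrs].mean(axis=0) - v[i]
                             for i, nbrs in enumerate(neighbors)])

        v += alpha * internal + beta * external
        # Keep vertices inside the image domain.
        v[:, 0] = np.clip(v[:, 0], 0, feature_map.shape[0] - 1)
        v[:, 1] = np.clip(v[:, 1], 0, feature_map.shape[1] - 1)
    return v
```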


· Applications
The idea of topographic labeling is extended to applications in eye tracking, face color transfer, face model classification, and facial expression classification. The usefulness of the generated models is validated through multimedia and HCI applications; the individualized models are used for performance-driven avatar animation and expression transfer.
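
As a rough illustration of how the topographic context could drive expression classification, the sketch below concatenates per-cell label histograms into a descriptor and matches it against a labeled gallery with a nearest-neighbor rule. The grid size, label set, and 1-NN matcher are illustrative assumptions rather than the classifiers reported in the publications below.

```python
# Minimal sketch of a "topographic context" descriptor and a 1-NN matcher.
# Assumptions: labels come from a per-pixel topographic labeling such as the
# sketch above; grid size and distance metric are illustrative.
import numpy as np

LABEL_SET = ['peak', 'pit', 'ridge', 'valley', 'saddle', 'hillside']

def topographic_context(labels, grid=(4, 4)):
    """Concatenate per-cell label histograms into one descriptor vector."""
    rows = np.array_split(np.arange(labels.shape[0]), grid[0])
    cols = np.array_split(np.arange(labels.shape[1]), grid[1])
    descriptor = []
    for r in rows:
        for c in cols:
            cell = labels[np.ix_(r, c)]
            descriptor.extend(np.mean(cell == name) for name in LABEL_SET)
    return np.array(descriptor)

def classify_expression(query_desc, gallery_descs, gallery_tags):
    """Return the tag of the nearest gallery descriptor (Euclidean distance)."""
    dists = [np.linalg.norm(query_desc - g) for g in gallery_descs]
    return gallery_tags[int(np.argmin(dists))]
```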



Related Publications
· Lijun Yin, Johnny Loi, and Wei Xiong, "Facial Expression Representation and Recognition Based on Texture Augmentation and Topographic Masking", ACM Multimedia 2004 (SIGMM), New York, NY, October 2004, pp. 236-239. [PDF]
· Lijun Yin and Xiaozhou Wei, "Multi-Scale Primal Feature Based Facial Expression Modeling and Identification", 7th International Conference on Automatic Face and Gesture Recognition (FGR 2006), IEEE Computer Society TC PAMI.
· Lijun Yin and Matt Yourst, "Hyper-Resolution: image detail reconstruction through parametric edges", Computers and Graphics, Vol. 29, No. 6, Elsevier Science, December 2005, pp. 946-960. [PDF]
· Lijun Yin and K. Weiss, "Generating 3D Views of Facial Expressions From Frontal Face Video Based on Topographic Analysis", ACM Multimedia 2004 (SIGMM).
· Lijun Yin, Kenny Weiss, and Xiaozhou Wei, "Face Modeling From Frontal Face Image Based on Topographic Analysis", SIGGRAPH 2004 Posters program, August 2004.
· Lijun Yin, Xiaozhou Wei, Yi Sun, Jun Wang, and Matthew Rosato, "A 3D Facial Expression Database For Facial Behavior Research", 7th International Conference on Automatic Face and Gesture Recognition (FGR 2006), IEEE Computer Society TC PAMI.
· Jun Wang, Lijun Yin, Xiaozhou Wei, and Yi Sun, "3D Facial Expression Recognition Based on Primitive Surface Feature Distribution", IEEE International Conference on Computer Vision and Pattern Recognition (CVPR 2006).
· Jun Wang and Lijun Yin, "Detecting and Tracking Eyes Through Dynamic Terrain Feature Matching", IEEE CVPR 2005 Workshop on Vision For Human Computer Interaction (V4HCI), in conjunction with CVPR 2005.
· Lijun Yin, Johnny Loi, Jingrong Jia, and Joseph Morrissey, "Topographic Based Facial Skin Color Transfer", SIGGRAPH 2004 Posters program, August 2004.
· Yi Sun and Lijun Yin, "3D face recognition using two views face modeling and labeling", IEEE CVPR 2005 Workshop on Advanced 3D Imaging for Safety and Security (A3DISS), in conjunction with CVPR 2005, San Diego, CA, June 2005. [PDF]
· Yi Sun and Lijun Yin, "Evaluation of 3D Facial Feature Selection For Individual Facial Model Identification", accepted by IAPR/IEEE International Conference on Pattern Recognition (ICPR 2006), Hong Kong, August 2006. [PDF]
· Xiaozhou Wei, Zhiwei Zhu, Lijun Yin, and Qiang Ji, "A real time face tracking and animation system", First IEEE CVPR 2004 Workshop on Face Processing in Video, in conjunction with IEEE International Conference on Computer Vision and Pattern Recognition (CVPR 2004), June 2004.
· Xiaozhou Wei, Zhiwei Zhu, Lijun Yin, and Qiang Ji, "Avatar mediated face tracking and lip reading for human computer interaction", ACM Multimedia 2004 (SIGMM).
Project Participants:

PI: Dr. Lijun Yin.
Students: Xiaozhou Wei, Jun Wang, Yi Sun, Kenny Weiss, Wei Xiong, Johnny Loi.
Future Development

Future development could address multi-view labeling and real-time implementation.
Acknowledgement:

This material is based upon work supported by the National Science Foundation under grant IIS-0414029. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. We would also like to acknowledge the support of NYSTAR's James D. Watson Program.

Copyright © GAIC lab, SUNY at Binghamton.