Analyzing Facial Expressions in Three-Dimensional Space

 

Introduction

     Traditionally, human facial expressions have been studied using either 2D static images or 2D video sequences. Such 2D-based analysis has difficulty handling large pose variations and subtle facial behavior. This exploratory research targets facial expression analysis and recognition in 3D space. Analyzing 3D facial expressions will facilitate the examination of the fine structural changes inherent in spontaneous expressions. The project aims to achieve a high rate of accuracy in identifying a wide range of facial expressions, with the ultimate goal of increasing the general understanding of facial behavior and the 3D structure of facial expressions on a detailed level.


Project Progress

 

I. BU-3DFE (Binghamton University 3D Facial Expression) Database (Static Data)

 

       Although 3D facial models have been extensively used for 3D face recognition and 3D face animation, the usefulness of such data for 3D facial expression recognition has been unknown. To foster research in this field, we created a 3D facial expression database (the BU-3DFE database), which includes 100 subjects and 2,500 facial expression models. The BU-3DFE database is available to the research community, whose areas of interest are as diverse as affective computing, computer vision, human-computer interaction, security, biomedicine, law enforcement, and psychology.

 

        The database presently contains 100 subjects (56% female, 44% male), ranging in age from 18 to 70 years, with a variety of ethnic/racial ancestries, including White, Black, East Asian, Middle East Asian, Indian, and Hispanic/Latino. Participants in the face scans include undergraduate students, graduate students, and faculty from our institute's departments of Psychology, Arts, and Engineering (Computer Science, Electrical Engineering, System Science, and Mechanical Engineering). The majority of participants were undergraduates from the Psychology Department (collaborator: Dr. Peter Gerhardstein).

    

         Each subject performed seven expressions in front of the 3D face scanner. With the exception of the neutral expression, each of the six prototypic expressions (happiness, disgust, fear, anger, surprise, and sadness) was captured at four levels of intensity. Therefore, there are 25 static 3D expression models for each subject (one neutral plus six expressions at four intensity levels), resulting in a total of 2,500 3D facial expression models in the database. Associated with each expression shape model is a corresponding facial texture image captured from two views (about +45° and -45°). As a result, the database consists of 2,500 two-view texture images and 2,500 geometric shape models.
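To make the composition explicit, the short Python sketch below enumerates the per-subject models (one neutral plus six expressions at four intensity levels each). The expression labels and tuple layout are illustrative only and do not reflect the database's actual file naming.

    from itertools import product

    # Illustrative enumeration of the BU-3DFE composition (hypothetical labels,
    # not the database's actual file naming convention).
    EXPRESSIONS = ["happiness", "disgust", "fear", "anger", "surprise", "sadness"]
    INTENSITY_LEVELS = [1, 2, 3, 4]

    def models_for_subject(subject_id):
        """Yield one (subject, expression, intensity) tuple per 3D model of a subject."""
        yield (subject_id, "neutral", 0)
        for expression, level in product(EXPRESSIONS, INTENSITY_LEVELS):
            yield (subject_id, expression, level)

    per_subject = len(list(models_for_subject(1)))   # 1 + 6 * 4 = 25 models
    total_models = per_subject * 100                 # 25 models * 100 subjects = 2,500 models
    print(per_subject, total_models)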

 

 

Facial Expression Recognition Based on the BU-3DFE Database

 

       We investigated the usefulness of 3D facial geometric shapes to represent and recognize facial expressions using 3D facial expression range data. We developed a novel approach that extracts primitive 3D facial expression features and then uses the feature distribution to classify the prototypic facial expressions. Facial surfaces are labeled with primitive surface features derived from the surface curvatures. The distribution of these features is used as the descriptor of the facial surface, which characterizes the facial expression. We conducted a person-independent study on the facial expressions contained in our BU-3DFE database; the results show a correct recognition rate of about 83% in classifying the six universal expressions using an LDA-based approach.
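As a rough illustration of this pipeline, the Python sketch below assumes per-vertex principal curvatures have already been estimated from each range model; it labels vertices with coarse primitive surface types, builds a normalized histogram as the expression descriptor, and classifies with scikit-learn's linear discriminant analysis. The primitive categories and the single global histogram are simplifications for illustration, not the exact published feature set.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def primitive_label(k1, k2, eps=1e-3):
        """Map a vertex's principal curvatures (k1, k2) to a coarse primitive surface type."""
        if abs(k1) < eps and abs(k2) < eps:
            return 0  # flat
        if k1 > 0 and k2 > 0:
            return 1  # peak (convex)
        if k1 < 0 and k2 < 0:
            return 2  # pit (concave)
        if abs(k2) < eps:
            return 3  # ridge-like
        if abs(k1) < eps:
            return 4  # valley-like
        return 5      # saddle

    def expression_descriptor(principal_curvatures):
        """Normalized histogram of primitive labels over all vertices of one face scan.

        `principal_curvatures` is an (N, 2) array of (k1, k2) values per vertex;
        estimating the curvatures from the raw mesh is outside this sketch.
        """
        labels = np.array([primitive_label(k1, k2) for k1, k2 in principal_curvatures])
        hist = np.bincount(labels, minlength=6).astype(float)
        return hist / hist.sum()

    def train_and_classify(train_descriptors, train_labels, test_descriptors):
        """Person-independent setup: fit LDA on training subjects, predict on unseen subjects."""
        lda = LinearDiscriminantAnalysis()
        lda.fit(np.asarray(train_descriptors), np.asarray(train_labels))
        return lda.predict(np.asarray(test_descriptors))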

 

 

Requesting Data (BU-3DFE)

     With the agreement of the technology transfer office of SUNY at Binghamton, the database is available for use by external parties. Due to agreements signed by the volunteer models, a written agreement must first be signed by the recipient and the research administration office director of your institution before the data can be provided. Furthermore, the data will be provided only to parties pursuing research for non-profit use. To make a request for the data, please contact Dr. Lijun Yin at lijun@cs.binghamton.edu. For any profit/commercial use of the data, please also contact both Dr. Lijun Yin and Mr. Scott Hancock in the Office of Technology Licensing and Innovation Partnerships at shancock@binghamton.edu.

 

Note: (1) Students are not eligible to be recipients. If you are a student, please have your supervisor make the request. (2) Once the agreement form is signed, we will provide access to download the data.

 

If this data is used, in whole or in part, for any publishable work, the following paper must be referenced:

Lijun Yin, Xiaozhou Wei, Yi Sun, Jun Wang, and Matthew J. Rosato, “A 3D Facial Expression Database for Facial Behavior Research”, The 7th International Conference on Automatic Face and Gesture Recognition, April 10-12, 2006, pp. 211-216.

 

 

II. BU-4DFE (3D + time):  A 3D Dynamic Facial Expression Database (Dynamic Data)

 

    To move the analysis of facial behavior from a static 3D space to a dynamic 3D space, we extended the BU-3DFE database to the BU-4DFE database.

Here we present a newly created high-resolution 3D dynamic facial expression database, which is made available to the scientific research community. The 3D facial expressions are captured at video rate (25 frames per second). For each subject, there are six model sequences showing the six prototypic facial expressions (anger, disgust, happiness, fear, sadness, and surprise), respectively. Each expression sequence contains about 100 frames. The database contains 606 3D facial expression sequences captured from 101 subjects, with a total of approximately 60,600 frame models. Each 3D model in a 3D video sequence has a resolution of approximately 35,000 vertices. The texture video has a resolution of about 1040×1329 pixels per frame. The resulting database consists of 58 female and 43 male subjects, with a variety of ethnic/racial ancestries, including Asian, Black, Hispanic/Latino, and White.
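For readers who work with the sequences programmatically, the Python sketch below shows one possible in-memory representation of a BU-4DFE expression sequence. The class names, array shapes, and field layout are assumptions for illustration and do not describe the database's actual file format.

    from dataclasses import dataclass
    from typing import List
    import numpy as np

    FRAME_RATE_HZ = 25  # capture rate reported for BU-4DFE

    @dataclass
    class FrameModel:
        """One frame of a 3D expression sequence (hypothetical in-memory layout)."""
        vertices: np.ndarray   # (~35,000, 3) float array of 3D vertex positions
        faces: np.ndarray      # (M, 3) integer array of triangle indices
        texture: np.ndarray    # (~1329, ~1040, 3) uint8 texture image for this frame

    @dataclass
    class ExpressionSequence:
        subject_id: str
        expression: str              # one of the six prototypic expressions
        frames: List[FrameModel]     # roughly 100 frames per sequence

        def duration_seconds(self) -> float:
            return len(self.frames) / FRAME_RATE_HZ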

 

 

 

Individual model views

 

 

 


 

Sample expression model sequences (male and female)

 

Requesting Data (BU-4DFE)

     With the agreement of the technology transfer office of SUNY at Binghamton, the database is available for use by external parties. Due to agreements signed by the volunteer models, a written agreement must first be signed by the recipient and the research administration office director of your institution before the data can be provided. Furthermore, the data will be provided only to parties pursuing research for non-profit use. To make a request for the data, please contact Dr. Lijun Yin at lijun@cs.binghamton.edu. For any profit/commercial use of the data, please also contact both Dr. Lijun Yin and Mr. Scott Hancock in the Office of Technology Licensing and Innovation Partnerships at shancock@binghamton.edu.


Note:
(1) Students are not eligible to be recipients. If you are a student, please have your supervisor make the request.
(2) Once a license agreement is signed, we will provide access to download the data.

(3) If this data is used, in whole or in part, for any publishable work, the following paper must be referenced:

   Lijun Yin, Xiaochen Chen, Yi Sun, Tony Worm, and Michael Reale, “A High-Resolution 3D Dynamic Facial Expression Database”, The 8th International Conference on Automatic Face and Gesture Recognition, September 17-19, 2008 (Tracking Number: 66).

 

Development Team:

PI:  Dr. Lijun Yin.

Research Team:  Xiaozhou Wei, Yi Sun, Jun Wang, Matthew Rosato, Myung Jin Ko, Wanqi Tang, Peter Longo, Xiaochen Chen, Terry Hung, Michael Reale, Tony Worm, and Xing Zhang.

Collaborator: Dr. Peter Gerhardstein of the Department of Psychology, SUNY Binghamton, and his team (Ms. Gina Shroff).

 

Related Publications:

 

·         Lijun Yin, Xiaozhou Wei, Yi Sun, Jun Wang, and Matthew Rosato, “A 3D Facial Expression Database for Facial Behavior Research”, The 7th International Conference on Automatic Face and Gesture Recognition (2006), IEEE Computer Society TC PAMI, Southampton, UK, April 10-12, 2006, pp. 211-216. [PDF]

·         Lijun Yin and Xiaozhou Wei, “Multi-Scale Primal Feature Based Facial Expression Modeling and Identification”, The 7th International Conference on Automatic Face and Gesture Recognition (2006), IEEE Computer Society TC PAMI, Southampton, UK, April 10-12, 2006, pp. 603-608. [PDF]

·         Jun Wang, Lijun Yin, Xiaozhou Wei, and Yi Sun, “3D Facial Expression Recognition Based on Primitive Surface Feature Distribution”, IEEE International Conference on Computer Vision and Pattern Recognition (CVPR 2006), IEEE Computer Society, New York, NY, June 17-22, 2006. [PDF]

·         L. Yin, X. Wei, P. Longo, and A. Bhuvanesh, “Analyzing Facial Expressions Using Intensity-Variant 3D Data for Human Computer Interaction”, 18th IAPR International Conference on Pattern Recognition (ICPR 2006), Hong Kong, pp. 1248-1251. (Best Paper Award) [PDF]

·         Y. Sun and L. Yin, “Evaluation of 3D Facial Feature Selection for Individual Facial Model Identification”, 18th IAPR International Conference on Pattern Recognition (ICPR 2006), Hong Kong, pp. 562-565. [PDF]

·         J. Wang and L. Yin, “Static Topographic Modeling for Facial Expression Recognition and Analysis”, Computer Vision and Image Understanding, Elsevier Science, November 2007, pp. 19-34. [PDF]

·         L. Yin, X. Chen, Y. Sun, T. Worm, and M. Reale, “A High-Resolution 3D Dynamic Facial Expression Database”, The 8th International Conference on Automatic Face and Gesture Recognition (2008), IEEE Computer Society TC PAMI, Amsterdam, The Netherlands, September 17-19, 2008 (Tracking Number: 66). [PDF]

·         Y. Sun and L. Yin, “Facial Expression Recognition Based on 3D Dynamic Range Model Sequences”, The 10th European Conference on Computer Vision (ECCV 2008), Marseille, France, October 12-18, 2008. [PDF]

 

 

III. BP4D-Spontaneous:  Binghamton-Pittsburgh 3D Dynamic Spontaneous Facial Expression Database (Spontaneous Dynamic Data) (under construction)

 

 

Acknowledgement:

This material is based upon work supported in part by the National Science Foundation under grants IIS-0541044, IIS-0414029, and IIS-1051103. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. We would also like to acknowledge the support of NYSTAR's James D. Watson Investigator Program.

 


 

Copyright © GAIC Lab, SUNY at Binghamton, 2013.