Organized Research Center for Imaging, Acoustics, and Perception Science
(CIAPS)

What is CIAPS?



Imaging Science, Acoustic Science, and Perception Science (IAP) are highly interdisciplinary fields that combine aspects of science, mathematics, and engineering. Imaging Science is a multidisciplinary approach to solving problems using imaging systems. These problems range from understanding the environment at both large and microscopic scales to understanding how the human eye-brain system allows us to visually comprehend the world. We use imaging systems to reveal the long-hidden content of ancient documents, recognize human faces and fingerprints, identify manipulated pictures, and support disaster relief efforts with images from airplanes and satellites. Acoustic Science is the interdisciplinary study of all mechanical waves, including vibration, sound, and ultrasound, with particular attention to their generation, transmission, and reception. Perception Science studies how people translate sensory impressions into a coherent, unified view of the world around them.


CIAPS Description

The mission of CIAPS is to advance the scientific underpinnings of imaging and acoustic technologies and to increase the scientific understanding of human perception and cognitive mechanisms, with applications in security, healthcare, the arts, robotics, and environmental and social science. The center will also promote and facilitate interdisciplinary collaborations among CIAPS researchers and industry developers, leading to the development of large-scale multidisciplinary research projects with both scientific and economic impact.




Objective of CIAPS



The objective of the CIAPS Center is threefold:

  • Advance the scientific underpinnings of imaging and acoustic technologies and increase the scientific understanding of human perception and cognitive mechanisms, for applications in security, healthcare, the arts, robotics, and environmental and social science.

  • Promote and facilitate interdisciplinary collaboration among IAP researchers in Binghamton, in both academia and industry, leading to the development of large-scale, externally funded multidisciplinary research projects.

  • Generate broad impact by translating IAP scientific research into technology and product development with strong commercial potential, enhancing economic development opportunities in the local region, in New York State, and in the nation as a whole.




Participating Faculty




Lijun Yin

Director of CIAPS

Professor of Computer Science
Thomas J. Watson School of Engineering and Applied Science
 Computer vision & graphics, HCI, biometrics
 (607) 777-5484
 lijun@cs.binghamton.edu

Scott Craver

Associate Director of CIAPS

Associate Professor of Electrical and Computer Engineering
Thomas J. Watson School of Engineering and Applied Science
 Image cryptology, watermarking, and biometrics security
 (607) 777-7238
 scraver@binghamton.edu

Ronald Miles

Associate Director of CIAPS

SUNY Distinguished Professor of Mechanical Engineering
Thomas J. Watson School of Engineering and Applied Science
 Acoustics, micro-acoustic sensors, vibrations
 (607) 777-4038
 miles@binghamton.edu

Peter C. Gerhardstein

Associate Director of CIAPS

Professor of Psychology
Harpur College of Arts and Sciences
 Visual perception and cognition
 (607) 777-4387
 gerhard@binghamton.edu

Zhongfei (Mark) Zhang

Professor

Department of Computer Science
Thomas J. Watson School of Engineering and Applied Science
 Multimedia and image data learning & retrieval
 (607) 777-2935
 zhongfei@cs.binghamton.edu

Jessica Fridrich

Distinguished Professor

Department of Electrical and Computer Engineering
Thomas J. Watson School of Engineering and Applied Science
 Steganography and steganalysis
 (607) 777-6177
 fridrich@binghamton.edu

Carl P. Lipo

Professor

Department of Anthropology
Harpur College of Arts and Sciences
 Remote sensing and thermal imagery for the study of the archaeological record
 (607) 777-4306
 clipo@binghamton.edu

Xingye Qiao

Associate Professor

Department of Mathematical Sciences
Harpur College of Arts and Sciences
 Statistical machine learning and pattern recognition
 (607) 777-2593
 qiao@math.binghamton.edu

Weiying Dai

Assistant Professor

Department of Computer Science
Thomas J. Watson School of Engineering and Applied Science
 Medical imaging (MRI, CT, neuroimaging, angiography)
 (607) 777-4859
 wdai@binghamton.edu

Lei Yu

Associate Professor

Department of Computer Science
Thomas J. Watson School of Engineering and Applied Science
 Big image data mining and machine learning
 (607) 777-6250
 lyu@cs.binghamton.edu

Yao Liu

Assistant Professor

Department of Computer Science
Thomas J. Watson School of Engineering and Applied Science
 Power-efficient video streaming, QoE-improved VR visualization
 (607) 777-4365
 yaoliu@binghamton.edu

Mark Fowler

Distinguished Teaching Professor

Department of Electrical and Computer Engineering
Thomas J. Watson School of Engineering and Applied Science
 Data compression and signal processing
 (607) 777-6973
 mfowler@binghamton.edu

Stephen A. Zahorian

Professor

Department of Electrical and Computer Engineering
Thomas J. Watson School of Engineering and Applied Science
 Acoustic signal processing and speech recognition
 (607) 777-4846
 zahorian@binghamton.edu

Matthias Kirchner

Assistant Professor

Department of Electrical and Computer Engineering
Thomas J. Watson School of Engineering and Applied Science
 Multimedia security and multimedia forensics
 (607) 777-3681
 kirchner@binghamton.edu

Amber L. Doiron

Assistant Professor

Department of Biomedical Engineering
Thomas J. Watson School of Engineering and Applied Science
 Molecular imaging & MRI
 (607) 777-5477
 adoiron@binghamton.edu

Huiyang Li

Assistant Professor

Department of Systems Science & Industrial Engineering
Thomas J. Watson School of Engineering and Applied Science
 Human-robot interaction, human factors, persuasiveness, and multimodal interfaces
 
 hli@binghamton.edu

Mohammad T. Khasawneh

Professor

Department of Systems Science & Industrial Engineering
Thomas J. Watson School of Engineering and Applied Science
 Digital human modeling in manufacturing and healthcare
 (607) 777-4408
 mkhasawn@binghamton.edu

Michael Dobbins

Assistant Professor

Department of Mathematical Sciences
Harpur College of Arts and Sciences
 Computational geometry and visual topology
 (607) 777-2378
 dobbins@math.binghamton.edu

Albrecht Inhoff

Professor of Psychology

Department of Psychology
Harpur College of Arts and Sciences
 Attention, eye-voice coordination, sub-vocal speech
 (607) 777-3958
 inhoff@binghamton.edu

Matthew C. Sanger

Assistant Professor of Anthropology

Department of Anthropology
Harpur College of Arts and Sciences
 Imaging for the study of archaeological objects and contexts
 (607) 777-6739
 msanger@binghamton.edu

Timothy S. de Smet

Research Assistant Professor

Geological Science and Environmental Studies, Department of Anthropology
Harpur College of Arts and Sciences
 Geo-archaeology, geology, remote sensing, data fusion
 (607) 777-2519
 tdesmet@binghamton.edu

Jennifer Stoever

Associate Professor

English Department
Harpur College of Arts and Sciences
 Sound and audio culture studies
 (607) 777-5494
 jstoever@binghamton.edu

Chengbin Deng

Assistant Professor

Department of Geography
Harpur College of Arts and Sciences
 Remote sensing image processing and spatial analysis
 (607) 777-6791
 cdeng@binghamton.edu

Qiusheng Wu

Assistant Professor

Department of Geography
Harpur College of Arts and Sciences
 GIS, multispectral remote sensing, environmental change
 (607) 777-3145
 wqs@binghamton.edu

Andrew Horowitz

Artist In Residence

Theatre Department
Harpur College of Arts and Sciences
 Artistic visual effects, media production
 (607) 348-4044
 andy@galumpha.com

Qi Wang

Associate Professor

Marketing Department
School of Management
 Marketing and video-based social interaction
 (607) 777-2632
 qiwang@binghamton.edu



Publications




Multimodal Spontaneous Emotion Corpus for Human Behavior Analysis

IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016

Emotion is expressed in multiple modalities, yet most research has considered at most one or two. This stems in part from the lack of large, diverse, well-annotated, multimodal databases with which to develop and test algorithms. We present a well-annotated, multimodal, multidimensional spontaneous emotion corpus of 140 participants. Emotion inductions were highly varied. Data were acquired from a variety of sensors of the face that included high-resolution 3D dynamic imaging, high-resolution 2D video, and thermal (infrared) sensing, and contact physiological sensors that included electrical conductivity of the skin, respiration, blood pressure, and heart rate. Facial expression was annotated for both the occurrence and intensity of facial action units from 2D video by experts in the Facial Action Coding System (FACS). The corpus further includes derived features from 3D, 2D, and IR (infrared) sensors and baseline results for facial expression and action unit detection. The entire corpus will be made available to the research community.


   Three-dimensional Modeling, Two-dimensional Analysis, Emotion, Thermal Sensors, Sensor Systems, Physiology

Z. Zhang, J. Girard, Y. Wu, X. Zhang, P. Liu, U. Ciftci, S. Canavan, M. Reale, A. Horowitz, H. Yang, J. Cohn, Q. Ji, and L. Yin


Perception Driven 3D Facial Expression Analysis Based on Reverse Correlation and Normal Component

AAAC 6th International Conference on Affective Computing and Intelligent Interaction (ACII), 2015

Research on automated facial expression analysis (FEA) has focused on applying different feature extraction methods in texture space and geometric space, using holistic or local facial regions based on regular grids or facial anatomical structure. Little work has taken human perception into account. In this paper, we propose to study the facial expressive regions using a reverse correlation method, and we further develop a novel 3D local normal component feature representation based on human perception. The classification image (CI) accumulated over multiple trials reveals the shape features that shift the neutral Mona Lisa portrait toward the positive and negative domains. The differences can be identified by both humans and machines. Based on the CI and the derived local feature regions, a novel 3D normal component based feature (3D-NLBP) is proposed to represent positive and negative expressions (e.g., happiness and sadness). This approach achieves good performance and has been validated on both a high-resolution database and real-time low-resolution depth map videos. (A schematic sketch of the classification-image computation appears after this entry.)


   Perception, Facial Expression Analysis, Reverse Correlation

X. Zhang, Z. Zhang, D. Hipp, L. Yin, and P. Gerhardstein
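
For readers unfamiliar with the reverse correlation technique referenced above: the classification image is obtained by averaging the random noise fields shown on trials the observer judged one way and subtracting the average for the opposite judgment. Below is a minimal NumPy sketch of that idea; the function name and the simulated observer are illustrative assumptions, not the paper's implementation.

    import numpy as np

    def classification_image(noise_fields, responses):
        # noise_fields: (n_trials, H, W) noise added to the base face
        # (e.g., the neutral Mona Lisa portrait) on each trial.
        # responses: (n_trials,) observer judgments, +1 ("happy") or -1 ("sad").
        # The CI is the mean noise on +1 trials minus the mean noise on -1
        # trials; pixels that systematically drive the judgment survive the
        # averaging while everything else cancels out.
        noise_fields = np.asarray(noise_fields, dtype=float)
        responses = np.asarray(responses)
        return (noise_fields[responses > 0].mean(axis=0)
                - noise_fields[responses < 0].mean(axis=0))

    # Toy demo: a simulated observer who answers "happy" whenever the noise
    # happens to brighten one particular region (say, the mouth corners).
    rng = np.random.default_rng(0)
    noise = rng.standard_normal((5000, 32, 32))
    responses = np.where(noise[:, 20:24, 8:24].mean(axis=(1, 2)) > 0, 1, -1)
    ci = classification_image(noise, responses)  # bright blob at rows 20-24

Accumulated over enough trials, the CI recovers the image regions the observer relies on, which is the sense in which the paper's classification images reveal expressive facial regions.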


Landmark Localization on 3D/4D Range Data Using a Shape Index-Based Statistical Shape Model with Global and Local Constraints

Computer Vision and Image Understanding (special issue on Shape Representations Meet Visual Recognition), Vol. 139, pp. 136-148, Elsevier, 2015

In this paper we propose a novel method for detecting and tracking facial landmark features on 3D static and 3D dynamic (a.k.a. 4D) range data. Our method fits a shape index-based statistical shape model (SI-SSM) with both global and local constraints to the input range data. The model makes use of the global shape of the facial data as well as local patches of shape index values around landmark features. The shape index is used because of its invariance to both lighting and pose changes. The fitting is performed by finding the correlation between the shape model and the input range data. The performance of the method is evaluated on range data of varying quality, including data with noise, incompletion, occlusion, rotation, and various facial motions. The accuracy of detected features is compared to ground truth data as well as to state-of-the-art results. We test our method on five publicly available 3D/4D databases: BU-3DFE, BU-4DFE, BP4D-Spontaneous, FRGC 2.0, and the Eurecom Kinect Face Dataset. The efficacy of the detected landmarks is validated through applications to geometry-based facial expression classification, for both posed and spontaneous expressions, and to head pose estimation. The merit of our method is demonstrated by comparison with state-of-the-art feature tracking methods. (A brief sketch of the shape index computation appears after this entry.)


   Feature detection, tracking, 3D face

S. Canavan, P. Liu, X. Zhang, and L. Yin
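
The shape index used in this work is the standard Koenderink-van Doorn quantity computed from the two principal curvatures of the surface; because it depends only on the local 3D geometry, it is unaffected by lighting and rigid pose changes. Below is a minimal sketch, assuming principal curvatures k1 >= k2 have already been estimated from the range data; the helper is illustrative, not the authors' implementation.

    import numpy as np

    def shape_index(k1, k2):
        # Koenderink-van Doorn shape index from principal curvatures
        # with k1 >= k2. Values lie in [-1, 1]: -1 for a spherical cup,
        # 0 for a saddle, +1 for a spherical cap. arctan2 handles the
        # umbilic case k1 == k2 (planar points map to 0 by convention).
        k1 = np.asarray(k1, dtype=float)
        k2 = np.asarray(k2, dtype=float)
        return (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)

Local patches of these values around candidate landmarks are what the SI-SSM correlates against the input range data during fitting.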


BP4D-Spontaneous: A high-resolution spontaneous 3D dynamic facial expression database

Image and Vision Computing, Vol. 32, pp. 692-706 (special issue: The Best of Face and Gesture '13), Elsevier, 2014

Facial expression is central to human experience. Its efficient and valid measurement is a challenge that automated facial image analysis seeks to address. Most publicly available databases are limited to 2D static images or video of posed facial behavior. Because posed and un-posed (aka “spontaneous”) facial expressions differ along several dimensions, including complexity and timing, well-annotated video of un-posed facial behavior is needed. Moreover, because the face is a three-dimensional deformable object, 2D video may be insufficient, and 3D video archives are therefore required. We present a newly developed 3D video database of spontaneous facial expressions in a diverse group of young adults. Well-validated emotion inductions were used to elicit expressions of emotion and paralinguistic communication. Frame-level ground truth for facial actions was obtained using the Facial Action Coding System. Facial features were tracked in both the 2D and 3D domains. To the best of our knowledge, this new database is the first of its kind available to the public. The work promotes the exploration of 3D spatiotemporal features in subtle facial expressions, a better understanding of the relation between pose and motion dynamics in facial action units, and a deeper understanding of naturally occurring facial actions.


   3D face modeling, facial action unit, FACS, spontaneous facial expression

X. Zhang, L. Yin, J. Cohn, S. Canavan, M. Reale, A. Horowitz, P. Liu, and J. Girard