Imaging Science, Acoustic Science, and Perception Science (IAP) are highly interdisciplinary fields that combine aspects of science, mathematics, and engineering. Imaging Science is a unique multidisciplinary approach to solving problems using imaging systems. These problems range from understanding the environment at both large and microscopic scales to understanding how the human eye-brain system allows us to visually understand the world. We use imaging systems to learn about the long-hidden content of ancient documents, recognize human faces and fingerprints, identify manipulated pictures, and support disaster relief efforts with images from airplanes and satellites. Acoustics is the interdisciplinary science of all mechanical waves, including vibration, sound, and ultrasound, and especially their generation, transmission, and reception. Perception science studies how people translate sensory impressions into a coherent, unified view of the world around them.
The mission of CIAPS is to advance the scientific underpinnings of imaging and acoustic technologies, to increase the scientific understanding of human perception and cognitive mechanisms, and to apply these advances in security, healthcare, the arts, robotics, and environmental and social science. The center will also promote and facilitate interdisciplinary collaboration among CIAPS researchers and industry developers, leading to the development of large-scale multidisciplinary research projects with both scientific and economic impact.
The objectives of the CIAPS Center are threefold:
Advance the scientific underpinnings of imaging and acoustic technologies and increase the scientific understanding of human perception and cognitive mechanisms, for applications in security, healthcare, the arts, robotics, and environmental and social science.
Promote and facilitate interdisciplinary collaboration among IAP researchers at Binghamton, in both academia and industry, leading to the development of large-scale multidisciplinary research projects with external funding.
Generate broad impact by translating IAP scientific research into technology and product development with strong commercial potential, enhancing economic development opportunities in the local region, New York State, and the nation as a whole.
> Prof. Peter Gerhardstein (TBD)
> Prof. Fake Lu (TBD)
> Prof. Weiying Dai (TBD)
> Prof. Shiqi Zhang (TBD)
> Invited talk: Designing Evolutionary Rule-based Machine Learning for Real World Applications, Prof. Keiki Takadama, The University of Electro-Communications, Tokyo, April 2019 (Hosted by Dr. Shiqi Zhang)
> Invited talk: Challenges Facing Computational Face, Dr. Laszlo A. Jeni, Carnegie Mellon University, July 2018
> Invited talk: Digital Facial Morphometry for Face Perception Research, Dr. Carl Martin Grewe, Zuse Institute Berlin, Germany, Aug. 2018
> Invited talk: Quantitative susceptibility mapping (QSM): physics, algorithm and applications, Dr. Yi Wang, Cornell University, Sept. 2018
> Invited talk: Mechanical object modeling, detection, recognition, and classification, Taichi Wada, The University of Electro-Communications, Tokyo, Japan, Oct. 2018
> Invited talk: Object Oriented Data Analysis, Dr. Steve Marron, UNC Chapel Hill, March 2018 (co-organized with Data Science TWG)
> Invited talk: Integrating Prior Knowledge and Data for Efficient Visual Learning, Dr. Qiang Ji, Rensselaer Polytechnic Institute, Nov. 2017
Topic: Dynamic functional brain imaging in early diagnosis and treatment therapy
Speaker: Dr. Weiying Dai, Assistant Professor of Computer Science, Binghamton University
Time and place: 12–1 pm, Thursday, Nov. 7, in the G11 conference room of the Engineering Building
Topic: Multiphoton optical imaging renders rapid label-free digital pathology for cancer diagnosis
Speaker: Dr. Frank (Fa-ke) Lu, Assistant Professor of Biomedical Engineering, Binghamton University
Time and place: 12–1 pm, Thursday, Nov. 14, in the G11 conference room of the Engineering Building
Topic: From Multi-Robot Systems to Human-Robot Interaction and Collaboration
Speaker: Dr. Shiqi Zhang, Assistant Professor of Computer Science, Binghamton University
Time and place: 12–1 pm, Thursday, Dec. 5, in the G11 conference room of the Engineering Building
Assistant Professor, Department of Biomedical Engineering
Emotion is expressed in multiple modalities, yet most research has considered at most one or two. This stems in part from the lack of large, diverse, well-annotated, multimodal databases with which to develop and test algorithms. We present a well-annotated, multimodal, multidimensional spontaneous emotion corpus of 140 participants. Emotion inductions were highly varied. Data were acquired from a variety of sensors of the face that included high-resolution 3D dynamic imaging, high-resolution 2D video, and thermal (infrared) sensing, and contact physiological sensors that included electrical conductivity of the skin, respiration, blood pressure, and heart rate. Facial expression was annotated for both the occurrence and intensity of facial action units from 2D video by experts in the Facial Action Coding System (FACS). The corpus further includes derived features from 3D, 2D, and IR (infrared) sensors and baseline results for facial expression and action unit detection. The entire corpus will be made available to the research community.
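To make the organization of such a multimodal corpus concrete, the following is a minimal, hypothetical Python sketch of how one participant's emotion-induction session, with its sensor streams and FACS annotations, might be represented. All class names, field names, and file paths here are illustrative assumptions, not the corpus's actual schema or distribution format.

```python
# Hypothetical sketch of one session record in a multimodal emotion corpus.
# Names and paths are illustrative assumptions, not the real corpus schema.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class ActionUnitAnnotation:
    """Expert FACS coding for a single 2D video frame."""
    frame_index: int
    occurrences: Dict[int, bool]   # AU number -> present/absent
    intensities: Dict[int, int]    # AU number -> coded intensity level


@dataclass
class EmotionSession:
    """One emotion-induction task for one participant."""
    participant_id: str
    task_id: str
    # Raw sensor streams recorded during the induction.
    mesh_sequence_path: str        # high-resolution 3D dynamic imaging
    video_path: str                # high-resolution 2D video
    thermal_path: str              # thermal (infrared) video
    # Contact physiological signals, keyed by channel name
    # (e.g., skin conductance, respiration, blood pressure, heart rate).
    physiology: Dict[str, List[float]] = field(default_factory=dict)
    # FACS annotations of the 2D video, one entry per annotated frame.
    facs_annotations: List[ActionUnitAnnotation] = field(default_factory=list)


# Toy usage: assemble a session record and query one annotation.
session = EmotionSession(
    participant_id="P001",
    task_id="T1",
    mesh_sequence_path="data/P001/T1/mesh/",
    video_path="data/P001/T1/video.avi",
    thermal_path="data/P001/T1/thermal.avi",
    physiology={"heart_rate": [72.0, 74.5, 76.1]},
    facs_annotations=[
        ActionUnitAnnotation(frame_index=0,
                             occurrences={6: True, 12: True},
                             intensities={6: 2, 12: 3}),
    ],
)
print(session.facs_annotations[0].occurrences.get(12, False))  # AU12 present?
```

A structure along these lines keeps the derived per-modality features and baseline AU-detection labels mentioned in the abstract attachable per frame, while the raw streams remain on disk and are loaded only when needed.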