Imaging Science, Acoustic Science, and Perception Science (IAP) are highly interdisciplinary fields that combine aspects of science, mathematics, and engineering. Imaging Science is a unique multidisciplinary approach to solving problems using imaging systems. These problems range from understanding the environment at both large and microscopic scales to understanding how the human eye-brain system allows us to visually interpret the world. We use imaging systems to reveal the long-hidden content of ancient documents, recognize human faces and fingerprints, identify manipulated pictures, and support disaster relief efforts with images from airplanes and satellites. Acoustics is the interdisciplinary science that studies all mechanical waves, including vibration, sound, and ultrasound, with particular attention to their generation, transmission, and reception. Perception science studies how people translate sensory impressions into a coherent and unified view of the world around them.
The mission of the CIAPS is to advance the scientific underpinnings of imaging and acoustic technologies and to increase the scientific understanding of human perception and cognitive mechanisms, for applications in security, healthcare, the arts, robotics, and the environmental and social sciences. The center will also promote and facilitate interdisciplinary collaborations among CIAPS researchers and industry developers, leading to the development of large-scale multidisciplinary research projects with both scientific and economic impact.
The objectives of the CIAPS Center are threefold:
Advance the scientific underpinnings of imaging and acoustic technologies and increase the scientific understanding of human perception and cognitive mechanisms, for applications in security, healthcare, the arts, robotics, and the environmental and social sciences;
Promote and facilitate interdisciplinary collaboration among IAP researchers at Binghamton in both academia and industry, leading to the development of large-scale, externally funded multidisciplinary research projects;
Generate broad impact by translating IAP scientific research into technology and product development with strong commercial potential, enhancing economic development opportunities in the local region, New York State, and the nation as a whole.
· Long-term, Shared Autonomy with Augmented 360-Degree Vision for Mobile Telepresence Robotics, PIs: Shiqi Zhang (CS) and Yao Liu (CS)
· Validating the concept of label-free surgical dermatopathology via stimulated Raman imaging of fresh frozen tissues from Mohs surgery, PIs: Fake Lu (BME), L. Yin (CS), external collaborator Dr. Sherrif Ibrahim (U of Rochester)
· Deep-learning Based Denoising and Registration of Brain Functional Images, PIs: Weiying Dai (CS) and L. Yin (CS)
· Landmine Detection via R-CNN Processing of UAV-based Imagery, PIs: Kenneth Chiu (CS), Alex Nikulin (Geology), Tim de Smet (Geology)
· Road condition assessment using deep learning and in-situ sensing, PIs: Chengbin Deng (Geography) and L. Yin (CS)
· Invited Talk: Pain and Substance Use: Research Findings, Treatment Implications, and Future Directions, Dr. Emily Zale (Department of Psychology, Binghamton University), Oct. 29, 2020, 1:00pm-2:00pm via Zoom: https://binghamton.zoom.us/my/lijunyin
· Invited Talk: “Shapes, Reconstruction, and Deep Fakes from a Generative Perspective” by Dr. Ilke Demir, Senior Research Scientist at Intel (Oct. 15, 2020, 12pm-1pm via Zoom: https://binghamton.zoom.us/my/lijunyin)
· Invited Talk: “A Statistical Distribution-based Deep Neuron Network Model – a new perspective on effective learning” by Dr. Jinjun Xiong, IBM T. J. Watson Research Center, Yorktown Heights, NY (Oct. 16, 2020, 1pm-2pm via Zoom: https://binghamton.zoom.us/my/lijunyin)
· Invited talk: “Designing Controlled Experiments for Data Analysis in Public Health and Policy: An Introduction” by Dr. Hao Deng, Massachusetts General Hospital/Harvard Medical School, and Johns Hopkins Bloomberg School of Public Health; Date/Time: September 18, 2020 (Friday, noon-1pm) via Zoom: https://binghamton.zoom.us/my/lijunyin
· Seminar talk: “Computational modeling of human affection and its applications” by Dr. Lijun Yin (Department of Computer Science, Binghamton University), September 10, 2020, 9am-10am via Zoom
· Upcoming Invited Talk: Dr. Rodney M. Gabel (Decker School of Rehabilitation Sciences, Binghamton University), Date/Time: TBA
· Upcoming Invited Talk: Dr. Michael Reale (SUNY Polytechnic Institute), Date/Time: TBA
· Invited talk: Dynamic functional brain imaging in early diagnosis and treatment therapy, by Dr. Weiying Dai, Assistant Professor of Computer Science, Binghamton University, 12pm-1pm, Thursday, Nov. 7, in the G11 conference room of the Engineering Building
· Invited talk: Multiphoton optical imaging renders rapid label-free digital pathology for cancer diagnosis, by Dr. Frank (Fa-ke) Lu, Assistant Professor of Biomedical Engineering, Binghamton University, 12pm-1pm, Thursday, Nov. 14, in the G11 conference room of the Engineering Building
· Invited talk: From Multi-Robot Systems to Human-Robot Interaction and Collaboration, by Dr. Shiqi Zhang, Assistant Professor of Computer Science, Binghamton University, 12pm-1pm, Thursday, Dec. 5, in the G11 conference room of the Engineering Building
· Invited talk: Designing Evolutionary Rule-based Machine Learning for Real World Applications, Prof. Keiki Takadama, The University of Electro-Communications, Tokyo, April 2019 (hosted by Dr. Shiqi Zhang)
· Seminar talk: Multimodal emotion analysis with EEG and videos, Xiaotian Li and Xiang Zhang, April 28, 2020 (via Zoom)
· Seminar talk: Privacy protection by portrait replacement, Zhe Ge, May 12, 2020 (via Zoom)
· Invited talk: Challenges Facing Computational Face, Dr. Laszlo A. Jeni, Carnegie Mellon University, July 2018
· Invited talk: Digital Facial Morphometry for Face Perception Research, Dr. Carl Martin Grewe, Zuse Institute Berlin, Germany, Aug. 2018
· Invited talk: Quantitative susceptibility mapping (QSM): physics, algorithm and applications, Dr. Yi Wang, Cornell University, Sept. 2018
· Invited talk: Mechanical object modeling, detection, recognition, and classification, Taichi Wada, The University of Electro-Communications, Tokyo, Japan, Oct. 2018
· Invited talk: Object Oriented Data Analysis, Dr. Steve Marron, UNC Chapel Hill, March 2018 (co-organized with Data Science TWG)
· Invited talk: Integrating Prior Knowledge and Data for Efficient Visual Learning, Dr. Qiang Ji, Rensselaer Polytechnic Institute, Nov. 2017
Assistant Professor, Department of Biomedical Engineering
Emotion is expressed in multiple modalities, yet most research has considered at most one or two. This stems in part from the lack of large, diverse, well-annotated, multimodal databases with which to develop and test algorithms. We present a well-annotated, multimodal, multidimensional spontaneous emotion corpus of 140 participants. Emotion inductions were highly varied. Data were acquired from a variety of sensors of the face that included high-resolution 3D dynamic imaging, high-resolution 2D video, and thermal (infrared) sensing, and contact physiological sensors that included electrical conductivity of the skin, respiration, blood pressure, and heart rate. Facial expression was annotated for both the occurrence and intensity of facial action units from 2D video by experts in the Facial Action Coding System (FACS). The corpus further includes derived features from 3D, 2D, and IR (infrared) sensors and baseline results for facial expression and action unit detection. The entire corpus will be made available to the research community.