(The call for tutorial proposals is now closed, but it remains available for reference here.)
Multi-view Face Representation
Date and time:
Room:
Presenters: Zhengming Ding, Handong Zhao, and Yun Fu
Tutorial Description:
Multi-view face data are now widely available, since diverse feature types, viewpoints, and sensors all contribute to richer face representations. For example, multiple features can uncover complementary knowledge within each view to support the final task, since each view preserves both shared and view-specific information. A number of approaches have recently been proposed to handle multi-view face data. Our tutorial will cover most multi-view face representation approaches, centered around three major face applications: multi-view face clustering, multi-view face verification, and multi-view face identification. The discussed algorithms will include matrix factorization, low-rank modeling, multi-view subspace learning, transfer learning, and deep learning.
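To make the shared-representation idea concrete, here is a minimal, illustrative Python sketch of the kind of multi-view matrix factorization the tutorial covers: two feature views of the same faces are factorized against a single shared representation by alternating least squares. The toy data, dimensions, and variable names are assumptions for illustration, not code from the tutorial.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d1, d2, k = 100, 50, 30, 5  # faces, dims of view 1 and 2, shared rank

# Toy data: two feature views generated from one shared representation.
H_true = rng.standard_normal((k, n))
X1 = rng.standard_normal((d1, k)) @ H_true + 0.01 * rng.standard_normal((d1, n))
X2 = rng.standard_normal((d2, k)) @ H_true + 0.01 * rng.standard_normal((d2, n))

# Alternating least squares for min ||X1 - W1 H||^2 + ||X2 - W2 H||^2,
# where H is shared across views and W1, W2 are view-specific bases.
H = rng.standard_normal((k, n))
for _ in range(50):
    W1 = X1 @ np.linalg.pinv(H)  # update view-specific basis for view 1
    W2 = X2 @ np.linalg.pinv(H)  # update view-specific basis for view 2
    # Update the shared H by stacking both views into one least-squares fit.
    H = np.linalg.pinv(np.vstack([W1, W2])) @ np.vstack([X1, X2])

err = np.linalg.norm(X1 - W1 @ H) + np.linalg.norm(X2 - W2 @ H)
print(f"total reconstruction error: {err:.4f}")
# The columns of H can then be clustered (e.g. k-means) for multi-view face clustering.
```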
About the presenters:
Zhengming Ding received the B.Eng. degree in information security and the M.Eng. degree in computer software and theory from the University of Electronic Science and Technology of China (UESTC), China, in 2010 and 2013, respectively. He is currently working toward the Ph.D. degree in the Department of Electrical and Computer Engineering, Northeastern University, USA. His research interests include machine learning and computer vision; specifically, he works on developing scalable algorithms for challenging problems in transfer learning scenarios. He received Student Travel Grants from ACM MM 2014, ICDM 2014, AAAI 2016, and IJCAI 2016, the National Institute of Justice Fellowship, and a best paper award (SPIE). He has served as a reviewer for IEEE journals including the IEEE Transactions on Neural Networks and Learning Systems and the IEEE Transactions on Pattern Analysis and Machine Intelligence. He is an IEEE student member and an AAAI student member.
Handong Zhao received the B.Eng. degree in computer science and the M.Eng. degree in computer technology from Tianjin University, China. He is currently working toward the Ph.D. degree in the Department of Electrical and Computer Engineering, Northeastern University, Boston, MA. His research interests include machine learning, computer vision, and data mining. He holds the Dean’s Fellowship at Northeastern University and received the Best Paper Honorable Mention Award at the 2013 ACM International Conference on Internet Multimedia Computing and Service (ICIMCS). He has served as a program committee member for IJCAI 2017 and ICMLA 2016, and as a reviewer for multiple IEEE Transactions. He is a student member of IEEE and AAAI.
Yun Fu received the B.Eng. degree in information engineering and the M.Eng. degree in pattern recognition and intelligence systems from Xi’an Jiaotong University, China, and the M.S. degree in statistics and the Ph.D. degree in electrical and computer engineering from the University of Illinois at Urbana-Champaign. Since 2012 he has been an interdisciplinary faculty member affiliated with the College of Engineering and the College of Computer and Information Science at Northeastern University. His research interests are machine learning, computational intelligence, big data mining, computer vision, pattern recognition, and cyber-physical systems. He has extensive publications in leading journals, books/book chapters, and international conferences/workshops, and serves as associate editor, chair, PC member, and reviewer for many top journals and international conferences/workshops. He has received seven prestigious young investigator awards from NAE, ONR, ARO, IEEE, INNS, UIUC, and the Grainger Foundation; seven best paper awards from IEEE, IAPR, SPIE, and SIAM; and three major industrial research awards from Google, Samsung, and Adobe. He is currently an Associate Editor of the IEEE Transactions on Neural Networks and Learning Systems (TNNLS). He is a Fellow of IAPR, a Lifetime Senior Member of ACM and SPIE, a Lifetime Member of AAAI, OSA, and the Institute of Mathematical Statistics, a member of the Global Young Academy (GYA) and INNS, and was a Beckman Graduate Fellow during 2007-2008.
Remote Physiological Measurement from Images and Videos
Date and time:
Room:
Presenter: Daniel McDuff
Tutorial Description:
In recent years, there have been significant advances in remote imaging methods for capturing physiological signals. Many of these approaches analyze the human face and use computer vision to recover very subtle changes caused by human physiology. The resulting signals are clinically important as vital signs and are also influenced by autonomic nervous system activity. There are numerous healthcare and affective computing applications of remote physiological sensing. The first part of this tutorial will cover the fundamentals of remote imaging photoplethysmography. Following this, there will be a deeper dive into state-of-the-art techniques for tolerance to motion and dynamic illumination. The impact of frame rate, image resolution, and video compression on blood volume pulse signal-to-noise ratio and physiological parameter accuracy will be characterized and discussed. Advances in multispectral and hyperspectral imaging will also be presented, highlighting how hardware as well as software can be adapted to improve physiological measurement. Finally, examples of visualization techniques and applications will be presented.
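As a flavor of the fundamentals covered in the first part, here is a minimal Python sketch of a basic imaging photoplethysmography pipeline: spatially average the green channel over a face region, band-pass to the plausible heart-rate band, and read the pulse rate off the dominant spectral peak. The face-ROI input, the 30 fps frame rate, and the filter settings are illustrative assumptions, not the presenter’s implementation.

```python
import numpy as np
from scipy.signal import butter, detrend, filtfilt

FPS = 30.0  # assumed camera frame rate

def pulse_rate_bpm(frames, roi):
    """Estimate heart rate (BPM) from a sequence of RGB frames.

    frames: iterable of HxWx3 arrays covering at least ~10 s of video;
    roi: (y0, y1, x0, x1) face box, assumed to come from a face detector.
    """
    y0, y1, x0, x1 = roi
    # Spatially average the green channel over the face ROI per frame;
    # green carries the strongest blood volume pulse signal.
    trace = np.array([f[y0:y1, x0:x1, 1].mean() for f in frames])
    trace = detrend(trace)  # remove slow illumination and motion drift
    # Band-pass to the plausible heart-rate band (0.7-4 Hz, i.e. 42-240 BPM).
    b, a = butter(3, [0.7, 4.0], btype="band", fs=FPS)
    trace = filtfilt(b, a, trace)
    # The dominant spectral peak inside the band gives the pulse rate.
    freqs = np.fft.rfftfreq(len(trace), d=1.0 / FPS)
    power = np.abs(np.fft.rfft(trace)) ** 2
    band = (freqs >= 0.7) & (freqs <= 4.0)
    return 60.0 * freqs[band][np.argmax(power[band])]
```

Motion and dynamic illumination break this naive pipeline, which is exactly what the state-of-the-art techniques in the second part of the tutorial address.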
About the presenter:
Daniel McDuff is a researcher at Microsoft Research in Redmond. His research focuses on building sensing and machine learning tools to enable the automated recognition and analysis of emotions and physiology. He is also a visiting scientist at Brigham and Women’s Hospital in Boston. Daniel completed his Ph.D. in the Affective Computing Group at the MIT Media Lab in 2014 and holds a B.A. and a master’s degree from Cambridge University. Previously, Daniel was Director of Research at Affectiva and a post-doctoral research affiliate at the MIT Media Lab. During his Ph.D., Daniel collaborated on the first methods showing that physiological signals could be measured remotely using ordinary webcams. He is serving on the organizing committee for ACII 2017 and has organized several IEEE workshops and special sessions related to physiological and affect measurement and machine learning. His work has been published in a number of top journals and conferences, and has received nominations and awards from Popular Science magazine (as one of the top inventions of 2011), South by Southwest Interactive (SXSWi), The Webby Awards, ESOMAR, and the Center for Integration of Medicine and Innovative Technology (CIMIT). His projects have been reported in many publications, including The Times, the New York Times, The Wall Street Journal, BBC News, New Scientist, and Forbes magazine. He has received best paper awards at IEEE Face and Gesture and at Body Sensor Networks.
From Deep Unsupervised to Supervised Models for Face Analysis
Date and time:
Room:
Presenters: Richa Singh and Mayank Vatsa
Tutorial Description:
Representation learning approaches have become an integral component of designing any pattern analysis system, including face recognition. While learning from data is not new, advances in computing hardware and the availability of very large training datasets have drawn widespread attention to deep learning approaches, and newer deep learning models/architectures and their applications are now proposed almost daily. The face recognition literature has likewise embraced them, and results on benchmark databases show that deep learning algorithms have achieved accuracies once considered unattainable. The literature on deep learning for face analysis can be divided into three categories: supervised, unsupervised, and semi-supervised. Approaches focusing on synthesizing input, such as super-resolution, are typically unsupervised, whereas classification/recognition approaches are either supervised or semi-supervised. The applications of deep learning are not limited to face recognition; they also span several other areas, such as kinship verification and super-resolution for recognizing low-resolution face images. This tutorial will focus on unsupervised and supervised deep learning models (e.g., autoencoders, Boltzmann machines) and the application of different regularization techniques. We will also discuss the use of these deep regularized architectures in several applications: (i) Face Verification/Classification, (ii) Kinship Verification, (iii) Face Super-resolution, and (iv) Face Presentation Attack Detection.
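As a concrete example of the unsupervised models mentioned above, here is a minimal numpy sketch of an undercomplete, tied-weight autoencoder trained with an L2 weight regularizer. The toy data, layer sizes, and learning rate are illustrative assumptions; the tutorial’s own models and regularizers may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, h = 256, 64, 16            # samples, input dim, hidden (code) dim
X = rng.standard_normal((n, d))  # toy stand-in for flattened face patches
lam, lr = 1e-3, 1e-2             # L2 weight penalty, learning rate

# Undercomplete, tied-weight autoencoder: code = sigmoid(X W), recon = code W^T.
W = 0.1 * rng.standard_normal((d, h))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for epoch in range(201):
    Z = sigmoid(X @ W)            # encoder
    Xhat = Z @ W.T                # decoder (tied weights)
    R = Xhat - X                  # reconstruction residual
    dZ = (R @ W) * Z * (1.0 - Z)  # backprop through the sigmoid code
    # Gradient of mean squared reconstruction error plus lam * ||W||^2.
    grad = 2.0 * (X.T @ dZ + R.T @ Z) / n + 2.0 * lam * W
    W -= lr * grad
    if epoch % 50 == 0:
        print(f"epoch {epoch:3d}  recon MSE {np.mean(R ** 2):.4f}")
```

The learned code Z is the low-dimensional representation that downstream tasks such as verification or super-resolution build on; the L2 penalty is one of the regularization techniques the tutorial will compare.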
About the presenters:
Richa Singh received the Ph.D. degree in Computer Science from West Virginia University, Morgantown, USA, in 2008. She is currently an Associate Professor at IIIT Delhi, India, and a Visiting Professor at West Virginia University, USA. Her areas of interest are biometrics, pattern recognition, and machine learning. She is a recipient of the Kusum and Mohandas Pai Faculty Research Fellowship at IIIT Delhi, the FAST Award from the Department of Science and Technology, India, and several best paper and best poster awards at international conferences. She is an Editorial Board Member of Information Fusion (Elsevier) and an Associate Editor of IEEE Access and the EURASIP Journal on Image and Video Processing (Springer). She served as the Program Co-Chair of IEEE BTAS 2016 and is serving as the General Co-Chair of ISBA 2017.
Mayank Vatsa received the Ph.D. degree in Computer Science from West Virginia University, Morgantown, USA, in 2008. He is currently an Associate Professor at IIIT Delhi, India, and a Visiting Professor at West Virginia University, USA. His areas of interest are biometrics, image processing, computer vision, and information fusion. He is a recipient of the AR Krishnaswamy Faculty Research Fellowship, the FAST Award from DST, India, and several best paper and best poster awards at international conferences. He has published more than 175 peer-reviewed papers. He is the Vice President (Publications) of the IEEE Biometrics Council, an Associate Editor of IEEE Access, and an Area Editor of Information Fusion (Elsevier). He served as the PC Co-Chair of ICB 2013, IJCB 2014, and ISBA 2017.
Statistical Methods for Affective Computing
Date and time:
Room:
Presenters: Jeffrey M. Girard and Jeffrey F. Cohn
Tutorial Description:
From the evaluation of algorithms to the comparison of experimental groups and approaches, statistical methods are indispensable tools for scientists and engineers interested in affective computing. This tutorial will provide training and hands-on experience with several statistical methods of high relevance to this area: (1) indexes of categorical and dimensional agreement for quantifying inter-rater reliability and classification performance in a variety of research designs, (2) effect sizes and confidence intervals for quantifying the magnitude and precision of parameter estimates in the presence of sampling error, and (3) general linear modeling for quantifying the strength of the relationship between variables of interest. Attendees will learn the statistical basis of these methods, the assumptions required for their use, and standard practices for their implementation, interpretation, and reporting. Syntax, functions, and examples will be provided in both R and MATLAB; attendees are encouraged to bring a laptop with one of these packages installed.
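The tutorial’s own examples will be in R and MATLAB; purely as a flavor of methods (1) and (2), here is a hedged Python sketch computing Cohen’s kappa for two raters and Cohen’s d with a normal-approximation confidence interval. The toy ratings and data are illustrative assumptions.

```python
import numpy as np

def cohens_kappa(r1, r2):
    """Chance-corrected categorical agreement between two raters."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    p_obs = np.mean(r1 == r2)  # observed proportion of agreement
    # Expected agreement if the raters labeled independently.
    p_exp = sum(np.mean(r1 == c) * np.mean(r2 == c)
                for c in np.union1d(r1, r2))
    return (p_obs - p_exp) / (1.0 - p_exp)

def cohens_d_ci(x, y, z=1.96):
    """Standardized mean difference with an approximate 95% CI."""
    nx, ny = len(x), len(y)
    pooled_sd = np.sqrt(((nx - 1) * np.var(x, ddof=1) +
                         (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2))
    d = (np.mean(x) - np.mean(y)) / pooled_sd
    # Normal-approximation standard error of d (Hedges & Olkin).
    se = np.sqrt((nx + ny) / (nx * ny) + d ** 2 / (2 * (nx + ny)))
    return d, (d - z * se, d + z * se)

# Toy example: two annotators labeling 10 frames as smile (1) / no smile (0).
print(cohens_kappa([1, 0, 1, 1, 0, 1, 0, 0, 1, 1],
                   [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]))
rng = np.random.default_rng(0)
print(cohens_d_ci(rng.normal(0.5, 1.0, 40), rng.normal(0.0, 1.0, 40)))
```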
About the presenters:
Jeffrey Girard is currently a doctoral candidate in Clinical Psychology at the University of Pittsburgh. His work takes a deeply interdisciplinary approach to the study of human behavior, drawing insights and tools from psychology, computer science, and statistics. He is particularly interested in developing and applying technology to advance the study of emotion, interpersonal communication, and psychopathology (e.g., depression). Jeffrey offers a unique and valuable perspective to the affective computing community, especially regarding research design, statistical analysis, and clinical applications.

Jeffrey Cohn is Professor of Psychology and Psychiatry at the University of Pittsburgh and Adjunct Professor of Computer Science at the Robotics Institute at Carnegie Mellon University. He leads interdisciplinary and inter-institutional efforts to develop advanced methods of automatic analysis and synthesis of facial expression and prosody and applies those tools to research in human emotion, social development, nonverbal communication, psychopathology, and biomedicine. His research has been supported by grants from the U.S. National Institutes of Health, National Science Foundation, Autism Foundation, Office of Naval Research, and Defense Advanced Research Projects Agency.