The 9th IEEE International Workshop on
Analysis and Modeling of Faces and Gestures (AMFG2019)
—
A deeper understanding of face, gestures, and across modalities
In conjunction with CVPR 2019
Call for Papers
We have experienced rapid advances in face, gesture, and cross-modality (e.g., voice and face) technologies, largely thanks to deep learning (dating back to AlexNet in 2012) and large-scale, labeled image collections. The progress made in deep learning continues to push renowned public databases to near saturation, which calls for ever more challenging image collections to be compiled. In practice, and even widely in applied research, using off-the-shelf deep learning models has become the norm: numerous pre-trained networks (e.g., VGG-Face, ResNet) are available for download and are readily deployed to new, unseen data. We have almost grown “spoiled” by such luxury, which has also kept many truths hidden from us. Theoretically, what makes today's neural networks more discriminative than ever before is still unclear; to most practitioners and researchers alike, they act as a sort of black box. More troublesome is the absence of tools to quantitatively and qualitatively characterize existing deep models, which in itself could yield greater insight into these all-too-familiar black boxes. With the frontier moving forward at an unprecedented rate, challenges such as large variations in illumination, pose, and age now confront us, and state-of-the-art deep learning models often fail when faced with them, owing to the difficulty of modeling structured data and visual dynamics.

Alongside the effort spent on conventional face recognition is research on cross-modality learning, such as face and voice, gestures in imagery, and motion in videos, along with several other tasks. This line of work has attracted attention from industry and academic researchers across many domains.
Additionally, there has been a push to advance these technologies for social-media-based applications. Regardless of the exact domain and purpose, the following capabilities must be satisfied: face and body tracking (e.g., facial expression analysis, face detection, gesture recognition); lip reading and voice understanding; face and body characterization (e.g., behavioral understanding, emotion recognition); face, body, and gesture characteristic analysis (e.g., gait, age, gender, ethnicity recognition); group understanding via social cues (e.g., kinship, non-blood relationships, personality); and visual sentiment analysis (e.g., temperament, arrangement). The ability to create effective models for these visual tasks thus has significant value for both the scientific community and the commercial market, with applications spanning human-computer interaction, social media analytics, video indexing, visual surveillance, and internet vision. Researchers have made significant progress on many of these problems, especially given the off-the-shelf, cost-efficient vision hardware products available today, e.g., Intel RealSense, Magic Leap, SHORE, and Affdex. Nonetheless, serious challenges remain, which are only amplified under the unconstrained imaging conditions of different sources focused on non-cooperative subjects. It is these latter challenges that especially grab our interest, as we seek to bring together cutting-edge techniques and recent advances in deep learning to solve challenges in the wild.

This one-day serial workshop (AMFG2019) provides a forum for researchers to review recent progress in the recognition, analysis, and modeling of face, body, and gesture, while embracing the most advanced deep learning systems available for face and gesture analysis, particularly under unconstrained environments like social media and across modalities like face to voice. The workshop includes up to 3 keynotes and peer-reviewed papers (oral and poster). Original high-quality contributions are solicited on the following topics:
- Deep learning methodology, theory, as applied to social media analytics;
- Data-driven or physics-based generative models for faces, poses, and gestures;
- Deep learning for internet-scale soft biometrics and profiling: age, gender, ethnicity, personality, kinship, occupation, beauty ranking, and fashion classification by facial or body descriptor;
- Novel deep model, deep learning survey, or comparative study for face/gesture recognition;
- Deep learning for detection and recognition of faces and bodies with large 3D rotation, illumination change, partial occlusion, unknown/changing background, and aging (i.e., in the wild); especially large 3D rotation robust face and gesture recognition;
- Motion analysis, tracking and extraction of face and body models captured from several non-overlapping views;
- Face, gait, and action recognition in low-quality (e.g., blurred), or low-resolution video from fixed or mobile device cameras;
- AutoML for face and gesture analysis;
- Mathematical models and algorithms, sensors and modalities for face & body gesture and action representation, analysis, and recognition for cross-domain social media;
- Social/psychological studies that aid in understanding computational modeling and in building better automated face and gesture systems with interactive features;
- Multimedia learning models involving faces and gestures (e.g., voice, wearable IMUs, and face);
- Social applications involving detection, tracking & recognition of face, body, and action;
- Face and gesture analysis for sentiment analysis in social context;
- Other applications involving face and gesture analysis in social media content.
Previous AMFG Workshops
The first workshop under this name was held in 2003, in conjunction with ICCV 2003 in Nice, France. So far, it has been successfully held EIGHT times. The homepages of the previous eight AMFG workshops are as follows:
AMFG2003: http://brigade.umiacs.umd.edu/iccv2003/
AMFG2005: http://mmlab.ie.cuhk.edu.hk/iccv05/
AMFG2007: http://mmlab.ie.cuhk.edu.hk/iccv07/
AMFG2010: http://www.lv-nus.org/AMFG2010/cfp.html
AMFG2013: http://www.northeastern.edu/smilelab/AMFG2013/home.html
AMFG2015: http://www.northeastern.edu/smilelab/AMFG2015/home.html
AMFG2017: https://web.northeastern.edu/smilelab/AMFG2017/index.html
AMFG2018: https://fulab.sites.northeastern.edu/amfg2018/
Important Dates
[ 03/10/2019 ] Submission Deadline
[ 03/29/2019 ] Notification
[ 04/05/2019 ] Camera-Ready Due
Author Guidelines
Submissions are handled via the workshop’s CMT website:
https://cmt3.research.microsoft.com/AMFG2019/Submission/Index
Following the author guidelines of CVPR 2019:
http://cvpr2019.thecvf.com/submission/main_conference/author_guidelines#cmt_website
- 8 pages (+ references)
- Anonymous
- Using CVPR template
Workshop Organizers
Workshop Chairs
Sarah Ostadabbas, Northeastern University, Boston, USA.
Zhengming Ding, Indiana University-Purdue University, Indianapolis, USA.
Sheng Li, University of Georgia, Athens, GA, USA.
Joseph P. Robinson, Northeastern University, Boston, USA.
Program Committee
- Handong Zhao, Adobe Research, USA
- Bineng Zhong, Huaqiao University, China
- Chengcheng Jia, Huawei, USA
- Junchi Yan, Shanghai Jiao Tong University, China
- Jun Li, MIT, USA
- Hong Pan, Southeast University, China
- Shuyang Wang, Shiseido Americas
- Samson Timoner, ISM Connect
- Aleix Martinez, The Ohio State University, USA
- Yingli Tian, City University of New York, USA
- Chengjun Liu, New Jersey Institute of Technology, USA
- Liang Zheng, Australian National University, Australia
- Thomas Moeslund, Aalborg University, Denmark
- Kai Qin, Swinburne University of Technology, Australia
Program Schedule (Tentative)
| Time | Session |
| --- | --- |
| 8:30 AM | Keynote 1: Stan Z. Li [1][2], Ran He [2], Zhen Lei [2] ([1] Westlake University, China; [2] CASIA, China), Heterogeneous Face Recognition: Research and Recent Advances |
| 9:10 AM | Oral 1: A Realistic Dataset and Baseline Temporal Model for Early Drowsiness Detection, Reza Ghoddoosian, Marnib Galib, Vassilis Athitsos |
| 9:30 AM | Oral 2: Expression Classification in Children, Shruti Nagpal, Maneet Singh, Mayank Vatsa, Richa Singh (IIIT-Delhi); Afzel Noore |
| 9:50 AM | Oral 3: Modelling Multi-Channel Emotions using Facial Expression and Trajectory Cues for Improving Socially-Aware Robot Navigation, Aniket Bera, Tanmay Randhavane, Dinesh Manocha |
| 10:10 AM | Oral 4: Understanding Beauty via Deep Facial Features, Xudong Liu, Tao Li, Hao Peng, Iris Chuoying Ouyang, Taehwan Kim, Ruizhe Wang |
| 10:30 AM | Coffee Break |
| 11:00 AM | Keynote 2: Pavlo Molchanov (NVIDIA Corporation), Semi-supervised Learning for Driver Monitoring |
| 11:40 AM | Oral 5: Personalized Estimation of Engagement from Videos Using Active Learning with Deep Reinforcement Learning, Ognjen Rudovic, Hae Won Park, John Busche, Bjoern W. Schuller, Cynthia Breazeal, Rosalind Picard |
| 12:00 PM | Lunch |
| 1:30 PM | Keynote 3: Vincent Lepetit (University of Bordeaux), On 3D Hand Registration |
| 2:10 PM | Oral 6: LBVCNN: Local Binary Volume Convolutional Neural Network for Facial Expression Recognition from Image Sequences, Sudhakar Kumawat, Manisha Verma, Shanmuganathan Raman |
| 2:30 PM | Oral 7: APA: Adaptive Pose Alignment for Robust Face Recognition, Zhanfu An, Weihong Deng, Yaoyao Zhong, Yaohai Huang, Xunqiang Tao |
| 2:50 PM | Oral 8: 2D-3D Heterogeneous Face Recognition based on Deep Coupled Spectral Regression, Yangtao Zheng, Di Huang, Weixin Li, Wang Shupeng, Yunhong Wang |
| 3:10 PM | Oral 9: Analysis of Deep Fusion Strategies for Multi-modal Gesture Recognition, Alina Roitberg, Tim Pollert, Monica Haurilet, Manuel Martin, Rainer Stiefelhagen |
| 3:30 PM | Coffee Break |
| 4:00 PM | Keynote 4: Charless Fowlkes (UC Irvine), Geometric Pose Affordance |
| 4:40 PM | Oral 10: Efficient and Accurate Face Alignment by Global Regression and Cascaded Local Refinement, Jinzhan Su, Zhe Wang, Chunyuan Liao, Haibin Ling |
| 5:00 PM | Oral 11: Stacked Multi-Target Network for Robust Facial Landmark Localisation, Yun Yang, Bing Yu, Xiaodong Li, Bailan Feng |
| 5:20 PM | Oral 12: Accurate 3D Face Reconstruction with Weakly-Supervised Learning: From Single Image to Image Set, Yu Deng, Jiaolong Yang, Sicheng Xu, Dong Chen, Yunde Jia, Xin Tong |
| 5:40 PM | Closing Remarks (Awards Ceremony) |