Institut des Systèmes Intelligents
et de Robotique





Tremplin CARNOT Interfaces





Rapp Vincent
Title: PhD student
No longer a member of the unit





My research focuses on the automatic understanding of human behaviour. My work is mainly based on computer vision, machine learning and visual perception applied to face analysis. More specifically, I have been working on facial feature localization, action unit detection and continuous emotion prediction in video sequences.


This research is part of the IMMEMO project (IMMersion 3D basée sur l'interaction EMOtionnelle — 3D immersion based on emotional interaction), supported by the French National Research Agency. The project aims at designing a 3D immersive serious-game platform dedicated to interactive teaching. The goal is to capture and recognize the facial expressions of an immersed person (the learner) to facilitate their interaction with an autonomous conversational agent. (Collaborators: Supélec, Télécom ParisTech, Artefacto)



Proposed works:

Facial feature detection: This framework is based on a multi-kernel SVM used to combine information extracted at different scales. Its precision and robustness compare favorably with current state-of-the-art point detectors. [FG'11_1]
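The multi-kernel idea can be illustrated with a fixed convex combination of precomputed kernels, one per scale, fed to a standard SVM (a minimal sketch: the data, gamma values and weights below are illustrative stand-ins, and true multi-kernel learning would optimize the weights rather than fix them):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

# Toy data standing in for multi-scale patch descriptors (hypothetical)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 16))
y = (X[:, 0] + 0.1 * rng.normal(size=100) > 0).astype(int)

# One RBF base kernel per "scale" (gamma values are illustrative)
gammas = [0.01, 0.1, 1.0]
base_kernels = [rbf_kernel(X, X, gamma=g) for g in gammas]

# Fixed convex combination; an MKL solver would learn these weights
weights = np.array([0.2, 0.5, 0.3])
K = sum(w * Kg for w, Kg in zip(weights, base_kernels))

# SVM over the combined Gram matrix
clf = SVC(kernel="precomputed").fit(K, y)
train_acc = clf.score(K, y)
```

A convex combination of valid kernels is itself a valid kernel, which is what makes this scale-fusion scheme well-posed.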

Action Unit detection: This system combines different types of features to automatically detect action units (facial micro-movements) in face images. The combination is done using a multi-kernel SVM: a first kernel matrix is based on Local Gabor Binary Pattern (LGBP) histograms, and a second kernel is computed from Active Appearance Model (AAM) coefficients. This system was evaluated during FERA'11, the International Facial Expression Recognition and Analysis Challenge. Our team ranked first, ahead of the University of California, San Diego (2nd), the Karlsruhe Institute of Technology (3rd), the University of Queensland associated with Carnegie Mellon (4th) and a consortium featuring MIT and the University of Cambridge. [FG'11_2] and [TSMC'12]

Continuous emotion prediction: This system uses dynamic cues to continuously predict the emotional state of a subject in a video sequence. Features are extracted from four different sources of information: audio, shape, global appearance and local appearance. Dynamics are introduced by computing log-magnitude spectra on these features. The prediction is then based on a kernel regression using a delay probability estimation. This system was evaluated during AVEC 2012, the second International Audio/Visual Emotion Challenge. Our team again ranked first, ahead of Supélec (2nd), the University of Pennsylvania (3rd) and the University of Southern California (4th). [ICMI'12]
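A toy version of two of these ingredients, log-magnitude spectra as dynamic descriptors and a Nadaraya-Watson kernel regressor, might look as follows (a sketch only: the data are synthetic, and the delay-probability weighting described above is omitted):

```python
import numpy as np

def log_magnitude_spectrum(frames):
    """Dynamic descriptor: log-magnitude spectrum of each feature trajectory."""
    spec = np.abs(np.fft.rfft(frames, axis=0))  # FFT over time (axis 0)
    return np.log1p(spec).ravel()

def kernel_regression(X_train, y_train, x, bandwidth=1.0):
    """Nadaraya-Watson estimator: kernel-weighted average of training labels."""
    d2 = ((X_train - x) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))
    return float(w @ y_train) / float(w.sum())

rng = np.random.default_rng(2)
# Synthetic windows of frame-level features (e.g. shape or appearance vectors)
windows = [rng.normal(size=(32, 4)) for _ in range(50)]
X = np.stack([log_magnitude_spectrum(w) for w in windows])
y = rng.uniform(-1.0, 1.0, size=50)  # continuous emotion labels (toy)

pred = kernel_regression(X, y, X[0], bandwidth=5.0)
```

Because the weights are positive and normalized, the prediction is always a convex combination of the training labels, which keeps the continuous output within the observed label range.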




Best Entry Award for the 1st International Facial Expression Recognition and Analysis Challenge, which was held in conjunction with the International Conference on Face and Gesture Recognition 2011 (FG'11)

Best Entry Award for the 2nd International Audio/Visual Emotion Challenge, which was held in conjunction with the International Conference on Multimodal Interaction 2012 (ICMI’12)






Teaching assistant (192 hours per year), University Pierre & Marie Curie

  • Machine Learning
  • Image processing
  • Pattern Recognition
  • Signal Processing
  • C++







Present: PhD in Pattern Recognition and Computer Vision – Univ. Pierre & Marie Curie

  • Title: Face analysis for automatic understanding of human behaviour
  • Advisor: Lionel Prevost
  • Co-Advisor: Kevin Bailly

2009: M.Sc. in Image Processing and Pattern Recognition (with honors) – Univ. Pierre & Marie Curie

2007: B.Sc. in Electrical Engineering (with honors) – Univ. Pierre & Marie Curie