Institut des Systèmes Intelligents
et de Robotique






Short bio

Mohamed Chetouani
Title: Professor
Address: 4 place Jussieu, CC 173, 75252 Paris cedex 05
Phone: +33 (0) 1 44 27 63 08
Email: chetouani(at)


Current positions:

  • Since 2008     Adjunct Professor, Speech Therapy Department, Faculty of Medicine, UPMC

  • Since 2013     Full Professor in Signal Processing and Machine Learning for Human-Machine Interactions, Institut des Systèmes Intelligents et de Robotique (CNRS UMR 7222)

  • Since 2014     Deputy Director of the Laboratory of Excellence SMART (Human/Machine/Human Interactions in the Digital Society)

  • Since Sept. 2016     Coordinator of the Clinics of Autonomy programme, Institut Universitaire d'Ingénierie en Santé, Sorbonne Universités




Research interests

Since 2007

  • Social Signal Processing: inter-personal interaction, machine-learning techniques for human communication dynamics modeling
  • Affective Computing: multimodal emotion analysis and recognition for social interaction
  • Human-Robot Interaction: social robotics, social learning, socially assistive robotics.
  • Computational models and robotics for psychopathology (autism, Alzheimer's disease), child psychiatry, clinics, physiology and neurophysiology (EEG, hormones) in social interactions

2001-2007: Non-linear speech processing applied to speaker and phoneme recognition


Summer School on Computational Social and Behavioural Sciences

Open-source software for interpersonal synchrony analysis: see SyncPy



  • Multiple positions in Human-Machine Interaction: Social Robotics, Embodied Conversational Agents, with Uppsala Univ., Inst. Mines Telecom, KTH, EPFL, INESC-ID, Jacobs Univ. and SoftBank Robotics:

ANIMATAS: Advancing intuitive human-machine interaction with human-like social capabilities for education in schools 

H2020-MSCA-ITN-2017: Innovative Training Network.

Contact: Mohamed CHETOUANI (Coord.)

  • Replay of scientastik TV program (FRANCE 4) here

Interpersonal Synchrony, Social Signal Processing and Socially Assistive Robotics in Developmental Disorders

  • New paper in Frontiers in Psychology (Human Media Interaction) with ISIR colleagues L. Chaby, I. Hupont, M. Avril and V. Luherne-du-Boullay

Gaze behavior consistency among older and younger adults when looking at emotional faces

  • New paper in Child and Adolescent Psychiatry and Mental Health as a follow up of the FP7 MICHELANGELO project:

GOLIAH (Gaming Open Library for Intervention in Autism at Home): a 6-month single blind matched controlled exploratory study

  • ICMI 2017 Workshop Co-Chair:

Call for Workshops

  • New paper in Social Robotics with S. Anzalone (Paris 8), G. Varni (ISIR) and S. Ivaldi (INRIA) on

Automated prediction of Extraversion during Human-Humanoid interaction

  • New project with HMI (University of Twente) within the Van Gogh programme


  • New paper in IEEE Trans. on Affective Computing with J. Aigrain, M. Spodenkiewicz, S. Dubuisson (ISIR), M. Detyniecki (LIP6) and D. Cohen (ISIR, AP-HP)

Multimodal stress detection from multiple assessments.

Fully Automatic Analysis of Engagement and Its Relationship to Personality in Human-Robot Interactions

Interaction and behaviour imaging: a novel method to measure mother-infant interaction using video 3D reconstruction

  • Invited talk: Séminaire iPAC "Image, Perception, Action et Cognition" LORIA 
  • New paper in International Journal of Social Robotics with Serena Ivaldi (INRIA), Sebastien Lefort (LIP6), Jan Peters (Technische Universität Darmstadt), Joelle Provasi and Elisabetta Zibetti (LUTIN):

Towards engagement models that consider individual factors in HRI: on the relation of extroversion and negative attitude towards robots to gaze and speech during a human-robot assembly task

  • Three papers accepted for IEEE RO-MAN 2016:
    • "Modeling the dynamics of individual behaviors for group detection in crowds using low-level features" with Omar Islas, Giovanna Varni, Mihai Andries and Raja Chatila (ISIR) within FP7 SPENCER and SMART Labex
    • "Robots Learning How and Where to Approach People" with Omar Islas, Raja Chatila (ISIR) and LAAS colleagues Harmish Khambhaita and Rachid Alami within FP7 SPENCER
    • "Training a robot with evaluative feedback and unlabeled guidance signals" with Anis Najar and Olivier Sigaud (ISIR) within ROMEO2
  • New paper in Image and Vision Computing with Jéremie Nicolle and Kévin Bailly

Real-time facial action unit intensity prediction with regularized metric learning

  • 26 April: Talk at Good AIfternoon, Artificial Intelligence Department, Faculty of Social Sciences, Radboud University (Nijmegen, NL)
  • April: Participating in the Workshop on Emotions as Feedback Signals, Lorentz Center (Leiden, NL)
  • April 2016: Paper in Frontiers in Psychiatry describing the GOLIAH gaming platform designed during the FP7 MICHELANGELO project:

GOLIAH: A gaming platform for home based intervention in Autism - Principles and Design

  • Final review and demonstration of FP7 SPENCER at Schiphol airport : More info
  • March 2016: Our paper on trust and dynamics of human-robot interaction with I. Gaudiello and E. Zibetti (Chart-Lutin), S. Lefort (LIP6) and S. Ivaldi (INRIA) within EDHHI project (SMART Labex):  

Trust as indicator of robot functional and social acceptance. An experimental study on user conformation to iCub answers

  • Co-organizing the 7th workshop on Human Behaviour Understanding (HBU2016): "Behavior Analysis for Children"

    • at ACM Multimedia 2016

    • with Albert Salah (Bogazici University) and J. Cohn (Univ. Pittsburgh)

  • Two workshops accepted at ACM ICMI 2016:

    • 2nd Workshop on Advancements in Social Signal Processing for Multimodal Interaction, with K. Truong (Twente), D. Heylen (Twente), T. Nishida (Kyoto)

    • Social learning and multimodal interaction for designing artificial agents, with S. Anzalone (ISIR), G. Varni (ISIR), I. Hupont (ISIR), G. Castellano (Uppsala), G. Venture (Tokyo) and A. Lim (Aldebaran Robotics) 

  • April 2016     Invited speaker at the workshop:

"From Human-Human Joint Action to Human-Robot Joint Action and vice-versa", Toulouse

  • Feb. 2016     Publication of our paper on imitation and identity recognition in Scientific Reports (NPG)

with Sofiane Boucenna (ETIS), David Cohen (UPMC), Andrew Meltzoff (Univ. of Washington, Seattle) and P. Gaussier (ETIS)

Robots Learn to Recognize Individuals from Imitative Encounters with People and Avatars

Other news 


Research projects:

Current projects

  • ASSESSTRONIC (ECHORD++ PDTI Healthcare): Comprehensive Geriatric Assessment (CGA)

  • Robot AVS (Auxiliaire de Vie Scolaire, school support assistant), supported by IUIS (Institute of Engineering in Healthcare, Sorbonne University) and INSERM (Neuroscience and new technologies)

  • FP7 SPENCER: Social situation-aware perception and action for cognitive robots

  • ROMEO2: Humanoid Robot assistant and companion for everyday life

  • ANR SYNED-PSY: Synchrony, early development and psychopathology

  • ANR JeMime: Jeu Educatif Multimodal d'Imitation Emotionnelle (multimodal educational game for emotional imitation)
  • SeNSE: Socio-emotional Signals (Project within SMART Labex)

  • EDHHI: Engagement During Human-Humanoid Interaction (Project within SMART Labex)


Former projects

2006-2011: Emotion, Prosody and Autism (Fondation France Telecom)

2007-2013: Mamanais (Fondation de France): Early signs of autism

2010-2013: MULTI-STIM Multi-sensory intelligent stimulation systems for children with developmental disorders (Emergence Programme UPMC)

2006-2012 European COST Action 2102: Cross-Modal Analysis of Verbal and Non-Verbal Communication

2009-2013 ROBADOM: Impact of a robot “butler” at home on the psychological and cognitive state of elderly people with mild cognitive impairment. ANR TecSan’09 (Health technologies)

2011-2014 PRAMAD2 Assistive Robotic Platform for Housekeeping. FUI 11 (Fonds unique interministériel).

2011-2014 FP7 ICT-2011-7 MICHELANGELO: Patient-centric model for remote management, treatment and rehabilitation of autistic children, ICT for Health, Ageing Well.

2012-2015 A1:1: Avatar at scale 1:1 (social engagement)