Title : Researcher
Address : 4 place Jussieu, CC 173, 75252 Paris cedex 05
Phone : +33 (0) 1 44 27 28 85
Email : khamassi(at)isir.upmc.fr
Group : AMAC
In 2003, I graduated from both Université Pierre et Marie Curie, Paris (Master in Cognitive Science) and the engineering school ENSIIE, Evry (Master in Computer Science). From 2003 to 2007, I prepared a PhD thesis between Université Pierre et Marie Curie and the Collège de France, under the supervision of Agnès Guillot and Sidney I. Wiener, on learning and navigation in animals and robots. In 2008, I spent a short period in Kenji Doya's lab at the Okinawa Institute of Science and Technology, Japan. I then pursued a postdoctoral fellowship at INSERM in Lyon, working at the interface between Emmanuel Procyk's neurophysiology team and Peter F. Dominey's modelling and robotics team. Since 2010, I have held a tenured research scientist position at the French National Center for Scientific Research (CNRS), in the Institute of Intelligent Systems and Robotics at Université Pierre et Marie Curie, Paris 6. I obtained my Habilitation (HDR) in Biology from Université Pierre et Marie Curie, Paris 6, on May 6, 2014. In 2014-2015, I was an invited researcher at the Center for Mind/Brain Sciences, University of Trento, Italy, collaborating mainly with Giorgio Coricelli. Since January 2016, I have been an invited researcher at the Robotics Laboratory of the National Technical University of Athens, Greece, where I collaborate mainly with Costas Tzafestas. Since March 2017, I have also been an invited researcher at the Department of Experimental Psychology, University of Oxford, where I collaborate mainly with Jérôme Sallet, Matthew Rushworth and Chris Summerfield.
My research lies at the interface between neuroscience and robotics, focusing on: animals' reinforcement learning and flexible decision-making abilities; the associated neural mechanisms in the prefrontal cortex, hippocampus and basal ganglia; and their applications to learning in autonomous robots. I study how the brain efficiently coordinates different learning systems operating in parallel: the hippocampus-prefrontal cortex network detects the different states of the world (e.g. new vs. familiar environment, or context A vs. context B) and the agent's performance (e.g. progressing, stagnating, or dropping) in order to adaptively choose both the appropriate learning system for each situation (e.g. whether or not to learn a cognitive graph of the environment) and the learning state (e.g. explore or exploit). These novel computational models are then tested on robotic platforms in the real world, with the dual goal of improving robots' behavioral flexibility and testing biological hypotheses.
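The explore/exploit regulation described above can be sketched as a toy meta-controller: softmax (Boltzmann) action selection whose inverse temperature β is raised when a fast-moving reward average exceeds a slow-moving one (progressing, so exploit more) and lowered otherwise (stagnating or dropping, so explore more). This is only an illustrative sketch under simplified assumptions, not the actual models cited below; the names (`softmax_choice`, `MetaController`) and the update constants are hypothetical.

```python
import math
import random

def softmax_choice(q_values, beta, rng=random):
    """Pick an action index by softmax (Boltzmann) exploration.
    Higher beta (inverse temperature) means more exploitation."""
    m = max(q_values)  # subtract max for numerical stability
    weights = [math.exp(beta * (q - m)) for q in q_values]
    r = rng.random() * sum(weights)
    acc = 0.0
    for action, w in enumerate(weights):
        acc += w
        if r <= acc:
            return action
    return len(q_values) - 1

class MetaController:
    """Toy performance monitor: exploit more when the short-term
    reward average exceeds the long-term one, explore more when
    performance stagnates or drops (hypothetical constants)."""
    def __init__(self, beta_min=0.5, beta_max=10.0):
        self.beta = beta_min
        self.beta_min, self.beta_max = beta_min, beta_max
        self.short_avg = 0.0  # fast-moving reward average
        self.long_avg = 0.0   # slow-moving reward average

    def update(self, reward):
        self.short_avg += 0.3 * (reward - self.short_avg)
        self.long_avg += 0.05 * (reward - self.long_avg)
        if self.short_avg > self.long_avg:
            # progressing: sharpen the policy (more exploitation)
            self.beta = min(self.beta * 1.1, self.beta_max)
        else:
            # stagnating or dropping: flatten it (more exploration)
            self.beta = max(self.beta * 0.9, self.beta_min)
        return self.beta
```

In a bandit-style loop, one would call `mc.update(reward)` after each trial and pass the returned β to `softmax_choice`, so exploration rises automatically when a context change makes rewards drop.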
- Khamassi, M., Girard, B., Clodic, A., Devin, S., Renaudo, E., Pacherie, E., Alami, R. and Chatila, R. (2016). Integration of Action, Joint Action and Learning in Robot Cognitive Architectures. Intellectica, 2016/1(65):169-203.
- Khamassi, M., Quilodran, R., Enel, P., Dominey, P.F. and Procyk, E. (2015). Behavioral regulation and the modulation of information coding in the lateral prefrontal and cingulate cortex. Cerebral Cortex, 25(9):3197-218.
- Palminteri, S., Khamassi, M., Joffily, M. and Coricelli, G. (2015). Contextual modulation of value signals in reward and punishment learning. Nature Communications, 6:8096.
- Lesaint, F., Sigaud, O., Flagel, S.B., Robinson, T.E. and Khamassi, M. (2014). Modelling individual differences observed in Pavlovian autoshaping in rats using a dual learning systems approach and factored representations. PLoS Computational Biology, 10(2): e1003466.
- Khamassi, M., Enel, P., Dominey, P.F. and Procyk, E. (2013). Medial prefrontal cortex and the adaptive regulation of reinforcement learning parameters. Progress in Brain Research, 202:441-64.
- Khamassi, M. and Humphries, M. D. (2012). Integrating cortico-limbic-basal ganglia architectures for learning model-based and model-free navigation strategies. Frontiers in Behavioral Neuroscience, 6:79.
- Bellot, J., Sigaud, O. and Khamassi, M. (2012). Which Temporal Difference Learning algorithm best reproduces dopamine activity in a multi-choice task? SAB 2012 Conference. Best Paper Award.
- Caluwaerts, K., Staffa, M., N'Guyen, S., Grand, C., Dollé, L., Favre-Félix, A., Girard, B. and Khamassi, M. (2012). A biologically inspired meta-control navigation system for the Psikharpax rat robot. Bioinspiration & Biomimetics, 7(2):025009.
- Humphries, M.D., Khamassi, M. and Gurney, K.N. (2012). Dopaminergic control of the exploration-exploitation trade-off via the basal ganglia. Frontiers in Neuroscience, 6:9.
- Khamassi, M., Lallée, S., Enel, P., Procyk, E. and Dominey, P.F. (2011). Robot cognitive control with a neurophysiologically inspired reinforcement learning model. Frontiers in Neurorobotics, 5:1.
- Benchenane, K., Peyrache, A., Khamassi, M., Tierney, P.L., Gioanni, Y., Battaglia, F.P. and Wiener, S.I. (2010). Coherent theta oscillations and reorganization of spike timing in the hippocampal-prefrontal network upon learning. Neuron, 66(6):912-36.
- Peyrache, A., Khamassi, M., Benchenane, K., Wiener, S.I. and Battaglia, F.P. (2009). Replay of rule-learning related neural patterns in the prefrontal cortex during sleep. Nature Neuroscience, 12(7):919-26.
- Khamassi, M., Mulder, A.B., Tabuchi, E., Douchamps, V. and Wiener, S.I. (2008). Anticipatory reward signals in ventral striatal neurons of behaving rats. European Journal of Neuroscience, 28(9):1849-66.
- Khamassi, M., Lachèze, L., Girard, B., Berthoz, A. and Guillot, A. (2005). Actor-Critic Models of Reinforcement Learning in the Basal Ganglia: From Natural to Artificial Rats. Adaptive Behavior, 13(2):131-148.