

Internship offers

Subject of internship: Augmented reality for a surgical cockpit

Abstract: Minimally invasive surgery is known for its benefits to the patient, which result mainly from the smaller incisions it requires compared to open surgery. These incisions allow the insertion of an endoscope and instruments into the patient’s body through trocars. Despite its advantages, minimally invasive surgery presents several challenges for the surgeon, and these challenges are growing as minimally invasive access generalizes to almost all specialties (catheterization in vascular surgery, flexible endoscopy in gastroenterology, fibroscopy in urology, etc.).

Objectives: With this project, we propose to study the contribution of augmented reality in the operating room. The specificity of this project is to take advantage of the possible complementarity between comanipulator robotic arms held by the surgeon and an augmented reality headset worn by the surgeon (HoloLens/Varjo XR-1).

The main objective of this internship will be to integrate the information from the tools present in the surgical cockpit project (manipulator arms, endoscopic cameras, tactile interfaces, etc.) into a Unity 3D environment. The environment will have to be manipulable via the different interaction tools provided by the head-mounted display and the devices of the cockpit. It will thus be possible to manipulate the position of several “screens” through gaze, voice, or the movements of the tip of the laparoscopic instruments held by the comanipulator arms.

The goals are:

  • Creation of a 3D scene in Unity allowing the display and manipulation of 3D objects,
  • Streaming of an external video feed (endoscope) into the 3D scene (WebRTC),
  • Retrieval of the manipulator arms’ pose (orientation/position) and its display in 3D,
  • Use of robot information to allow manipulation of 3D objects,
  • Creation of a widget to display the status of additional sensors,
  • A study of the usability of the developed platform within an experimental protocol, including a comparison of the two headsets used (HoloLens/Varjo).
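As an illustration of the pose-retrieval goal, here is a hedged sketch (in Python rather than the project's actual C#/Unity code, with made-up function names and offsets) of how a comanipulator tip pose could drive the placement of a virtual screen:

```python
# Hypothetical sketch, not the project's code: mapping a comanipulator arm's
# reported pose (position + unit quaternion) to the pose of a virtual "screen".
import math

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    vx, vy, vz = v
    # t = 2 * cross(q_vec, v); then v' = v + w*t + cross(q_vec, t)
    tx = 2 * (y * vz - z * vy)
    ty = 2 * (z * vx - x * vz)
    tz = 2 * (x * vy - y * vx)
    return (
        vx + w * tx + (y * tz - z * ty),
        vy + w * ty + (z * tx - x * tz),
        vz + w * tz + (x * ty - y * tx),
    )

def screen_pose(tip_position, tip_orientation, offset=(0.0, 0.0, 0.30)):
    """Place a virtual screen 30 cm ahead of the instrument tip (made-up offset)."""
    dx, dy, dz = quat_rotate(tip_orientation, offset)
    px, py, pz = tip_position
    return (px + dx, py + dy, pz + dz)

# Identity orientation: the screen sits 0.3 m along the tip's z axis.
print(screen_pose((0.0, 0.0, 0.0), (1.0, 0.0, 0.0, 0.0)))  # (0.0, 0.0, 0.3)
```

In the actual internship the same transform would be expressed with Unity's `Transform` and `Quaternion` types; the sketch only shows the geometry involved.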

The developments will all be carried out using the Unity XR SDK.

Candidate Profile: Master’s student in Computer Science or Engineering, or an engineering-school student, wishing to explore a subject involving information-manipulation technologies in augmented reality. The internship requires strong programming skills (specifically C# and/or Unity’s visual scripting) and information-processing skills, in order to extract the user’s intention from voice, gesture and eye sensors.

  • Supervisor: Fabien Vérité
  • Duration: 6 months starting 01 March 2021
  • Location: Isir (Institut des Systèmes Intelligents et de Robotique), 4 Place Jussieu 75005, Paris
  • Contact: Fabien Vérité, verite@isir.upmc.fr ; Send your application by email, with [Augmented reality for a surgical cockpit] in the subject line, along with a CV and motivation letter.

Download the offer

Subject of internship: Experimental evaluation of Human-Robot negotiation of intentions in comanipulative tasks


Abstract: Humans possess the ability to communicate information naturally and efficiently through haptics, as recently demonstrated in the literature. This haptic communication often goes unnoticed even though it is used daily in many activities. A simple example: imagine that you need to guide a blind and deaf person through a room; you could do so by taking their hand and correcting their movements through haptic feedback. The information communicated through touch is thus sufficient for many tasks that humans need to perform.

Objectives: The aim of the internship will be to study how to develop these kinesthetic units, and more precisely the way they influence the negotiation. The focus will be on asymmetric scenarios representative of the blind-guiding situation.

The intern will first need to familiarize themselves with the literature on the subject and with the experimental setup used. This will require basic knowledge of C and C++ programming, as well as notions of real-time computing.

The second step will be to design the experimental protocol that will be used for the evaluation of the virtual agent (which will implement the kinesthetic unit), based on the different parameters that need to be tested. This protocol will then be implemented in the current set-up in order to carry out experiments with human subjects. Once the experiments are done, the final step will be the analysis of the data obtained, in order to confirm or reject the initial hypothesis. Depending on the time available, writing or contributing to a scientific article for publication will be considered.
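As a sketch of the kind of analysis the final step involves, here is a paired comparison between two experimental conditions (the numbers are hypothetical, not experimental results):

```python
# Illustrative sketch, not the lab's analysis pipeline: mean paired difference
# and Cohen's d between two matched experimental conditions.
import math

def paired_stats(cond_a, cond_b):
    """Mean paired difference and Cohen's d for two matched samples."""
    diffs = [a - b for a, b in zip(cond_a, cond_b)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean, mean / math.sqrt(var)

# Hypothetical completion times (s) for 5 subjects, with vs without guidance:
with_guidance = [12.1, 10.4, 11.8, 9.9, 10.7]
without_guidance = [14.0, 12.2, 13.1, 11.5, 12.9]
mean_diff, effect = paired_stats(with_guidance, without_guidance)
print(round(mean_diff, 2))  # -1.76: guidance shortens the task in this toy data
```

In practice the intern would use a statistics library (e.g. SciPy or Matlab's equivalents) for the corresponding significance tests; the sketch only shows the shape of the data and the effect-size computation.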

The work done during the internship will be a great opportunity to discover and experience the work of experimental researchers in the field of robotics.

Required skills: Basics of C and C++ programming and of experimental data analysis (Python or Matlab)

  • Supervisor: Ludovic Saint-Bauzel
  • Duration: 5/6 months starting February
  • Location: Isir (Institut des Systèmes Intelligents et de Robotique), 4 Place Jussieu 75005, Paris
  • Contact: Ludovic Saint-Bauzel ; saintbauzel@isir.upmc.fr ; Send your application by email, with [Experimental evaluation of Human-Robot negotiation of intentions in comanipulative tasks] in the subject line, along with a CV and motivation letter.

Download the offer

Subject of internship: Machine Learning for Social Behaviour Generation

Abstract: Part of the success of human-agent interaction relies on the ability of social agents to perform behaviors that are easily understood by humans, i.e., behaviors from which the human can infer a meaning and/or an intention; such behaviors can take several forms: pointing, gaze, head movements, etc. To increase the understanding of agent intentions by humans, the notion of legibility is often considered in robotics. Legibility is defined as the ability to anticipate the goal of an action. This notion is distinguished from predictability, which is defined as the ability to predict the trajectory for a given goal. Mathematical approaches to modeling these notions show the need to explicitly integrate human observers: models able to generate such actions/behaviors take into account how a human observer will perceive them [Wallkotter et al. 2020].

Objectives: We aim to develop machine learning algorithms able to generate behaviors, mainly movements, that explicitly take human observers into account. Recent work on movement generation for artificial agents has shown the relevance of generative models such as Variational Auto-Encoders (VAEs). The main intuition is that latent representation and regularization allow controlling the generation of data. In (Marmpena et al., 2019), a VAE-based approach was proposed to generate various body-language animations. Interestingly, modulation of motion is made possible through affect-related spaces.

Here, we propose to address a similar behavior generation problem while focusing on communicative demonstrations, which are naturally employed by humans when teaching (Ho et al. 2018). Communicative demonstrations are intelligent modifications of demonstrator’s actions and/or behaviors with the aim of influencing the mental representation that an observer ascribes to the demonstration.

We target situations in which an agent is demonstrating a series of actions such as writing or reaching objects. The idea will be to control the generation in a communicative space from instrumental to pedagogical intentions. We will firstly develop a VAE model able to learn a representation of communicative actions and analyze the latent space. We will then develop specific regularization techniques to control the communicative intention.

The main steps are:

– Development of a first generative model,

– Analysis of the latent space,

– Development of a regularization technique able to control the communicative intention.

– If possible, evaluation with real humans.
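The objective the first step optimizes can be sketched concretely. Below is a minimal pure-Python illustration of the VAE loss (reconstruction error plus a KL regularizer pulling the latent code toward N(0, I)) and the reparameterization trick; it is our own sketch, not the project's model:

```python
# Minimal VAE-objective sketch (illustrative only).
import math
import random

def vae_loss(x, x_recon, mu, log_var):
    """Negative ELBO with a Gaussian encoder q(z|x) = N(mu, exp(log_var)).

    Closed-form KL(q || N(0, I)) = 0.5 * sum(exp(log_var) + mu^2 - 1 - log_var).
    """
    recon = sum((a - b) ** 2 for a, b in zip(x, x_recon))
    kl = 0.5 * sum(math.exp(lv) + m ** 2 - 1.0 - lv for m, lv in zip(mu, log_var))
    return recon + kl

def reparameterize(mu, log_var):
    """z = mu + sigma * eps, so sampling stays differentiable w.r.t. mu, sigma."""
    return [m + math.exp(0.5 * lv) * random.gauss(0.0, 1.0)
            for m, lv in zip(mu, log_var)]

# A perfect reconstruction with a standard-normal posterior has zero loss:
print(vae_loss([1.0, 1.0], [1.0, 1.0], [0.0, 0.0], [0.0, 0.0]))  # 0.0
```

The regularization techniques mentioned in the third step would add further terms to this objective so that directions of the latent space align with the communicative (instrumental-to-pedagogical) dimension.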

Skills: Python, Machine learning

  • Supervisor: Mohamed CHETOUANI
  • Duration: 5/6 months
  • Location: Isir (Institut des Systèmes Intelligents et de Robotique), 4 Place Jussieu 75005, Paris
  • Contact: Mohamed CHETOUANI ; Mohamed.Chetouani@sorbonne-universite.fr ; Send your application by email, with [Machine Learning for Social Behaviour Generation] in the subject line, along with a CV and motivation letter.

Download the offer

Subject: Towards automated pedicle screw placement in spinal surgery using robotics and multimodal control


There are many spinal pathologies and they can be classified into four main groups: degenerative (e.g. herniated discs, narrow lumbar canal), traumatic (e.g. dislocation and fractures), spinal deformities (scoliosis and kyphosis) and primary or secondary tumor lesions. These pathologies can lead to significant functional repercussions (pain) or even a severe handicap that can lead to paralysis of the limbs (paraplegia or tetraplegia). Some pathologies are treated conservatively (medication and rehabilitation) but, for serious cases, surgery may be necessary. The number of spine surgery procedures is steadily increasing, both for trauma-related cases (Kumar, 2018) and for degenerative surgery indications due to the aging population (Ravindra, 2018). This increase in activity creates a need for innovations that will involve the introduction of new technologies in the operating room.

Among the main issues in spine surgery are safety and precision. Poorly positioned pedicle screws can lead to bone or disc damage, which threatens the stability of the surgical fixation. More rarely, the injury may be vascular or nervous, with a risk of neurological or even life-threatening complications.

Advances in the use of preoperative and intraoperative imaging coupled with three-dimensional navigation systems have led to significant improvements in surgical protocols. Yet some common spine surgery procedures, such as pedicle screw placement, still require scientific and technological innovation. Pedicle screw placement can be particularly difficult in patients with severe spinal deformity (such as scoliosis), osteoporosis or tumors. The literature reports that approximately 20% of pedicle screws are malpositioned (Mason, 2014), leading to medium-term neurological symptoms that can often require a second surgery, with all the risks and costs that this entails (Verma, 2018). It is in this context that robots entered the operating room several years ago.

Scientific objective

Our objective is to move beyond the semi-autonomous surgical robotics paradigm that has dominated the robotization of orthopedic surgery for 30 years, based on the imaging – planning – registration triptych. To do so, it is necessary to be able, starting from a geometrical configuration and an initial plan, to adapt the behavior of the robot in real time: speeds, forces, impedances, etc. We will evaluate the possibility of integrating different local sensors: visual, force, electrical impedance (DSG technology from SpineGuard). With all these signals, our objective is to learn to detect events that impose changes or adaptations of behavior (e.g., an unsuitable trajectory requiring a change in the insertion direction).

The scientific objectives are therefore:

  • the processing of multi-physical signals and their fusion, by means of Artificial Intelligence, to detect model breakdowns; for example, we plan to fuse the electrical impedance signal and an estimate of the mechanical impedance to determine the nature of the tissue (cancellous or cortical bone),
  • the programming of reactive robot behaviors adapted to these signals and to the mechanical interaction with the surgeon and with the tissues. For example, we plan to program an adaptive impedance that will allow precise manual pre-positioning of the instrument tip while finely adjusting the orientation of the penetration axis. We will also work on precise drilling of mobile parts, such as the vertebrae. Thanks to models and sensors, we will be able to estimate deformations and mobilities in real time in order to adapt the behavior (force/speed/impedance) and improve precision.
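The adaptive-impedance idea in the second objective can be sketched as follows. This is a toy one-degree-of-freedom law with invented gains and a stand-in tissue signal, not the project's controller:

```python
# Illustrative sketch (hypothetical parameters, not SpineGuard's or ISIR's code):
# a 1-DoF impedance law F = K (x_d - x) - D v whose gains are adapted when a
# fused tissue signal suggests an approaching cortical-bone boundary.

def impedance_force(x_d, x, v, k, d):
    """Virtual spring-damper force between desired and actual tool position."""
    return k * (x_d - x) - d * v

def adapt_gains(tissue_signal, k_soft=200.0, k_stiff=2000.0, threshold=0.5):
    """Track the plan firmly in cancellous bone, back off near cortical bone.

    `tissue_signal` in [0, 1] stands in for a fused electrical/mechanical
    impedance estimate: high values indicate a nearby cortical boundary.
    """
    if tissue_signal > threshold:
        return k_soft, 2.0 * (k_soft ** 0.5)   # softer: yield near the boundary
    return k_stiff, 2.0 * (k_stiff ** 0.5)     # stiffer: hold the planned axis

k, d = adapt_gains(0.9)                        # boundary detected -> soft gains
print(impedance_force(0.010, 0.008, 0.0, k, d))  # ≈ 0.4 N restoring force
```

The real controller would run on the Kuka arms at high rate and adapt full 6-DoF impedance matrices; the sketch only shows the switching logic the fused signals would drive.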

In addition, we will simultaneously design the robotic system, composed of one or two Kuka arms (available at ISIR), and the associated surgical workflow. On this point, the interaction between the two research teams is crucial. The aim is to design a scenario that minimizes installation time and maximizes the medical service rendered. The design of the control and dialogue interfaces is a critical point that we will address by relying on the Human-Machine Interface (HMI) expertise of colleagues at ISIR.

  • Thesis director : Brahim Tamadazte
  • Possible co-director: Raphaël Vialle
  • Location: ISIR (Institut des Systèmes Intelligents et de Robotique), Campus Pierre et Marie Curie, 4 place Jussieu, 75005 Paris.
  • Contact: Brahim Tamadazte, Researcher, tamadazte@isir.upmc.fr ; Send your application by email, with [Thesis: Towards automated pedicle screw placement in spinal surgery using robotics and multimodal control] in the subject line, a CV and a cover letter.

Download the offer

Link to the Doctoral School ED-SMAER

PhD offers

Thesis topic: Design and control of a flexible manipulator for fruit picking

Abstract: The goal of the thesis is to develop a flexible, trunk-like manipulator for grasping fragile objects such as fruit. This robotic object raises three research issues (design; modeling and control; machine learning) which should ideally be addressed in a joint and integrated approach, sometimes called “hardware-software co-design”. The design will be based a priori on a deformable continuum structure without articulations and with cable transmission. We will study in particular the routing of the cables, such as helical and/or internal routings, and their influence on the manipulator’s workspace and pose repeatability. The modeling and control will seek a good trade-off between model fidelity and computing-time efficiency. For this purpose, the use of a reduced model of a deformable body (a Cosserat beam model or a reduced model derived from FEM), combined with model predictive control (MPC), is a relevant solution capable of taking into account under-actuation, kinematic redundancy and physical constraints such as actuator limitations. Finally, learning from real and/or simulated data is an interesting avenue that we will also explore, because it is more robust for systems that are difficult to model and that present a large number of variables, such as a redundant deformable object in multiple interactions with objects of different geometries and stiffnesses.
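As a minimal illustration of the reduced-model direction, here is the simplest such model, a planar constant-curvature approximation of one continuum segment (our own sketch under strong simplifying assumptions, not the thesis model):

```python
# Constant-curvature kinematics of a single planar continuum segment
# (illustrative sketch; real Cosserat/FEM models are far richer).
import math

def tip_pose(kappa, length):
    """Planar tip pose of a constant-curvature segment of given arc length.

    For curvature kappa (1/m) the tip lies at
      x = sin(kappa * L) / kappa,   y = (1 - cos(kappa * L)) / kappa,
    with tangent angle theta = kappa * L; kappa -> 0 recovers a straight beam.
    """
    theta = kappa * length
    if abs(kappa) < 1e-9:              # straight-segment limit
        return length, 0.0, 0.0
    x = math.sin(theta) / kappa
    y = (1.0 - math.cos(theta)) / kappa
    return x, y, theta

# A 0.5 m segment bent into a quarter circle (theta = pi/2):
kappa = math.pi / (2 * 0.5)            # chosen so kappa * L = pi/2
x, y, theta = tip_pose(kappa, 0.5)
print(round(x, 3), round(y, 3))        # both equal the arc radius 1/kappa ≈ 0.318
```

In the thesis, cable tensions would determine kappa per segment, and an MPC layer would optimize such forward models under actuator constraints.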

Context: In the context of the fight against global warming and of an unprecedented technological convergence (geolocation, artificial intelligence, energy, etc.), agriculture is reinventing itself to produce better while respecting the environment and human health. Agricultural robotics is one of the levers of this transformation. It is a promising solution for meeting the environmental and health challenges currently facing France and other developed countries. The urgency of climate change will require an ever greater reduction in the use of inputs (plant-protection products, water, etc.) and of the fossil fuels used to operate agricultural machinery and equipment. In addition, the agricultural sector suffers from a lack of attractiveness and a negative image among many young people (rurality, repetitive tasks, exhaustion, poverty, isolation, etc.). Digitization and robotization will enhance the value of this profession by adding more value to the farmer’s work and relieving farmers of repetitive and tiring tasks such as weeding, carrying, picking and regular crop maintenance. The agricultural sectors most open today to the robotization of their production are market gardening (under glass or not), viticulture and livestock farming. Large robotized machines have recently begun to appear in the field-crop sector, particularly cereals, anticipating future restrictive changes in the regulations on the use of phytosanitary products.

The main scientific difficulty posed by agricultural robotics applications lies in the fact that the robot evolves in an open and unstructured environment. Building a model of the environment (geometry, soil physics, obstacles, luminosity, seasonality, growth and variability of plants, etc.) could be an almost impossible goal to reach. This is why we will limit ourselves in this project to row crops placed on flat, horizontal or sloping soils, such as those encountered in market gardening or viticulture. These two types of crops, more or less structured, can then be modeled.

General description of the project: The flexible manipulator

The application on which we wish to focus is the harvesting of fruits/vegetables by means of a flexible manipulator in the form of a flexible, jointless tube, used both as a positioner and as a gripper. Flexible robotics today constitutes a paradigm break with traditional industrial robotics: it opens the door to compliant robotics that interacts safely with humans, and in our particular application it would make it possible to guarantee both the safety of any humans present in the robot’s workspace and the integrity of the grasped object, which is particularly fragile. The latter is ensured by a grip distributed over a large contact surface and by controlling the interaction between the manipulator and the object. We will first focus on axisymmetric objects such as zucchini or peppers.

The manipulator will be mounted on a self-guided all-terrain mobile base to follow the rows of crops. The problems of guidance and control of all-terrain mobile robots have been addressed in recent years in several theses conducted at ISIR [1]. ISIR also has know-how in the mechatronic design of compliant and adaptive robots, such as the wheeled-and-legged robots Hylos and Complios with series-elastic actuators [2], or intelligent structures with distributed actuation for endoscopy or catheterization [3][4]. This research will also open interesting exchanges with biologists from the Muséum National d’Histoire Naturelle, who study how elephants manipulate objects with their trunk and compare grasping strategies across species. The elephant trunk is an extraordinary effector capable of grasping food, manipulating it, tearing it off, and gripping a tree trunk as well as picking up a small object the size of a peanut.

The thesis has three scientific development axes:

1) Design of a dual manipulator

2) Model-based control of the dual manipulator

3) Control of the learning process and the grasping task

  • Thesis director: Faïz Ben Amar
  • Possible co-director: Jérôme Szewczyk
    Collaborations within the framework of the thesis: Muséum National d’Histoire Naturelle
  • Location: Isir (Institut des Systèmes Intelligents et de Robotique), 4 Place Jussieu 75005, Paris
  • Contact: Faïz Ben Amar ; amar@isir.upmc.fr ; Send your application by email, with [Thesis: Design and control of a flexible manipulator for fruit picking] in the subject line, a CV and a cover letter.

Download the offer

Thesis topic: Development of an intelligent control of robotic supernumerary limbs

Context: Within the AGATHE team, we are working on the development of innovative control approaches for robotic devices that assist gesture in people with sensorimotor disabilities. We have recently developed an innovative, intuitive command scheme for an upper-limb prosthesis which exploits postural compensations and slaves the movements of the prosthesis to the measured movements of the wearer’s body, in order to automatically correct the wearer’s action deficits (Mathilde Legrand’s thesis, winner of the L’Oréal Foundation’s “For Women in Science” prize in partnership with the Academy of Sciences).

In the framework of the European project FET-NIMA (Non-invasive Interface for Movement Augmentation) we are interested in extending these results to the particular case of dexterity augmentation and operator assistance in an industrial context. The work focuses on the control of an arm placed in the work environment. We believe that it is also possible to develop a control for a third robotic arm (Supernumerary Robotic Limb or SRL).

Description of the thesis topic: The objective of this thesis will be to generalize our control approaches exploiting body movements (redundancy, compensations, etc.) to this new type of assistance device and to multiple functional situations representative of the tasks of an industrial operator.

The thesis project will therefore first consist in conducting a series of experimental measurement campaigns on healthy subjects (using a Motion Capture platform) in order to study and characterize the compensatory behavior of asymptomatic subjects with or without a supernumerary robotic arm structure.

These data will then be used to develop, possibly using machine learning or optimization techniques, a control block capable of determining, at each moment, which compensatory movement of the person the movements of the robotic supernumerary limb must be slaved to, and in what way. A software simulator will be developed in parallel to evaluate the performance of this control block in a simplified, controlled environment and to compare it with the data obtained experimentally. The control block will then be implemented on a robotic supernumerary limb platform under development at ISIR. Experimental campaigns on healthy subjects will be conducted to refine the control and to evaluate its performance (in terms of functional efficiency but also of user comfort), in particular against other approaches from the scientific literature that use electrophysiological measurements.
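The core idea of such a control block can be caricatured in a few lines: pick the measured body motion that currently deviates most from the subject's neutral posture and slave the supernumerary limb to it. Channel names and gains below are hypothetical; the real block would be learned from the motion-capture data:

```python
# Toy sketch of the control-block idea (invented channels/gains, not ISIR code).

def pick_driving_motion(posture, neutral):
    """Return the body channel whose compensation is currently largest."""
    return max(posture, key=lambda ch: abs(posture[ch] - neutral[ch]))

def srl_command(posture, neutral, gains):
    """Slave the supernumerary limb to the dominant compensatory motion."""
    ch = pick_driving_motion(posture, neutral)
    return ch, gains[ch] * (posture[ch] - neutral[ch])

neutral = {"trunk_lean": 0.0, "shoulder_elev": 0.0}
posture = {"trunk_lean": 0.02, "shoulder_elev": 0.15}   # joint angles, radians
print(srl_command(posture, neutral, {"trunk_lean": 1.0, "shoulder_elev": 2.0}))
# ('shoulder_elev', 0.3)
```

A learned version would replace the hand-tuned `max` rule and gains with a model fitted to the compensatory behaviors observed in the measurement campaigns.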

This work will therefore involve motion analysis, modeling, control, AI, implementation of prototypes and experimental tests in humans.

  • Thesis director: Guillaume Morel
  • Co-supervision: Nathanaël Jarrassé
  • Location: Isir (Institut des Systèmes Intelligents et de Robotique), 4 Place Jussieu 75005, Paris
  • Contact: Guillaume Morel ; guillaume.morel@sorbonne-universite.fr ; Send your application by email, with [Thesis: Development of an intelligent control of robotic supernumerary limbs] in the subject line, a CV and a cover letter.

Download the offer

Thesis topic: Learning to grasp

This thesis project is part of the ISIR federative project on agricultural robotics. It focuses on robotic grasping, particularly in an agricultural context (picking fruits or vegetables). The subject will be approached through a multi-disciplinary collaboration between computer science (supervised by S. Doncieux and A. Coninx) and mechanics and control (supervised by F. Ben Amar).

Context: This thesis project deals with the grasping of objects by a robotic gripper. The goal is to design an automatic picking system for fruits and vegetables. The task therefore involves grasping fragile objects of various shapes and colors, which may moreover be at least partially occluded by foliage. This thesis project will focus on a learning-based approach in which significant expertise in mechanical modeling will be used at different levels of the methodological approach, from the design of the learning system to the analysis of its performance.

Object grasping is an emblematic task in robotics as it is a prerequisite for many other more advanced tasks such as object manipulation. From a learning point of view, it represents a particularly difficult challenge because it corresponds to a rare reward case: very few interactions are likely to lead to a correct grasp. Many movements will not even touch the object in question. The robot may therefore spend a lot of time exploring irrelevant behaviors. This problem is frequently addressed by providing robot demonstrations or motion primitives that, while not perfect, manage to generate correct grasps with sufficient probability that such motions are observed during the exploration phase [1]. This project aims at solving this problem through appropriate exploration methods that will complement existing deep learning methods.

Scientific objective: This thesis project aims at defining an autonomous approach to learning in robotics in the case of rare rewards and interactions. The developments will be tested on the problem of object grasping, with a particular focus on applications in the agricultural domain. They will build on the supervisors’ work on novelty-search methods and quality-diversity algorithms [2, 3, 4], as well as on preliminary results on learning object-grasping behavior with these methods. Within the framework defined in the European project DREAM, the approach developed will be iterative [5, 6]. It will first aim at generating grasping behaviors by a mixed approach combining exploration in simulation and testing in reality. This step will be based on open-loop policy learning: each grasp of an object at a particular position will therefore require dedicated learning. Repeating such learning will make it possible to generate a sufficient base of examples to train a deep learning system that associates an adapted behavior with the robot’s visual perception. The main objective of the thesis will be to generate these example bases using adapted exploration methods.

Justification of the approach: Reinforcement learning searches for a policy, i.e. a function associating a state with an action that maximizes a reward. This makes it possible to find the appropriate behavior to achieve a goal that is described only by the occurrence of rewards the system seeks to maximize. The resounding success of deep reinforcement learning [8, 9] has, however, few equivalents in robotics. This domain accumulates difficulties for learning [10]: very large state and action spaces, transitions between continuous and discrete domains, noisy perceptions and actions, rare rewards, etc. Several approaches can deal with these challenges [11]: (1) “step-by-step” approaches, in which a model is used to test several possible actions and a criterion discriminates among them to choose the best one, and (2) episode-based approaches, in which the policy is described by a parameterized function evaluated over the course of a complete episode, i.e. a sequence of perception-action steps that may lead to a reward.

Step-by-step methods are more efficient and generalize better, but have difficulty handling rare rewards. Episode-based approaches can cope with this problem, but at the cost of more exploration and poorer generalization. This thesis project aims at combining the best of both by handling rare rewards with an episode-based approach to generate a training base that bootstraps the learning of a step-by-step system.
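The episode-based exploration stage can be illustrated with a toy novelty-search loop in one dimension. Episodes are scored not by reward but by how far their outcome (a behavior descriptor, e.g. where the gripper ended up) lies from previously archived outcomes. Descriptors, thresholds and ranges below are made up; this is our own sketch, not the DREAM project code:

```python
# Toy novelty search: archive outcomes that are far from what was seen before.
import random

def novelty(descriptor, archive, k=3):
    """Mean distance to the k nearest descriptors already in the archive."""
    if not archive:
        return float("inf")
    dists = sorted(abs(descriptor - a) for a in archive)
    return sum(dists[:k]) / min(k, len(dists))

random.seed(0)
archive = []
for _ in range(100):
    candidate = random.uniform(0.0, 1.0)      # stand-in for an episode outcome
    if novelty(candidate, archive) > 0.05:    # keep only sufficiently novel ones
        archive.append(candidate)

# The archive spreads over the outcome space instead of clustering where the
# (rare) reward happens to be, which is exactly what helps with sparse rewards.
print(len(archive))
```

Quality-diversity algorithms extend this by also keeping, within each region of outcome space, the highest-performing solution found so far.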

Required profile: Computer science student with a “machine learning” profile.

Required skills: Programming in Python must be mastered. Skills in robotics are desirable.

  • Application deadline: April 23, 2021
  • Thesis director: Stéphane Doncieux
  • Possible co-directors: Faiz Ben Amar, Alexandre Coninx
  • Location: Isir (Institut des Systèmes Intelligents et de Robotique), 4 Place Jussieu 75005, Paris
  • Contact: Stéphane Doncieux ; doncieux@isir.upmc.fr ; Send your application by email, with [Thesis: Learning to grasp] in the subject line, a CV and a cover letter.

Download the offer

Thesis topic: Optical microrobots for interactive manipulation of biological samples

Abstract: This thesis aims at developing a new scientific instrument for applications in experimental biology, in particular for the manipulation, characterization and analysis of objects such as isolated cells, neurons, or intracellular organelles. Using the principle of optical tweezers, laser beams are controlled to act directly on samples or to actuate remote-controlled microrobots. These microrobots will be able to integrate analysis capabilities and bio-active sensors providing quick feedback to the operator. This is a new technology capable of supporting and considerably accelerating several lines of study in biology. Collaborations have been started with teams from the Institut Curie and the Institut Pasteur on cancer and intracellular mechanisms.

General description of the project: Optical tweezers are a technique that makes it possible to manipulate microscopic objects using a focused laser beam, acting on samples in solution without contact. ISIR has developed a robotic laser-trap system able to manipulate samples in 3 dimensions while measuring the interaction forces in real time. Nevertheless, the difficulty of handling these devices remains an important hurdle, especially for objects outside the image plane.

The current performance of the system shows that it is possible to trap and move several particles simultaneously, with a force resolution close to 10 pN (Fig. A and B). Using these principles, optical microrobots have been realized. Activated by lasers, these ‘optobots’ (Fig. C), a few micrometers in size, will be used to perform operations on biological samples, such as mechanical characterization, interaction measurement, genetic injection and electrical analysis. However, achieving such high performance has come at the expense of simplicity of use, mainly because of the design of the optical path and the complex control laws involved.
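For orientation, here is a back-of-the-envelope sketch of the physics behind such force measurements: near its centre an optical trap behaves like a linear spring, and the trap stiffness can be calibrated from thermal position fluctuations via the equipartition theorem. This is our own illustration with made-up bead numbers, not the lab's calibration code:

```python
# Linear-spring trap model and equipartition stiffness calibration (sketch).
import math

KB = 1.380649e-23  # Boltzmann constant, J/K

def trap_stiffness(var_x, temperature=298.0):
    """Stiffness (N/m) from the variance <x^2> (m^2) of the bead position:
    equipartition gives 0.5 * k * <x^2> = 0.5 * kB * T."""
    return KB * temperature / var_x

def trap_force(k_trap, displacement):
    """Restoring force (N), F = -k * x, valid for small displacements."""
    return -k_trap * displacement

# A trapped bead jittering with ~10 nm standard deviation (hypothetical):
k = trap_stiffness((10e-9) ** 2)      # ~4.1e-5 N/m
print(trap_force(k, 25e-9) * 1e12)    # ≈ -1.03 pN for a 25 nm offset
```

Forces of this order are consistent with the ~10 pN resolution quoted above; real calibrations also use power-spectrum or drag-force methods to cross-check the stiffness.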

The objective of this project is first to develop applications in experimental biology to demonstrate the advantages of this system and to establish it as a new scientific instrument. A collaboration has been set up with teams from the Institut Pasteur and the Institut Curie to exploit these possibilities in cancer research and in studies of intracellular mechanisms. In this context, it is necessary to optimize interactivity so that the user is able to plan complex trajectories to trap and move objects, automate operations and collect results. We will also investigate the modalities of human/machine interaction: dedicated haptic interfaces, notably among the lab’s previous achievements such as the ‘FishTank’, are promising candidates for developing a cross-scale, multi-modal interaction chain.

Scientific theme: The main scientific theme is microrobotics, with strong support from physics and optics. The problems of object positioning and control in 6D with microscopic resolution and precision (nanometers and piconewtons) are at the heart of the work. From an interaction point of view, existing solutions are generally difficult for the user to grasp, and HMI approaches are an original way to address this. The user’s immersion is indeed an asset for freeing them from complex control laws and planning systems. Likewise, the use of high-performance integrated sensors benefits the final precision reached by the system.

Expected results, challenges and perspectives: ISIR’s experience in manipulation systems and in human-machine interaction allows us to envisage very promising perspectives and spin-offs. Such an instrument has never been realized before, and we are confident that it would be a major contribution to the use of optical tweezers. At the end of the project, applications in biology, such as the manipulation of intracellular organelles, are targeted; these will be made possible through the collaboration of research teams in experimental biology.

This thesis is part of an industrial maturation process aiming to create an innovative instrument in the field of life sciences, supported by the SATT and the Île-de-France region. The perspectives concern exploiting the generated knowledge to accelerate research in biology. The creation of a start-up is also envisaged to bring the results to market.

  • Thesis director: Sinan Haliyo
  • Possible co-supervision: Stéphane Régnier
  • Collaborations within the framework of the thesis: Institut Pasteur, Institut Curie
  • Location: Isir (Institut des Systèmes Intelligents et de Robotique), 4 Place Jussieu 75005, Paris, the Multi-Scale Interactions team
  • Contact: Sinan Haliyo ; sinan.haliyo@isir.upmc.fr ; Send your application by email, with [Thesis: Optical microrobots for interactive manipulation of biological samples] in the subject line, a CV and a cover letter.

Download the offer

Post-doctoral offers

Subject: PostDoc position in Surgical Robotics at Sorbonne University

A PostDoc position is available starting at the beginning of 2021 at Sorbonne University, ISIR, Institut des Systèmes Intelligents et de Robotique (http://www.isir.upmc.fr), in the field of surgical robotics. The successful applicant will join the AGATHE team (Assistance to Gesture with Application to THErapy) at ISIR, a multi-disciplinary research team grouping researchers in robot design and control, human-machine interaction, neurosciences and medical sciences. The successful applicant will work on the European project FAROS (Functionally Accurate RObotic Surgery), an H2020-funded project starting on January 1st, 2021, coordinated by KU Leuven (https://h2020faros.eu) and involving King’s College London, the University of Zurich and Sorbonne Université.

About the project: A key motivation for introducing robots in operating rooms has been their ability to deliver superhuman performance. Surgical robots can execute highly precise gestures that rapidly and stably position instruments in many degrees of freedom (DOFs). However, for the vast majority of surgical procedures, robotic positioning precision alone is not sufficient to realize the “right gesture”. Surgical plans are generally only a much-idealised sketch of the intraoperative reality, which is also shaped by patient motion, tissue deformation, bleeding, and the like. Therefore, when finally deciding where exactly to make “the cut” (or intervene on tissue in any other irreversible manner), surgeons “disconnect” from the preoperative plan. Surgical teams resort to their own senses and experience, and tactical decisions are made on the spot, based on their interpretation of the limited intraoperative scene.

In this context, for orthopaedic applications (e.g. pedicle screw placement), a rich set of sensors, forming a comprehensive representation of the surgical scene, can be used to provide sufficient sensory information for autonomous robotic control.
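The core idea of correcting an idealised preoperative plan with intraoperative sensing can be sketched as a simple servoing step. All names, units and gains below are illustrative assumptions, not part of the FAROS project or any robot controller:

```python
# Sketch: one servoing step blending a preoperative target toward the
# position re-localized by intraoperative sensors. A real controller
# would run on the robot's real-time interface with safety limits;
# names, values and the gain here are purely illustrative.

def corrected_target(planned, sensed, gain=0.5):
    """Move the planned tool target a fraction `gain` of the way
    toward the intraoperatively sensed target (per control cycle)."""
    return [p + gain * (s - p) for p, s in zip(planned, sensed)]

plan = [10.0, 0.0, 5.0]      # preoperative target (mm, assumed frame)
sensed = [10.4, -0.2, 5.2]   # target re-localized from sensors (mm)
print(corrected_target(plan, sensed))  # ≈ [10.2, -0.1, 5.1]
```

Iterating this step each control cycle makes the commanded pose track the sensed anatomy rather than the static plan, which is the behaviour the paragraph above motivates.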

Locally at ISIR, the project benefits from a high added-value environment: an innovative experimental technology platform, including a KUKA IIWA robot with an open controller, instruments, and a plurality of interfaces, is already operational and will be available to support the experimental research. Access to preclinical facilities will favor in-vivo evaluation when duly justified and validated by an ethics committee. Collaboration with clinical teams, both locally and within the FAROS project, will help target real problems with the aim of effectively influencing surgical robotics as practiced in ORs.

Candidate Profile: We seek applicants who hold a PhD in one of the following domains: robotics, applied control, signal processing.

Required skills: programming skills in C/C++, C# and Python. Fluency in English (both written and oral expression). Bonus skills: Experience in machine learning, deep learning; experience in conducting experiments involving both practical development of interactive devices and evaluation (protocol design, statistical results analysis).

  • Duration: 12 months, renewable up to 24 additional months (36 months in total).
  • Desired starting date: as soon as possible.
  • Deadline for application: the process stops as soon as the appropriate candidate is found.
  • Location: Isir (Institut des Systèmes Intelligents et de Robotique), 4 Place Jussieu 75005, Paris. The AGATHE team, dedicated to medical robotics at ISIR, groups approx. 8 permanent staff and 15 PhD students and postdocs.
  • Contact: Send an application in a single pdf file with short CV, list of publications and potential references to: Guillaume Morel, Prof. Sorbonne Université,  guillaume.morel@sorbonne-universite.fr ; Brahim Tamadazte, CNRS scientist, tamadazte@isir.upmc.fr

Job offers

Context: The Greta platform allows controlling the multimodal behaviour of an Embodied Conversational Agent (ECA) in real time. The platform is modular, but its linear structure does not allow it to take into account phenomena that occur frequently during an interaction, such as interruptions, socio-emotional reactions of interlocutors, or adaptation mechanisms. Nor can it take advantage of the incremental structure of the acoustic and visual signal processing modules and of the dialogue modules. We therefore aim to transform the modular structure of the Greta platform into an incremental architecture by modifying its main modules (planning of intentions, of behaviours and of their realisation).

The Greta platform is developed at the ISIR laboratory, Sorbonne University. This new platform is the basis for the ISIR federative project “avatar of the future”, which aims to model the interactive loop between a human and an ECA using recurrent neural networks. Different levels of adaptation must be taken into account to obtain this interactive loop, and the virtual agent must be able to adapt at all these levels in real time. The new platform will also be used in our various national and European research projects. The Greta platform is available on GitHub under the GPL license.

Missions: The candidate will be involved in the restructuring of the architecture and the code of the Embodied Conversational Agent system. This restructuring concerns the renewal of two specific modules for the planning and realisation of multimodal behaviours. The intention planning module will be based on an incremental dialogue model. Such an architecture also relies on the incremental structure of the acoustic and visual signal processing modules.

It is also necessary to develop an incremental behaviour generation model allowing the agent to update its behaviours. The Greta virtual agent behaviour planner will then be able to incrementally compute the agent’s multimodal behaviours. This task requires modifying the behaviour planner module so that it no longer computes the agent’s behaviour at the level of the complete sentence but at the level of the ‘chunk’, a piece of a sentence generally consisting of a content word surrounded by function words. This requires redesigning the planning of behaviours and of their synchronisation.

At the level of the behaviour realizer module, particular attention will be paid to the interaction between verbal and non-verbal production, which involves sequential and simultaneous management of signals of different modalities, as well as coarticulation between them.
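The shift from sentence-level to chunk-level planning described above can be sketched as a small incremental pipeline. The class and method names below are hypothetical illustrations, not the actual Greta API:

```python
# Sketch of chunk-level incremental behaviour planning (all names are
# hypothetical, not the real Greta modules). Instead of waiting for the
# full sentence, each chunk is planned in its own cycle, so output can
# start early and pending chunks can be dropped on interruption.

from collections import deque

class IncrementalBehaviourPlanner:
    def __init__(self):
        self.pending = deque()   # chunks received but not yet planned
        self.planned = []        # (signal, chunk) pairs for the realizer

    def push_chunk(self, chunk: str):
        # A chunk is roughly one content word plus its function words.
        self.pending.append(chunk)

    def tick(self):
        """One planning cycle: plan at most one chunk."""
        if self.pending:
            chunk = self.pending.popleft()
            # Placeholder rule: a real planner would select gestures,
            # gaze and facial signals synchronized with this chunk.
            self.planned.append(("beat_gesture", chunk))

    def interrupt(self):
        """The user barged in: discard chunks not yet planned."""
        self.pending.clear()

planner = IncrementalBehaviourPlanner()
for c in ["well", "the robot", "moves"]:
    planner.push_chunk(c)
planner.tick()          # plans "well" only
planner.interrupt()     # "the robot" and "moves" are dropped
print(planner.planned)  # [('beat_gesture', 'well')]
```

The key design point is that planning and input are decoupled: behaviours already planned survive an interruption, while unplanned chunks are simply discarded, which a sentence-level planner cannot do.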

Skills required:

  • System knowledge and experience: Windows
  • Programming knowledge and experience: Software engineering, multi-threading, software interoperability
  • Programming languages mastered: Java (advanced level), C#, C++, Python
  • Experience with a real-time 3D rendering engine: OpenGL, OGRE, Unity3D, Unreal Engine
  • Experience with code versioning tools: Git, SVN
  • Experience in modelling: human-agent interaction platform, dialogue model, etc.

General information:

  • Workplace: PARIS
  • Type of contract: Fixed-term contract for IT engineer
  • Duration of the contract: 12 months
  • Expected date of employment: 1 May 2021
  • Working hours: Full time
  • Remuneration: between 2100 € and 3000 € gross per month depending on experience
  • Desired level of education: Engineer
  • Desired experience: 1 to 4 years


Applications can be made through the CNRS job portal: https://emploi.cnrs.fr/Gestion/Offre/Default.aspx?Ref=UMR7222-CATPEL-001

You can contact Catherine Pelachaud (catherine.pelachaud@upmc.fr), Catherine Achard (catherine.achard@sorbonne-universite.fr) and Philippe Gauthier (philippe.gauthier@sorbonne-universite.fr) directly for any further information.


Download the offer