
Opportunities

PhD offers

Subject: Physics Based Deep Learning for Modeling Complex Dynamics. Applications to Climate

Context:

Deep learning is beginning to be explored for scientific computing in domains traditionally dominated by first-principles physics models, such as earth science, climate science and biological science. It is particularly promising for problems involving processes that are not completely understood, or that are too computationally expensive to solve by running the physics-based model. The direct use of pure machine learning approaches has, however, met with limited success for scientific computing. Researchers from different communities have therefore started to explore (i) how to integrate physics knowledge and data, and (ii) how to push the limits of current ML methods and theory; two challenging directions. We consider here deep learning approaches for modeling the complex dynamical systems that characterize natural phenomena, a recent and fast-growing research topic (Willard et al. 2020, Thuerey et al. 2021). Motivating problems and applications will come from climate science (de Bezenac et al. 2018, Ayed et al. 2020).

Scientific Objective:

The global objective is the development of new models leveraging observation or simulation data for modeling the complex spatio-temporal dynamics that characterize physical phenomena, such as those underlying earth-science and climate observations. The classical modeling tools for such dynamics in physics and applied mathematics rely on partial differential equations (PDEs). Despite their successes in different areas, current ML-based approaches are notably insufficient for such problems. Using ML for physics raises challenging new problems and requires rethinking fundamental ML ideas.

Research directions:

  • Hybrid systems – Integrating Physics and Deep Learning,
  • Domain generalization for deep learning as dynamical models,
  • Learning at Multiple Scales.

Required Profile: Master's degree in computer science or applied mathematics, or an engineering school degree. Background and experience in machine learning. Good technical programming skills.

General information:

  • Thesis director: Patrick Gallinari
  • Thesis co-supervisors: M. Levy and S. Thiria of LOCEAN laboratory
  • Location: ISIR (Institut des Systèmes Intelligents et de Robotique), Campus Pierre et Marie Curie, 4 place Jussieu, 75005 Paris.
  • Start date of the thesis: October / November, 2022
  • The research topic is open and, depending on the candidate’s profile, it may be more theoretically oriented or more application-oriented.

Contact person:

  • Patrick Gallinari
  • Email : patrick.gallinari(at)sorbonne-universite.fr
  • Send your application by email, with [subject of the thesis] in the subject line, a CV, a motivation letter, your Master's grades, and letters of recommendation if possible.

Download this offer

Subject: Computational models to predict user trajectories in dynamic environments

Context:

Reaching an object (e.g., selecting a 3D object in VR or an icon on the desktop) is one of the most fundamental tasks in Human-Computer Interaction (HCI). In HCI, Fitts' law has been used extensively to predict pointing time from the distance to and the size of the target (object). It has been used to compare different devices, as well as to develop advanced interaction techniques. However, Fitts' law remains a behavioural model that provides little explanation of the underlying cognitive processes, and thus it does not explain or predict how users adapt their behaviour in dynamic environments, e.g., tasks involving external forces or dynamic mappings between physical and virtual movements. A model that predicts human-produced trajectories in dynamic environments would inform the design of many non-static input-output mappings (e.g., adaptive mice, VR techniques that manipulate the mapping) by allowing counterfactual reasoning.
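For reference, a minimal Python sketch of one common formulation of Fitts' law (the Shannon formulation); the coefficients a and b below are illustrative placeholders that would normally be fitted to pointing data for a given device and user population:

```python
import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Predict pointing time (s) with the Shannon formulation of Fitts' law.

    distance: distance to the target centre; width: target width (same unit).
    a, b: empirically fitted coefficients (placeholder values here).
    """
    index_of_difficulty = math.log2(distance / width + 1.0)  # in bits
    return a + b * index_of_difficulty

# Example: a 2 cm target located 16 cm away
print(fitts_movement_time(16.0, 2.0))  # 0.1 + 0.15 * log2(9) ≈ 0.58 s
```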

Project description:

In this thesis, we wish to provide a comprehensive view of how people produce and adapt their trajectories in a new and/or dynamic environment. We embrace a model-based view of action, where the human policy builds on the predictions of an internal world model of the task to be accomplished, in line with the optimal control framework pioneered by Todorov. In this classical framework, the internal model is static and identified beforehand. We hypothesise that, rather than being static, this internal model is continually kept up to date in light of conflicting predictions and sensory information. Modeling this learning process and integrating it into the optimal control framework is the open problem that we address. To achieve this, we will adapt Todorov's classical model by having the internal model inferred. This inference will be achieved by progressively updating the original, outdated internal model based on an error signal between predicted and observed outcomes. The rates of updating (how often the model parameters are updated and by how much) will be determined from empirical data.
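Purely as an illustrative sketch of this idea (not the thesis method): an outdated linear internal model can be corrected progressively from the error between predicted and observed outcomes, with a learning rate playing the role of the update rate to be estimated from data. The dynamics, dimensions and rate below are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# "True" task dynamics (unknown to the agent): x_{t+1} = A_true x_t + B_true u_t
A_true = np.array([[1.0, 0.1], [0.0, 0.9]])
B_true = np.array([[0.0], [0.12]])

# Outdated internal model the agent starts with
A_hat = np.array([[1.0, 0.1], [0.0, 1.0]])
B_hat = np.array([[0.0], [0.08]])

eta = 0.05                       # update rate, to be estimated from empirical data
x = np.zeros((2, 1))

for t in range(200):
    u = rng.normal(size=(1, 1))                  # exploratory motor command
    x_pred = A_hat @ x + B_hat @ u               # prediction of the internal model
    x_next = A_true @ x + B_true @ u             # observed outcome
    error = x_next - x_pred                      # sensory prediction error
    # Gradient-style correction of the internal model from the error signal
    A_hat += eta * error @ x.T
    B_hat += eta * error @ u.T
    x = x_next

print(np.round(A_hat, 3), np.round(B_hat, 3))    # drifts toward the true dynamics
```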

Scientific Objective:

The goals of this thesis are:

– adapting Todorov’s optimal control model for aimed movements by adding a learning mechanism that updates the internal world model,

– extending that model with feedforward mechanisms, Todorov’s model being purely feedback driven,

– validating and calibrating the new models on empirical data,

– implementing an interaction technique that leverages the new models (demonstrator).

Required Profile: Applicants with a strong academic record in HCI, a field related to motor control, or control theory are encouraged to apply.

Required skills: Interest and/or experience in computational user modeling is required. The ability to conduct controlled experiments, as well as the ability to design VR interactions, is appreciated.

 

General information:

  • Supervisor: Gilles Bailly
  • Location: ISIR (Institut des Systèmes Intelligents et de Robotique), Campus Pierre et Marie Curie, 4 place Jussieu, 75005 Paris.

Contact person: 

  • Julien Gori
  • Tel : +33 1 44 27 51 21
  • Email : gori@isir.upmc.fr
  • Send your application by email, with [subject of the thesis] in the subject line, a CV and a  cover letter.
  • Application deadline: 15/05/2022

Download the thesis offer

Job offers

Post-doc : Learning in robotics, with application to grasping

Context:

The FET Proactive DREAM project (http://dream.isir.upmc.fr/) defined an approach to adaptive robotics based on open-ended learning. The main goal is to allow a robot to learn without requiring careful preparation by an expert. This approach raises many challenges, notably learning with sparse rewards, representation learning (for states and actions), model learning and exploitation, transfer learning, meta-learning and generalization. These topics are considered in simulation, but also on real robotic setups, notably in the context of grasping.

Missions:

The position aims to contribute to these topics in the context of several European projects, in particular SoftManBot, Corsmal, INDEX and Learn2Grasp. Building on the research team's previous work, the proposed approaches need to be easy to adapt to different robotic platforms and will thus be applied to several robots (a Panda arm from Franka Emika, Baxter, PR2 or TIAGO, for instance).

Required profile:

Candidates for the position must have a PhD degree in machine learning or a related field in which robotics applications (either simulated or real) have been considered.

Required skills:

An excellent background in machine learning is expected, as well as experience in robotics. Excellent programming skills in Python are also expected.

General Information: 

  • Position Type: Post-doctoral researcher
  • Contract start date: October 2022
  • Contract duration: 24 months
  • Level of education required: PhD
  • Remuneration: according to experience
  • Location: ISIR (Institut des Systèmes Intelligents et de Robotique), Campus Pierre et Marie Curie, 4 place Jussieu, 75005 Paris.

Contact person: 

  • Stephane Doncieux
  • stephane.doncieux(at)sorbonne-universite.fr
  • Send your application by email, with a CV and a cover letter.

Download this offer

Post-doctoral researcher position in robotic manipulation

Context:

ISIR has a long history in dexterous robotic manipulation, dating back to the European project HANDLE (2009-2013), which it coordinated. ISIR is currently a partner in three European projects on this subject: CORSMAL, INDEX and SOFTMANBOT (https://www.isir.upmc.fr/projets/?team=syroco&project-type=all-projects-fr).

The “Robotic Manipulation” project team within the SYROCO team of ISIR, which participates in these projects, is composed of about ten researchers (3 professors, 1 associate professor, 5 PhD students, 1 engineer and interns). The developed solutions are integrated on a Shadow hand, an Allegro hand, a Franka Emika Panda robot or a Baxter robot.

Missions:

ISIR is recruiting a postdoctoral researcher to be involved in the development and implementation of solutions that will contribute to achieving the EU projects’ objectives. He/she will help coordinate the research activities carried out within the projects by a team of doctoral students, participate in the projects’ meetings, contribute to the scientific developments, and support the integration and validation of the resulting solutions. The postdoctoral researcher will also conduct his/her own research. In particular, the team wishes to develop its activities on the bimanual manipulation of deformable objects.

Required profile: Candidates for the position must have a PhD degree in robotics or related field.

Required skills: Expertise in perception (tactile sensing would be an asset), optimization, planning and control, as well as good experience implementing under ROS.

General Information: 

  • Position Type: Post-doctoral researcher
  • Contract start date: as soon as possible
  • Contract duration: 1 year renewable
  • Level of education required: PhD in robotics or related field
  • Remuneration: according to experience
  • Location: ISIR (Institut des Systèmes Intelligents et de Robotique), Campus Pierre et Marie Curie, 4 place Jussieu, 75005 Paris.

Contact person: 

  • Veronique Perdereau
  • veronique.perdereau(at)sorbonne-universite.fr
  • Send your application by email, with a CV including a complete list of publications, a cover letter, transcripts and degree certificates, and three academic references. Incomplete applications will not be considered.

Download this offer

 

Doctoral, Postdoctoral or research engineer positions for the HCI Sorbonne group (Human Computer Interaction)

Context:

We have multiple postdoctoral and engineering positions in the HCI group (https://hci.isir.upmc.fr) at Sorbonne Université, Paris, France.

Missions:

We are searching for curious minds who want to conduct cutting-edge research at the intersection of HCI with VR, Haptics, Robotics or AI. Possible topics/areas of research are:

  • Novel Interaction techniques in VR,
  • VR and haptics for gaming or training,
  • Computational models for learning, decision making and human performance,
  • AI-based recommendation systems.


Required profile:

For the postdoctoral position, a PhD degree in computer science, HCI or another field related to our research areas is required.

Required skills:

  • strong programming and analytical skills,
  • strong background in at least one of the following areas (HCI, VR, Haptics, Robotics, AI).

More information : 

  • Type of position: Postdoctoral or Research Engineer position
  • Start date: as soon as possible
  • Duration: 1 to 2 years
  • Level of study required: Master 2 (for engineer), PhD (for post-doc)
  • Location: ISIR (Institut des Systèmes Intelligents et de Robotique), Campus Pierre et Marie Curie, 4 place Jussieu, 75005 Paris.

Contacts: 

  • Gilles Bailly and Sinan Haliyo
  • Email : gilles.bailly(at)sorbonne-universite.fr ; sinan.haliyo(at)sorbonne-universite.fr
  • Application: Send your application by email, with a CV and a cover letter.
  • Application deadline: None

Download the offer

Internship offers

Subject: Expressive nonverbal behaviors model

Abstract:

This internship is part of the national ANR project ENHANCER, which aims to develop an embodied conversational agent platform for interaction with healthy persons and with persons suffering from schizophrenia.

Embodied conversational agents can take on a human appearance and can communicate verbally or non-verbally (Lugrin et al., 2021). They can be used as an interface in human-machine interaction by playing multiple roles such as assistant, teacher, guide or companion. They have communication skills, i.e. they can interact with humans through verbal and non-verbal means of communication.

Non-verbal behavior can be characterized by its shape (e.g., a facial expression, a hand shape), its trajectories (linear, sinusoidal), its timing (in correlation with speech) and its manner of execution (speed of movement, acceleration). The latter is referred to as behavior expressivity. Laban annotation (Laban and Ullmann, 1988) describes expressive dance movements along four dimensions (time, weight, space, and flow). Several of these behavioral characteristics are used to develop computational models controlling virtual agents. Laban’s model was implemented in virtual agents (Durupinar et al., 2017). On the other hand, to characterize emotional body movements, Wallbott and Scherer (1986) defined a set of six expressivity parameters, namely: spatial extent, temporal extent, fluidity, power, repetition, and overall activation. These parameters have been implemented to control the dynamic quality of virtual agents’ behavior (Hartmann et al., 2005), and an extension has been proposed by (Huang et al., 2012). Lately, data-driven approaches have been applied to model expressive gaze and gait (Randhavane et al., 2019), facial expressions of emotion (Festl & McDonnell, 2018) and gestures (Neff, 2016).
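As a toy illustration of how such an expressivity parameter set might be represented and used to modulate a gesture (the six parameters follow Wallbott and Scherer as listed above; the scaling rules are simplified assumptions, not the models of Hartmann et al. or Huang et al.):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Expressivity:
    spatial_extent: float = 0.0      # amplitude of the movement, in [-1, 1]
    temporal_extent: float = 0.0     # duration / speed of the movement
    fluidity: float = 0.0            # smoothness between consecutive gestures
    power: float = 0.0               # acceleration / dynamism
    repetition: float = 0.0          # tendency to repeat strokes
    overall_activation: float = 0.0  # global quantity of behavior

def modulate_gesture(trajectory: np.ndarray, expr: Expressivity) -> np.ndarray:
    """Toy modulation: scale the amplitude and resample the duration of a 1-D wrist path."""
    amplified = trajectory * (1.0 + 0.5 * expr.spatial_extent)
    # Longer (slower) gesture for positive temporal_extent, shorter otherwise (assumed convention)
    n_out = max(2, int(len(trajectory) * (1.0 + 0.5 * expr.temporal_extent)))
    t_in = np.linspace(0.0, 1.0, len(trajectory))
    t_out = np.linspace(0.0, 1.0, n_out)
    return np.interp(t_out, t_in, amplified)

gesture = np.sin(np.linspace(0, np.pi, 50))   # dummy wrist trajectory
calm = modulate_gesture(gesture, Expressivity(spatial_extent=-0.5, temporal_extent=0.5))
lively = modulate_gesture(gesture, Expressivity(spatial_extent=0.8, temporal_extent=-0.4))
print(len(calm), calm.max(), len(lively), lively.max())
```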

Internship Objectives:

The objective of the internship is to manipulate the behavior expressivity of the agent. The agent will be able to perform non-verbal behaviors with different expressivities during the whole interaction. Expressivity acts on the dynamics and amplitude of the behaviors as well as on their number of occurrences. It will allow us to create agents performing few behaviors with low expressivity, or more behaviors with higher expressivity. To this aim, several steps are foreseen:

  • expand the current behavior expressivity model, in which 6 parameters are implemented (Huang & Pelachaud, 2012), so that it acts globally over the whole interaction or over a specific time span,
  • make use of the Emilya database of expressive movements (Fourati & Pelachaud, 2016), which contains motion-capture data of 11 persons performing 7 actions with 8 emotions, to characterize the values of the behavior expressivity parameters,
  • evaluate the model through objective measures and through experimental study to measure the naturalness and perceived expressivity of the agent’s behavior.

Required Profile: Master's student in engineering or computer science, or engineering school student

Required skills: Java, 3D animation

Applications: send a CV, grades, names of referees and a motivation letter (in pdf format) to catherine.pelachaud(at)upmc.fr

More information: 

  • Supervisor: Catherine Pelachaud
  • Starting date of the internship: March-April 2023
  • Duration of the internship: 6 months
  • Desired level of study: Master 2
  • Host laboratory : ISIR (Institut des Systèmes Intelligents et de Robotique), Campus Pierre et Marie Curie, 4 place Jussieu, 75005 Paris.

Contact person:

  • First name Last name : Catherine Pelachaud
  • Email : catherine.pelachaud(at)upmc.fr
  • Send your application by email, with [internship subject] in the subject line, your CV, your grades from M1 and M2, the names of your referees and a cover letter.
  • Application deadline: January 2023

Download this internship offer

Subject: Visual servoed auto-focusing in automated cell manipulation

Abstract:

Intracytoplasmic sperm injection (ICSI) is an assisted reproductive technology in which a single sperm cell is injected into an oocyte using a sharp glass needle [1]. Automated micromanipulation systems have attracted growing interest from researchers and advanced steadily, particularly over the last 10 years [4]. The main motivation for developing automated micromanipulation systems is to eliminate human error in manual operations and to increase system efficiency. Auto-focusing on an oocyte before the micro-injection procedure is an essential operation in automated biological cell manipulation tasks. As shown in Figure 1, the holding pipette (on the left) should be aligned with the oocyte prior to injection.

Little effort has been made toward auto-focusing on the micro-objects used during the ICSI procedure [2], apart from analyses of conventional focus measurement algorithms, such as the Brenner gradient or the energy of gradient, for general micromanipulation applications [3]. This project aims to automatically focus on the holding pipette and the oocyte. This operation can benefit from the computational power and robustness of deep learning techniques. The obtained positions will then be used to drive the motorised stages in a visual-servoing manner, where one stage is placed underneath the Petri dish and the other is directly linked to the holding pipette. Overall, the project will investigate the potential of deep learning techniques to precisely auto-focus on the holding pipette and oocyte under various environmental disturbances, to unlock and contribute to a fully automated ICSI system.
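For illustration, a conventional focus measure such as the Brenner gradient mentioned above can be computed per frame and maximised while sweeping the focal axis; a minimal sketch, in which the image stack and the stage interface are placeholders:

```python
import numpy as np

def brenner_focus_measure(image: np.ndarray) -> float:
    """Brenner gradient: sum of squared intensity differences two pixels apart.

    A sharper (better focused) image yields a larger score.
    """
    img = image.astype(np.float64)
    diff = img[:, 2:] - img[:, :-2]
    return float(np.sum(diff ** 2))

def best_focus_index(image_stack) -> int:
    """Index of the sharpest image in a z-stack acquired while sweeping
    the motorised stage along the focal axis."""
    scores = [brenner_focus_measure(img) for img in image_stack]
    return int(np.argmax(scores))

# Usage sketch with a synthetic stack (in practice: frames from the microscope camera)
stack = [np.random.rand(480, 640) for _ in range(11)]
print(best_focus_index(stack))
```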

Internship Objectives:

The aim of this project is to precisely auto-focus on the oocyte and the holding pipette in the ICSI operation using deep learning.

The main objectives of the projects are as follows:

  • Collection and labelling of training data for oocyte and holding pipette,
  • Development of a deep learning based auto-focusing framework for both holding pipette and oocyte,
  • Driving the respective motorised stages along the focal axis for both the holding pipette and the oocyte,
  • Design of experiment to test the developed algorithm and conduct statistical error analysis,
  • Full experimental auto-focusing demonstration for both holding pipette and oocyte.

More information: 

  • Project language is ENGLISH
  • Supervisors: Ferhat Sadak and Sinan Haliyo
  • Starting date of the internship: February 2023
  • Duration of the internship: 4 to 6 months
  • Desired level of study: Master 2 or equivalent
  • Host laboratory: ISIR (Institut des Systèmes Intelligents et de Robotique), Campus Pierre et Marie Curie, 4 place Jussieu, 75005 Paris.

Contact person

  • Ferhat Sadak and Sinan Haliyo
  • Email: sadak(at)isir.upmc.fr and sinan.haliyo(at)isir.upmc.fr
  • You are required to contact the supervisors BEFORE applying for the project. Send your application by email, with [internship subject] in the subject line, a CV and a cover letter.

Download the internship offer

Subject: Out-of-plane displacement quantification for a 6DoF optical micro-robot using deep learning

Abstract:

Micro-objects with dimensions ranging from 50 nm to 10 µm can be manipulated using an optical tweezer (OT) [1]. An OT can manipulate cells or other biological objects directly. Because direct manipulation with the OT can cause cell damage, it is critical to use indirect manipulation to avoid laser-beam damage. Indirect manipulation can be achieved by using a microrobot as an effector. Automation is worth developing to reduce the workload of human operators by eliminating the manual manipulation of multiple microspheres or microrobots. To that end, localization and automatic trapping of multiple microrobots in the OT are required. Many existing works focus on planning and control for automated cell manipulation [5], such as a decision-theoretic path-planning approach for real-time path planning [2] and graph-search-based algorithms for automated cell transport in microfluidic chambers [3]. However, no attention has been paid to real-time estimation of the microrobots’ out-of-plane displacement. This information is critical for the automation of cell handling procedures, as it will improve efficiency and control over the process. The experimental setup for a 6-DoF optical microrobot used to estimate the out-of-plane displacement is shown in Figure 1.

Overall, the aim of this project is to precisely predict the out-of-plane displacement of a 6-DoF microrobot using deep learning.
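Purely as an illustrative sketch of the kind of deep-learning regressor that could be trained for this task (the architecture, input size and output parameterisation are assumptions, not the project's specification):

```python
import torch
import torch.nn as nn

class ZDisplacementRegressor(nn.Module):
    """Toy CNN that regresses the out-of-plane (z) displacement of the
    microrobot from a grayscale microscope image crop."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)  # predicted z displacement (e.g., in micrometres)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = ZDisplacementRegressor()
dummy_batch = torch.randn(8, 1, 128, 128)   # 8 grayscale crops
print(model(dummy_batch).shape)              # torch.Size([8, 1])
# Training would minimise an MSE loss against displacements measured on labelled data.
```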

More information: 

  • Project language is ENGLISH
  • Supervisors: Ferhat Sadak and Sinan Haliyo
  • Starting date of the internship: February 2023
  • Duration of the internship: 4 to 6 months
  • Desired level of study: Master 2 or equivalent
  • Host laboratory: ISIR (Institut des Systèmes Intelligents et de Robotique), Campus Pierre et Marie Curie, 4 place Jussieu, 75005 Paris.

Contact person

  • Ferhat Sadak and Sinan Haliyo
  • Email: sadak(at)isir.upmc.fr and sinan.haliyo(at)isir.upmc.fr
  • You are required to contact the supervisors BEFORE applying for the project. Send your application by email, with [internship subject] in the subject line, a CV and a cover letter.

Download the internship offer

Subject: Learning to grasp in robotics

Abstract:

Learning to grasp in robotics has received increasing attention for several years, justified by the high scientific and practical stakes associated with it [1]. This problem is quite simple in a deterministic environment: it amounts to controlling a manipulator arm to reach precise positions and to open or close the gripper. But such approaches are limited to heavily constrained scenarios. Despite the efforts of major academic and industrial players, performing object grasping in an uncontrolled environment is still an unsolved task [2].

First, it is a hard exploration problem: it is very difficult to make the robot produce successful grasps until an effective controller is available – in other words, until the problem is solved. Second, experiments with real robots are expensive, slow, and subject to many integration and maintenance problems. Training policies in simulation is therefore preferable, but the reality gap often makes the generated solutions inefficient on a real robot.

The most common way to tackle this problem is to simplify it by considering it as a grasp pose estimation task. This was first done in the form of in-plane position predictions, limiting the policies to top-down movements [3], and more recently by doing 6-DoF pose estimation (gripper position and orientation) [4]. However, these methods impose strong assumptions on the gripper structure, limiting the related works to parallel grippers or suction grippers.

Novelty search based algorithms [5] are evolutionary methods designed to tackle hard exploration problems by looking for diverse solutions instead of the single optimal one from a fitness perspective. The large grasp coverage obtained by such approaches [6] suggests that they can be exploited to perform grasp position detection, which can then be combined with planning or reinforcement learning algorithms. These methods have the advantage of being platform-agnostic: the same hyperparameters work for multiple robots and grippers, without constraining the pose estimation with strong assumptions on the end effector’s shape.
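For context, novelty search scores each candidate by its distance to previously encountered behaviours rather than by task fitness. A minimal sketch of this novelty score, where the behaviour descriptors (e.g., final gripper poses) and the value of k are illustrative:

```python
import numpy as np

def novelty_score(descriptor, archive, k=15):
    """Novelty of a behaviour descriptor = mean distance to its k nearest
    neighbours in the archive of previously encountered descriptors."""
    if len(archive) == 0:
        return float("inf")
    dists = np.linalg.norm(np.asarray(archive) - descriptor, axis=1)
    k = min(k, len(dists))
    return float(np.mean(np.sort(dists)[:k]))

# Usage sketch: 3-D descriptors standing in for, e.g., final gripper positions
archive = [np.random.rand(3) for _ in range(200)]
candidate = np.random.rand(3)
print(novelty_score(candidate, archive))
```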

Internship Objectives:

The goal of this internship is to leverage some of the team’s previous works [6][7] to generate databases of grasping examples for multiple robots, grippers and objects. Using these databases, one can extract key features to identify relevant grasp poses. A comparative study with other state-of-the-art approaches involving various types of models (DLs, CNNs, GPs, GANs, etc.) must eventually be carried out.

This internship may lead to a publication in Computer Vision, Robotics or AI.

Required Profile:

Students with a strong academic background in Artificial Intelligence, Machine Learning or Data Science. Robotics and/or Computer Vision background is a plus.

Skills :

  • Required: Python, data science, machine learning (theory: standard methods, DL, CNN; in practice: AI frameworks (PyTorch, …)), measurement and visualisation (matplotlib, seaborn).
  • Optional: robotics simulators (PyBullet, Isaac Gym, …), computer vision, evolutionary algorithms, high-performance computing (CPU and GPU).

More information:

  • Supervisor: Stéphane Doncieux
  • Duration of the internship : 6 months
  • Desired level of studies: Currently in Master 2, or last year of engineering school
  • Location: ISIR (Institut des Systèmes Intelligents et de Robotique), Campus Pierre et Marie Curie, 4 place Jussieu, 75005 Paris.

Contact person: 

  • Stéphane Doncieux and Johann Huber
  • Email : stephane.doncieux (at) isir.upmc.fr ; johann.huber (at) isir.upmc.fr
  • Send your application by email, with [apprentissage_saisie_objets] in the subject line, a CV and a cover letter. It is strongly recommended to also attach one or more personal projects (GitHub, etc.).

Download the internship offer