PhD offers
Subject: Physics-Based Deep Learning for Modeling Complex Dynamics. Applications to Climate
Context:
Deep Learning is beginning to be explored for scientific computing in domains traditionally dominated by physics models (first principles), such as earth science, climate science, and biological science. It is particularly promising for problems involving processes that are not completely understood, or that are computationally too complex to solve by running the physics-inspired model. The direct use of pure machine learning approaches has, however, met with limited success in scientific computing. Hence, researchers from different communities have started to explore (i) how to integrate physics knowledge and data, and (ii) how to push the limits of current ML methods and theory; two challenging directions. We consider here deep learning approaches for the modeling of complex dynamical systems characterizing natural phenomena, a recent and fast-growing research topic (Willard et al. 2020, Thuerey et al. 2021). Motivating problems and applications will come from climate science (de Bezenac et al. 2018, Ayed et al. 2020).
Scientific Objective:
The global objective is the development of new models leveraging observation or simulation data for the modeling of complex spatio-temporal dynamics characterizing physical phenomena, such as those underlying earth-science and climate observations. The classical modeling tools for such dynamics in physics and applied mathematics rely on partial differential equations (PDEs). Despite their successes in other areas, current ML-based approaches remain insufficient for such problems. Using ML for physics raises challenging new problems and requires rethinking fundamental ML ideas.
Research directions:
- Hybrid systems – integrating physics and deep learning (a minimal sketch of this direction follows the list),
- Domain generalization for deep learning-based dynamical models,
- Learning at multiple scales.
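As a rough illustration of the first direction (a minimal sketch under simplifying assumptions, not the thesis methodology): a known, coarse physics step is combined with a learned neural correction for the unresolved dynamics, and the sum is trained on trajectory data. The diffusion prior, network, and data below are all placeholders.

```python
import torch
import torch.nn as nn

def physics_step(u, nu=0.1, dt=0.01):
    # Known physics prior: explicit 1D diffusion with periodic boundaries.
    lap = torch.roll(u, 1, -1) - 2 * u + torch.roll(u, -1, -1)
    return u + dt * nu * lap

class HybridModel(nn.Module):
    def __init__(self):
        super().__init__()
        # Small convolutional network predicting the residual dynamics.
        self.correction = nn.Sequential(
            nn.Conv1d(1, 16, 5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 1, 5, padding=2),
        )

    def forward(self, u):
        # Hybrid prediction: physics prior plus learned correction.
        return physics_step(u) + self.correction(u.unsqueeze(1)).squeeze(1)

model = HybridModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
u_t = torch.randn(8, 64)    # placeholder batch of states at time t
u_next = physics_step(u_t)  # placeholder targets; real trajectories in practice
opt.zero_grad()
loss = nn.functional.mse_loss(model(u_t), u_next)
loss.backward()
opt.step()
```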
Required Profile: Master's degree in computer science or applied mathematics, or an engineering school degree. Background and experience in machine learning. Good programming skills.
General information:
- Thesis director: Patrick Gallinari
- Thesis co-supervisors: M. Levy and S. Thiria of the LOCEAN laboratory
- Location: ISIR (Institut des Systèmes Intelligents et de Robotique), Campus Pierre et Marie Curie, 4 place Jussieu, 75005 Paris.
- Start date of the thesis: October/November 2022
- The research topic is open and, depending on the candidate’s profile, it may be more theoretically oriented or more application-oriented.
Contact person:
- Patrick Gallinari
- Email : patrick.gallinari(at)sorbonne-universite.fr
- Send your application by email, with [subject of the thesis] in the subject line, including a CV, a letter of motivation, your Master's transcripts, and letters of recommendation if possible.
Subject: Computational models to predict user trajectories in dynamic environments
Context:
Reaching an object (e.g., selecting a 3D object in VR or an icon on the desktop) is one of the most fundamental tasks in Human-Computer Interaction (HCI). In HCI, Fitts' law has been extensively used to predict pointing time as a function of the distance and size of the target (object). It has been used to compare different devices, as well as to develop advanced interaction techniques. However, Fitts' law remains a behavioural model that provides little explanation of the underlying cognitive processes, and it therefore does not explain or predict how users adapt their behaviour in dynamic environments, e.g., tasks involving external forces or dynamic mappings between physical and virtual movements. A model that could predict human-produced trajectories in dynamic environments would inform the design of many non-static input-output mappings (e.g., adaptive mice, VR techniques that manipulate the mapping) by allowing counterfactual reasoning.
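For reference, the most common (Shannon) formulation of Fitts' law predicts the movement time MT to acquire a target of width W at distance D as MT = a + b log2(D/W + 1), where a and b are empirically fitted constants and log2(D/W + 1) is the task's index of difficulty.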
Project description:
In this thesis, we wish to provide a comprehensive view of how people produce and adapt their trajectories in a new and/or dynamic environment. We embrace a model-based view of action, in which the human policy builds on the predictions of an internal world model of the task to be accomplished, in line with the optimal control framework pioneered by Todorov. In this classical framework, the internal model is static and identified beforehand. We hypothesise that, rather than being static, this internal model is continually kept up to date in light of conflicts between predictions and sensory information. Modeling this learning process and integrating it into the optimal control framework is the open problem that we address. To achieve this, we will adapt Todorov's classical model by inferring the internal model online: the original, outdated internal model will be progressively updated based on an error signal between predicted and observed outcomes. The rates of updating (how often the model parameters are updated, and by how much) will be determined from empirical data.
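To make the intended adaptation loop concrete, here is a toy numerical sketch (our simplification, not the model to be developed): a 1D reaching task in which the environment applies an unknown gain to motor commands, and the internal forward model's gain estimate is updated from the sensory prediction error. All quantities and the update rule are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

true_gain = 1.5   # actual input-output mapping (unknown to the controller)
model_gain = 1.0  # controller's internal estimate, initially outdated
eta = 0.1         # learning rate: "by how much" the model is updated
target = 10.0
x = 0.0           # cursor position

for step in range(50):
    # Policy built on the internal model: pick the command that the
    # internal model predicts will close the remaining distance.
    u = (target - x) / model_gain
    predicted = x + model_gain * u
    # The environment applies the true (different) mapping, plus motor noise.
    x = x + true_gain * u + rng.normal(0.0, 0.05)
    # Update the internal model from the sensory prediction error.
    error = x - predicted
    model_gain += eta * error * u / (u * u + 1e-9)  # normalized LMS-style step

print(f"estimated gain after adaptation: {model_gain:.3f} (true: {true_gain})")
```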
Scientific Objective:
The goals of this thesis are:
– adapting Todorov’s optimal control model for aimed movements by adding a learning mechanism that updates the internal world model,
– extending that model with feedforward mechanisms (Todorov's model being purely feedback-driven),
– validating and calibrating the new models on empirical data,
– implementing an interaction technique that leverages the new models (demonstrator).
Required Profile: Applicants with a strong academic record in HCI, a field related to motor control, or control theory are encouraged to apply.
Required skills: Interest and/or experience in computational user modeling is required. The ability to conduct controlled experiments and to design VR interactions is appreciated.
General information:
- Supervisor: Gilles Bailly
- Location: ISIR (Institut des Systèmes Intelligents et de Robotique), Campus Pierre et Marie Curie, 4 place Jussieu, 75005 Paris.
Contact person:
- Julien Gori
- Tel : +33 1 44 27 51 21
- Email : gori@isir.upmc.fr
- Send your application by email, with [subject of the thesis] in the subject line, a CV and a cover letter.
Job offers
Section 61 – Computer Engineering, Automation and Signal Processing
Research:
The successful candidate will conduct research on the management and fusion of massive, large-scale information, either at ISIR, with a view to anchoring it in robotic perception methods, or at LIB, for the optimization of MRI image analysis for personalized medicine. At ISIR, the profile sought concerns all teams of the laboratory and covers perception in robotics in the broad sense, whether visual, auditory, haptic or multi-modal. ISIR particularly seeks to strengthen one of the following themes: perception to enrich interaction, perception at the heart of control (sensor-based control), or learning with rich perception (end-to-end methods). Multi-disciplinary approaches are welcome.
Teaching:
The recruited person will teach across all levels of the curriculum, from L1 to M2. The “Intelligent Systems Engineering” (ISI) track of the Automation and Robotics (AR) field of the Engineering Sciences (SDI) Master's department offers training centered on four disciplinary pillars: robotics, AI/ML, information processing and computer science.
Research contact for ISIR: Guillaume MOREL, director of ISIR – guillaume.morel(at)sorbonne-universite.fr
Section 27 – Computer Science
Research:
The position is open to all fields related to machine learning. The successful candidate will join one of the laboratories ISIR, LCQB, LIB or LIMICS, according to his/her research themes, and/or will carry out projects involving several host laboratories within SCAI (Sorbonne Center for Artificial Intelligence). These laboratories handle large databases in medical imaging, biology, health and robotics. He/she will be able to coordinate national and international collaborative programs. Past participation in multidisciplinary projects will be appreciated. ISIR is looking for someone who can strengthen its research at the interface between learning and robotics.
Teaching:
The person recruited will teach across all levels of the curriculum, from L1 to M2. He/she will contribute significantly to the teaching of the Bachelor of Computer Science, whose needs cover the whole discipline (algorithms; programming, especially object-oriented, concurrent, functional and web; discrete mathematics; data structures; systems; architecture; networks; compilation; databases…).
Research contact for ISIR: Guillaume MOREL, director of ISIR – guillaume.morel(at)sorbonne-universite.fr
Post-doc : Learning in robotics, with application to grasping
Context:
The FET Proactive DREAM project (http://dream.isir.upmc.fr/) defined an approach to adaptive robotics based on open-ended learning. The main goal is to allow a robot to learn without requiring careful preparation by an expert. This approach raises many challenges, notably learning with sparse rewards, representation learning (for states and actions), model learning and exploitation, transfer learning, meta-learning and generalization. These topics are considered in simulation, but also on real robotic setups, notably in the context of grasping.
Missions:
The position will contribute to these topics in the context of several European projects, in particular SoftManBot, Corsmal, INDEX and Learn2Grasp. Building on the team's previous work, the proposed approaches must be easy to adapt to different robotic platforms and will thus be applied to several robots (the Panda arm from Franka-Emika, Baxter, PR2 or TIAGO, for instance).
Required profile:
Candidates must hold a PhD degree in machine learning or a related field in which robotics applications (either simulated or real) have been considered.
Required skills:
An excellent background in machine learning is expected, as well as experience in robotics and excellent programming skills in Python.
General Information:
- Position Type: Post-doctoral researcher
- Contract start date: October 2022
- Contract duration: 24 months
- Level of education required: PhD
- Remuneration: according to experience
- Location: ISIR (Institut des Systèmes Intelligents et de Robotique), Campus Pierre et Marie Curie, 4 place Jussieu, 75005 Paris.
Contact person:
- Stephane Doncieux
- stephane.doncieux(at)sorbonne-universite.fr
- Send your application by email, with a CV and a cover letter.
Doctoral, Postdoctoral or research engineer positions for the HCI Sorbonne group (Human Computer Interaction)
Context:
We have multiple postdoctoral and engineering positions in the HCI group (https://hci.isir.upmc.fr) at Sorbonne Université, Paris, France.
Missions:
We are searching for curious minds who want to conduct cutting-edge research at the intersection of HCI with VR, Haptics, Robotics or AI. Possible topics/areas of research are:
- Novel Interaction techniques in VR,
- VR and haptics for gaming or training,
- Computational models for learning, decision making and human performance,
- AI-based recommendation systems.
Some of our previous work in these areas:
- CoVR (UIST21): https://dl.acm.org/doi/10.1145/3379337.3415891
- AI-based Recommendation systems (CSCW 21): https://dl.acm.org/doi/abs/10.1145/3476068?sid=SCITRUS
- Adapting UIs with Reinforcement Learning (CHI 21): https://dl.acm.org/doi/abs/10.1145/3411764.3445497
- Mixed Control of Robotic systems (CHI 20): https://dl.acm.org/doi/10.1145/3313831.3376795
Required profile:
For the postdoctoral position, a PhD degree in computer science, HCI or another field related to our research areas is required.
Required skills:
- strong programming and analytical skills,
- strong background in at least one of the following areas (HCI, VR, Haptics, Robotics, AI).
More information :
- Type of position: Postdoctoral or Research Engineer position
- Start date: as soon as possible
- Duration: 1 to 2 years
- Level of study required: Master 2 (for engineer), PhD (for post-doc)
- Location: ISIR (Institut des Systèmes Intelligents et de Robotique), Campus Pierre et Marie Curie, 4 place Jussieu, 75005 Paris.
Contacts:
- Gilles Bailly and Sinan Haliyo
- Email : gilles.bailly(at)sorbonne-universite.fr ; sinan.haliyo(at)sorbonne-universite.fr
- Application: Send your application by email, with a CV and a cover letter.
- Application deadline: None
Internship offers
Subject: Learning a stand-up motion with a wheeled biped robot
Abstract:
The objective of this 6-month M2 internship is to apply reinforcement learning to learn a complex stand-up motion with the wheeled biped robot Upkie. First, a robust motion will be learned in a simulator with a “domain randomization” method, and possible physical modifications of the robot will be identified in order to make the stand-up motion simpler. Then, the robot will be built with the identified modifications, and the main objective of the internship will be to design a practical method to efficiently transfer the learned skill to the real robot, and to address the specific challenges of this transfer in the context of bipedal locomotion, i.e., a form of locomotion in which balance is inherently unstable.
Internship Objectives:
Upkie [1] is a wheeled biped robot developed by Stéphane Caron, a researcher at Inria Paris. With only a few actuators and sensors, Upkie can perform complex motions such as balancing itself. However, if it falls, it cannot yet get back up on its wheels. To address this task, we will first apply reinforcement learning in simulation, both to learn a stand-up motion and to identify potential physical modifications of the robot that would make such a motion easier. Then, the robot will be built entirely from scratch with the identified modifications (luckily, there are clear guidelines for that, as Upkie is a “do-it-yourself” robot [1]).
The internship will be divided into three phases:
- Phase 1: Apply state-of-the-art reinforcement learning algorithms to learn a policy capable of a robust stand-up with Upkie in simulation. If needed, identify small modifications of Upkie that would make the stand-up motion easier (e.g., changing the length of the legs).
- Phase 2: Build Upkie from scratch, integrating the small modifications defined in phase 1.
- Phase 3: Design a method to transfer the stand-up skill to the real robot. Besides applying state-of-the-art methods for transfer learning, we will consider the idea of designing a specific curriculum (i.e., a series of learning tasks of increasing difficulty) based on the stand-up skill learned in simulation.
If the internship succeeds in learning a stand-up motion with Upkie, we will submit the result of the work to an international robotics conference.
If the stand-up motion ends up being too difficult, we will simplify the problem (e.g. considering stand-up from an intermediate position, or helping the robot by slightly pulling it vertically with a cable).
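For illustration, the domain randomization of phase 1 can be implemented as an environment wrapper that resamples physical parameters at every episode, so the learned policy is robust to model error; the environment id, parameter names, and simulator hook below are hypothetical.

```python
import numpy as np
import gymnasium as gym

class DomainRandomizationWrapper(gym.Wrapper):
    def __init__(self, env, rng_seed=0):
        super().__init__(env)
        self.rng = np.random.default_rng(rng_seed)

    def reset(self, **kwargs):
        # Resample parameters the real robot is uncertain about.
        params = {
            "ground_friction": self.rng.uniform(0.5, 1.2),
            "torso_mass_scale": self.rng.uniform(0.9, 1.1),
            "actuator_delay_ms": self.rng.uniform(0.0, 10.0),
        }
        # Assumes the underlying simulator exposes such a hook (hypothetical).
        if hasattr(self.env.unwrapped, "set_physics_params"):
            self.env.unwrapped.set_physics_params(params)
        return self.env.reset(**kwargs)

# Hypothetical usage; the real simulation environment id may differ:
# env = DomainRandomizationWrapper(gym.make("UpkieStandUp-v0"))
```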
[1] Upkie (2022), https://hackaday.io/project/185729-upkie-homemade-wheeled-biped-robot
Required Profile:
This research internship requires experience in deep learning, but also an interest in working with a real robot. The first phase of the internship will be entirely done in simulation, but the goal is to quickly start building and testing various approaches on the real robot. The internship is therefore particularly suitable for M2 students who want to have fun working with a real robot and also want to learn various useful skills in robotic design and experimentation. The internship is for you if you have a strong interest in reinforcement learning, as well as an at least equally strong interest in building, repairing, and implementing tests on the robot.
Required skills : Experience in software programming is required (mainly in Python, and a bit in C++).
More information:
- Supervisor: Nicolas Perrin-Gilbert
- Starting date of the internship : March or April 2023
- Duration of the internship : 6 months
- Desired level of study: M2
- Host laboratory : ISIR (Institut des Systèmes Intelligents et de Robotique), Campus Pierre et Marie Curie, 4 place Jussieu, 75005 Paris.
Contact person:
- Nicolas Perrin-Gilbert
- Email : perrin(at)isir.upmc.fr
- Send your CV and a few words about yourself to Nicolas Perrin-Gilbert. Personal projects (GitHub, website, …) and/or a transcript of recent grades are a plus.
- Application deadline: March 31, 2023
Subject : Vision system for the 3D reconstruction of a flexible manipulator
Abstract:
Agricultural robotics is developing rapidly to address environmental and food-sovereignty challenges in the face of labor shortages. As part of a project to develop a flexible manipulator for picking fragile fruit, we seek to reconstruct the shape of the manipulator in real time with an external vision system. This reconstruction is necessary for closed-loop control of the manipulator's shape and to adjust its configuration to the object to be picked.
Internship Objectives:
The internship consists of setting up a fixed vision system for the 3D reconstruction of a flexible manipulator. The manipulator has the shape of a tube actuated by cables and remote motors. The intern will study the existing sensors for perception, select and implement a solution, and finally evaluate its performance. Candidate solutions include one or more RGB-D cameras, a stereo-vision bench, QR codes, or even a cube of tags. A motion-capture system is also available for the calibration and validation of the developed perception system.
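As a rough sketch of the fiducial-tag option (one candidate among those listed, with placeholder camera intrinsics and marker size; note that the cv2.aruco API names differ between OpenCV versions): detect markers placed along the tube and estimate their 3D poses in the camera frame, through which a curve could then be fitted to recover the manipulator's shape.

```python
import cv2
import numpy as np

# Placeholder intrinsics; in practice these come from camera calibration.
camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)
marker_length = 0.02  # marker side length in meters (assumption)

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
frame = cv2.imread("frame.png")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)

if ids is not None:
    # Pose of each detected marker relative to the camera.
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, marker_length, camera_matrix, dist_coeffs)
    for marker_id, t in zip(ids.ravel(), tvecs):
        print(f"marker {marker_id}: position {t.ravel()} (camera frame, m)")
```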
Desired profile: roboticist with an interest in perception and control.
Required skills: programming, image processing, prototyping.
More information:
- Supervisor: Nizar Ouarti, Faïz Ben Amar
- Starting date of the internship: March 2023
- Duration of the internship: 5 or 6 months
- Desired level of study: Master 2 or last year of engineering school
- Host laboratory: ISIR (Institut des Systèmes Intelligents et de Robotique), Campus Pierre et Marie Curie, 4 place Jussieu, 75005 Paris.
Contact person:
- Nizar Ouarti and Faïz Ben Amar
- Email: ouarti(at)isir.upmc.fr, amar(at)isir.upmc.fr
- Send your application by email, with [internship subject] in the subject line, a CV and a cover letter.
Subject: Automatic assessment of infants’ motor and interaction skills for early identification of neurodevelopmental disorders
Abstract:
This internship is part of the TECH-TOYS project (EraPermed 2021), which aims to develop an innovative platform in the form of a baby gym to facilitate early detection of NeuroDevelopmental Disorders (NDDs) in babies. This sensorized platform, which will be deployed at home, will allow the quantitative acquisition of motor and parent-infant interaction data. This data collection is intended to help clinicians identify individuals at risk of developing NDDs and to plan, if necessary, a set of interventions aimed at reducing the impact of the disorders on the individuals' future life. TECH-TOYS is a follow-up to the CareToys project conducted by the Fondazione Stella Maris (Pisa, Italy).
The TECH-TOYS project is a multidisciplinary collaboration between six European institutions which started in 2022 and brings together experts in psychiatry, robotics, artificial intelligence and ethics. ISIR is involved in three major tasks of the project: a) the development of a new data acquisition platform, b) the development of a representation learning model for the identification of digital markers, and c) the development of a precision model. This internship focuses on the latter task. Precision medicine refers to the process of characterizing a pathology by means of genomic or computational techniques, allowing more precise targeting of its different forms. Within the TECH-TOYS project, a digital precision medicine tool will be implemented to assist clinicians during the observation and decision-making steps of the early detection process.
Internship Objectives:
1) Definition of signals of interest and CareToys data preprocessing:
A set of signals of interest will have to be defined based on the clinical and engineering literature while taking into account the nature of the CareToys data. A pre-processing phase followed by a preliminary data analysis phase will also be conducted.
2) Model selection and preliminary tests:
The second part of the internship will consist of testing the previously identified methods by applying them to simplified datasets and/or datasets similar to the CareToys dataset, while taking into account the previously identified constraints and assumptions.
3) Model deployment and evaluation:
The last phase of this internship will focus on the deployment and the validation of the tested models on the CareToys data. This will require annotating a part of the data and defining a set of metrics to validate the models.
Required Profile:
Master 1 Student (or equivalent engineering school program). Major in Robotics / Cognitive Science / Computer Science.
Required skills:
Signal Processing, Programming (Python / C++), English.
Advised skills:
Machine Learning, interest in social signal processing and its clinical applications.
More information:
- Supervisor: Mohamed Chetouani – Professor, ISIR Sorbonne University
- Co-supervisor: Louis Simon – PhD student, ISIR Sorbonne University
- Starting date of the internship: April 2023
- Duration of the internship: 3 to 4 months
- Desired level of study: Master 1 or equivalent
- Host laboratory: ISIR (Institut des Systèmes Intelligents et de Robotique), Campus Pierre et Marie Curie, 4 place Jussieu, 75005 Paris.
Contact person:
- Mohamed Chetouani
- Tel: +33 1 44 27 63 08
- Email : mohamed.chetouani(at)sorbonne-universite.fr ; louis.simon(at)sorbonne-universite.fr
- Send your application by email, with [internship subject] in the subject line, a CV and a cover letter.
- Application deadline: February 28, 2023
Subject: Web interface for the creation of 3D video learning environments
Abstract:
This internship builds on a learning tool developed at our lab, where medical students can learn through 3D videos using a Head-Mounted Display (HMD). In this tool, the learner sees a 3D immersive video at the center, a close-up of the video on the left, and a diagram on the right.
Internship Objectives:
To develop a web-based interactive system where instructors can create the learning environment that the learner visualizes. This will involve three steps:
- Developing a web tool where an instructor can create projects by editing videos (e.g., trimming, segmenting), synchronizing videos with other media (e.g., anatomical images, sound), and defining interactive navigation of the video (e.g., selecting the level of detail of the content),
- Adding an export function so that the project can be imported by an existing application, which displays the project in a VR headset. The intern will have access to this application,
- Lastly, to perform tests with surgeons to iterate on the design of the tool.
Required Profile:
A master's student with an interest in the fields of Human–Computer Interaction (HCI) and/or Computer-Supported Cooperative Work (CSCW). The candidate should have strong web and JavaScript programming skills; previous experience in designing interactive video interfaces will be particularly appreciated.
IMPORTANT: The funding for this project requires the student to be enrolled in a master program within the Alliance Sorbonne Université, which involves the following institutions:
Sorbonne Université, Le Muséum national d’Histoire naturelle (MNHN), L’Insead, L’université de technologie de Compiègne (UTC), Le pôle supérieur d’enseignement artistique Paris Boulogne-Billancourt (PSPBB), France Education International (formerly CIEP).
Required skills :
Web development (HTML, JS, CSS). Additional skills that will be valued include Unity coding skills and proficiency in English and academic writing.
More information:
- Supervisor: Ignacio Avellino
- Starting date of the internship: March or April 2023
- Duration of the internship : 6 months
- Desired level of studies: M1, currently in M2
- Host laboratory : ISIR (Institut des Systèmes Intelligents et de Robotique), Campus Pierre et Marie Curie, 4 place Jussieu, 75005 Paris.
Contact person:
- Ignacio Avellino
- Tel : +33 1 44 27 62 17
- Email : ignacio.avellino@sorbonne-universite.fr
- Send your application by email, with [internship subject] in the subject line, a CV and a cover letter.
Subject : Deep learning for the automated analysis of the child-clinician acoustic interaction in unconstrained clinical contexts
Abstract:
This internship aims at developing an innovative technical solution in the framework of autism treatment for preschool children and, more generally, in clinical contexts.
The project objective is the implementation of a completely non-invasive artificial intelligence system to automate the quantitative analysis of child-clinician interaction in a longitudinal sample, in order to evaluate the impact of interpersonal aspects on treatment response trajectories. The project is a collaboration with the University of Trento (Italy) to develop multidisciplinary approaches bridging clinical and computational aspects. It focuses on validating and setting up an artificial neural network that works on spectrograms of the acoustic interaction, and then on applying algorithms that quantify dyadic interpersonal synchrony, extracting features to be integrated into outcome-prediction models.
Quantitative aspects are fundamental in clinical research to develop evidence-based personalized treatments [1,2]. Indeed, research on efficacy has shown a high level of inter-individual response variability. In this context, predictors, moderators, mediators, and mechanisms of change remain under-investigated, mainly due to the lack of specific instruments that enable quantitative investigation [3,4]. In developmental research, and even more in clinical contexts, observational methods represent the main approach [5]. However, observational techniques are labor-intensive and time-consuming, requiring substantial human work, mainly for the data annotation that enables statistical analysis and computational techniques. These limitations often prevent quantitative investigation in clinical contexts, especially in public health [6]. The project aims to narrow the gap between research and clinical practice by automating data annotation in the acoustic domain. A first AI system has been validated with promising results. The system is based on a siamese architecture trained on acoustic spectrograms to learn similarities between audio segments, in terms of (1) presence of human voice and (2) diarization between child and clinician voices. The project aims at refining the validation phase and at deploying the system on a longitudinal clinical sample of preschool autistic children exposed to early naturalistic developmental behavioral intervention at a research and clinical center of the University of Trento (Italy). The second objective concerns the use of algorithms to measure interpersonal synchrony and prosody of communication, and the evaluation of the impact of interaction features on treatment response [7]. The internship focuses on artificial intelligence and deep learning, in particular convolutional neural networks trained on spectrograms for acoustic computing in unconstrained clinical contexts characterized by high levels of noise and low data quality. State-of-the-art techniques will be employed, including self-attention and transformer architectures in addition to the siamese architecture. Interaction variables will eventually be integrated into linear regression models to predict longitudinal treatment response trajectories.
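For illustration, a minimal sketch of the siamese idea (an assumption-laden toy, not the validated system): a shared CNN encoder maps spectrogram segments to embeddings, trained with a contrastive loss so that segments from the same speaker (child or clinician) end up close together and segments from different speakers end up far apart. All shapes and data below are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, emb_dim=64):
        super().__init__()
        # Small CNN mapping a spectrogram segment to a unit-norm embedding.
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, emb_dim),
        )

    def forward(self, x):  # x: (batch, 1, mel_bins, frames)
        return F.normalize(self.net(x), dim=-1)

def contrastive_loss(z1, z2, same, margin=0.5):
    # same = 1 for pairs from the same speaker, 0 otherwise.
    d = (z1 - z2).norm(dim=-1)
    return (same * d.pow(2) + (1 - same) * F.relu(margin - d).pow(2)).mean()

enc = Encoder()
a = torch.randn(8, 1, 64, 100)  # dummy spectrogram pairs
b = torch.randn(8, 1, 64, 100)
labels = torch.randint(0, 2, (8,)).float()
loss = contrastive_loss(enc(a), enc(b), labels)
loss.backward()
```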
Internship Objectives :
- Validation and refinement of the siamese architecture,
- Automated quantitative analysis of a longitudinal clinical sample to evaluate the child-clinician interaction in terms of interpersonal synchrony, emotional communication and prosody,
- Integration of interaction features in predictive models to evaluate their impact on treatment response trajectories.
Required Profile :
Master 1 student (or equivalent engineering school program)
Computer Science / Data Science / Cognitive Science / Informatics / Robotics
Required skills :
- Machine Learning,
- Deep Learning,
- Predictive modeling,
- Statistics,
- Signal processing,
- Programming (Python),
- English.
Advised skills :
- Interest in social signal processing,
- Interest in multidisciplinary and clinical research,
- Interest in precision medicine.
More information:
- Supervisors: David Cohen – Professor, AP-HP Department of Child and Adolescent Psychiatry; ISIR, Sorbonne University / Mohamed Chetouani – Professor, ISIR, Sorbonne University
- Co-supervisor: Giulio Bertamini – Postdoc, AP-HP Department of Child and Adolescent Psychiatry ; ISIR, Sorbonne University
- Starting date of the internship: April 2023
- Duration of the internship: 3 to 4 months
- Desired level of study: Master 1 or equivalent
- Host laboratory: ISIR (Institute of Intelligent Systems and Robotics), Campus Pierre et Marie Curie, 4 place Jussieu, 75005 Paris / APHP Pitié-Salpêtrière, Department of Child and Adolescent Psychiatry, 47-83 Boulevard de l’Hôpital, 75013, Paris
Contact person:
- Mohamed Chetouani
- Phone: +33 1 44 27 63 08
- Email : mohamed.chetouani(at)sorbonne-universite.fr ; giulio.bertamini(at)unitn.it
- Send your application by email, with [internship subject] in the subject line, a CV and a cover letter.
- Application deadline: January 31, 2023
Subject: Gestural Cooperative Interfaces
Abstract:
Intelligent interfaces are capable of “cooperating” with users; text-entry interfaces that offer autocompletion, correction, etc. are a well-known example, but there are many others. Designing these interfaces can be complicated, as it potentially requires expertise in software engineering, human behavior modeling, and algorithms. We are currently developing a Python library to facilitate the design of intelligent interfaces, notably by proposing tools to decouple these areas of expertise (https://jgori-ouistiti.github.io/CoopIHC/branch/dev/).
In this internship, we are particularly interested in interfaces that exploit user models. These interfaces are developed through simulation, by coupling an interface with a (synthetic) user model rather than a real user. However, these interfaces must ultimately be evaluated with real users. The goal of the internship is to allow a user of the library to start from an interface designed for a synthetic user model and to evaluate it against a real user.
Internship Objectives:
Concretely, we already have a working proof of concept: the user interacts with an interface in a web browser whose intelligent part, trained against a user model, runs in a Python environment and communicates via the WebSocket protocol. However, the web interface is minimalist, and only a limited use case is covered on the library side. The student will start from this base to implement a complete use case of assisted gestural interaction.
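For illustration, the Python side of such a browser-to-policy bridge could look like the following sketch (the message format and placeholder policy are assumptions; the actual proof of concept and the CoopIHC API may differ).

```python
import asyncio
import json
import websockets

def policy(observation):
    # Placeholder for the intelligent part trained against a synthetic user.
    return {"action": "noop", "echo": observation}

async def handler(websocket):
    # The browser sends user observations as JSON; we reply with an action.
    async for message in websocket:
        observation = json.loads(message)
        await websocket.send(json.dumps(policy(observation)))

async def main():
    # Handler signature follows recent versions of the websockets library.
    async with websockets.serve(handler, "localhost", 8765):
        await asyncio.Future()  # run forever

if __name__ == "__main__":
    asyncio.run(main())
```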
Required Profile:
The student will have a solid knowledge of web / Python development.
More information:
- Supervisor : Julien Gori
- Starting date of the internship: as soon as possible
- Duration of the internship: 6 months
- Desired level of study: Master 2 or last year of engineering school
- Host laboratory: ISIR (Institut des Systèmes Intelligents et de Robotique), Campus Pierre et Marie Curie, 4 place Jussieu, 75005 Paris.
Contact person:
- Julien Gori
- Tel : +33 6 51 40 64 04
- Email : gori(at)isir.upmc.fr
- Send your application by email, with [internship subject] in the subject line, a CV, M1/M2 transcripts, and a cover letter.
- Application deadline: end of January 2023
Subject: Field study on pointing with computer vision
Abstract:
Selecting targets with a cursor, for example with a mouse, is one of the most common interactions (pointing) for a user. Yet, surprisingly, the typical user still points as they did 30 years ago, despite the many published advances in human-computer interaction (HCI) pointing research – some predicting performance gains of more than 30%. What can explain this? In my opinion, there is a disconnect between current pointing research, which is based solely on controlled laboratory studies, and “real” pointing. We only know how to evaluate the performance of new pointing methods through controlled experiments, because we do not know how to observe the quantities needed to characterize them (nor, in general, how to implement these methods on real systems).
The goal of this internship is to instrument a computer to acquire the necessary data, particularly the sizes and positions of the targets selected by the user, in order to characterize pointing directly in the user's environment. A law, known as Fitts' law, is often used to describe pointing in controlled experimental settings. One of the questions to be answered is whether this description is still relevant outside the controlled setting. This work will also serve as a first step toward the future implementation of interaction techniques that require knowledge of target characteristics to work.
Internship Objectives:
Concretely, the student will exploit the accessibility APIs that are available for some applications. For example, the Gnome environment (available on Linux, and standard on Ubuntu, for instance) exposes the size and position of some icons through the AT-SPI interface. This means that, for a number of applications, we can retrieve the target information needed to evaluate Fitts' law. At the same time, we can take a screen capture, and thus build a database of screenshots of various interfaces paired with the corresponding target information. From this database, supervised learning can then be used to automatically extract potential targets from a screenshot. The student will build the dataset, train the supervised model, and evaluate it.
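For illustration, a minimal sketch of the data-acquisition step (Linux/GNOME only; the pyatspi attribute and method names should be checked against the installed version): walk the accessibility tree and log the screen extents of widgets, i.e., candidate pointing targets.

```python
import pyatspi

def walk(accessible, depth=0):
    # Recursively visit the accessibility tree of an application.
    for child in accessible:
        try:
            component = child.queryComponent()
            ext = component.getExtents(pyatspi.DESKTOP_COORDS)
            print("  " * depth, child.getRoleName(), child.name,
                  (ext.x, ext.y, ext.width, ext.height))
        except NotImplementedError:
            pass  # not every node implements the Component interface
        walk(child, depth + 1)

desktop = pyatspi.Registry.getDesktop(0)
for app in desktop:
    walk(app)
```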
Required Profile:
The desired student will have knowledge of C, C++ or Python programming, and of Linux. Basic knowledge of supervised learning is also required.
Knowledge of object detection, user interface programming, and Xlib/Wayland will be particularly appreciated.
General information:
- Supervisor : Julien Gori
- Starting date of the internship: as soon as possible
- Duration of the internship: 6 months
- Desired level of study: Master 2 or last year of engineering school
- Host laboratory: ISIR (Institut des Systèmes Intelligents et de Robotique), Campus Pierre et Marie Curie, 4 place Jussieu, 75005 Paris.
Contact person:
- Julien Gori
- Tel : +33 6 51 40 64 04
- Email : gori(at)isir.upmc.fr
- Send your application by email, with [internship subject] in the subject line, a CV and a cover letter.
- Application deadline: end of January 2023
Subject: Expressive nonverbal behaviors model
Abstract:
This internship is part of the national project ANR ENHANCER, which aims to develop an embodied conversational agent platform for interaction with healthy persons and with persons suffering from schizophrenia.
Embodied conversational agents can take on a human appearance and communicate with humans through verbal and non-verbal means (Lugrin et al., 2021). They can be used as an interface in human-machine interaction, playing multiple roles such as assistant, teacher, guide or companion.
Non-verbal behavior can be characterized by its shape (e.g., a facial expression, a hand shape), its trajectory (linear, sinusoidal), its timing (in correlation with speech) and its manner of execution (speed of movement, acceleration). The latter is referred to as behavior expressivity. Laban annotation (Laban and Ullmann, 1988) describes expressive dance movements along four dimensions (time, weight, space, and flow). Several of these behavioral characteristics are used to develop computational models controlling virtual agents; Laban's model, in particular, has been implemented in virtual agents (Durupinar et al., 2017). To characterize emotional body movements, Wallbott and Scherer (1986) defined a set of six expressivity parameters, namely: spatial extent, temporal extent, fluidity, power, repetition, and overall activation. These parameters have been implemented to control the dynamic quality of virtual agents' behavior (Hartmann et al., 2005), and an extension has been proposed by Huang et al. (2012). Lately, data-driven approaches have been applied to model expressive gaze and gait (Randhavane et al., 2019), facial expressions of emotion (Festl & McDonnell, 2018) and gestures (Neff, 2016).
Internship Objectives:
The objective of the internship is to manipulate the behavior expressivity of the agent. The agent will be able to perform non-verbal behaviors with different expressivities during the whole interaction. Expressivity acts on the dynamics and amplitude of the behaviors, as well as on their number of occurrences. It will allow us to create agents performing few behaviors with low expressivity, or more behaviors with higher expressivity. To this aim, several steps are foreseen (a minimal illustration of the expressivity parameters follows the list):
- expand the current behavior-expressivity model, in which 6 parameters are implemented (Huang & Pelachaud, 2012), so that it acts globally over the whole interaction or over a specific time span,
- make use of the Emilya database of expressive movements (Fourati & Pelachaud, 2016), which contains motion-capture data of 11 persons performing 7 actions with 8 emotions, to characterize the values of the behavior-expressivity parameters,
- evaluate the model through objective measures and through an experimental study measuring the naturalness and perceived expressivity of the agent's behavior.
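For illustration, here is a minimal numerical sketch of how two of Wallbott and Scherer's parameters could act on a gesture trajectory (written in Python for brevity, whereas the agent platform itself is Java-based; function and parameter names are illustrative): spatial extent scales the amplitude around a rest pose, and temporal extent stretches or compresses the timing by resampling.

```python
import numpy as np

def apply_expressivity(trajectory, rest_pose, spatial_extent=1.0,
                       temporal_extent=1.0):
    # trajectory: (T, 3) array of, e.g., wrist positions over time.
    # Spatial extent: scale the gesture amplitude around the rest pose.
    scaled = rest_pose + spatial_extent * (trajectory - rest_pose)
    # Temporal extent: resample the trajectory to stretch/compress its timing.
    t_old = np.linspace(0.0, 1.0, len(scaled))
    t_new = np.linspace(0.0, 1.0, max(2, int(len(scaled) * temporal_extent)))
    return np.stack([np.interp(t_new, t_old, scaled[:, d]) for d in range(3)],
                    axis=1)

gesture = np.cumsum(np.random.default_rng(0).normal(size=(100, 3)), axis=0)
expressive = apply_expressivity(gesture, gesture[0], spatial_extent=1.4,
                                temporal_extent=0.8)
```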
Required Profile: Master's student in engineering or computer science, or engineering-school student
Required skills: Java, 3D animation
More information:
- Supervisor: Catherine Pelachaud
- Starting date of the internship: March-April 2023
- Duration of the internship: 6 months
- Desired level of study: Master 2
- Host laboratory : ISIR (Institut des Systèmes Intelligents et de Robotique), Campus Pierre et Marie Curie, 4 place Jussieu, 75005 Paris.
Contact person:
- Catherine Pelachaud
- Email : catherine.pelachaud(at)upmc.fr
- Send your application by email, with [internship subject] in the subject line, your CV, your grades from M1 and M2, the names of your referees and a cover letter (in PDF format).
- Application deadline: January 2023