
Active catheterization for neuroradiology

Catheters for INR actuated by Shape Memory Alloys

To meet the need for controllability in interventional neuroradiology devices, we have developed a patented method for manufacturing active catheters incorporating Shape Memory Alloy (NiTi) wires. An advanced modeling step allowed us to optimize the dimensions of these active catheters, which were then tested and validated on silicone and animal models. The smallest prototypes produced have diameters on the order of one millimeter and bending angles greater than 90°. Today, this technology has been transferred to the start-up Basecamp Vascular, which is industrializing the process developed.


Context

Active catheterization as a solution to the difficulties of endovascular navigation

Currently, the catheters used in cardiology or interventional radiology (for example, for the embolization of cerebral aneurysms) are completely passive tools that are introduced manually. In addition, the visual feedback available to radiologists during the intervention is limited to low-resolution radiographic images, most often in 2D. In this context, catheters are difficult to control and their progression towards the anatomical target is slow or even impossible. In neuroradiology, for example (navigation towards the brain), the navigation failure rate is estimated at 20%, owing to overly tortuous anatomy at the level of the supra-aortic trunks (carotid arteries).

ISIR has been collaborating for many years with the interventional neuroradiology department of the Rothschild Ophthalmological Foundation (FOR) on the theme of active catheterization for INR. The aim of the CATANE project is to develop and validate motorized catheters that are controllable in their distal part, in order to improve the conditions of arterial navigation: negotiating branches and pointing at the target. The chosen approach is that of miniature actuators based on Shape Memory Alloys (SMA), directly integrated at the tip of the catheters.

Objectives

Results

Partnerships and collaborations

Basecamp Vascular industrializes the ISIR active catheters.

The CATANE project was first supported by the LabEx CAMI (Computer Assisted Medical Interventions), which financed the recruitment of an engineer in 2013, and then by SATT Lutech in the form of a maturation grant between 2014 and 2015. Today, the technologies developed, in particular the one described in patent WO2011116961A1, have been transferred to the start-up Basecamp Vascular, created in 2016. Its president is Dr. Raphaël Blanc, interventional neuroradiologist at the Rothschild Ophthalmological Foundation in Paris, who has been associated with the CATANE project since its inception.

Robotic microsurgery of cholesteatoma (muRocs project)

Surgical interventions in the middle ear (such as the removal of a cholesteatoma) are frequent and very risky. In addition to the difficulty of manipulation, there are problems related to indirect vision through a microscope. In the ANR muRocs project, we study the feasibility of robotizing the cholesteatoma-removal gesture. This involves the use of a fine instrument, such as a polyarticulated endoscope, carried by a support arm. ISIR's task in this project is to develop intuitive and powerful interfacing between the operator and the robotized system. In particular, different piloting modes are compared and the use of multimodal imaging is explored.

Context

Cholesteatoma is a serious disease of the middle ear with an incidence of 1/10,000 per year. It consists of an abnormal, invasive growth of tissue that can reach the brain. The most effective treatment at present is surgery, which consists of resecting the cholesteatoma (first by scraping and then by laser ablation of the residues), either through the ear canal or through a passage milled in the mastoid. This operation illustrates the difficulties of microsurgery: narrow access, unwieldy tools, fragile tissues (facial nerve, tympano-ossicular chain) and reduced, indirect vision. Consequently, the failure rate is high (25%) and there is a great need to improve the procedure, particularly in terms of instrumentation. The muRocs project (ANR PRC 2018; consortium led by Besançon University Hospital) aims at robotizing the cholesteatoma resection procedure to make it less invasive, less risky and more efficient. In this project, ISIR is interested in the ergonomics of the new robotic system, in particular in terms of controllability and immersion of the operator in the surgical scene.

Objectives

Results

Partnerships and collaborations 

µROCs (Micro-Robot for Cholesteatoma Surgery) is an ANR PRC 2018 project.

The ANR µROCs consortium is led by the University Hospital of Besançon (Laurent Tavernier, Olivier Gaiffe).

It also includes:

In this project, the clinical partners bring their experience in minimally invasive ear surgery. In particular, Dr Nguyen is a pioneer in robotic surgery of the middle ear, as he was one of the first users of robOtol, a robot designed at ISIR and commercialized by the company Collin Médical. The FEMTO-ST laboratory is in charge of developing a miniature dexterous instrument capable of reaching all points of the surgical site. This instrument is based on a hybrid technology combining cables and concentric tubes.

Cooperation of multiple reinforcement learning systems

Vertebrates are able to learn to modify their behavior based on rewards and punishments. This learning, called “reinforcement learning”, is also the subject of much research in Artificial Intelligence to increase the decision-making autonomy of robots.

How can one learn from rewards and punishments as quickly as possible, at minimal computational cost? This is the question we address by combining reinforcement learning algorithms with complementary characteristics.

This interdisciplinary project aims to improve the performance of robots, but also to better explain learning in vertebrates.

Context

Reinforcement learning distinguishes two main families of algorithms:

– model-based (MB) algorithms, which learn an internal model of the environment (transitions and rewards) and plan over it;

– model-free (MF) algorithms, which learn action values directly from experience, without building such a model.

Vertebrates, for their part, are able to exhibit goal-directed behavior resulting from deductions about the structure of the environment; with prolonged learning, they develop habits that are difficult to override. It has been widely accepted since the mid-2000s (Daw et al., 2005) that MB algorithms are a good model of goal-directed behavior, and MF algorithms a good model of habit formation.

Objectives

We aim to define methods for coordinating these two types of algorithms so as to combine them in the best possible way: learning quickly and adapting to changes, while minimizing computation whenever possible. We test our implementations in robotic navigation and human-machine cooperation tasks.

On the neuroscience side, we rather seek to explain the observed interactions between flexible and habitual behavior, which do not necessarily appear optimal. This implies that the coordination methods developed for robotics and for neuroscience are not necessarily identical.

Results

We initially proposed a method for coordinating MB and MF algorithms to explain competition and cooperation effects between learning systems in rats (Dollé et al., 2010, 2018).

It was then adapted for robotic navigation (Caluwaerts et al., 2012) and equipped, for the occasion, with a context-detection system so as to learn and re-learn quickly when the task changes. The development of a new coordination criterion that explicitly takes computation time into account made it possible to propose a new robotic system with maximal performance, identical to that of an MB algorithm, at one third of the computational cost (Dromnelle et al., 2020a, 2020b).
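
To make this arbitration concrete, here is a minimal Python sketch of the general idea: a cheap model-free (MF) expert and a costly model-based (MB) expert both learn from experience, and a meta-controller queries the one whose recent decisions were confident, penalized by its measured computation time. The class names, constants and criterion below are illustrative assumptions, not the published method of Dromnelle et al. (2020).

```python
# Illustrative sketch of coordinating a model-free (MF) and a model-based (MB)
# reinforcement-learning expert. The arbitration criterion (entropy of the
# proposed action distribution traded off against measured computation time)
# is a simplified stand-in, not the published coordination criterion.
import time
import numpy as np


def softmax(x, beta=5.0):
    e = np.exp(beta * (x - x.max()))
    return e / e.sum()


class MFExpert:
    """Tabular Q-learning: cheap to query, slow to adapt to task changes."""
    def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.95):
        self.q = np.zeros((n_states, n_actions))
        self.alpha, self.gamma = alpha, gamma

    def update(self, s, a, r, s_next):
        target = r + self.gamma * self.q[s_next].max()
        self.q[s, a] += self.alpha * (target - self.q[s, a])

    def propose(self, s):
        return softmax(self.q[s])


class MBExpert:
    """Learns a transition/reward model and plans by value iteration:
    expensive to query, but adapts quickly when the world changes."""
    def __init__(self, n_states, n_actions, gamma=0.95, n_sweeps=20):
        self.counts = np.ones((n_states, n_actions, n_states))  # Laplace prior
        self.rewards = np.zeros((n_states, n_actions))
        self.gamma, self.n_sweeps = gamma, n_sweeps

    def update(self, s, a, r, s_next):
        self.counts[s, a, s_next] += 1
        self.rewards[s, a] += 0.2 * (r - self.rewards[s, a])

    def propose(self, s):
        p = self.counts / self.counts.sum(axis=2, keepdims=True)
        q = np.zeros_like(self.rewards)
        for _ in range(self.n_sweeps):            # value iteration over the model
            v = q.max(axis=1)
            q = self.rewards + self.gamma * (p @ v)
        return softmax(q[s])


class MetaController:
    """Query the expert whose recent decisions were confident (low entropy),
    penalized by its average computation time."""
    def __init__(self, experts, time_weight=1.0):
        self.experts = experts
        self.entropy = {k: 1.0 for k in experts}   # running averages
        self.cost = {k: 0.0 for k in experts}
        self.time_weight = time_weight

    def act(self, s, rng):
        scores = {k: self.entropy[k] + self.time_weight * self.cost[k]
                  for k in self.experts}
        chosen = min(scores, key=scores.get)       # lowest score wins
        t0 = time.perf_counter()
        probs = self.experts[chosen].propose(s)
        self.cost[chosen] = 0.9 * self.cost[chosen] + 0.1 * (time.perf_counter() - t0)
        h = -(probs * np.log(probs + 1e-12)).sum()
        self.entropy[chosen] = 0.9 * self.entropy[chosen] + 0.1 * h
        return rng.choice(len(probs), p=probs), chosen


# Hypothetical usage on a small discrete task:
# experts = {"MF": MFExpert(10, 4), "MB": MBExpert(10, 4)}
# meta = MetaController(experts, time_weight=1.0)
# action, who = meta.act(state, np.random.default_rng(0))
```

In such a scheme both experts keep learning from every transition; only the expensive planning step of the MB expert is skipped whenever the MF expert is judged reliable enough, which is where the computational saving comes from.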

In parallel, models have been developed to explain decisions and response times in humans (Viejo et al., 2015) and macaques (Viejo et al., 2018).

The overall achievements of this long-term project are summarized in the paper “Adaptive coordination of multiple learning strategies in brains and robots” (Khamassi, 2020).

Partnerships and collaborations

This work has been carried out in the framework of various projects, financed by the ANR (LU2, STGT, RoboErgoSum), the City of Paris (Emergence(s) HABOT), the B2V Memory Observatory, the CNRS, etc. 

This work relies on collaborations with:

Human-Machine Kinesthetic Communication (Lexikhum)

The aim of this project is to build a lexicon of kinesthetic sense units that would make it possible to approach physical human-robot interaction as a sentence combining these units. The emergence of these units is studied in human-human interactions, and the goal is to extract their programmable parts through computational models in order to build virtual partners capable of producing an interaction as natural as with a human. These sense units are developed and studied by a multidisciplinary team of researchers who place the cognitive aspects of this interaction at the heart of their research. Quantifying the quality of human-human and human-virtual-partner interactions is an expected contribution of this project, as it will provide a base of units with precise knowledge of their plausibility and comprehensibility, whether they come from a human or from a virtual partner.

Context

Automation in our daily lives – Technological changes in complex systems (e.g. in aeronautics) have profoundly altered the interaction between humans and machines. In the process, operators have found themselves dealing with increasingly complex and automated systems. Although the public has often been fascinated by the ingenuity of such systems, many more or less recent tragedies show the extent to which the interaction between humans and automated systems remains a sensitive issue. Many studies thus underline the negative consequences of system automation in terms of safety and performance: difficulties in detecting errors or failures of the system (Kessel & Wickens, 1982), in understanding its current state (Sarter, Woods, & Billings, 1997) and in determining the appropriate actions for the continuation of the task (Endsley, 1999). These difficulties in the interaction between humans and automation constitute a major research challenge. The aim of this project is to contribute to understanding and compensating for these difficulties of interaction between humans and automatic systems.

The difficult but necessary intelligibility of systems – The opacity of artificial agents is considered a major cause of these difficulties (Christoffersen & Woods, 2002; Dekker & Woods, 2002). Indeed, the lack of information sent back by the system about its own functioning and, upstream, the lack of possible dialogue about the status of the transmitted information are central to the difficulties encountered by operators. Today’s complex systems tend to develop cascades of automatic reactions that diminish or even eliminate operators’ ability to predict their functioning, and provoke disproportionate and unpredictable events (Taleb, 2012). This lack of information refers to what has classically been called ‘system opacity’. The opacity of artificial systems tends to make the intentions of these artificial agents difficult to understand. This is likely to lead to difficulties in anticipating and understanding the actions of an artificial partner, thereby generating difficulties in terms of coordination, acceptability and sense of control. In this sense, the intelligibility of artificial systems (e.g. producing clear, predictable and understandable behaviour) is a major challenge for the systems engineering community.

We believe that it is possible to use the haptic sense (a combination of touch and kinesthesia) to make artificial systems more intelligible and more predictable. If we consider this to be genuine communication, however, then we need to be able to send and receive clear and complete messages without disrupting the task, or at least in a way comparable to a human partner. To do this, we first want to identify the information needed for cooperation. Secondly, we want to produce kinesthetic messages that convey this information. In particular, we will evolve our models to obtain behaviour whose message is as clear as possible for the user; the rate of understanding of this message by test participants will be our main measure. We hope to build a lexicon, a base of messages, whose effective use and validity will be measured statistically, the long-term goal being to combine these messages into complex communication and thus increase the communication capacities of machines during human-robot interactions. We call these messages kinesthetic sense units. These units can be arranged into complex messages called kinesthetic discourses, i.e. sequences of kinesthetic sense units that clarify the intention and the status of the information. Finally, we will evaluate the impact of these messages on the usability and acceptability of pilot-assistance functions, as well as their impact on the human operator’s feeling of control.

Objectives

The objective of this project is to make machines intelligible through kinesthetic interaction. To achieve this, four sub-objectives have been identified:

Results

The laboratory has developed a robot for studying interactions with one degree of freedom: SEMAPHORO-1D (System for the Evaluation of Man-Robot Physical Manipulation).

This system has made it possible to develop a kinesthetic sense unit for binary negotiation (Left-Right). This sense unit can be parameterized to exhibit more or less dominant behaviour (Roche and Saint-Bauzel, ICRA 2019; THRI, accepted). The interface has also made it possible to study the notion of agency in collaborative decision-making, showing that virtual partners do not produce a feeling of control over the interaction; this feeling of control was measured through intentional binding (Grynszpan et al., Consciousness and Cognition, 2019). Finally, the interface made it possible to show that kinesthetic interaction is an implicit and rapid means of transmitting trust (Pezzulo, Roche and Saint-Bauzel, Nature Scientific Reports, 2020).
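
To give an idea of what such a parameterizable unit could look like, here is a minimal Python sketch of a dominance-parameterized force law for a binary Left/Right negotiation on a one-degree-of-freedom handle. The force law and all names (dominance, yield_gain, etc.) are hypothetical illustrations of the principle, not the actual SEMAPHORO-1D controller.

```python
# Minimal sketch of a parameterizable kinesthetic "sense unit" for a binary
# Left/Right negotiation on a one-degree-of-freedom handle. The force law and
# the parameter names are hypothetical, used only to illustrate how a single
# dominance parameter can modulate the robot's behaviour.

def negotiation_torque(handle_pos, handle_vel, human_torque,
                       preferred_side=+1, dominance=0.5,
                       stiffness=2.0, damping=0.05, yield_gain=1.5):
    """Return the motor torque expressing the robot's preference.

    preferred_side: +1 (Right) or -1 (Left), the side the robot argues for.
    dominance: 0..1, how strongly the robot insists on its preference.
    """
    target = preferred_side * 1.0                # virtual target position (rad)
    # Guiding spring toward the preferred side, scaled by dominance.
    guide = dominance * stiffness * (target - handle_pos) - damping * handle_vel
    # A less dominant robot yields when the human pushes against the guidance.
    opposing = max(0.0, -human_torque * preferred_side)
    yield_term = (1.0 - dominance) * yield_gain * opposing * (-preferred_side)
    return guide + yield_term


# Example: a dominant robot (0.9) barely yields to a counteracting human push,
# while a compliant one (0.2) lets the human take the decision.
for d in (0.9, 0.2):
    tau = negotiation_torque(handle_pos=0.0, handle_vel=0.0,
                             human_torque=-1.0, preferred_side=+1, dominance=d)
    print(f"dominance={d}: commanded torque = {tau:+.2f} N.m")
```

The single dominance parameter thus shifts the behaviour continuously from insisting on the robot's preference to yielding to the human's push, which is what allows the same sense unit to appear more or less dominant.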

Partnerships and collaborations

The ANR collaborative scientific project brings together the following researchers:

This project is also the source of an international collaboration with:

Presentation

Robotics represents a challenge for learning methods because it combines several difficulties: large and continuous state and action spaces, sparse rewards, and a dynamic, open and partially observable world with noisy perceptions and actions. Implementing these methods is therefore delicate and requires a thorough analysis of the tasks to be performed, which reduces their potential for application. In the European DREAM project, we defined the basis of a developmental approach that combines different methods to relax these constraints and thus increase the adaptation capabilities of robots through learning.

Context

The design of robots requires anticipating all the conditions they may face and predicting the appropriate behaviour. An unforeseen situation can therefore cause a malfunction that may recur if the same conditions occur again. This lack of adaptation is a hindrance to many robotics applications, especially those targeting an uncontrolled environment such as our daily surroundings (companion robots, for example) and, more generally, collaborative robots, i.e. robots acting in contact with humans. Artificial learning methods could help make robots more adaptive, provided they can overcome the multiple difficulties linked to the robotics context. It is these specific difficulties that this project aims to address.

Objectives

The objective of the project is to help design robots interacting with an uncontrolled environment, on tasks for which the desired behaviour is partially known or even totally unknown.

In this context, learning allows the robot to explore its environment autonomously in order to extract relevant sensory, sensorimotor or purely motor representations: for example, learning to recognise objects, identifying which ones can be manipulated, and learning to pick them up, push them, throw them, etc. Exploring the vast sensorimotor space in a relevant way is therefore central, especially as many interactions are rare (the probability of catching an object with a purely random movement is almost zero).

We are therefore interested in the construction of these representations and rely on a modular and iterative approach aimed at exploring the robot’s capabilities and deriving representations that will facilitate solving the tasks at hand, whether with planning or learning methods.

Results

The creation of state and action representations that can be used later first requires generating behaviours that are relevant to the robot’s capabilities. A behaviour is relevant if it highlights the robot’s ability to achieve a particular effect by interacting with its environment. Since many of the robot’s movements create no effect at all, discovering the effects the robot can actually generate is difficult. This is compounded by the difficulty of exploring to learn behaviours without appropriate representations.

We therefore rely on exploration algorithms based on novelty search and Quality-Diversity algorithms to generate a large number of diverse behaviours and to derive appropriate state and action spaces for further learning, as illustrated by the sketch below.
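
As an illustration of this family of methods, here is a minimal novelty-search loop in Python: candidate controllers are scored by how novel their behaviour descriptor is (mean distance to the nearest descriptors collected so far) rather than by task reward, and the archive of descriptors built along the way can later serve as a repertoire of actions or as a learned outcome space. The toy genome, descriptor function and hyperparameters are placeholders, not the exact algorithms used in DREAM.

```python
# Minimal novelty-search sketch: policies are scored by how *novel* their
# behaviour descriptor is (mean distance to the k nearest descriptors seen so
# far), not by task reward. The toy "policy" and "behaviour descriptor" are
# placeholders for a real robot controller and its observed effect.
import numpy as np

rng = np.random.default_rng(0)
K_NEAREST = 5
POP_SIZE, N_GENERATIONS, GENOME_DIM = 32, 50, 8


def behaviour_descriptor(genome):
    """Stand-in for running the controller on the robot and recording its
    effect (e.g. the final position of a pushed object)."""
    return np.tanh(genome[:2] + 0.3 * genome[2:4] ** 2)


def novelty(desc, archive):
    if len(archive) < K_NEAREST:
        return np.inf
    dists = np.linalg.norm(np.asarray(archive) - desc, axis=1)
    return np.sort(dists)[:K_NEAREST].mean()


population = [rng.normal(size=GENOME_DIM) for _ in range(POP_SIZE)]
archive = []

for gen in range(N_GENERATIONS):
    descriptors = [behaviour_descriptor(g) for g in population]
    scores = [novelty(d, archive) for d in descriptors]
    archive.extend(descriptors)                    # remember what was reached
    # Keep the most novel individuals and mutate them to form the next generation.
    elite_idx = np.argsort(scores)[-POP_SIZE // 2:]
    elites = [population[i] for i in elite_idx]
    population = [e + 0.1 * rng.normal(size=GENOME_DIM)
                  for e in elites for _ in range(2)]

print(f"{len(archive)} behaviour descriptors collected; they can serve as a "
      f"repertoire of actions or as a learned outcome space.")
```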

Figure 1: The robot Baxter has learned a repertoire of joystick actions which it uses to learn to control a small wheeled robot.

Partnerships and collaborations

The European project DREAM, coordinated by Sorbonne University (FET H2020 2015-2018), launched this research theme in the laboratory (http://dream.isir.upmc.fr/).

This was an academic project, with no industrial partner. 

It is being pursued in several projects to apply this work to an industrial context. The adaptive learning capability is intended to help engineers in the design phase and in updating the behaviour of a robot. The European SoftManBot project (http://softmanbot.eu) aims at applications to the manipulation of deformable objects. It has a consortium of 11 partners, including SIGMA in Clermont-Ferrand, IIT in Genoa and companies such as Decathlon and Michelin. The VeriDREAM project, in collaboration with DLR, ENSTA-Paristech, Magazino GmbH, Synesis and GoodAI, aims to facilitate the transfer of these methods to a wider industrial context, including in particular small and medium-sized enterprises with a focus on the logistics and video game sectors.

The key concept of this project is “user-adaptive AI in the context of human-computer interaction”. The project addresses two aspects of this concept. First, we will conduct research on the user adaptivity of artificial intelligence embodied as a conversational agent. When people talk to other people, they change their verbal and nonverbal communication behaviors according to those of their partner. User adaptivity is therefore an essential issue in improving human-agent interaction.

Context

Communication style also differs across cultures, and adapting the agent’s behaviors to a target culture is useful. We will tackle this problem by employing a machine-learning approach. However, a bottleneck of this approach is that annotating users’ multimodal behaviors to create training data is time-consuming. We will offer semi-automated annotation and provide visual feedback to inspect and correct machine-generated labels by incorporating eXplainable AI (XAI) techniques. The concept of user-adaptive AI is thus used both to support users in creating multimodal corpora and to improve human-agent interaction. Moreover, user adaptivity is also the focus of the psychological studies in this project, in which user motivation will be investigated in one relevant use case (personalised motivational coaching for physical activity). This project therefore envisions a new research methodology for machine-learning-based conversational agents centred on the concept of user adaptivity.
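
As a toy illustration of this semi-automated annotation loop, the Python sketch below has a classifier propose labels for behaviour segments and flags only the low-confidence ones for human inspection and correction. The features, classifier and confidence threshold are placeholder assumptions, and the XAI visual-feedback step is only indicated by a comment.

```python
# Toy sketch of a semi-automated annotation loop: a model proposes labels for
# multimodal behaviour segments, high-confidence labels are accepted, and
# low-confidence ones are flagged for human inspection and correction.
# Features, classifier and threshold are placeholders (assumptions).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Pretend features of behaviour segments (e.g. gesture + prosody descriptors).
X_train = rng.normal(size=(200, 6))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] > 0).astype(int)
X_new = rng.normal(size=(50, 6))

model = LogisticRegression().fit(X_train, y_train)
proba = model.predict_proba(X_new)
confidence = proba.max(axis=1)
suggested = proba.argmax(axis=1)

CONFIDENCE_THRESHOLD = 0.8          # hypothetical acceptance threshold
auto_accepted = confidence >= CONFIDENCE_THRESHOLD
to_review = np.where(~auto_accepted)[0]

# In the envisioned tool, each flagged segment would be shown to the annotator
# together with an explanation of the model's suggestion (XAI feedback), and
# the corrected labels would be fed back into the training set.
print(f"{auto_accepted.sum()} labels auto-accepted, "
      f"{len(to_review)} segments flagged for human review: {to_review[:10]}")
```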

Objectives

The PANORAMA project aims to accomplish the following 5 research goals:

Results

Adaptive AI interfaces have an impact on the economy and on the future society.

First, user-adaptive technology enhances the quality of human-computer interaction, which contributes to improving users’ task performance and productivity in industry.

Moreover, user-adaptive AI will effectively motivate users to adopt a healthy lifestyle in the long term, and this may change people’s lives in the future society.

Partnerships and collaborations

The proposal targets the development of tools and methods for Social Skill Training (SST) for a large variety of populations. Social skills refer to managing verbal and nonverbal behaviors during an interaction with one or more persons. People who face difficulties interacting with others struggle to use their own social behaviors appropriately and to interpret them in others. SST is used by therapists to train people to practice social interaction and overcome their social fear. It relies on role play as a means to put participants in a controlled social situation.

Context

Conventional SST is an established method that was originally developed to reduce anxiety and discomfort and to teach the appropriate skills required during social interactions. SST effectively improves social skills for people with ASD. The basic SST training model follows five steps: instruction, modeling, role-playing, feedback, and homework. In addition, we will model Cognitive Behavioral Therapy (CBT) techniques such as ‘cognitive reframing’, a useful technique for understanding unhappy feelings and moods, and for challenging the sometimes-wrong “automatic beliefs” that can lie behind them. This requires training users to manage their own multimodal behaviors as well as to pay attention and adapt to the behavior of their interlocutors in socially stressful situations (e.g. public speaking).

Objectives

TAPAS will develop tools and methods for SST to provide personalised training of social skills and help overcome social stress in everyday situations involving (joint) presentations at school and at work. Our aim is to develop a platform allowing participants to role-play social interactions. The target population includes neurotypical individuals as well as individuals with social difficulties of varying degrees, including shyness, alexithymia and Autism Spectrum Disorder (ASD). The design of the platform will draw on two mainstream methods for social skills training: conventional SST and Cognitive Behavioral Therapy.

Results

We will design three relevant scenarios training different sets of social skills:

Partnerships and collaborations

Context

Ambulatory (outpatient) surgery allows the patient to return home on the same day as the operation. This type of surgery brings many benefits to patients and health professionals alike, in terms of quality of care and organization. So-called “minimally invasive” surgery is one of the techniques that allow patients to have a shorter hospital stay, or even to be treated on an outpatient basis.

The surgeon makes small incisions (a few millimetres) through which a camera and the operating instruments are introduced. Handling minimally invasive surgical instruments is complicated and exhausting for the surgeon: dexterity is reduced, the field of vision is limited, and the perception of the forces between the organs and the instruments is considerably degraded.

All these factors mean that minimally invasive surgery is underused in clinical practice.

Modular interfaces to facilitate minimally invasive surgery

Cockpit Chirurgical, ISIR federative project

The main objective of this project is to democratize minimally invasive surgery by offering surgeons various technological modules that can be combined with one another. The idea is to assist the surgeon by facilitating gestures and the perception of the organs, so that operating in a minimally invasive way becomes as easy as open surgery.

Cockpit Chirurgical, ISIR

These modules are designed to fit fully into the care pathway and current practice. To this end, we specifically address the question of surgeon-machine interfaces and interactions.

The main research themes concern: