Catheters for INR operated by Shape Memory Alloys
To meet the need for controllability in interventional neuroradiology devices, we have developed a patented method for manufacturing active catheters incorporating Shape Memory Alloy (NiTi) wires. An advanced modeling step allowed us to optimize the dimensions of these active catheters, which were then tested and validated on silicone and animal models. The smallest prototypes produced have diameters of the order of one millimeter and curvature angles greater than 90°. Today, this technology has been transferred to the start-up Basecamp Vascular, which is industrializing the process.
Active catheterization as a solution to the difficulties of endovascular navigation
Currently, the catheters used in cardiology or interventional radiology (for example, for the embolization of cerebral aneurysms) are completely passive tools that are introduced manually. In addition, the visual feedback available to radiologists during the intervention is limited to low-resolution radiographic images, most often in 2D. In this context, catheters are difficult to control, and their progression toward the anatomical target is slow or even impossible. In neuroradiology, for example (navigation into the brain), the navigation failure rate is estimated at 20%, due to overly tortuous anatomy at the level of the supra-aortic trunks (carotid arteries).
ISIR has been collaborating with the interventional neuroradiology department of the Rothschild Ophthalmological Foundation (FOR) for many years on the theme of active catheterization for INR. The aim of the CATANE project is to develop and validate motorized catheters, controllable in their distal part to improve the conditions of arterial navigation: negotiation of the branches, pointing of the target. The chosen approach is that of miniature actuators based on Shape Memory Alloys (SMA) directly integrated at the tip of the catheters.
- We aim to develop robust techniques for producing active catheters or guides actuated by SMA wires integrated in the head of the instrument. The main difficulties in this field are the miniaturization of the device and the attachment of the actuators to the periphery of a support whose diameter is close to one millimeter.
- We also wish to develop mathematical models of the behavior of catheter or active guide structures. The goal here is to predict the movement that such a structure produces in response to a given electrical activation in order to control the energy conversion chain from the control signal to the displacement.
- We also seek to use the established behavioral models for optimal design purposes. SMA actuation has the advantage of a large variability of solution in terms of number and location of actuators. We therefore wish to develop tools for dimensional and topological optimization of active catheters or guides to meet medical needs as efficiently as possible.
- Finally, this research also addresses the issue of the Human Machine Interface (HMI) of these devices. We want to propose the best possible control interfaces in a context of instruments with multiple degrees of mobility (especially active ones) and in a context of visualization of the surgical scene made complex by the intermittent radiographic vision. This aspect fits perfectly with the general problem of the surgical cockpit developed in the team.
- A method for assembling active catheters/guides based on SMA wires has been developed and patented. On the basis of this process, different prototypes have been realized which have shown good performances, in particular in terms of response time (~1s) and lifetime (>100 activation cycles). The method has been patented internationally (WO2011116961A1) and an exclusive license has been granted to Basecamp Vascular.
- An analytical model of the transfer function of our active catheters/guides has been developed and experimentally validated. This model served as a basis for the design of various devices corresponding to different medical specialties and navigation tasks, each with specific requirements: interventional neuroradiology, vascular surgery, bilio-pancreatic intervention.
- These devices have been experimentally validated on anatomical phantoms (e.g. a miniature catheter of 1.2 mm diameter with two SMA actuators for navigation in the cranial arterial network) or on animal models (e.g. an active guide of 2.1 mm diameter with two active flexions for navigating the abdominal aorta and its branches).
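To give a concrete sense of the kind of transfer-function model involved, here is a deliberately simplified first-order sketch of the tip's step response to electrical activation. The function name, time constant and saturation angle are invented for illustration and are not the patented design values; real SMA wires are hysteretic and nonlinear.

```python
import math

# Illustrative parameters only -- NOT the patented design values.
TAU = 0.4          # assumed thermal time constant of the SMA wire (s)
THETA_MAX = 100.0  # assumed saturation bending angle (degrees)

def tip_angle(power_fraction, t, tau=TAU, theta_max=THETA_MAX):
    """Linearized first-order step response of the tip bending angle to a
    constant electrical activation in [0, 1]. Real SMA behavior is hysteretic
    and nonlinear; this only illustrates the response-time behavior."""
    return theta_max * power_fraction * (1.0 - math.exp(-t / tau))
```

After about three time constants (~1.2 s here) the tip reaches roughly 95% of its final angle, of the same order as the ~1 s response time reported above.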
Partnerships and collaborations
Basecamp Vascular industrializes the ISIR active catheters.
The CATANE project was first supported by the CAMI labex (Computer Assisted Medical Interventions), which financed the recruitment of an engineer in 2013, and then by the SATT Lutech in the form of a maturation grant between 2014 and 2015. Today, the technologies developed, in particular the one described in patent WO2011116961A1, have been transferred to the start-up Basecamp Vascular, created in 2016. Its president is Dr. Raphaël Blanc, interventional neuroradiologist at the Rothschild Ophthalmological Foundation in Paris. Dr. Blanc has been associated with the CATANE project since its inception.
Robotic microsurgery of cholesteatoma (muRocs project)
Surgical interventions in the middle ear (such as the removal of a cholesteatoma) are frequent and very risky. In addition to the manipulation difficulties, there are problems related to indirect vision through a microscope. In the ANR muRocs project, we study the feasibility of robotizing the cholesteatoma removal procedure. This involves the use of a fine instrument, such as a polyarticulated endoscope, carried by a support arm. ISIR's task in this project is to develop the means for intuitive and powerful interfacing between the operator and the robotized system. In particular, different control modes are compared and the use of multimodal imaging is explored.
Cholesteatoma is a serious disease of the middle ear with an incidence of 1/10,000 per year. It consists of an invasive, non-cancerous growth of tissue that can reach the brain. The most effective treatment of the disease at present is surgery, which consists of resecting the cholesteatoma (first by scraping, then by laser ablation of the residues), either through the tympanic canal or through a milled passage in the mastoid. This operation exemplifies the difficulties of microsurgery: narrow access, unwieldy tools, fragile tissues (facial nerve, tympano-ossicular chain), and reduced, indirect vision. Consequently, the failure rate is high (25%) and there is a great need to improve the procedure, particularly in terms of instrumentation. The muRocs project (ANR PRC 2018, consortium led by Besançon University Hospital) aims at robotizing the cholesteatoma resection procedure to make it less invasive, less risky and more efficient. In this project, ISIR focuses on the ergonomics of the new robotic system, in particular the controllability and the immersion of the operator in the surgical scene.
- We wish to develop a dexterous system able to access the middle ear cavity in a minimally invasive way and to reach every point of this cavity with a chosen orientation. To do this, we are working towards a macro-micro assembly composed of a manipulator arm extended by a fine polyarticulated instrument of the micro-endoscope type.
- The exploitation of the above redundant macro-micro structure requires the development of an adapted control interface. The solutions we plan to test are of two types: teleoperated approaches and co-manipulated approaches.
- In the context of cholesteatoma surgery, it is essential to achieve total resection of the cholesteatoma; otherwise the patient risks re-operation in the short term. In addition to a sufficiently dexterous robotic system, this requires fine means of visualizing and locating the tissue to be resected. Endoscopic and OCT modalities are considered.
- We implemented a serial macro-micro system composed of the 7-DOF Franka robot and a 3 mm diameter micro-endoscope with 2 DOF. The micro-endoscope was motorized and integrated at the end of the carrier arm to form a fully controllable redundant kinematic chain with 9 DOF.
- A study is underway to evaluate different control solutions. Among them, direct teleoperation of the device tip via a 6-DOF joystick needs to be complemented by automatic management of the kinematic redundancies (internal motions). Comanipulation of the device via a handle and a force sensor at the end of the carrier arm must be complemented by an embedded micro-joystick for controlling the intracorporeal DOF.
- A first augmented visual feedback has been developed. It merges the information from the endoscopic vision and the instantaneous pose of the robot. It takes the form of a real-time 3D digital scene of the operating site. The integration of the OCT modality via an embedded probe at the end of the instrument is under study.
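The automatic management of kinematic redundancy mentioned above is classically done by projecting a secondary motion into the null space of the task Jacobian. The sketch below illustrates this textbook scheme on a toy 6x9 Jacobian standing in for the arm + micro-endoscope chain; it is a generic principle, not the project's actual controller:

```python
import numpy as np

def redundant_velocity_command(J, v_task, q_dot_secondary):
    """Least-squares joint velocities realizing the 6-D task velocity, plus a
    secondary motion projected into the Jacobian null space so that internal
    reconfiguration does not disturb the endoscope tip."""
    J_pinv = np.linalg.pinv(J)
    N = np.eye(J.shape[1]) - J_pinv @ J        # null-space projector
    return J_pinv @ v_task + N @ q_dot_secondary

# Toy stand-in for the 6x9 Jacobian of a 9-DOF redundant chain.
rng = np.random.default_rng(0)
J = rng.standard_normal((6, 9))
v_task = np.array([0.01, 0.0, 0.0, 0.0, 0.0, 0.0])   # desired tip twist
q_dot = redundant_velocity_command(J, v_task, rng.standard_normal(9))
assert np.allclose(J @ q_dot, v_task)   # task velocity is reproduced exactly
```

The secondary term can encode any posture objective (e.g. keeping the carrier arm away from the patient) without affecting the tip motion.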
Partnerships and collaborations
µROCs (Micro-Robot for Cholesteatoma Surgery) is an ANR PRC 2018 project.
The ANR µROCs consortium is led by the University Hospital of Besançon (Laurent Tavernier, Olivier Gaiffe).
It also includes:
- the Femto-ST Institute (Nicolas Andreff, Kanty Rabenorosoa),
- the Hearing Institute (Yann Nguyen),
- and the ISIR.
In this project, the clinical partners bring their experience in minimally invasive ear surgery. In particular, Dr Nguyen is a pioneer in robotic surgery of the middle ear as he was one of the first users of the robOtol, a robot designed at the ISIR and commercialized by the company Collin Médical. The Femto-ST laboratory is in charge of developing a miniature dexterous instrument capable of reaching all points of the surgical site. This instrument is based on the hybrid technology of cables and concentric tubes.
Cooperation of multiple reinforcement learning systems
Vertebrates are able to learn to modify their behavior based on rewards and punishments. This learning, called “reinforcement learning”, is also the subject of much research in Artificial Intelligence to increase the decision-making autonomy of robots.
How to learn by rewards and punishments, as fast as possible for a minimal computational cost? This is the question we are addressing by combining reinforcement learning algorithms with complementary characteristics.
This interdisciplinary project aims to improve the performance of robots, but also to better explain learning in vertebrates.
Reinforcement learning distinguishes two main families of algorithms:
- those that build a model of the world and reason over it to decide what to do (MB, or model-based). They require substantial computation to make a decision, but are very sample-efficient: they learn to solve a problem in few trials and relearn just as quickly if the task changes.
- those without a model, which learn simple state-action associations (MF, or model-free). They are computationally very cheap, but in return learn slowly and relearn even more slowly.
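The contrast between the two families can be sketched on a toy two-state problem; the world, learning rate and iteration counts below are invented purely for illustration:

```python
import random

# Tiny deterministic world: 2 states, 2 actions; only (s=1, a=1) is rewarded.
STATES, ACTIONS, GAMMA = [0, 1], [0, 1], 0.9
def step(s, a):
    return 1 - s, (1.0 if (s == 1 and a == 1) else 0.0)   # next state, reward

# Model-free: learn Q(s, a) from sampled transitions -- cheap per decision,
# but needs many samples to converge.
random.seed(0)
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
for _ in range(2000):
    s, a = random.choice(STATES), random.choice(ACTIONS)
    s2, r = step(s, a)
    Q[(s, a)] += 0.1 * (r + GAMMA * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])

# Model-based: with a (learned) model in hand, value iteration converges in a
# few sweeps -- more computation per decision, far fewer environment samples.
V = {s: 0.0 for s in STATES}
for _ in range(100):
    V = {s: max(step(s, a)[1] + GAMMA * V[step(s, a)[0]] for a in ACTIONS)
         for s in STATES}
```

Both converge to the same values here; the difference lies in how many environment samples and how much per-decision computation each one needs.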
Vertebrates, for their part, are able to exhibit goal-directed behavior resulting from inferences about the structure of the environment. With prolonged training, they develop habits that are difficult to override. It has been widely accepted since the mid-2000s (Daw et al., 2005) that MB algorithms are a good model of goal-directed behavior, and MF algorithms a good model of habit formation.
- From a robotics and AI point of view :
We aim to define methods to coordinate these two types of algorithms so as to combine them in the best possible way: learning quickly and adapting to changes while minimizing computation whenever possible. We test our implementations in robotic navigation and human-machine cooperation tasks.
- From a neuroscientific point of view :
We rather seek to explain the observed interactions between flexible and habitual behavior, which do not necessarily seem optimal. This implies that the coordination methods developed for robotics and for neuroscience are not necessarily identical.
We initially proposed a method for coordinating MB-MF algorithms to explain competition and cooperation effects between learning systems in rats (Dollé et al., 2010, 2018).
It was then adapted for robotic navigation (Caluwaerts et al., 2012) and extended with a context-detection system to learn, and relearn quickly, when the task changes. The development of a new coordination criterion explicitly taking computation time into account made it possible to propose a new robotic system with maximum performance, identical to that of an MB algorithm, for one third of the computational cost (Dromnelle et al., 2020a, 2020b).
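In the spirit of such a coordination criterion, expert selection can be sketched as a score trading estimated decision quality against computation cost. The weights and quality estimates below are illustrative, not those of the published model:

```python
# Hypothetical arbitration rule between a model-based (MB) and a model-free
# (MF) expert: pick the expert whose estimated decision quality, penalized by
# its mean computation time, is highest. All numbers are illustrative.
def select_expert(quality, cost, cost_weight=0.5):
    score = {e: quality[e] - cost_weight * cost[e] for e in quality}
    return max(score, key=score.get)

# Early in learning the MF estimates are still poor: MB's quality edge wins.
assert select_expert({"MB": 0.9, "MF": 0.2}, {"MB": 1.0, "MF": 0.05}) == "MB"
# Once both are accurate, the cheap MF expert takes over most decisions.
assert select_expert({"MB": 0.9, "MF": 0.88}, {"MB": 1.0, "MF": 0.05}) == "MF"
```

This reproduces the qualitative effect described above: MB-level performance overall, with the expensive expert invoked only when it is actually worth its cost.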
In parallel, models have been developed to explain decisions and response times in humans (Viejo et al., 2015) and macaques (Viejo et al., 2018).
The overall achievements of this substantive project have been summarized in the paper Adaptive coordination of multiple learning strategies in brains and robots (Khamassi, 2020).
Partnerships and collaborations
This work has been carried out in the framework of various projects, financed by the ANR (LU2, STGT, RoboErgoSum), the City of Paris (Emergence(s) HABOT), the B2V Memory Observatory, the CNRS, etc.
They rely on collaborations with:
- Andrea Brovelli, Institut de Neurosciences de la Timone (INT) of Marseille, in France,
- Raja Chatila, Benoît Girard and Mehdi Khamassi, Institut des Systèmes Intelligents et de Robotique (ISIR) of Paris, in France,
- Rachid Alami and Aurélie Clodic, Laboratoire d’analyse et d’architecture des systèmes (LAAS) of Toulouse, in France,
- Emmanuel Procyk, Stem cell and Brain Research Institute (SBRI) of Lyon, in France.
Human-Machine Kinesthetic Communication (Lexikhum)
The aim of this project is to build a lexicon of kinesthetic units of meaning that would make it possible to treat physical human-robot interaction as a sentence combining these units. The emergence of these units of meaning is studied in human-human interactions, and the goal is to extract the programmable parts by building computational models of virtual partners capable of producing an interaction as natural as with a human. These units of meaning are developed and studied by a multidisciplinary team of researchers who place the cognitive aspect of this interaction at the heart of their research. Quantifying the quality of human-human and human-virtual-partner interactions is an expected contribution of this project: it will provide a base of units with precise knowledge of their plausibility and comprehensibility, whether they come from a human or from a virtual partner.
Automation in our daily lives – Technological changes in complex systems (e.g. in aeronautics) have profoundly altered the interaction between humans and machines. In the process, operators have found themselves dealing with increasingly complex and automated systems. Although the public has often been fascinated by the ingenuity of such systems, many more or less recent tragedies show the extent to which the interaction between humans and automated systems remains a sensitive issue. Thus, many studies underline the negative consequences in terms of safety and performance of system automation: difficulties in detecting errors or failures of the system (Kessel & Wickens, 1982), in understanding its current state (Sarter, Woods, & Billings, 1997) and in determining the appropriate actions for the continuation of the task (Endsley, 1999). These difficulties in the interaction between humans and automation constitute a major research challenge. The aim of this project is to participate in the understanding and compensation of these difficulties of interaction between humans and automatic systems.
The difficult but necessary intelligibility of systems – The opacity of artificial agents is considered a major cause of these difficulties (Christoffersen & Woods, 2002; Dekker & Woods, 2002). Indeed, the lack of information fed back by the system about its own functioning and, upstream, the lack of possible dialogue on the status of the transmitted information are central to the difficulties encountered by operators. Today's complex systems tend to develop cascades of automatic reactions that diminish or even eliminate the operators' ability to predict their functioning, and provoke disproportionate and unpredictable events (Taleb, 2012). This lack of information is what has classically been called 'system opacity'. The opacity of artificial systems tends to make the intentions of these agents difficult to understand, which is likely to lead to difficulties in anticipating and understanding the actions of an artificial partner, thereby generating difficulties in terms of coordination, acceptability and sense of control. In this sense, the intelligibility of artificial systems (e.g. producing clear, predictable and understandable behaviour) is a major challenge for the systems engineering community.
We believe that the haptic sense (a combination of touch and kinesthesia) can be used to make artificial systems more intelligible and more predictable. For communication to exist, however, we need to be able to send and receive clear and complete messages without disrupting the task, or at least in a way comparable to a human partner. To do this, we first want to identify the information needed for cooperation. Second, we want to produce kinesthetic messages that convey this information; in particular, we will evolve our models to obtain behaviour whose message is as clear as possible for the user. The rate of understanding of this message by participants will be our main measure. We hope to build a lexicon, a base of messages, whose effective use and validity will be measured statistically, the long-term goal being to combine these messages into complex communication and thus increase the communication capacities of machines during human-robot interactions. We will call these messages kinesthetic units of meaning. These units can be arranged into complex messages called kinesthetic discourses, i.e. sequences of kinesthetic units of meaning that clarify the intention and the status of the information. Finally, we will evaluate the impact of these messages on the usability and acceptability of pilot-assistance functions, as well as their impact on the human operator's sense of control.
The objective of this project is to make machines intelligible through kinesthetic interaction. To achieve this objective four sub-objectives have been identified :
- to define a theoretical framework of kinesthetic communication,
- to analyse the conditions of joint action and their psychological models for human-human interaction,
- understanding and modelling the mechanisms at play in such communication. The aim will be to build a lexicon of kinaesthetic interaction,
- implement this lexicon in artificial agents in order to improve their intelligibility by the human operator and to support human-machine cooperation.
The laboratory has developed SEMAPHORO-1D (System for the Evaluation of Man-Robot Physical Manipulation), a robot for studying interactions with one degree of freedom.
This system has made it possible to develop a kinesthetic unit of meaning for binary negotiation (Left-Right). This unit of meaning can be parameterized to display more or less dominant behaviour (Roche and Saint-Bauzel, ICRA 2019; THRI, accepted). The interface has also made it possible to study the notion of agency in collaborative decision-making, showing that virtual partners do not produce a sense of control over the interaction; this sense of control was measured with intentional-binding measures (Grynszpan et al., Consciousness and Cognition, 2019). Finally, the interface made it possible to show that kinesthetic interaction is an implicit and rapid means of transmitting trust (Pezzulo, Roche and Saint-Bauzel, Scientific Reports, 2020).
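As an illustration of how a dominance parameter can shape a virtual partner's kinesthetic behaviour, here is a deliberately simple 1-DOF sketch; the function, gains and blending rule are assumptions for illustration, not the published SEMAPHORO-1D controller:

```python
# Hypothetical 1-DOF virtual partner for a binary Left/Right negotiation:
# a dominance parameter blends the partner's pull toward its own target with
# compliance to the measured human force. Names and gains are assumptions.
def partner_force(target, position, human_force, dominance, stiffness=20.0):
    """dominance in [0, 1]: 0 fully yields to the human, 1 fully imposes."""
    own_pull = stiffness * (target - position)
    return dominance * own_pull + (1.0 - dominance) * human_force

# A dominant partner opposes a human pushing Right (+); a submissive
# partner follows the human instead.
assert partner_force(-1.0, 0.0, +5.0, dominance=1.0) < 0.0
assert partner_force(-1.0, 0.0, +5.0, dominance=0.0) > 0.0
```

Intermediate dominance values yield the graded negotiation behaviours studied with the interface.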
Partnerships and collaborations
The ANR collaborative scientific project brings together the following researchers:
- Bruno Berberian, Researcher at ONERA in the DTIS laboratory, specialising in ergonomics and the measurement of interaction performance,
- Camille Debras, Associate Professor at the University of Paris-Nanterre, specialised in gesture and linguistics,
- Ouriel Grynszpan, Professor at the University of Paris-Saclay – LIMSI laboratory, specialised in cognitive sciences and social intelligence issues,
- Aliyah Morgenstern, Professor at the University of Paris Nanterre – MoDyCo laboratory, specialised in developmental linguistics and sign language,
- Ludovic Saint-Bauzel, Associate Professor at Sorbonne University – ISIR laboratory, specialised in engineering and information sciences and life sciences,
- Claudia Savina Bianchini, Associate Professor at the University of Poitiers – FORELLIS laboratory, specialised in linguistics and more particularly in sign language.
This project is also the source of an international collaboration with :
- Giovanni Pezzulo, Researcher at the CNR (Italian National Research Centre) in the ISTC laboratory (Institute of Science and Technology of Cognition).
Robotics is a challenge for learning methods because it combines several difficulties: large, continuous state and action spaces; sparse rewards; and a dynamic, open, partially observable world with noisy perception and action. Implementing learning methods on robots is therefore delicate and requires a thorough analysis of the tasks to be performed, which reduces their potential for application. In the European DREAM project, we defined the basis of a developmental approach that combines different methods to reduce these constraints and thus increase the adaptation capabilities of robots through learning.
The design of robots requires anticipating all the conditions they may face and predicting the appropriate behaviour. An unforeseen situation can therefore cause a malfunction that may recur if the same conditions occur again. This lack of adaptation is a hindrance to many robotics applications, especially when they target an uncontrolled environment such as our daily environment (for companion robots, for example) or more generally for collaborative robots, i.e. those acting in contact with humans. Artificial learning methods could help to make robots more adaptive, if they can overcome the multiple difficulties linked to the robotics context. It is these specific difficulties that this project aims to address.
The objective of the project is to help design robots interacting with an uncontrolled environment, on tasks for which the desired behaviour is partially known or even totally unknown.
In this context, learning allows the robot to explore its environment autonomously, in order to extract relevant sensory, sensory-motor or purely motor representations. For example, learning to recognise objects, identifying which ones are manipulable, learning to pick them up, push them, throw them, etc. In this context, exploring the vast sensory-motor space in a relevant way is central, especially as many interactions are rare (the probability of catching an object with a purely random movement is almost zero).
We are therefore interested in the construction of these representations and rely on a modular and iterative approach aiming at exploring the robot’s capabilities and deducing representations that will facilitate the resolution of the tasks that arise, either with planning or learning methods.
The creation of state and action representations that can be used later requires first of all the generation of behaviours that are relevant to the robot’s capabilities. A behaviour is relevant if it highlights the robot’s ability to achieve a particular effect by interacting with its environment. Knowing that many of the robot’s movements do not create any effect, discovering the effects that the robot is likely to generate is difficult. This is compounded by the difficulty of exploring to learn behaviours without appropriate representations.
We therefore rely on exploration algorithms based on novelty search and Quality-Diversity algorithms to generate a large number of exploration behaviours and to deduce appropriate state and action spaces for further learning.
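A minimal version of the novelty-search loop mentioned above can be sketched as follows; the 2-D behavior descriptors and the random "controller" are stand-ins for real robot rollouts:

```python
import math, random

# Minimal novelty-search loop: each "behavior" is summarized by a 2-D
# descriptor (a stand-in for, e.g., where an object lands after a push), and
# selection favors descriptors far from everything already in the archive.
def novelty(desc, archive, k=3):
    dists = sorted(math.dist(desc, d) for d in archive)
    return sum(dists[:k]) / min(k, len(dists))

random.seed(1)
archive = [(0.0, 0.0)]                       # starting behavior
for _ in range(200):
    # Stand-in for evaluating a batch of random controllers on the robot.
    batch = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(10)]
    archive.append(max(batch, key=lambda d: novelty(d, archive)))
# The archive ends up spread over the descriptor space, without any reward.
```

Quality-Diversity algorithms extend this idea by also keeping, for each region of the descriptor space, the highest-performing behavior found there.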
Partnerships and collaborations
- ENSTA-ParisTech, in France,
- Sorbonne University, in France,
- the University of Coruna, in Spain,
- the University of Edinburgh in the United Kingdom,
- the Vrije Universiteit Amsterdam in the Netherlands.
This was an academic project, with no industrial partner.
This work is being pursued in several projects that apply it to industrial contexts. The adaptive learning capability is intended to help engineers in the design phase and in updating a robot's behaviour. The European SoftManBot project (http://softmanbot.eu) targets applications in the manipulation of deformable objects; its consortium has 11 partners, including SIGMA in Clermont-Ferrand, IIT in Genoa, and companies such as Decathlon and Michelin. The VeriDREAM project, in collaboration with DLR, ENSTA-Paristech, Magazino GmbH, Synesis and GoodAI, aims to facilitate the transfer of these methods to a wider industrial context, in particular small and medium-sized enterprises, with a focus on the logistics and video game sectors.
The key concept of this project is “user adaptive AI in the context of human-computer interaction”. This project addresses two aspects for this concept. First, we will conduct research on user adaptivity of artificial intelligence embodied as a conversational agent. When people talk to other people, they change their verbal and nonverbal communication behaviors according to those of the partner. Therefore, user adaptivity is an essential issue in improving human-agent interaction.
Communication style also differs across cultures, so adapting agent behaviors to a target culture is useful. We will tackle this problem with a machine learning approach. However, a bottleneck of this approach is that annotating users' multimodal behaviors to create training data is time-consuming. We will offer semi-automated annotation and provide visual feedback to inspect and correct machine-generated labels by incorporating eXplainable AI (XAI) techniques. The concept of user adaptive AI is thus used both to support users in creating multimodal corpora and to improve human-agent interaction. Moreover, user adaptivity is also the focus of the psychological studies in this project, in which user motivation will be investigated in one relevant use case (personalised motivational coaching for physical activity). This project therefore envisions a new research methodology for machine-learning-based conversational agents centered on the concept of user adaptivity.
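The semi-automated annotation idea can be sketched as confidence-based triage: a simple model pre-labels samples, and only ambiguous ones are routed to the human annotator. The nearest-centroid "model", the 1-D feature and the margin threshold below are stand-ins for the project's actual XAI-based pipeline:

```python
# Confidence-based annotation triage (illustrative stand-in, not the
# project's pipeline): pre-label each sample with a nearest-centroid model
# and auto-accept only when the decision margin is large enough.
def prelabel(sample, centroids, margin=0.2):
    dists = {label: abs(sample - c) for label, c in centroids.items()}
    ranked = sorted(dists, key=dists.get)
    confident = (dists[ranked[1]] - dists[ranked[0]]) > margin
    return ranked[0], confident          # (label, auto-accept?)

centroids = {"smile": 0.8, "neutral": 0.2}   # toy 1-D feature per label
assert prelabel(0.75, centroids) == ("smile", True)    # clear: auto-labeled
assert prelabel(0.50, centroids)[1] is False           # ambiguous: to human
```

Routing only low-confidence items to the annotator is what makes corpus creation scale across the three countries.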
The PANORAMA project aims to accomplish the following 5 research goals:
- propose a user adaptive multimodal annotation tool based on XAI technique,
- exploit this tool to collect annotated multimodal corpora in three countries (France, Germany, and Japan),
- propose models and methods for developing conversational agents with multi-level adaptation functionality, where nonverbal signals of the agent as well as the content of the dialogue are adapted to the user,
- provide multitask learning and transfer learning techniques to learn models using the multi-cultural corpus obtained and adapt the conversational agent to each culture,
- and propose the design basis of adaptive AI systems grounded in psychological theories and evaluation studies.
Adaptive AI interfaces have an impact on the economy and on the society of the future.
First, user adaptive technology enhances the quality of human-computer interaction, which contributes to improving the task performance and productivity of users in industry.
Moreover, user adaptive AI can effectively motivate users to adopt a healthy lifestyle in the long term, which may change people's lives in the future.
Partnerships and collaborations
- Coordinator: Intelligent User Interface Lab (IUI-Lab), Seikei University, Japan,
- Human-Centered Multimedia (HCM), Augsburg University, Germany,
- Institut des Systèmes Intelligents et de Robotique (ISIR), Sorbonne University France,
- Laboratoire d’Informatique pour la Mécanique et les Sciences de l’Ingénieur (LIMSI), group CPU, France,
- Social Signal Interaction Lab, Japan Advanced Institution of Science and Technology, JAIST, Japan.
This proposal aims to develop tools and methods for Social Skill Training (SST) for a wide variety of populations. Social skills refer to managing verbal and nonverbal behaviors during an interaction with one or more persons. People who face difficulties interacting with others struggle to use their own social behaviors appropriately and to interpret them in others. SST is used by therapists to train people to practice social interaction and overcome their social fears. It relies on role play as a means of putting participants in a controlled social situation.
Conventional SST is an established method originally developed to reduce anxiety and discomfort and to teach the appropriate skills required during social interactions. SST effectively improves social skills for people with ASD. The basic SST training model follows five steps: instruction, modeling, role-playing, feedback, and homework. In addition, we will model Cognitive Behavioral Therapy (CBT) techniques such as 'cognitive reframing', a useful technique for understanding unhappy feelings and moods and for challenging the sometimes-wrong "automatic beliefs" that can lie behind them. This requires training users to manage their own multimodal behaviors as well as to pay attention and adapt to the behavior of their interactants in a socially stressful situation (e.g. public speaking).
TAPAS will develop tools and methods for SST to provide personalised training of social skills and help overcome social stress in everyday situations such as (joint) presentations at school and at work. Our aim is to develop a platform allowing participants to role-play social interactions. The targeted population is neurotypical individuals, but also individuals across different scales of social pathology, including shyness, alexithymia and Autism Spectrum Disorder (ASD). The design of the platform will be inspired by two mainstream methods for social skills training: conventional SST and Cognitive Behavioral Therapy.
We will design three relevant scenarios training different sets of social skills:
- train the user to work on a common task together with one or several virtual peer(s),
- train the user to maintain the attention and the engagement of a (small) virtual audience when presenting some material,
- train the user to jointly present with a virtual peer a joint work in front of a small virtual audience.
Partnerships and collaborations
- Institut des Systèmes Intelligents et de Robotique (ISIR), France,
- Laboratoire d’Informatique pour la Mécanique et les Sciences de l’Ingénieur (LIMSI), CPU group, France,
- Nara Institute of Science and Technology, Data Science Center, Japan.
Outpatient surgery allows the patient to return home on the day of the operation. It brings many benefits to patients and health professionals alike, in terms of quality of care and organization. So-called "minimally invasive" surgery is one of the techniques that allow patients to have a reduced hospital stay, or even to be treated on an outpatient basis.
The surgeon makes small incisions (a few millimeters) through which a camera and the operating instruments are introduced. Handling minimally invasive surgical instruments is complicated and exhausting for the surgeon: dexterity is reduced, the field of view is limited, and the perception of the forces between organs and instruments is considerably degraded.
All these factors mean that minimally invasive surgery is underused in clinical practice.
Modular interfaces to facilitate minimally invasive surgery
The main objective of this project is to democratize minimally invasive surgery by offering surgeons different technological modules that can be combined with one another. The aim is to assist the surgeon by facilitating gestures and the perception of organs, so that operating minimally invasively becomes as easy as open surgery.
These modules are designed to fit fully into the care pathway and current practice. To this end, we specifically address the question of surgeon-machine interfaces and interactions.
The main research themes concern:
- control of comanipulated instrument-holding robots for gesture assistance (virtual guides)
- augmented reality (display and manipulation of virtual objects in a real scene, tangible interfaces)
- surgeon-machine interactions (switching control modes, analysis of learning traces)
- interactions in the operating room (observations, interviews, protocols)
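Such "virtual guides" (guides virtuels) are classically implemented as virtual fixtures: motion along a programmed guide is left free while deviations are resisted by a virtual spring. The sketch below is a generic illustration of the principle, with invented gains and geometry, not the project's actual controller:

```python
import numpy as np

# Generic "virtual fixture" sketch for a comanipulated instrument holder:
# motion along a programmed guide line stays free, while lateral deviations
# are resisted by a virtual spring. Gains and geometry are illustrative.
def guide_force(p, line_point, line_dir, stiffness=500.0):
    """Restoring force (N) pulling the instrument tip p back onto the line."""
    d = line_dir / np.linalg.norm(line_dir)
    off_axis = (p - line_point) - np.dot(p - line_point, d) * d
    return -stiffness * off_axis

# Tip 1 cm off a vertical guide axis: pure lateral restoring force of 5 N.
f = guide_force(np.array([0.0, 0.01, 0.3]), np.zeros(3),
                np.array([0.0, 0.0, 1.0]))
assert np.allclose(f, [0.0, -5.0, 0.0])
```

Rendered through the robot's actuators during comanipulation, such a force constrains the surgeon's gesture to the planned path without removing manual control.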