Project Lexikhum: Human-Machine Kinesthetic Communication
The aim of this project is to build a lexicon of kinesthetic meaning units that would make it possible to approach physical human-robot interaction as a sentence combining these units. The emergence of these meaning units is studied in human-human interactions, and the project aims to extract their programmable parts through computational models in order to build virtual partners capable of producing an interaction as natural as one with a human. These meaning units will be developed and studied by a multidisciplinary team of researchers who place the cognitive aspect of this interaction at the heart of their research. The quantification of the quality of human-human and human-virtual-partner interactions is an expected contribution of this project, as it will provide a base of units with precise knowledge of their plausibility and comprehensibility, whether they come from a human or from a virtual partner.
Automation in our daily lives – Technological changes in complex systems (e.g. in aeronautics) have profoundly altered the interaction between humans and machines. In the process, operators have found themselves dealing with increasingly complex and automated systems. Although the public has often been fascinated by the ingenuity of such systems, many tragedies, recent and less recent, show the extent to which the interaction between humans and automated systems remains a sensitive issue. Many studies thus underline the negative consequences of system automation for safety and performance: difficulties in detecting errors or failures of the system (Kessel & Wickens, 1982), in understanding its current state (Sarter, Woods, & Billings, 1997) and in determining the appropriate actions for the continuation of the task (Endsley, 1999). These difficulties in the interaction between humans and automation constitute a major research challenge. The aim of this project is to contribute to understanding and compensating for these difficulties in the interaction between humans and automated systems.
The difficult but necessary intelligibility of systems – The opacity of artificial agents is considered a major cause of these difficulties (Christoffersen & Woods, 2002; Dekker & Woods, 2002). Indeed, the lack of information sent back by the system on its own functioning and, upstream, the lack of possible dialogue on the status of the transmitted information are central elements in the difficulties encountered by operators. Today's complex systems tend to develop cascades of automatic reactions that diminish or even eliminate the operators' ability to predict their functioning, and provoke disproportionate and unpredictable events (Taleb, 2012). This lack of information refers to what has classically been called 'system opacity'. The opacity of artificial systems tends to make the intentions of these artificial agents difficult to understand. This state of affairs is likely to lead to difficulties in anticipating and understanding the actions of an artificial partner, thereby generating difficulties in terms of coordination, acceptability and sense of control. In this sense, the intelligibility of artificial systems (e.g. producing clear, predictable and understandable behaviour) is a major challenge for the systems engineering community.
We believe it is possible to use the haptic sense (a combination of touch and kinesthesia) to make artificial systems more intelligible and more predictable. For communication to exist, however, we need to be able to send and receive clear and complete messages without disrupting the task, or at least in a way comparable to a human partner. To do this, we first want to identify the information needed for cooperation. Secondly, we want to produce kinesthetic messages that convey this information; in particular, we will refine our models to obtain behaviour whose message is as clear as possible for the user. The rate at which participants understand this message will be our main measure. We hope to build a lexicon, a base of messages, whose effective use and validity will be measured statistically; in the long run, the project is to combine these messages to build complex communication and thus increase the communication capacities of machines during human-robot interactions. We will call these messages kinesthetic sense units. These units can be arranged into complex messages called kinesthetic discourses, i.e. sequences of kinesthetic sense units that clarify the intention and the status of the information. Finally, we will evaluate the impact of these messages on the usability and acceptability of pilot-assistance functions, as well as their impact on the human operator's sense of control.
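Since the understanding rate is the main measure for validating each unit, its statistical treatment can be made concrete. The sketch below (illustrative only; the function name and trial counts are assumptions, not the project's actual analysis pipeline) computes a comprehension rate from binary trial outcomes together with a Wilson score confidence interval, a standard choice for binomial proportions:

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial proportion, e.g. the rate
    at which participants correctly decode a kinesthetic message."""
    if n == 0:
        return (0.0, 1.0)
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (max(0.0, centre - half), min(1.0, centre + half))

# Hypothetical example: 42 of 50 trials decoded correctly
lo, hi = wilson_interval(42, 50)
print(f"understanding rate: {42/50:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

A confidence interval of this kind makes it possible to state not only that a unit was understood in, say, 84% of trials, but also how precise that estimate is given the number of participants.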
The objective of this project is to make machines intelligible through kinesthetic interaction. To achieve this objective, four sub-objectives have been identified:
- to define a theoretical framework for kinesthetic communication,
- to analyse the conditions of joint action and their psychological models in human-human interaction,
- to understand and model the mechanisms at play in such communication, with the aim of building a lexicon of kinesthetic interaction,
- to implement this lexicon in artificial agents in order to improve their intelligibility for the human operator and to support human-machine cooperation.
The laboratory has developed SEMAPHORO-1D (System for the Evaluation of Man-Robot Physical Manipulation), a robot for studying interactions with one degree of freedom.
This system has made it possible to develop a kinesthetic sense unit for binary negotiation (Left/Right). This unit can be parameterised to display more or less dominant behaviour (Roche and Saint-Bauzel, ICRA 2019; THRI, accepted). The interface has also been used to study the notion of agency in collaborative decision-making, and has shown that virtual partners do not elicit a feeling of control over the interaction; this feeling of control was measured using intentional binding (Grynszpan et al., ConCog 2019). Finally, the interface has shown that kinesthetic interaction is an implicit and rapid means of transmitting trust (Pezzulo, Roche and Saint-Bauzel, Nature Scientific Reports 2020).
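To give an intuition of what a parameterisable binary-negotiation unit might look like, here is a minimal sketch, assuming a single degree of freedom and a scalar dominance parameter. The function name, force limits and ramp shape are illustrative assumptions, not the published SEMAPHORO-1D model:

```python
import math

def negotiation_force(t: float, target: int, dominance: float,
                      f_max: float = 2.0, rise_time: float = 0.5) -> float:
    """Force (N) exerted by a virtual partner at time t (s) during a
    binary Left/Right negotiation on a 1-DoF handle.

    target:    -1 for Left, +1 for Right.
    dominance: in [0, 1]; scales how insistently the partner pushes,
               so the same unit can appear more or less dominant.
    """
    # Smooth cosine ramp from 0 to 1 over rise_time, then saturate:
    # the push builds up gradually instead of jerking the handle.
    ramp = min(t / rise_time, 1.0)
    envelope = 0.5 * (1 - math.cos(math.pi * ramp))
    return target * dominance * f_max * envelope
```

The key design point is that a single parameter modulates the unit's expressive character (dominant vs. compliant) without changing its meaning (the Left/Right proposal), which is what makes such units reusable building blocks for a lexicon.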
Partnerships and collaborations
The ANR collaborative scientific project brings together the following researchers:
- Bruno Berberian, Researcher at ONERA in the DTIS laboratory, specialising in ergonomics and the measurement of interaction performance,
- Camille Debras, Associate Professor at the University of Paris-Nanterre, specialised in gesture studies and linguistics,
- Ouriel Grynszpan, Professor at the University of Paris-Saclay – LIMSI laboratory, specialised in cognitive sciences and social intelligence issues,
- Aliyah Morgenstern, Professor at the University of Paris Nanterre – MoDyCo laboratory, specialised in developmental linguistics and sign language,
- Ludovic Saint-Bauzel, Associate Professor at Sorbonne University – ISIR laboratory, specialised in engineering, information sciences and life sciences,
- Claudia Savina Bianchini, Associate Professor at the University of Poitiers – FORELLIS laboratory, specialised in linguistics and more particularly in sign language.
This project is also the source of an international collaboration with:
- Giovanni Pezzulo, Researcher at the CNR (Italian National Research Centre) in the ISTC laboratory (Institute of Science and Technology of Cognition).