ISIR, LIP6 and SCAI are honored to invite Professor Patrick Haggard to the Virtual Colloquium, which will take place on Wednesday, November 25 at 6:00 pm.
A professor of cognitive neuroscience at University College London (UCL), he will speak on “Responsibility for intelligent machines: a cognitive approach”.
Patrick Haggard leads the “Action and Body” research group at the Institute of Cognitive Neuroscience, University College London. In 2020 he holds the Jean D’Alembert visiting professorship at IEA-Paris/Paris-Saclay. His core research interests lie in the sensory and motor bases of human cognition. He has published numerous articles on the bases of voluntary action, agency and responsibility in the human brain. He has a particular interest in the ethics of technologically mediated human action, and has published on the ethics of virtual reality (VR).
Abstract: Much is written about “responsible AI” – but what does responsibility mean in this context? This talk begins by considering the cognitive basis of human responsibility, in order to inform comparisons between human and artificial agents. Human agents make a mental link between their intended action and the outcome of that action. I will show that this mental link underpins the everyday experiences of sense of agency and responsibility – which algorithmic systems currently lack. Human agency has two important features that make (most) humans safe agents for us to interact with. First, human agents can step back from a current goal when circumstances render that goal no longer appropriate. Many artificial agents still rely on a human override to perform this stepping-back function. Second, while human actions have low explainability (we often don’t know why we do what we do), they can have high fixability (we often change what we do, given appropriate learning signals). Discussions about the explainability of AI should be replaced by discussions of fixability. Finally, I will consider the social dimension of human and machine action. The human senses of agency and responsibility are carefully trained by society, through reinforcement and cultural learning in early childhood experiences that we do not generally remember. The public sphere is increasingly inhabited and shaped by artificial agents. I will consider what cognitive attributes AIs will need in order for us to cohabit with them, as opposed to merely using them, or avoiding them.
Access link: https://us02web.zoom.us/j/82918805992
Link to the Colloquium web page: https://www.lip6.fr/colloquium/