Elliot Chane-Sane

Postdoctoral Researcher in Robot Learning

I am a postdoctoral researcher at LAAS-CNRS in the Gepetto team working on robot learning under the supervision of Nicolas Mansard. Previously, I completed my PhD in the Willow team of Inria Paris and École Normale Supérieure, advised by Ivan Laptev and Cordelia Schmid. I am interested in deep learning and robotics, with a particular focus on imitation and reinforcement learning for control.

Prior to research, I received an engineering degree from École Polytechnique and a MASt (Part III) in Mathematical Statistics from the University of Cambridge.

Email  /  Google Scholar  /  LinkedIn  /  GitHub


Research

SoloParkour: Constrained Reinforcement Learning for Visual Locomotion from Privileged Experience
Elliot Chane-Sane*, Joseph Amigo*, Thomas Flayols, Ludovic Righetti, Nicolas Mansard
CoRL, 2024
project page / video / arXiv / code

End-to-end visual reinforcement learning for agile legged robot parkour

CaT: Constraints as Terminations for Legged Locomotion Reinforcement Learning
Elliot Chane-Sane*, Pierre-Alexandre Leziart*, Thomas Flayols, Olivier Stasse, Philippe Souères, Nicolas Mansard
IROS, 2024
project page / video / arXiv / code

Simple and effective constrained reinforcement learning and its application to agile legged locomotion

Learning Video-Conditioned Policies for Unseen Manipulation Tasks
Elliot Chane-Sane, Cordelia Schmid, Ivan Laptev
ICRA, 2023
project page / arXiv

Zero-shot transfer from human video demonstrations to robot manipulation

Goal-Conditioned Reinforcement Learning with Imagined Subgoals
Elliot Chane-Sane, Cordelia Schmid, Ivan Laptev
ICML, 2021
project page / arXiv / code

Hierarchical reinforcement learning to train non-hierarchical policies for long-horizon goal-reaching tasks


Design and source code from Jon Barron's website.