Biography
Dr Sylvain Calinon has been a permanent researcher at the Idiap Research Institute (http://idiap.ch) since 2014, with research interests covering robot learning and human-robot interaction. He is also a lecturer at the École Polytechnique Fédérale de Lausanne (EPFL) and an external collaborator at the Department of Advanced Robotics (ADVR), Italian Institute of Technology (IIT). From 2009 to 2014, he was a Team Leader at ADVR, IIT. From 2007 to 2009, he was a Postdoc at the Learning Algorithms and Systems Laboratory, EPFL. He holds a PhD from EPFL (2007), for which he received the Robotdalen, ABB and EPFL-Press awards. He has co-authored about 90 publications in the field of robot learning, with recognition including the Best Paper Award at IEEE Ro-Man'2007 and Best Paper Award Finalist at IEEE-RAS Humanoids'2009, IEEE/RSJ IROS'2013, ICIRA'2015 and IEEE ICRA'2016. He currently serves as Associate Editor of IEEE Robotics and Automation Letters, Springer Intelligent Service Robotics, Frontiers in Robotics and AI, and the International Journal of Advanced Robotic Systems.
Abstract
Robot learning from few interactions by exploiting data structure and geometry
Human-centric robotic applications often require robots to learn new skills by interacting with end-users and their environments. From a machine learning perspective, the challenge is to acquire skills from only a few interactions, while meeting strong generalization demands. This requires: 1) intuitive active learning interfaces to acquire meaningful demonstrations; 2) models that can efficiently exploit the structure and geometry of the acquired data; and 3) adaptive control techniques that can exploit the learned task variations and coordination patterns. The developed models often need to serve several purposes (recognition, prediction, online synthesis) and be compatible with different learning strategies (imitation, emulation, exploration). For the reproduction of skills, these models need to be enriched with force and impedance information to enable human-robot collaboration and to generate safe and natural movements.
I will present an approach combining model predictive control and statistical learning of movement primitives in multiple coordinate systems. The proposed approach will be illustrated in various applications, with robots either close to us (a robot for dressing assistance), part of us (a prosthetic hand with EMG and tactile sensing), or far from us (teleoperation of a bimanual robot in deep water).
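To give a flavor of what encoding movement primitives "in multiple coordinate systems" can mean, here is a minimal sketch in the spirit of task-parameterized models: each coordinate frame (defined by a rotation A_j and origin b_j) carries a local Gaussian over the demonstrated motion, and the frames are fused in the global frame by a product of Gaussians. All variable names and the two-frame toy setup below are illustrative assumptions, not details from the talk.

```python
import numpy as np

def gaussian_product(mus, sigmas):
    """Fuse several Gaussians (means, covariances) via a product of experts:
    precisions add, and the fused mean is a precision-weighted average."""
    precisions = [np.linalg.inv(s) for s in sigmas]
    sigma = np.linalg.inv(sum(precisions))
    mu = sigma @ sum(p @ m for p, m in zip(precisions, mus))
    return mu, sigma

# Two illustrative frames: here both are axis-aligned at the origin, but each
# is confident (low variance) along a different dimension of the motion.
A = [np.eye(2), np.eye(2)]
b = [np.zeros(2), np.zeros(2)]
local_mu = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
local_sigma = [np.diag([0.01, 1.0]), np.diag([1.0, 0.01])]

# Map each frame's local Gaussian into the global frame, then fuse.
global_mu = [A_j @ m + b_j for A_j, b_j, m in zip(A, b, local_mu)]
global_sigma = [A_j @ s @ A_j.T for A_j, s in zip(A, local_sigma)]
mu, sigma = gaussian_product(global_mu, global_sigma)
# The fused mean ends up close to (1, 1): each frame dominates along the
# dimension where its variance is small.
```

The design point this toy example captures is that generalization comes from the frames themselves: when the task parameters (A_j, b_j) move with the objects in a new situation, the fused Gaussians, and hence the reproduced motion, adapt automatically.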