Self-Supervised Learning & World Models - ICRA 2020


Animals and humans seem able to learn perception and control tasks extremely quickly: learning to drive a car or land an airplane takes about 30 hours of practice. In contrast, popular machine learning paradigms require large amounts of human-labeled data for supervised learning, or enormous numbers of trials for reinforcement learning. Humans and animals learn vast amounts of background knowledge about how the world works through mere observation, in a task-independent manner. One hypothesis is that it is their ability to learn good representations and predictive models of the perceptual and motor worlds that allows them to learn new tasks efficiently. How do we reproduce this ability in machines? One promising avenue is self-supervised learning (SSL), in which the machine predicts parts of its input from other parts. SSL has already brought about great progress in discrete domains, such as language understanding. The challenge is to devise SSL methods that can handle the stochasticity and multimodality of prediction in high-dimensional continuous domains such as video. Such a paradigm would allow robots to learn world models and to use them for model-predictive control or policy learning. An approach will be presented that handles uncertainty not through a probability distribution but through an energy function, along with an application to driving autonomous vehicles in dense traffic.
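To make the energy-function idea concrete, here is a minimal toy sketch (not the method presented in the talk). All names, the quadratic energy, and the grid-search inference are illustrative assumptions: a latent variable z captures a multimodal outcome (e.g. the car ahead brakes or does not), the energy E(x, y, z) is low when y is a plausible successor of x for some z, and prediction is performed by minimizing the energy rather than sampling a probability distribution.

```python
import numpy as np

# Toy world: the next state y depends on the observation x and an
# unobserved mode z (multimodal outcome, e.g. brake vs. no brake).
def true_dynamics(x, z):
    return x + (1.0 if z > 0 else -1.0)

# Hypothetical learned predictor f(x, z); in practice a trained network.
# Here it reuses the true dynamics so the sketch is self-contained.
def f(x, z):
    return true_dynamics(x, z)

# Energy function: low when y is a plausible successor of x for SOME
# latent z. Uncertainty lives in the shape of E, not in a density.
def energy(x, y, z):
    return (y - f(x, z)) ** 2

def predict(x, candidate_ys, candidate_zs):
    # Inference = joint energy minimization over y and the latent z.
    return min(
        ((y, z) for y in candidate_ys for z in candidate_zs),
        key=lambda yz: energy(x, yz[0], yz[1]),
    )

x = 0.0
ys = np.linspace(-2.0, 2.0, 41)   # candidate next states
zs = [-1.0, 1.0]                  # candidate latent modes
y_hat, z_hat = predict(x, ys, zs)
print(y_hat)  # lands on one of the two modes, near +1 or -1
```

The point of the latent variable is that a single deterministic predictor would average the two modes and output the implausible y = 0; minimizing the energy over z instead commits to one plausible future, which is what a planner or model-predictive controller needs.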


Yann LeCun is VP & Chief AI Scientist at Facebook and Silver Professor at NYU, affiliated with the Courant Institute of Mathematical Sciences and the Center for Data Science. He was the founding Director of Facebook AI Research and of the NYU Center for Data Science. He received an Engineering Diploma from ESIEE (Paris) and a PhD from Sorbonne Université. After a postdoc in Toronto, he joined AT&T Bell Labs in 1988, and AT&T Labs in 1996 as Head of Image Processing Research. He joined NYU as a professor in 2003 and Facebook in 2013. His interests include AI, machine learning, computer perception, robotics, and computational neuroscience. He is the recipient of the 2018 ACM Turing Award (with Geoffrey Hinton and Yoshua Bengio) for "conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing", a member of the National Academy of Engineering, and a Chevalier de la Légion d'Honneur.

