UCL ELLIS

Faster Learning and Richer Models for the Next AI Challenges


Speaker

Marc Deisenroth

Affiliation

Imperial College London

Date

Friday, 21 June 2019

Time

13:00-14:00

Location

1-19 Torrington Place G12

Link

Zoom

Event series

DeepMind/ELLIS CSML Seminar Series

Abstract

High-impact areas of machine learning and AI, such as personalized healthcare, autonomous robots, or environmental science, share some practical challenges: they are either small-data problems or small collections of big-data problems. Therefore, learning algorithms need to be data/sample efficient, i.e., they need to be able to learn in complex domains from fairly small datasets. Approaches for data-efficient learning include probabilistic modeling and inference, Bayesian deep learning, meta-learning, Bayesian optimization, and few-shot learning, among others.

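As an illustration of the first approach on that list, the sketch below (not taken from the talk; the toy sine dataset and the use of scikit-learn's GaussianProcessRegressor are assumptions made here for illustration) shows how a probabilistic model fitted to a handful of points returns a predictive mean together with an uncertainty estimate that grows away from the data.

```python
# Minimal sketch: probabilistic regression from a small dataset with a
# Gaussian process, which provides calibrated uncertainty estimates.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Small training set: 8 noisy observations of a smooth function.
rng = np.random.default_rng(0)
X_train = rng.uniform(-3.0, 3.0, size=(8, 1))
y_train = np.sin(X_train).ravel() + 0.1 * rng.standard_normal(8)

# GP prior: RBF kernel plus a learned observation-noise term.
kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(X_train, y_train)

# Posterior mean and standard deviation on a test grid; the standard
# deviation quantifies how little the model knows far from the data.
X_test = np.linspace(-5.0, 5.0, 100).reshape(-1, 1)
mean, std = gp.predict(X_test, return_std=True)
print(mean[:3], std[:3])
```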

In this talk, Marc will give a brief overview of some approaches to tackle the data-efficiency challenge. First, he will discuss a data-efficient reinforcement learning algorithm, which highlights the necessity for probabilistic models in RL. He will then present a meta-learning method for generalizing knowledge across tasks. Finally, he will motivate deep Gaussian processes, richer probabilistic models composed of relatively simple building blocks. He will briefly discuss the model, its inference, and some potential extensions, which can be valuable for modeling complex relationships while providing uncertainty estimates that are useful in any downstream decision-making process.

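For readers unfamiliar with the last model mentioned above, a deep Gaussian process (see references 3 and 4 below) can be written as a composition of GP-distributed layers; the notation below is a standard textbook form and not necessarily the one used in the talk.

```latex
% Deep GP: a composition of L GP layers mapping input x to output y,
% with Gaussian observation noise added at the top layer.
\begin{align}
  y &= f_L\big(f_{L-1}(\cdots f_1(x) \cdots)\big) + \varepsilon,
      & \varepsilon &\sim \mathcal{N}(0, \sigma^2), \\
  f_\ell &\sim \mathcal{GP}\big(m_\ell(\cdot),\, k_\ell(\cdot,\cdot)\big),
      & \ell &= 1, \dots, L.
\end{align}
% Each layer is a simple probabilistic building block; their composition
% models complex relationships while propagating uncertainty through
% the hierarchy.
```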



Key references


  1. Marc P. Deisenroth, Dieter Fox, Carl E. Rasmussen, Gaussian Processes for Data-Efficient Learning in Robotics and Control, IEEE Transactions on Pattern Analysis and Machine Intelligence, volume 37, pp. 408–423, 2015

  2. Steindór Sæmundsson, Katja Hofmann, Marc P. Deisenroth, Meta Reinforcement Learning with Latent Variable Gaussian Processes, Proceedings of the Conference on Uncertainty in Artificial Intelligence (UAI), 2018

  3. Hugh Salimbeni, Marc P. Deisenroth, Doubly Stochastic Variational Inference for Deep Gaussian Processes, Advances in Neural Information Processing Systems (NIPS), 2017

  4. Hugh Salimbeni, Vincent Dutordoir, James Hensman, Marc P. Deisenroth, Deep Gaussian Processes with Importance-Weighted Variational Inference, International Conference on Machine Learning (ICML), 2019


Biography