Room: Drago-Adeje

Reinforcement Learning (RL) has seen spectacular developments recently. A key challenge for many applications of RL is data efficiency, i.e., how to learn from limited data. In this workshop we focus on topics central to progress on data-efficient RL, drawing on disciplines including control, robotics, personalized healthcare and machine learning: probabilistic methods, approximation techniques, experimental design, the exploration/exploitation tradeoff, data-efficiency benchmarking, and others.

Speakers:

  • Aldo Faisal
  • Max Jaderberg
  • Melanie Zeilinger
  • Roberto Calandra
  • Thomas Schön
  • Chris Watkins
  • Pierre-Luc Bacon


Schedule:
09:30–10:00 Learning flexible models of nonlinear dynamical systems — Thomas Schön, Uppsala University

10:00–10:30 Towards Safe Learning during Closed-loop Control — Melanie Zeilinger, ETH Zurich

10:30–11:00 Goal-Driven Dynamics Learning for Model-Based RL — Roberto Calandra, UC Berkeley

11:30–12:00 When the patient in front of you is the data source: (Machine) learning to adapt in real time to acute clinical settings — Aldo Faisal, Imperial College

12:00–12:30 Unifying Multi-Step Reinforcement Learning Methods through Matrix Splittings — Pierre-Luc Bacon, McGill University

12:30–13:00 Unsupervised Learning for RL — Max Jaderberg, DeepMind

18:00–18:30 Innate Knowledge — Chris Watkins, Royal Holloway

18:30–20:00 Panel discussion