Room: Teatro del Mar

In the last few years there has been enormous progress in Reinforcement Learning, with breakthroughs in our ability to handle problems with complex dynamics and high-dimensional state and observation spaces. Likewise, generative modeling capabilities have improved dramatically, e.g., in modeling complex high-dimensional distributions over images, audio, and text. Both fields have benefited extensively from the use of flexible function approximators and advances in stochastic optimization, and they currently share many computational and statistical challenges. At the same time, there are exciting opportunities for cross-fertilization of ideas. Generative models, with their promise to accurately capture uncertainty in increasingly complex domains, have much potential to lead to novel approaches to effective and efficient exploration and learning, both key challenges in tackling real-world applications using RL formulations. Likewise, RL techniques are showing promise in extending the capabilities of current generative models. The workshop brings together experts in the areas of generative models and reinforcement learning to identify current limitations, key challenges, and promising research avenues.

09:30-09:45 Welcome and Opening

09:45-10:15 Structure Learning in Generative Deep Learning - Pascal Poupart, University of Waterloo

10:20-10:50 Posterior sampling for reinforcement learning: worst case regret bounds - Shipra Agrawal, Columbia University

10:50-11:20 Coffee

11:20-11:50 Deep Exploration via Randomized Value Functions - Ian Osband, DeepMind

11:55-12:25 Noisy natural gradient as variational inference - Roger Grosse, University of Toronto

12:30-13:00 Planning in reinforcement learning with learned models in Dyna - Martha White, University of Alberta

13:00-14:00 Lunch

15:00-15:30 Backtracking model for Efficient Reinforcement Learning - Anirudh Goyal, Université de Montréal

15:35-16:05 A Case Against Generative Models in RL? - Shakir Mohamed, DeepMind

16:05-17:00 Discussion Panel