In the last few years there has been an enormous amount of progress in Reinforcement Learning, with breakthroughs in our ability to handle problems with complex dynamics and high-dimensional state and observation spaces. Likewise, generative modeling capabilities have improved dramatically, e.g., in modeling complex high-dimensional distributions over images, audio, and text. Both fields have benefited extensively from the use of flexible function approximators and advances in stochastic optimization, and they currently share many computational and statistical challenges. At the same time, there are exciting opportunities for cross-fertilization of ideas. Generative models, with their promise to accurately capture uncertainty in increasingly complex domains, have much potential to lead to novel approaches to effective and efficient exploration and learning, both key challenges in tackling real-world applications using RL formulations. Likewise, RL techniques are showing promise in extending the capabilities of current generative models. The workshop brings together experts in the areas of generative models and reinforcement learning to identify current limitations, key challenges, and promising research avenues.
09:30 - 10:00
10:15 - 10:45
11:00 - 11:30  Coffee
11:30 - 12:00
12:15 - 12:45
13:00 - 14:00  Lunch
18:00 - 18:30
18:45 - 19:15