Room: Teatro del Mar

Deep learning has driven a revolution in several areas of machine learning, thanks in large part to its ability to discover rich and useful representations from data. Despite these empirical successes, especially in supervised learning, significant open questions remain about the computational goal of unsupervised and semi-supervised representation learning. How can we characterise the goal of unsupervised representation learning? What criteria should one use to evaluate and compare different data representations? Are maximum likelihood, variational inference, the information bottleneck, or hierarchical Bayesian models fundamentally fit for this purpose, and if so, why? Is representation learning a fundamental problem, or an epiphenomenon of other, perhaps more fundamental, tasks such as transfer learning and data efficiency? The goal of this workshop is to explore recent efforts in representation learning, such as disentangling factors of variation, manifold learning, and transfer learning; the role of prior knowledge, structure, sparsity, invariances, and smoothness; and the role of representation in problems in reinforcement learning, natural language, and scene understanding. We shall bring together machine learning researchers with different views on these questions to stimulate further discussion and progress.


09:30-09:55 Opening remarks (Ferenc Huszár, Twitter)

09:55-10:30 Inference, relevance and statistical structure: the constraints for useful representations (Harri Valpola, Curious AI)

10:30-11:00 Learning Representations for Hyperparameter Transfer Learning (Cedric Archambeau, Amazon)

11:00-11:30 Coffee

11:30-12:00 Possible formalizations of Representation Learning (Olivier Bousquet, Google Brain)

12:00-12:30 Supervised Learning of Rules for Unsupervised Representation Learning (Jascha Sohl-Dickstein, Google Brain)

12:30-13:00 Unsupervised Disentanglement or How to Transfer Skills and Imagine Things (Irina Higgins, DeepMind)

13:00-18:00 Break

18:00-18:30 Neural decision making in human and machine vision (Matthias Bethge, University of Tübingen)

18:30-19:00 Symbolic representation learning (Marta Garnelo, Imperial College London)

19:00-19:30 Open Discussion and Debate