Deep learning has driven a revolution in several areas of machine learning, thanks in large part to its ability to discover rich and useful representations from data. Despite these empirical successes, especially in supervised learning, significant open questions remain about the computational goal of unsupervised and semi-supervised representation learning. How can we characterise the goal of unsupervised representation learning? What criteria should one use to evaluate and compare different data representations? Are maximum likelihood, variational inference, the information bottleneck, or hierarchical Bayesian models fundamentally fit for this purpose, and if so, why? Is representation learning a fundamental problem, or an epiphenomenon of other, perhaps more fundamental, tasks such as transfer learning and data efficiency? The goal of this workshop is to explore recent efforts in representation learning and related topics, such as disentangling factors of variation, manifold learning, and transfer learning; the role of prior knowledge, structure, sparsity, invariances, and smoothness; and the role of representations in problems in reinforcement learning, natural language, and scene understanding. We shall bring together machine learning researchers with different views on these questions to stimulate further discussion and progress.
09:30 - 10:00

10:15 - 10:45

11:00 - 11:30 Coffee

11:30 - 12:00

12:15 - 12:45

13:00 - 14:00 Lunch

18:00 - 18:30

18:45 - 19:15