Room: Drago-Adeje


09:30 - 10:10  Probabilistic Deep Learning - Stock Take 2017 (Sebastian Nowozin, Microsoft Research Cambridge)

10:15 - 10:55  On Possible Relationships between Episodic Memory, Semantic Memory and Perception (Volker Tresp, Siemens)

11:00 - 13:00  Coffee Break and Posters

Lunch

15:00 - 15:45  Data Science using the Wolfram Language (Sebastian Bodenstein, Wolfram)

16:00 - 17:00  Panel Session

Dinner slot (attendees to self-organise)

Abstracts


Probabilistic Deep Learning - Stock Take 2017

Over the last three years, powerful probabilistic deep learning models have been developed: the variational autoencoder, generative adversarial networks, and maximum mean discrepancy models. These models provide tractable-by-design inference approximations at test time and can be learned efficiently. I will present some work from my group and briefly survey the broader progress made in combining probabilistic models with deep learning, including a summary of open research questions.
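
To make the "tractable-by-design" point concrete, the variational autoencoder is the textbook example: an encoder network q_\phi(z|x) and a decoder network p_\theta(x|z) are trained jointly by maximising the evidence lower bound (the standard formulation, stated here for reference rather than taken from the talk):

    \log p_\theta(x) \;\ge\; \mathbb{E}_{q_\phi(z \mid x)}\left[\log p_\theta(x \mid z)\right] - \mathrm{KL}\left(q_\phi(z \mid x) \,\|\, p(z)\right)

Because the approximate posterior q_\phi is an explicit neural network, inference at test time is a single forward pass rather than an iterative optimisation, which is what makes the approximation tractable by design.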



On Possible Relationships between Episodic Memory, Semantic Memory and Perception

In recent years, a number of large-scale triple-oriented knowledge graphs have been generated. They are used in research and in applications to support search, text understanding and question answering. Knowledge graphs pose new challenges for machine learning, and research groups have developed novel statistical models that can be used to compress knowledge graphs, derive implicit facts, detect errors, and support the above-mentioned applications. Some of the most successful statistical models are based on tensor decompositions that use latent representations of the involved generalized entities. In my talk I will address the question of whether these models might also provide insight into the brain's memory system. In particular, I will discuss how episodic memory, semantic memory and perception are all mutually dependent.
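
As one concrete instance of such a tensor decomposition (a RESCAL-style bilinear factorization, given as a representative example rather than as the talk's specific model), every entity e receives a latent vector \mathbf{a}_e \in \mathbb{R}^r and every predicate p a matrix \mathbf{R}_p \in \mathbb{R}^{r \times r}, and a triple (s, p, o) is scored as

    f(s, p, o) \;=\; \mathbf{a}_s^{\top} \mathbf{R}_p \, \mathbf{a}_o

High-scoring triples that are absent from the graph are candidate implicit facts, while stored triples with low scores are candidate errors, so a single factorization supports both completion and error detection.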



Data Science using the Wolfram Language

The Wolfram Language provides a unique environment for doing data science: highly automated machine learning, a neural network framework built into the language itself, easy cloud deployment, and powerful symbolic mathematical capabilities. The focus of this talk will be on the neural network framework. The first aim of the framework is to meld automation, flexibility, and scalability; specifics will be discussed, such as automating the process of efficiently training networks on variable-length sequences. The second aim is to provide easy access to the widest possible set of pre-trained models, first by curated conversion of existing models from other frameworks (Caffe, TensorFlow, MXNet, Torch, DarkNet, etc.), and second by a large-scale effort to build 30+ user-facing functions (e.g., ImageIdentify, ImageColorize, LanguageTranslate) using the network framework and exposing the underlying trained networks to users. This effort involves a major curation, data-management and training challenge.
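
Efficient training on variable-length sequences, mentioned above, typically comes down to bucketing sequences of similar length and padding only within each bucket, which the framework automates. A minimal sketch of that bucketing idea in Python (illustrative only, not the Wolfram Language implementation; all names are hypothetical):

    from collections import defaultdict

    def make_length_buckets(sequences, bucket_width=10, pad_value=0):
        """Group sequences of similar length, then pad each group to a
        shared length. Padding within buckets wastes far less compute
        than padding every sequence to the global maximum length."""
        buckets = defaultdict(list)
        for seq in sequences:
            # Sequences whose lengths fall in the same window share a bucket.
            buckets[len(seq) // bucket_width].append(seq)
        batches = []
        for group in buckets.values():
            target = max(len(seq) for seq in group)
            # Pad every sequence only up to its own bucket's maximum.
            batches.append([seq + [pad_value] * (target - len(seq)) for seq in group])
        return batches

    # Example: three short sequences and one long one form two buckets.
    data = [[1, 2], [3, 4, 5], [6], [7] * 25]
    for batch in make_length_buckets(data):
        print(len(batch), "sequences padded to length", len(batch[0]))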