Room: Conference room 6

What ideas are important that aren't discussed enough? As machine learning has become more successful, more researchers are looking at the questions that drive the field. But are there issues that we are missing? Ideas that are not getting the attention they deserve? In this workshop each presenter will give a 20-minute overview of an idea that they believe is not getting enough attention in the wider community, followed by a long discussion of the idea and where it might be deployed.

09:30-09:50 Problematic confidence: The Role of Uncertainty when Data is Estimated from Data
Aasa Feragen, University of Copenhagen

10:15-10:35 Bootstrap Prediction and Bayesian Prediction Under Misspecified Models
Sebastian Nowozin, Google Brain

11:00-11:30 Coffee Break

11:30-11:50 Wiener’s Yellow Peril
Simo Särkkä, Aalto University

13:00 Lunch Break

16:00-16:20 Operational representation learning?
Søren Hauberg, DTU

16:45-17:05 Online learning in the presence of long-range dependencies
Azadeh Khaleghi, University of Lancaster

17:30-18:00 Coffee Break

18:00-18:20 Statistics or Evolution? Your Choice
Chris Watkins, Royal Holloway

18:45-19:05 Carl Rasmussen, University of Cambridge



Abstracts


Wiener’s Yellow Peril

Abstract

Wiener's 1949 book 'Extrapolation, Interpolation, and Smoothing of Stationary Time Series' is sometimes called the 'Yellow Peril'. More precisely, the nickname was given to the original classified report from 1942. The book presents methods for making predictions from noisy observations of a random function, where the random function is modeled as a Gaussian process. This methodology is essentially what is nowadays called Gaussian process regression, or Kalman smoothing in the temporal case. A causal version of the predictor is called the Wiener filter, and it is a precursor of Kalman filtering. A rich theory for Wiener filtering and smoothing is available in terms of Wiener's generalized harmonic analysis, and it would be beneficial to revisit this theory when analyzing the theoretical properties of Gaussian process regression methods and related kernel methods.
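To make the connection to modern Gaussian process regression concrete, here is a minimal sketch of GP prediction from noisy observations of a random function, in plain NumPy. The squared-exponential kernel, lengthscale, noise level, and test data are illustrative assumptions, not taken from the talk.

```python
import numpy as np

def rbf_kernel(x1, x2, lengthscale=1.0, variance=1.0):
    """Squared-exponential covariance between two 1-D input sets."""
    sqdist = (x1[:, None] - x2[None, :]) ** 2
    return variance * np.exp(-0.5 * sqdist / lengthscale ** 2)

def gp_posterior(x_train, y_train, x_test, noise_var=0.1):
    """Posterior mean/variance of a zero-mean GP given noisy observations."""
    K = rbf_kernel(x_train, x_train) + noise_var * np.eye(len(x_train))
    K_s = rbf_kernel(x_train, x_test)
    K_ss = rbf_kernel(x_test, x_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = np.diag(K_ss) - np.sum(v ** 2, axis=0)
    return mean, var

# Noisy observations of a latent random function (illustrative data).
rng = np.random.default_rng(0)
x_train = np.linspace(0, 10, 20)
y_train = np.sin(x_train) + 0.3 * rng.standard_normal(20)
x_test = np.linspace(0, 12, 100)  # extends past the data: extrapolation
mean, var = gp_posterior(x_train, y_train, x_test)
```

The test inputs deliberately extend beyond the training range, covering the extrapolation, interpolation, and smoothing regimes of Wiener's title in one posterior computation.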



Online learning in the presence of long-range dependencies

Abstract

An important, yet rarely addressed, problem in machine learning is dealing with the long-range dependencies that naturally exist in most real-world datasets. This is particularly challenging in online learning, where the sampling policy may affect the distribution of observations. I will demonstrate this challenge in the context of a restless multi-armed bandit problem where the pay-offs are generated by stationary $\phi$-mixing processes.
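As a sketch of the setting only (not of the talk's method), the code below simulates a two-armed restless bandit whose pay-offs come from stationary two-state Markov chains; ergodic finite-state chains are $\phi$-mixing, matching the stated assumption. The transition matrices, pay-off values, and the naive epsilon-greedy learner are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Each arm is a stationary two-state Markov chain; ergodic finite-state
# chains are phi-mixing. Transition matrices and pay-offs are illustrative.
P = [np.array([[0.9, 0.1], [0.2, 0.8]]),   # slowly mixing arm
     np.array([[0.6, 0.4], [0.5, 0.5]])]   # fast-mixing arm
payoff = [np.array([0.0, 1.0]), np.array([0.2, 0.6])]

states = [0, 0]
estimates = np.zeros(2)
counts = np.zeros(2)
total = 0.0

for t in range(10_000):
    # All arms evolve at every step ("restless"), pulled or not.
    for a in range(2):
        states[a] = rng.choice(2, p=P[a][states[a]])
    # Naive epsilon-greedy baseline: with long-range dependence the
    # sampling policy shapes what is observed, so this can be badly biased.
    if rng.random() < 0.1 or counts.min() == 0:
        arm = int(rng.integers(2))
    else:
        arm = int(np.argmax(estimates))
    r = payoff[arm][states[arm]]
    counts[arm] += 1
    estimates[arm] += (r - estimates[arm]) / counts[arm]
    total += r

print(f"average reward: {total / 10_000:.3f}, estimates: {estimates}")
```

Because the learner only observes the arm it pulls, and each arm's state is serially dependent, the i.i.d. reward assumption behind standard bandit analyses fails here; that gap is exactly the challenge the abstract raises.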