Draft Schedule

Room: Teatro del Mar


09:30–10:10 Searching for the Principles of Reasoning and Intelligence. Shakir Mohamed, DeepMind

10:15–10:55 Counterfactual Explanations and EU's General Data Protection Regulation. Sandra Wachter, Oxford Internet Institute

11:00–13:00 Coffee Break and Posters

Lunch and Excursion to Timanfaya National Park

18:00–18:40 Probabilistic Machine Learning and AI. Zoubin Ghahramani, University of Cambridge & Uber

19:00–20:00 Panel Session

Dinner slot (attendees to self-organise)

Abstracts


Searching for the Principles of Reasoning and Intelligence.

We are collectively dedicated to a common task: a search for the general principles that make possible machines that learn. This leads to the question: What are the universal principles, if there are any, of reasoning and intelligence in machines? My search begins with four statistical operations that expose the dual tasks of learning and of testing. We can instantiate many different types of inferential questions, and I share some of the paths I've followed in attempting to find general-purpose approaches to them. One such area is variational inference, and I'll briefly discuss the roles of amortised inference, stochastic optimisation, and general-purpose density estimators. For the most part, I'll explore recent work in testing as an inferential principle in implicit probabilistic models, and discuss work in estimation-by-comparison, density ratio estimation, and the method-of-moments. Different types of models require different types of inference, and any general-purpose inferential method remains elusive. I'll unpack some of the current research questions, but there is much more to do; my search for the probabilistic principles of reasoning and intelligence continues.
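The estimation-by-comparison idea the abstract mentions can be illustrated in a few lines: a classifier trained to distinguish samples from two distributions recovers their log density ratio through its logit. The sketch below is not from the talk; the two Gaussians, the features, and all settings are illustrative assumptions of my own.

```python
import numpy as np

rng = np.random.default_rng(0)

# Samples from two 1-D Gaussians: p = N(0, 1) and q = N(1, 1).
# The true log density ratio log p(x)/q(x) = 0.5 - x is linear in x,
# so a logistic classifier with features [x, 1] can represent it exactly.
xp = rng.normal(0.0, 1.0, size=2000)
xq = rng.normal(1.0, 1.0, size=2000)

X = np.concatenate([xp, xq])
y = np.concatenate([np.ones_like(xp), np.zeros_like(xq)])  # 1 = drawn from p
F = np.stack([X, np.ones_like(X)], axis=1)                 # features [x, 1]

# Plain gradient ascent on the logistic log-likelihood.
w = np.zeros(2)
for _ in range(2000):
    prob = 1.0 / (1.0 + np.exp(-F @ w))
    w += 0.1 * F.T @ (y - prob) / len(y)

# With balanced classes, the classifier's logit estimates log p(x)/q(x).
log_ratio = lambda x: w[0] * x + w[1]
print(log_ratio(0.0))  # true value is 0.5; the estimate is close, up to sampling noise
```

The same classifier-based trick underlies density ratio estimation for implicit models, where p and q can only be sampled, never evaluated.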



Counterfactual Explanations and EU's General Data Protection Regulation.

The proliferation of algorithms and AI systems is accelerating across the public (e.g. healthcare and criminal justice) and private (e.g. finance and insurance) sectors. These decision-making systems often operate as black boxes and offer no insight into how they arrived at a decision. Unsurprisingly, calls are growing louder to design systems that can explain themselves. Explanations are viewed as an ideal mechanism to enhance accountability, even though explaining the functionality of complex algorithmic decision-making systems, and their rationale in specific cases, is a technically and legally challenging problem. The EU's General Data Protection Regulation (GDPR) is hoped to require these technologies to be more explainable and accountable. Unfortunately, the new framework raises more questions than it answers. This talk will explain what AI standards will be legally required, and will argue that Counterfactual Explanations can, without opening the black box, help individuals to understand, challenge and alter automated decisions. Counterfactual Explanations bypass the current technical limitations of interpretability, strike a balance between transparency and the rights and freedoms of others (e.g. privacy, trade secrets), and meet and exceed the legal requirements of the GDPR.
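A counterfactual explanation of the kind the abstract describes answers: "what is the smallest change to my inputs that would have flipped the decision?" One common formulation searches for a nearby point whose model score reaches a target, trading closeness against the score gap. The toy model, feature names, target, and weights below are all hypothetical stand-ins, not anything from the talk.

```python
import numpy as np

# A hypothetical scoring model standing in for a black box we can query:
# a logistic "loan score" over two features [income, debt].
w = np.array([2.0, -3.0])
b = -0.5
score = lambda x: 1.0 / (1.0 + np.exp(-(x @ w + b)))

# Counterfactual search: find x' close to x0 whose score approaches the
# target, by minimising (score(x') - target)^2 + lam * ||x' - x0||^2.
x0 = np.array([0.2, 0.5])        # original applicant; score well below 0.5
target, lam, lr = 0.6, 0.1, 0.5
x = x0.copy()
for _ in range(500):
    s = score(x)
    grad_s = s * (1 - s) * w                       # d score / d x
    grad = 2 * (s - target) * grad_s + 2 * lam * (x - x0)
    x -= lr * grad

print(x0, score(x0))   # original point and its (rejected) score
print(x, score(x))     # counterfactual: minimally changed, score pulled toward target
```

The resulting x' reads directly as an explanation ("had your income been this and your debt that, the score would have cleared the threshold"), without exposing the model's internals.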



Probabilistic Machine Learning and AI

Probability theory provides a mathematical framework for understanding learning and for building rational intelligent systems. I will review the foundations of the field of probabilistic AI. I will then highlight some current areas of research at the frontiers, touching on topics such as Bayesian deep learning, probabilistic programming, Bayesian optimisation, and AI for data science. I will also describe how we have organised research at Uber AI Labs and where probabilistic machine learning fits in.
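The probabilistic framing the abstract describes — a prior over unknowns, a likelihood for data, and a posterior combining the two — can be shown end-to-end in the one model where inference is exact: Bayesian linear regression with Gaussian prior and noise. The data, precisions, and weights below are illustrative choices of mine, not material from the talk.

```python
import numpy as np

# Exact Bayesian inference in linear regression:
# prior w ~ N(0, alpha^-1 I), likelihood y = X w + eps with eps ~ N(0, beta^-1).
rng = np.random.default_rng(1)
w_true = np.array([1.5, -0.7])
X = rng.normal(size=(50, 2))
y = X @ w_true + rng.normal(scale=0.1, size=50)

alpha, beta = 1.0, 100.0                     # prior precision, noise precision
A = alpha * np.eye(2) + beta * X.T @ X       # posterior precision matrix
mean = beta * np.linalg.solve(A, X.T @ y)    # posterior mean
cov = np.linalg.inv(A)                       # posterior covariance

print(mean)  # close to w_true; cov quantifies the remaining uncertainty
```

The posterior covariance is the point of the exercise: unlike a point estimate, it says how much the data have actually pinned down, which is what downstream tools such as Bayesian optimisation exploit.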