Room: Kingfisher Suite

In recent years, ML algorithms have proliferated across safety-critical applications, including face detection, data centers, biometric identification, and self-driving cars. Even outside safety-critical systems, ML algorithms can lead to severely undesirable outcomes, including data leakage and racist or otherwise biased predictions. This calls for stronger checks on ML algorithms and for the ability to train ML models not just to fit training data well, but also to satisfy auxiliary properties necessary for safe deployment. This workshop will discuss challenges and opportunities in developing secure ML systems, bringing together perspectives from formal verification, robust learning, robotics and autonomous systems, and privacy.

09:30–09:45 Opening remarks (Pushmeet Kohli, DeepMind)

09:45–10:25 Why don't we have a provably robust ImageNet classifier yet? (Zico Kolter, CMU)

10:25–11:00 Scalable Training and Verification of Robust Neural Networks (Timon Gehr, ETH)

11:00–11:30 Coffee

11:30–12:15 Role of Simulation in Safe Decision Making under Uncertainty (Ashish Kapoor, Microsoft Research)

12:15–13:00 Structured representations for robust behavior in robots (Subramanian Ramamoorthy, University of Edinburgh)

13:00–14:00 Lunch

14:00–14:40 Toward Practical Tools for Research in Privacy-Preserving Deep Learning (Andrew Trask, University of Oxford, DeepMind)

14:40–15:20 Proofs, Algorithms, and Tools for Private Data Analysis (Adria Gascon, Turing Institute)

15:20–16:00 Panel Discussion on Challenges and Opportunities in Secure ML (Pushmeet Kohli, DeepMind)