Room: Sala Ginepro
- Chloé-Agathe Azencott
- Katherine Gorman
- Richard Mallah
- Conrad McDonnell
- Jonathan Price
- John Quinn
- Noel Sharkey
- Daniel Susskind
9:30-10:00 The Future of the Professions and the 'AI Fallacy' Daniel Susskind
10:00-10:25 Why Autonomous Warfare is a Bad Idea Noel Sharkey [slides]
10:25-10:50 The Landscape of AI Safety/Beneficence Research Richard Mallah [slides]
10:50-11:15 How to Squash the Hype: Practical strategies for bringing the reality of research into the public conversation Katherine Gorman
11:45-12:10 Machine Learning in the Developing World John Quinn
12:10-12:25 Bernhard Schölkopf
12:25-13:00 Panel Discussion Danielle Belgrave, Joaquin Quiñonero Candela, Bernhard Schölkopf and Neil Lawrence (to include issues of data privacy and responsibility)
Lunch and Afternoon Activities
18:00-18:25 It wasn’t me, my robot did it Conrad McDonnell [slides]
18:25-18:50 Open Science and Genomic Privacy Chloé-Agathe Azencott [slides]
18:50-19:15 Privacy and Data Protection in the Big Data Age Jonathan Price [slides]
19:15-20:00 Panel Discussion Noel Sharkey, Jonathan Price, Richard Mallah, Conrad McDonnell, Adrian Weller (to include legal issues, and longer term themes)
How to Squash the Hype: Practical strategies for bringing the reality of research into the public conversation Katherine Gorman
Almost all media coverage of ML and AI research shares one characteristic: hyperbole. Whether positive or negative, hyperbole hurts the field by raising hopes and entrenching fears, creating unrealistic expectations for the tools being developed. We'll explore some basic strategies for engaging with journalists and helping to preserve the reality of your research, and of the field, when that story is retold.
Katherine Gorman is a Boston-based podcast producer. In partnership with Ryan Adams (Harvard, Twitter) she created the podcast Talking Machines, which she produces and Adams hosts. With new episodes every other Thursday, Talking Machines focuses on telling the real story of current research in the machine learning and artificial intelligence fields. Her work focuses on the human stories at the heart of complex subjects. After almost a decade in public radio, Gorman left NPR and WBUR's daily news show Here and Now to create original podcasts exclusively. She was once bitten by a guest she was interviewing (yes, it was a human; no, they don't do ML research).
Machine Learning in the Developing World John Quinn
Technological and social changes across the developing world have led to a proliferation of new digital data sources, and with them opportunities for machine learning to be applied in new ways. At an individual scale, it can be useful to automate the judgement of scarce experts (e.g. in laboratory diagnostics or in diagnosing diseases in crops); at a population scale, the information gaps that hamper effective planning can be addressed (e.g. by using satellite imagery to provide timely and detailed assessments of poverty, or using telecoms-based data on population mobility to improve epidemiological predictions). I will describe some of the opportunities and obstacles for such applications, using examples of systems developed in Uganda.
John Quinn is a Data Scientist in the United Nations Global Pulse lab in Kampala, and part of the Artificial Intelligence group at Makerere University, Uganda. His research interests are in the application of artificial intelligence and data science techniques to practical problems in health, agriculture and other domains. http://air.ug/~jquinn
It wasn’t me, my robot did it Conrad McDonnell
'It wasn’t me, my robot did it': Liability in current legal systems depends on the foreseeability of harm and on the concept of causation, both of which are challenging to apply to machine learning systems. Even so, there is a need to attribute liability: should it fall on the owner, the developer, the operator, or the user? One possible approach is the concept of vicarious liability, by analogy with parental liability for the acts of a child. A real example of the problem is Tay, Microsoft’s Twitter chatbot, which learned to make racist, homophobic, and anti-Semitic tweets.
Realistic risk areas in the immediate future include hacking and cyber crime by AIs (the primary risk), intellectual property theft, financial crime, distortion of search results or news articles, misrepresentation (libel or slander), racism, hate speech, pornography and other inappropriate communication, and the suspension of utilities on which society depends (can a machine take industrial action?). Accidental or unintended harm seems more likely than intended harm. Existing legal systems are adequate, or adaptable, for these risks; in general, new laws and regulations are not required. There is also the possibility of self-regulation by the industry.
Legal personality: Legal rights and direct personal liability may effectively be conferred on a machine. Achievable legal structures include the AI-controlled company and the self-owned AI. Legal penalties with a meaningful impact on such systems would be based on corporate liability and might include financial penalties, temporary suspension of access to markets or other systems, or, as a last resort, permanent suspension or dissolution. However, such penalties could potentially be avoided by duplicating an operational system.
Open Science and Genomic Privacy Chloé-Agathe Azencott
Machine learning has the potential for major societal impact in computational biology applications. In particular, it plays a central role in the development of precision medicine, whereby treatment is tailored to the clinical or genetic specificities of each patient. However, these advances require collecting large amounts of genomic data and sharing them among researchers, which generates much concern about privacy. I will review recent trends in both compromising and protecting genomic privacy.
Chloé-Agathe Azencott is a junior research faculty member at Mines ParisTech (Paris, France). She belongs to the Centre for Computational Biology, a joint research group between Mines ParisTech, Institut Curie and INSERM focusing on bioinformatics for cancer research. She holds a PhD in computer science from the University of California, Irvine (USA), which she obtained in 2010. From 2011 to 2013 she was a postdoctoral fellow in the Machine Learning for Computational Biology research group of the Max Planck Institutes for Developmental Biology and Intelligent Systems in Tübingen (Germany). Her research interests revolve around developing machine learning approaches for therapeutic research, ranging from chemoinformatics methods for drug discovery to the analysis of large-scale, heterogeneous, whole-genome data for precision medicine. For more details see http://cazencott.info.