Safe Machine Learning

Specification, Robustness and Assurance

ICLR 2019 Workshop. Monday May 6th.

Room R6, Ernest N. Morial Convention Center, New Orleans.


Overview

The ultimate goal of ML research should be to have a positive impact on society and the world. As the number of applications of ML increases, it becomes more important to address a variety of safety issues; both those that already arise with today's ML systems and those that may be exacerbated in the future with more advanced systems.

Current ML algorithms tend to be brittle and opaque, reflect undesired biases in the data, and often optimize for objectives that are misaligned with human preferences. We can expect many of these issues to get worse as our systems become more advanced (e.g. finding more clever ways to optimize for a misspecified objective). This workshop aims to bring together researchers in diverse areas such as reinforcement learning, formal verification, value alignment, fairness, privacy, and security to further the field of safety in machine learning.

The workshop will focus on three broad categories of ML safety problems: specification, robustness, and assurance. Specification means defining the purpose of the system; robustness means designing the system to withstand perturbations; and assurance means monitoring, understanding, and controlling system activity before and during its operation.

For more information on the research areas and about submitting, see our Call for Papers.

Invited Speakers

Contributed Speakers

Panelists

Organizing committee

Contact: safe.ml.iclr2019@gmail.com

Program committee

  • Adam Gleave (University of California, Berkeley)
  • Ananya Kumar (Stanford University)
  • Anish Athalye (Massachusetts Institute of Technology)
  • Antti Honkela (University of Helsinki)
  • Berk Ustun (Harvard University)
  • Chris Russell (Alan Turing Institute)
  • Christos Dimitrakakis (Chalmers University of Technology)
  • El Mahdi El Mhamdi (École Polytechnique Fédérale de Lausanne)
  • Eric Wong (Carnegie Mellon University)
  • Geoffrey Irving (OpenAI)
  • Gillian Hadfield (University of Toronto)
  • Ian Goodfellow (Google Brain)
  • Jacob Steinhardt (Stanford University)
  • Jaime Fisac (University of California, Berkeley)
  • Jan Leike (DeepMind)
  • Josh Kroll (Princeton University)
  • Joshua Achiam (OpenAI)
  • Julius Adebayo (Massachusetts Institute of Technology)
  • Kai Xiao (Massachusetts Institute of Technology)
  • Krishnamurthy (Dj) Dvijotham (DeepMind)
  • Kush Varshney (IBM Research)
  • Lisa Anne Hendricks (University of California, Berkeley)
  • Luca Oneto (University of Genoa)
  • Ludwig Schmidt (University of California, Berkeley)
  • Min Wu (University of Oxford)
  • Muhammad Bilal Zafar (Bosch Center for AI)
  • Nicholas Carlini (Google Brain)
  • Niki Kilbertus (University of Cambridge)
  • Nina Grgic-Hlaca (Max Planck Institute for Software Systems, MPI-SWS)
  • Paul Christiano (OpenAI)
  • Ramana Kumar (DeepMind)
  • Rohin Shah (University of California, Berkeley)
  • Rudy Bunel (University of Oxford)
  • Sarah Tan (Cornell University)
  • Seyed-Mohsen Moosavi (École Polytechnique Fédérale de Lausanne)
  • Smitha Milli (University of California, Berkeley)
  • Solon Barocas (Cornell University)
  • Sven Gowal (DeepMind)
  • Tameem Adel (University of Cambridge)
  • Taylan Cemgil (Bogazici University, DeepMind)
  • Tatsunori Hashimoto (Massachusetts Institute of Technology)
  • Tom Everitt (DeepMind)
  • William Isaac (DeepMind)
  • Xiaowei Huang (University of Liverpool)

Thanks to Tom Everitt, Taylan Cemgil, Ray Jiang, Christina Heinze-Deml, and Andrew Trask for providing emergency reviews.