Fields Colloquium

Event Information
Lipschitz Regularized Deep Neural Networks Converge and are Robust to Adversarial Perturbations
14:10 to 15:00 on Wednesday, October 24, 2018
Stewart Library, Fields Institute, 222 College St.
Adam Oberman
https://www.adamoberman.net/
McGill University

Deep neural networks perform much better than traditional machine learning methods on a number of tasks. However, they lack performance guarantees, which limits the use of the technology in real-world and real-time applications where errors can be costly. The first step towards such guarantees is a proof of generalization, but a recent machine learning paper, "Understanding deep learning requires rethinking generalization," showed that traditional machine learning generalization theory does not apply to deep neural networks.

We will prove that Lipschitz regularized DNNs converge, with a rate of convergence, which implies generalization. The proof uses a theory based on variational methods for inverse problems. The regularization leads to robust networks that are more resistant to adversarial examples.
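As a rough sketch of the idea (the precise regularized functional and its analysis are in the papers linked below), Lipschitz regularization augments the empirical loss with a penalty on the network's Lipschitz constant, which in practice is commonly estimated by input-gradient norms. The PyTorch sketch below is illustrative, not the construction from the talk: the names lipschitz_regularized_loss, lam, and lip_target are hypothetical, and the gradient-norm proxy for the Lipschitz constant is an assumption of this sketch.

import torch
import torch.nn as nn
import torch.nn.functional as F

def input_gradient_norm(model, x, y):
    """Per-example norm of the loss gradient with respect to the input,
    a standard computable proxy for the local Lipschitz constant of the
    loss composed with the network."""
    x = x.clone().requires_grad_(True)
    # reduction="sum" makes each example's input gradient equal to the
    # gradient of its own loss term.
    loss = F.cross_entropy(model(x), y, reduction="sum")
    (grad,) = torch.autograd.grad(loss, x, create_graph=True)
    return grad.flatten(1).norm(dim=1)

def lipschitz_regularized_loss(model, x, y, lam=0.1, lip_target=1.0):
    """Empirical loss plus a penalty on the excess of the estimated
    Lipschitz constant over a target value lip_target."""
    base = F.cross_entropy(model(x), y)
    lip = input_gradient_norm(model, x, y)
    excess = torch.clamp(lip - lip_target, min=0.0)
    return base + lam * (excess ** 2).mean()

# Usage on a toy classifier:
model = nn.Sequential(nn.Flatten(), nn.Linear(784, 128),
                      nn.ReLU(), nn.Linear(128, 10))
x = torch.randn(32, 1, 28, 28)
y = torch.randint(0, 10, (32,))
loss = lipschitz_regularized_loss(model, x, y)
loss.backward()

Penalizing only the excess over lip_target, rather than the gradient norm itself, leaves the network free to fit the data as long as its local Lipschitz constant stays below the target; this is one common design choice, not necessarily the one made in the papers.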

https://arxiv.org/abs/1810.00953

https://arxiv.org/abs/1808.09540