On the statistical foundations of adversarially robust learning

Date
Tuesday, November 17, 2020, 4:30pm
Speaker
Edgar Dobriban, University of Pennsylvania

Abstract: Robustness has long been viewed as an important desired property of statistical methods. More recently, it has been recognized that complex prediction models such as deep neural nets can be highly vulnerable to adversarially chosen perturbations of their inputs at test time. This area, termed adversarial robustness, has garnered an extraordinary level of attention in the machine learning community over the last few years. However, even the most basic statistical questions remain largely unanswered. In this talk, I will present answers to some of them. In particular, I will show how class imbalance has a crucial effect and leads to unavoidable tradeoffs between robustness and accuracy, even in the limit of infinite data (i.e., for the Bayes error). I will also present other results, some involving novel applications of results on robust isoperimetry (Cianchi et al., 2011).
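
As a point of reference, one standard formalization of the setting (not necessarily the exact setup used in the talk) compares the usual classification risk of a classifier f with its adversarial counterpart, in which each test input may be perturbed within a ball of radius epsilon before classification:

\[
R(f) \;=\; \mathbb{E}_{(x,y)}\bigl[\mathbf{1}\{f(x)\neq y\}\bigr],
\qquad
R_\varepsilon(f) \;=\; \mathbb{E}_{(x,y)}\Bigl[\sup_{\|\delta\|\le\varepsilon}\mathbf{1}\{f(x+\delta)\neq y\}\Bigr].
\]

The robustness-accuracy tradeoff referenced in the abstract concerns the gap between the minimal achievable values of these two risks, which, per the abstract, can remain strictly positive under class imbalance even in the limit of infinite data.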

This is joint work with Hamed Hassani, David Hong, and Alex Robey.

Zoom Recording [SUNet/SSO authentication required]