Modern surprises in classical machine learning

Date
Tuesday, February 23, 2021, 4:30 pm
Speaker
Vidya Muthukumar, Georgia Tech

Abstract: Seemingly counter-intuitive phenomena in deep neural networks and kernel methods have prompted a recent re-investigation of classical machine learning methods, like linear models. Of particular focus are sufficiently high-dimensional regimes in which interpolation of the training data is possible. In this talk, we will first review recent works showing that interpolating with zero regularization, i.e., fitting the noise exactly, need not be harmful in regression tasks. Then, we will use this insight to uncover two new surprises for high-dimensional linear classification:

  • the minimum-ℓ2-norm interpolator can classify consistently even when the corresponding regression task fails, and
  • the support vector machine (SVM) exactly interpolates the discrete labels in sufficiently high-dimensional models, i.e., all training points become support vectors (a toy numerical sketch follows below).
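
Below is a minimal numerical sketch, not from the talk, illustrating these two phenomena in a toy setup: isotropic Gaussian features with d much larger than n and pure-noise ±1 labels. The data model, dimensions, and the use of numpy/scikit-learn are illustrative assumptions, not the precise settings analyzed in the talk.

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n, d = 50, 5000                        # many more features than samples
    X = rng.standard_normal((n, d))        # toy assumption: isotropic Gaussian features
    y = rng.choice([-1.0, 1.0], size=n)    # pure-noise binary labels

    # Minimum-l2-norm interpolator of the labels (ridgeless least squares):
    # w = X^T (X X^T)^{-1} y fits the training labels exactly whenever d >= n.
    w = X.T @ np.linalg.solve(X @ X.T, y)
    print("max training residual:", np.abs(X @ w - y).max())   # ~ 0: the noise is fit exactly

    # Hard-margin SVM, approximated here by a very large C. In this very
    # high-dimensional regime the second phenomenon above predicts that
    # every training point becomes a support vector.
    svm = SVC(kernel="linear", C=1e10).fit(X, y)
    print("support vectors:", len(svm.support_), "out of", n)

In this toy run the interpolator attains zero training error despite the labels being pure noise, and the SVM reports all n training points as support vectors, matching the two bullets above in spirit.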

This is joint work with Misha Belkin, Daniel Hsu, Adhyyan Narang, Anant Sahai, Vignesh Subramanian, and Ji Xu.

Zoom Recording [SUNet/SSO authentication required]