Speaker: Vidya Muthukumar, Georgia Tech
Abstract: Seemingly counter-intuitive phenomena in deep neural networks and kernel methods have prompted a recent re-investigation of classical machine learning methods, like linear models. Of particular focus are sufficiently high-dimensional setups in which interpolation of the training data is possible. In this talk, we will first review recent works showing that zero regularization, i.e. exact fitting of noise, need not be harmful in regression tasks. Then, we will use this insight to uncover two new surprises for high-dimensional linear classification:
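The overparameterized regime the abstract refers to can be illustrated with a minimal sketch (not from the talk itself): when the number of features d exceeds the number of samples n, the minimum-norm least-squares solution fits noisy training labels exactly. All names and parameter values below are illustrative assumptions.

```python
import numpy as np

# Minimal sketch: overparameterized linear regression (d >> n).
# The minimum-norm least-squares solution interpolates the noisy
# training labels exactly; whether this "fitting of noise" is harmless
# for test error depends on the problem structure, which is the
# subject of the talk.
rng = np.random.default_rng(0)
n, d = 50, 500                       # n samples, d features, d >> n
X = rng.standard_normal((n, d))
w_true = np.zeros(d)
w_true[:5] = 1.0                     # sparse ground-truth signal (assumed)
y = X @ w_true + 0.1 * rng.standard_normal(n)   # noisy labels

# Minimum-norm interpolator: w_hat = X^+ y (Moore-Penrose pseudoinverse)
w_hat = np.linalg.pinv(X) @ y

# The training residual is (numerically) zero: exact interpolation.
train_residual = np.linalg.norm(X @ w_hat - y)
print(f"training residual: {train_residual:.2e}")
```

Despite fitting every noisy label exactly, the interpolator is well defined because the linear system X w = y is underdetermined; among its infinitely many solutions, the pseudoinverse selects the one of minimum Euclidean norm.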
This is joint work with Misha Belkin, Daniel Hsu, Adhyyan Narang, Anant Sahai, Vignesh Subramanian, and Ji Xu.