
Prevalence of neural collapse during terminal phase of deep learning training

Tuesday, September 15, 2020 - 4:30pm

Speaker:   David Donoho, Stanford Statistics

Abstract:   Modern deep neural networks for image classification have achieved superhuman performance. Yet the complex details of trained networks have forced most practitioners and researchers to regard them as black boxes in which little can be understood. This talk considers in detail a now-standard training methodology: driving the cross-entropy loss to zero and continuing training long after the classification error is already zero. Applying this methodology to an authoritative collection of standard deepnets and datasets, we observe the emergence of a simple and highly symmetric geometry in both the deepnet features and the deepnet classifier, and we document important benefits that this geometry conveys, thereby helping us understand an important component of the modern deep learning training paradigm.
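
The symmetric geometry described in the abstract can be probed with simple summary statistics on last-layer activations. The sketch below is illustrative only (not the authors' code): it assumes a `features` array of shape (num_samples, dim) holding last-layer activations of a trained network and a `labels` array of class indices, and it computes two common diagnostics: the ratio of within-class to between-class variability, and how closely the centered class means resemble an equiangular tight frame (equal norms and equal pairwise cosines of -1/(K-1)). During the terminal phase of training described in the talk, one would expect all three returned quantities to shrink toward zero.

```python
# Minimal sketch of neural-collapse diagnostics (hypothetical helper, not the authors' code).
# Assumes `features` (num_samples x dim) are last-layer activations and `labels` are class ids.
import numpy as np

def neural_collapse_diagnostics(features: np.ndarray, labels: np.ndarray):
    classes = np.unique(labels)
    global_mean = features.mean(axis=0)

    # Class means and average within-class / between-class scatter.
    class_means = np.stack([features[labels == c].mean(axis=0) for c in classes])
    within = np.mean([
        np.mean(np.sum((features[labels == c] - class_means[i]) ** 2, axis=1))
        for i, c in enumerate(classes)
    ])
    between = np.mean(np.sum((class_means - global_mean) ** 2, axis=1))

    # Within-class variability relative to between-class spread (collapses toward 0).
    variability_ratio = within / between

    # Centered class means vs. an equiangular tight frame:
    # equal norms and pairwise cosines all equal to -1/(K-1).
    centered = class_means - global_mean
    norms = np.linalg.norm(centered, axis=1)
    unit = centered / norms[:, None]
    cosines = unit @ unit.T
    K = len(classes)
    off_diag = cosines[~np.eye(K, dtype=bool)]
    etf_cosine_gap = np.max(np.abs(off_diag - (-1.0 / (K - 1))))
    norm_spread = norms.std() / norms.mean()

    return variability_ratio, etf_cosine_gap, norm_spread
```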

This is joint work with Vardan Papyan, University of Toronto, and XY Han, Cornell.