Pre-training is a popular and powerful paradigm in machine learning. As an example, suppose one has a modest-sized dataset of images of cats and dogs, and plans to fit a deep neural network to classify them from the pixel features. With pre-training, we start with a neural network trained on a large corpus of images, consisting of not just cats and dogs but hundreds of other image types. Then we fix all of the network weights except for the top layer(s), which make the final classification, and train (or "fine-tune") those weights on our dataset. This often results in dramatically better performance than a network trained solely on our smaller dataset.
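To make the freeze-and-fine-tune recipe concrete, here is a minimal sketch (not part of the talk) in PyTorch. It assumes a torchvision ResNet-18 as the pre-trained backbone and a hypothetical data loader `loader` over the small cats-and-dogs dataset; any pre-trained network would do.

```python
# Minimal fine-tuning sketch: freeze a pre-trained backbone, retrain only the top layer.
import torch
import torch.nn as nn
from torchvision import models

# Start from a network trained on a large image corpus (ImageNet weights here).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Fix all of the network weights ...
for param in model.parameters():
    param.requires_grad = False

# ... except the top layer, which we replace to classify cats vs. dogs.
model.fc = nn.Linear(model.fc.in_features, 2)  # new head; trainable by default

# Only the new head's parameters go to the optimizer, so "fine-tuning"
# updates just the final classification layer on the small dataset.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def fine_tune(loader, epochs=3):
    model.train()
    for _ in range(epochs):
        for images, labels in loader:   # loader yields (image batch, label batch)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
```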
In this talk, we ask the question "Can pre-training help the lasso?" We develop a framework for the lasso in which an overall model is fit to a large dataset and then fine-tuned to a specific task on a smaller dataset. This latter dataset can be a subset of the original dataset but does not need to be. This framework has a wide variety of applications, including stratified models, multinomial targets, multi-response models, conditional average treatment effect estimation and even gradient boosting. In the stratified model setting, the pre-trained lasso pipeline estimates the coefficients common to all groups at the first stage, and then group-specific coefficients at the second fine-tuning stage. We show that under appropriate assumptions, the support recovery rate of the common coefficients is superior to that of the usual lasso trained only on individual groups. This separate identification of common and individual coefficients can also be useful for scientific understanding.
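The following sketch illustrates one simple way to instantiate the two-stage idea in the stratified setting; it is an assumption-laden toy version, not the pipeline from the talk (which, for example, controls how strongly the stages are coupled). Stage 1 fits a lasso to the pooled data to obtain common coefficients; stage 2 fits a per-group lasso to the stage-1 residuals to obtain group-specific corrections. The function names and penalty values are made up for illustration.

```python
import numpy as np
from sklearn.linear_model import Lasso

def pretrain_then_finetune(X, y, groups, lam_common=0.1, lam_group=0.05):
    # Stage 1 (pre-training): common coefficients shared by all groups.
    common = Lasso(alpha=lam_common).fit(X, y)
    offset = common.predict(X)            # shared linear predictor

    # Stage 2 (fine-tuning): per-group lasso on the residuals from the common fit,
    # giving each group its own sparse set of corrections.
    group_models = {}
    for g in np.unique(groups):
        idx = groups == g
        resid = y[idx] - offset[idx]
        group_models[g] = Lasso(alpha=lam_group).fit(X[idx], resid)
    return common, group_models

def predict(common, group_models, X, groups):
    # Prediction = common part + group-specific correction.
    pred = common.predict(X)
    for g, m in group_models.items():
        idx = groups == g
        pred[idx] += m.predict(X[idx])
    return pred
```

Reading off the two stages separately is what gives the interpretability mentioned above: the stage-1 support is the set of features shared across groups, while each stage-2 model reveals features that matter only for that group.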
This is joint work with Erin Craig, Mert Pilanci, Thomas Le Menestrel, Balasubramanian Narasimhan, Manuel A. Rivas, Roozbeh Dehghannasiri, Julia Salzman and Jonathan Taylor.