Cross-validation: What does it estimate and how well does it do it?

Date
Tue October 12th 2021, 4:30pm
Location
Hewlett 102
Speaker
Stephen Bates, UC Berkeley

Cross-validation is a widely used technique to estimate prediction error, but its behavior is complex and not fully understood. Ideally, one would like to think that cross-validation estimates the prediction error for the model at hand, fit to the training data. We prove that this is not the case for the linear model fit by ordinary least squares; rather, it estimates the average prediction error of models fit on other unseen training sets drawn from the same population. We further show that this phenomenon occurs for most popular estimates of prediction error, including data splitting, bootstrapping, and Mallows' Cp. Next, we show that the standard confidence intervals for prediction error derived from cross-validation may have coverage far below the desired level. Because each data point is used for both training and testing, there are correlations among the measured accuracies in each fold, and so the usual estimate of variance is too small. We introduce a nested cross-validation scheme to estimate this variance more accurately and show empirically that this modification leads to intervals with approximately correct coverage in many examples where traditional cross-validation intervals fail.
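For context, here is a minimal sketch of the naive procedure the talk critiques: a K-fold cross-validation point estimate of prediction error and the usual standard-error confidence interval. It is not the speaker's nested cross-validation scheme, and the synthetic data, fold count, and confidence level are illustrative assumptions, using only numpy and scikit-learn.

```python
# A minimal sketch (not the nested-CV method from the talk): naive K-fold
# cross-validation estimate of prediction error and the standard
# confidence interval that the talk argues can undercover.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)

# Hypothetical synthetic data: a linear model with Gaussian noise.
n, p = 100, 20
X = rng.standard_normal((n, p))
beta = rng.standard_normal(p)
y = X @ beta + rng.standard_normal(n)

# Collect per-observation squared errors across the held-out folds.
errors = np.empty(n)
for train_idx, test_idx in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
    model = LinearRegression().fit(X[train_idx], y[train_idx])
    errors[test_idx] = (y[test_idx] - model.predict(X[test_idx])) ** 2

# Naive point estimate and 90% interval: this treats the n held-out errors
# as independent, but each point is used for both training and testing,
# so the variance estimate is too small and coverage can fall below the
# nominal level -- the failure mode the nested scheme is designed to fix.
err_hat = errors.mean()
se_naive = errors.std(ddof=1) / np.sqrt(n)
ci = (err_hat - 1.645 * se_naive, err_hat + 1.645 * se_naive)
print(f"CV estimate: {err_hat:.3f}, naive 90% CI: ({ci[0]:.3f}, {ci[1]:.3f})")
```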

Zoom Recording [SUNet/SSO authentication required]