
Scaled minimax optimality in high-dimensional linear regression: A non-convex algorithmic regularization approach

Tuesday, September 29, 2020 - 4:30pm

Speaker:   Mohamed Ndaoud, University of Southern California

Abstract:   The question of fast convergence in the classical problem of high-dimensional linear regression has been extensively studied. Arguably, one of the fastest procedures in practice is Iterative Hard Thresholding (IHT). Still, IHT relies strongly on knowledge of the true sparsity parameter. In this talk, we present a novel fast procedure for estimation in high-dimensional linear regression. Taking advantage of the interplay between estimation, support recovery, and optimization, we achieve both optimal statistical accuracy and fast convergence. The main advantage of our procedure is that it is fully adaptive, making it more practical than state-of-the-art IHT methods. Our procedure achieves optimal statistical accuracy faster than, for instance, classical algorithms for the Lasso. Moreover, we establish sharp optimal results for both estimation and support recovery. As a consequence, we present a new iterative hard thresholding algorithm for high-dimensional linear regression that is scaled minimax optimal (it achieves the estimation error of the oracle that knows the sparsity pattern, whenever this is possible), fast, and adaptive.
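
For context, below is a minimal sketch of classical (non-adaptive) IHT for sparse linear regression, the baseline the talk builds on: a gradient step on the least-squares objective followed by hard thresholding to the s largest coefficients. Note that it takes the sparsity level s as an input, which is exactly the non-adaptivity the speaker's procedure removes. The function name, step-size choice, and iteration count are illustrative assumptions, not the speaker's method.

    import numpy as np

    def iterative_hard_thresholding(X, y, s, n_iter=100, step=None):
        # Classical IHT: assumes the sparsity level s is known.
        n, p = X.shape
        if step is None:
            # conservative step size based on the spectral norm of X
            step = 1.0 / np.linalg.norm(X, 2) ** 2
        beta = np.zeros(p)
        for _ in range(n_iter):
            # gradient step on the least-squares objective
            beta = beta + step * X.T @ (y - X @ beta)
            # hard threshold: keep only the s largest-magnitude entries
            idx = np.argsort(np.abs(beta))[:-s]
            beta[idx] = 0.0
        return beta

    # illustrative usage: recover a 5-sparse signal from noisy observations
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 1000))
    beta_true = np.zeros(1000)
    beta_true[:5] = 3.0
    y = X @ beta_true + 0.5 * rng.standard_normal(200)
    beta_hat = iterative_hard_thresholding(X, y, s=5)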