Algorithmic thresholds for random perceptron models

Date
Monday, February 9, 2026, 4:00 PM
Location
Sequoia 200
Speaker
Brice Huang, Stanford Statistics

We consider the problem of efficiently optimizing random (spherical or Ising) perceptron models with general bounded Lipschitz activation. We focus on a class of algorithms with Lipschitz dependence on the disorder; this class includes gradient descent, Langevin dynamics, approximate message passing, and any constant-order method run on dimension-free timescales. Our main result exactly characterizes the optimal value, ALG, that such algorithms can attain, in terms of a one-dimensional stochastic control problem. Qualitatively, ALG is the largest value whose level set contains a certain "dense solution cluster." Quantitatively, this characterization yields both improved algorithms and hardness results in a variety of asymptotic regimes, which are sharp up to absolute constant factors.

This is joint work (in progress) with Mark Sellke and Nike Sun.