There has been a recent flurry of exciting research surrounding e-values, sequential testing, and confidence sequences, but these developments have been overwhelmingly non-asymptotic in nature, meaning that they control coverage or type-I error rates in finite samples. While such guarantees are often desirable, there is a vast collection of statistical problems that can only be tackled asymptotically (even in non-sequential settings), including conditional independence testing without Model-X assumptions, observational causal inference, and semiparametric inference more broadly. In the fixed-n (non-sequential) regime, these problems are typically reduced to mean estimation, after which central limit theorem (CLT)-based asymptotic confidence intervals and tests become applicable.
I will discuss some recent work that provides the "confidence sequence" and "sequential testing" analogues of these CLT-based procedures, thereby providing a framework for tackling the aforementioned collection of problems sequentially. To sidestep the shortcomings of pointwise asymptotics, all of the methods discussed are distribution-uniform (sometimes called "honest"), and the proofs of uniformity rely on some of our recent advances in strong laws of large numbers and strong Gaussian approximation, both of which will also be discussed.
This is based on a series of joint works with Aaditya Ramdas, Martin Larsson, and Edward H. Kennedy.