Tengyu Ma comes to the department from Princeton University, where he received a Ph.D. in Computer Science advised by Sanjeev Arora. He completed his undergraduate studies at Tsinghua University. With a joint appointment in Computer Science at Stanford, his research interests span machine learning and algorithms, including nonconvex optimization, deep learning and its theory, reinforcement learning, representation learning, distributed optimization, convex relaxation (e.g., the sum-of-squares hierarchy), and high-dimensional statistics.
Tengyu's existing work brings together techniques from theoretical computer science, applied mathematics, statistics, probability, and information theory to answer the twin questions of how to design successful nonlinear models and how to efficiently optimize the nonconvex training functions for those models. Several of his publications develop mathematical tools to characterize the optimization landscape of machine learning problems including dictionary learning, matrix completion, tensor decomposition, and linearized (recurrent) neural nets; his work has appeared in venues including the Transactions of the Association for Computational Linguistics and the Journal of Machine Learning Research. Tengyu has also worked on sum-of-squares algorithms and on statistical and communication trade-offs in machine learning, both areas with open technical and conceptual problems that he intends to continue investigating.