Forecasting and aligning AI

Date
Tue May 31, 2022, 4:30pm
Location
Sloan 380Y
Speaker
Jacob Steinhardt, UC Berkeley

Modern ML systems sometimes undergo qualitative shifts in behavior simply from "scaling up" the number of parameters and training examples. Given this, how can we extrapolate the behavior of future ML systems and ensure that they behave safely and remain aligned with humans? I'll argue that we can often study the (potential) capabilities of future ML systems through well-controlled experiments on current systems, and use these experiments as a laboratory for designing alignment techniques. I'll also discuss some recent work on "medium-term" AI forecasting.