Schedule

Tuesday, April 5

All times Eastern (ET).
10:30-10:40 Opening remarks
10:40-11:40 Invited talk
Lenka Zdeborová, EPFL
Overparametrization: Insights from Solvable Models
11:40-12:05 Contributed talk
Bruno Loureiro, EPFL
Fluctuations, Bias, Variance & Ensemble of Learners: Exact Asymptotics for Convex Losses in High-Dimension
12:05-12:15 Break
12:15-1:15 Invited talk
Andrea Montanari, Stanford
From Projection Pursuit to Interpolation Thresholds in Small Neural Networks
1:15-1:40 Contributed talk
Pratik Patil, Carnegie Mellon University
Revisiting Model Complexity in the Wake of Overparameterized Learning
1:40-2:00 Break
2:00-3:00 Invited talk
Jeffrey Pennington, Google Research
Covariate Shift in High-Dimensional Random Feature Regression
3:00-4:00 Lightning talk session #1
4:00-4:05 Break
4:05-5:05 Invited talk
Vidya Muthukumar, Georgia Tech
Classification versus Regression in Overparameterized Regimes: Does the Loss Function Matter?
5:05-5:30 Contributed talk
Spencer Frei, UC Berkeley
Benign Overfitting without Linearity: Neural Network Classifiers Trained by Gradient Descent for Noisy Linear Data
5:30-6:30 Lightning talk session #2

Wednesday, April 6

All times Eastern (ET).
10:30-11:30 Invited talk
Francis Bach, École Normale Supérieure
The Quest for Adaptivity
11:30-11:55 Contributed talk
Mariia Seleznova, LMU Munich
Neural Tangent Kernel Beyond the Infinite-Width Limit: Effects of Depth and Initialization
11:55-12:10 Break
12:10-1:10 Invited talk
Daniel Hsu, Columbia University
Computational Lower Bounds for Tensor PCA
1:10-1:35 Contributed talk
Nikhil Ghosh, UC Berkeley
The Three Stages of Learning Dynamics in High-dimensional Kernel Methods
1:35-1:45 Break
1:45-2:45 Lightning talk session #3
2:45-3:10 Contributed talk
Lorenzo Luzi, Rice University
Double Descent and Other Interpolation Phenomena in GANs
3:10-4:10 Invited talk
Caroline Uhler, MIT (talk presented by Adityanarayanan Radhakrishnan, MIT)
Over-parameterized Autoencoders and Causal Transportability
4:10-4:20 Break
4:20-5:20 Invited talk
Edgar Dobriban, University of Pennsylvania
T-Cal: An Optimal Test for the Calibration of Predictive Models
5:20-6:25 Lightning talk session #4
6:25-6:30 Closing remarks

Lightning Talks

Session #1: Tuesday, April 5, 3:00pm-4:00pm (ET)

On the Double Descent of Random Features Models Trained with SGD
Fanghui Liu (EPFL); Johan Suykens (KU Leuven); Volkan Cevher (EPFL)

Phase diagram of Stochastic Gradient Descent in High-Dimensional Two-Layer Neural Networks
Rodrigo Veiga (EPFL); Ludovic Stephan (EPFL); Bruno Loureiro (EPFL); Florent Krzakala (EPFL); Lenka Zdeborová (EPFL)

Precise Asymptotic Analysis for Double Descent under Generic Convex Regularization
David Bosch (Chalmers University); Ashkan Panahi (Chalmers University); Ayça Özçelikkale (Uppsala University); Devdatt Dubhashi (Chalmers University)

Investigating Reproducibility and Double Descent from the Decision Boundary Perspective
Gowthami Somepalli (University of Maryland); Liam Fowl (University of Maryland); Arpit Bansal (University of Maryland); Ping-yeh Chiang (University of Maryland); Yehuda Dar (Rice University); Richard Baraniuk (Rice University); Micah Goldblum (NYU); Tom Goldstein (University of Maryland)

Overfitting in Transformers – The Slingshot Mechanism
Vimal Thilak (Apple); Etai Littwin (Apple); Shuangfei Zhai (Apple); Omid Saremi (Apple); Joshua M Susskind (Apple)

Session #2: Tuesday, April 5, 5:30pm-6:30pm (ET)

Benign Overfitting in Multiclass Classification: All Roads Lead to Interpolation
Ke Wang (UC Santa Barbara); Vidya Muthukumar (Georgia Tech); Christos Thrampoulidis (University of British Columbia)

Over-parameterization: A Necessary Condition for Models that Extrapolate
Roozbeh Yousefzadeh (Yale University)

Random Feature Amplification: Feature Learning and Generalization in Neural Networks
Spencer Frei (UC Berkeley); Niladri S Chatterji (UC Berkeley); Peter Bartlett (UC Berkeley)

Benign Overfitting in Overparameterized Time Series Models
Shogo Nakakita (The University of Tokyo); Masaaki Imaizumi (The University of Tokyo / RIKEN AIP)

Benign Overfitting in Conditional Average Treatment Effect Prediction with Linear Regression
Masahiro Kato (CyberAgent / The University of Tokyo); Masaaki Imaizumi (The University of Tokyo)

Session #3: Wednesday, April 6, 1:45pm-2:45pm (ET)

Benign Overfitting in Two-layer Convolutional Neural Networks
Yuan Cao (The University of Hong Kong); Zixiang Chen (UCLA); Mikhail Belkin (UC San Diego); Quanquan Gu (UCLA)

Error Rates for Kernel Methods under Source and Capacity Conditions
Hugo Cui (EPFL); Bruno Loureiro (EPFL); Florent Krzakala (EPFL); Lenka Zdeborová (EPFL)

Effective Number of Parameters in Neural Networks via Hessian Rank
Sidak Pal Singh (ETH Zurich); Gregor Bachmann (ETH Zurich); Thomas Hofmann (ETH Zurich)

Locality Defeats the Curse of Dimensionality in Convolutional Teacher-Student Scenarios
Alessandro Favero (EPFL); Francesco Cagnetta (EPFL); Matthieu Wyart (EPFL)

Relative Stability Toward Diffeomorphisms Indicates Performance in Deep Nets
Leonardo Petrini (EPFL); Alessandro Favero (EPFL); Mario Geiger (EPFL); Matthieu Wyart (EPFL)

Session #4: Wednesday, April 6, 5:20pm-6:25pm (ET)

On How to Avoid Exacerbating Spurious Correlations When Models are Overparameterized
Tina Behnia (University of British Columbia); Ke Wang (UC Santa Barbara); Christos Thrampoulidis (University of British Columbia)

Consistent Interpolating Ensembles
Yutong Wang (University of Michigan); Clayton Scott (University of Michigan)

Provable Boolean Interaction Recovery from Tree Ensemble obtained via Random Forests
Merle Behr (UC Berkeley); Yu Wang (UC Berkeley); Xiao Li (UC Berkeley); Bin Yu (UC Berkeley)

Mitigating Multiple Descents: Model-Agnostic Risk Monotonization in High-Dimensional Learning
Pratik Patil (Carnegie Mellon University); Arun Kuchibhotla (Carnegie Mellon University); Alessandro Rinaldo (Carnegie Mellon University); Yuting Wei (University of Pennsylvania)

On the Implicit Bias Towards Minimal Depth of Deep Neural Networks
Tomer Galanti (MIT)