Contributed Talk: Revisiting Complexity and the Bias-Variance Tradeoff

Speaker: Raaz Dwivedi, UC Berkeley
Talk title: Revisiting Complexity and the Bias-Variance Tradeoff

Time: Wednesday, April 21, 10:30am-10:55am (PT)

Abstract:
The recent success of high-dimensional models, such as deep neural networks (DNNs), has led many to question the validity of the bias-variance tradeoff principle in high dimensions. We reexamine it with respect to two key choices: the model class and the complexity measure. We argue that failing to suitably specify either one can falsely suggest that the tradeoff does not hold. This observation motivates us to seek a valid complexity measure, defined with respect to a reasonably good class of models. Building on Rissanen's principle of minimum description length (MDL), we propose a novel MDL-based complexity (MDL-COMP). We focus on linear models, which have recently been used as a stylized tractable approximation to DNNs in high dimensions. MDL-COMP is defined via an optimality criterion over the encodings induced by a good ridge-estimator class. We derive closed-form expressions for MDL-COMP and show that for a dataset with n observations and d parameters it is not always equal to d/n; it is a function of the singular values of the design matrix and the signal-to-noise ratio. For a random Gaussian design, we find that while MDL-COMP scales linearly with d in low dimensions (d&lt;n), in high dimensions (d&gt;n)—also known as the overparameterized regime—the scaling is exponentially smaller, growing only as log d. We hope that such slow growth of complexity in high dimensions can help shed light on the strong generalization performance of overparameterized models. Moreover, via an array of simulations and real-data experiments, we show that a data-driven Prac-MDL-COMP can inform hyperparameter tuning for ridge regression in limited data settings, sometimes improving upon cross-validation.
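
The sketch below is a minimal illustration of the idea of selecting the ridge penalty with a codelength-style criterion and comparing it with cross-validation. It is not the paper's exact Prac-MDL-COMP definition: the objective (a residual-fit term plus a log-determinant "model cost" built from the singular values of the design matrix), the synthetic data, and the function name codelength_style_objective are all illustrative assumptions.

# Illustrative sketch (not the exact Prac-MDL-COMP objective from the paper):
# choose the ridge penalty lambda by minimizing a two-part, codelength-style
# score -- data-fit cost plus a log-determinant "model cost" computed from the
# singular values of the design matrix -- and compare with cross-validation.

import numpy as np
from sklearn.linear_model import Ridge, RidgeCV

rng = np.random.default_rng(0)

# Small synthetic problem in the overparameterized regime (d > n).
n, d = 50, 200
X = rng.standard_normal((n, d))
theta_star = np.zeros(d)
theta_star[:5] = 1.0                      # sparse true signal
y = X @ theta_star + 0.5 * rng.standard_normal(n)

sing_vals = np.linalg.svd(X, compute_uv=False)

def codelength_style_objective(lam):
    """Hypothetical codelength-style score for a given ridge penalty lam."""
    theta_hat = Ridge(alpha=lam, fit_intercept=False).fit(X, y).coef_
    residual = y - X @ theta_hat
    fit_cost = 0.5 * (residual @ residual + lam * theta_hat @ theta_hat)
    # "Model cost": log-determinant term driven by the design's singular values.
    model_cost = 0.5 * np.sum(np.log1p(sing_vals**2 / lam))
    return (fit_cost + model_cost) / n

lambdas = np.logspace(-3, 3, 60)
lam_mdl = lambdas[np.argmin([codelength_style_objective(l) for l in lambdas])]
lam_cv = RidgeCV(alphas=lambdas, fit_intercept=False).fit(X, y).alpha_

print(f"codelength-style choice: lambda = {lam_mdl:.3g}")
print(f"cross-validation choice: lambda = {lam_cv:.3g}")

In small-sample settings such as the one above, the codelength-style criterion can be evaluated on the full dataset without holding data out, which is the practical appeal the abstract attributes to Prac-MDL-COMP relative to cross-validation.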

Joint work with Chandan Singh, Bin Yu, and Martin Wainwright.

