Scaling Laws in Linear Regression: Compute, Parameters, and Data
Format: Article
Language: English
Abstract: Empirically, large-scale deep learning models often satisfy a neural scaling law: the test error of the trained model improves polynomially as the model size and data size grow. However, conventional wisdom suggests the test error consists of approximation, bias, and variance errors, where the variance error increases with model size. This disagrees with the general form of neural scaling laws, which predict that increasing model size monotonically improves performance.

We study the theory of scaling laws in an infinite-dimensional linear regression setup. Specifically, we consider a model with $M$ parameters as a linear function of sketched covariates. The model is trained by one-pass stochastic gradient descent (SGD) using $N$ data. Assuming the optimal parameter satisfies a Gaussian prior and the data covariance matrix has a power-law spectrum of degree $a>1$, we show that the reducible part of the test error is $\Theta(M^{-(a-1)} + N^{-(a-1)/a})$. The variance error, which increases with $M$, is dominated by the other errors due to the implicit regularization of SGD and thus disappears from the bound. Our theory is consistent with the empirical neural scaling laws and is verified by numerical simulation.
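The setup described in the abstract can be illustrated with a small simulation. The sketch below is a minimal, hypothetical reconstruction rather than the paper's experiment: it assumes a finite ambient dimension D as a proxy for the infinite-dimensional covariates, a Gaussian sketch matrix, noiseless labels, and a constant SGD step size, none of which are specified in the abstract. It trains a model with $M$ parameters on sketched covariates by one-pass SGD over $N$ samples and reports the resulting test error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative choices (not taken from the paper): finite ambient dimension,
# isotropic Gaussian prior, Gaussian sketch, noiseless labels, constant step size.
D = 1000                                          # finite proxy for the infinite-dimensional space
a = 1.5                                           # power-law degree of the spectrum, a > 1
eigs = np.arange(1, D + 1, dtype=float) ** (-a)   # covariance eigenvalues lambda_i = i^{-a}
w_star = rng.normal(size=D)                       # optimal parameter drawn from a Gaussian prior


def test_error(M, N, lr=0.1):
    """One-pass SGD for a model with M parameters (sketched covariates) on N samples."""
    S = rng.normal(size=(M, D)) / np.sqrt(M)      # random sketch mapping covariates to R^M
    v = np.zeros(M)                               # parameters of the sketched model
    for _ in range(N):
        x = np.sqrt(eigs) * rng.normal(size=D)    # covariate with power-law covariance
        y = x @ w_star                            # noiseless label from the optimal parameter
        z = S @ x                                 # sketched covariate seen by the model
        v += lr * (y - z @ v) * z                 # one SGD step on the squared loss
    delta = w_star - S.T @ v                      # prediction error in the ambient space
    return float(delta @ (eigs * delta))          # exact test error under the covariance


# Under the claimed rate, the error should shrink with both M (roughly M^{-(a-1)})
# and N (roughly N^{-(a-1)/a}), and should not increase as M grows.
for M in (50, 100, 200, 400):
    print(f"M={M:4d}  N=10000  test error={test_error(M, 10000):.4e}")
```

Consistent with the abstract's claim, the variance contribution does not cause the measured error to grow with $M$ in this sketch; the error instead flattens once the $N^{-(a-1)/a}$ term dominates.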
DOI: 10.48550/arxiv.2406.08466