If you're not familiar with the double descent phenomenon, I think you should be. I consider double descent to be one of the most interesting and surprising recent results in analyzing and understanding modern machine learning. Today, Preetum et al. released a new paper, “Deep Double Descent,” which I think is a big further advancement in our understanding of this phenomenon. I'd highly recommend at least reading the summary of the paper on the OpenAI blog. However, I will also try to summarize the paper here, as well as give a history of the literature on double descent and some of my personal thoughts.
Prior work
The double descent phenomenon was first discovered by Mikhail Belkin et al., who were confused by the phenomenon wherein modern ML practitioners would claim that “bigger models are always better” despite standard statistical machine learning theory predicting that bigger models should be more prone to overfitting. Belkin et al. discovered that the standard bias-variance tradeoff picture actually breaks down once you hit approximately zero training error—what Belkin et al. call the “interpolation threshold.” Before the interpolation threshold, the bias-variance tradeoff holds and increasing model complexity leads to overfitting, increasing test error. After the interpolation threshold, however, they found that test error actually starts to go down as you keep increasing model complexity! Belkin et al. demonstrated this phenomenon in simple ML methods such as decision trees as well as simple neural networks trained on MNIST. Here's the diagram that Belkin et al. use in their paper to describe this phenomenon:
Belkin et al. describe their hypothesis for what's happening as follows:
All of the learned predictors to the right of the interpolation threshold fit the training data perfectly and have zero empirical risk. So why should some—in particular, those from richer function classes—have lower test risk than others? The answer is that the capacity of the function class does not necessarily reflect how well the predictor matches the inductive bias appropriate for the problem at hand. [The inductive bias] is a form of Occam’s razor: the simplest explanation compatible with the observations should be preferred. By considering larger function classes, which contain more candidate predictors compatible with the data, we are able to find interpolating functions that [are] “simpler”. Thus increasing function class capacity improves performance of classifiers.
I think that what this is saying is pretty magical: in the case of neural nets, it's saying that SGD just so happens to have the right inductive biases that letting SGD choose which model it wants the most out of a large class of models with the same training performance yields significantly better test performance. If you're right on the interpolation threshold, you're effectively “forcing” SGD to choose from a very small set of models with perfect training accuracy (maybe only one realistic option), thus ignoring SGD's inductive biases completely—whereas if you're past the interpolation threshold, you're letting SGD choose which of many models with perfect training accuracy it prefers, thus allowing SGD's inductive bias to shine through.
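To make the “SGD picks among many interpolating models” story concrete, here's a minimal sketch (not from any of the papers discussed—the feature counts, sample size, and noise level are my own illustrative choices) that usually reproduces a model-wise double descent curve using random ReLU features and the minimum-norm least-squares fit, which is the solution gradient descent would reach from a zero initialization:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task with label noise.
n_train, n_test, d = 100, 1000, 10
w_true = rng.normal(size=d)
X_train = rng.normal(size=(n_train, d))
X_test = rng.normal(size=(n_test, d))
y_train = X_train @ w_true + 0.5 * rng.normal(size=n_train)
y_test = X_test @ w_true

def random_relu_features(X, W):
    """Fixed random first layer; only the linear readout gets fit."""
    return np.maximum(X @ W, 0.0)

for n_features in [10, 30, 70, 100, 130, 300, 1000, 3000]:
    W = rng.normal(size=(d, n_features)) / np.sqrt(d)
    Phi_train = random_relu_features(X_train, W)
    Phi_test = random_relu_features(X_test, W)
    # lstsq returns the minimum-norm solution when the system is
    # underdetermined (n_features > n_train), which is also what
    # gradient descent from a zero initialization converges to.
    readout, *_ = np.linalg.lstsq(Phi_train, y_train, rcond=None)
    test_mse = np.mean((Phi_test @ readout - y_test) ** 2)
    print(f"{n_features:5d} random features: test MSE {test_mse:.3f}")
```

In runs like this, the test error typically spikes when the number of features is close to the number of training points and then falls again as the model becomes more overparameterized, though the exact shape depends on the noise level and seed.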
I think this is strong evidence for the critical importance of implicit simplicity and speed priors in making modern ML work. However, such biases also produce strong incentives for mesa-optimization (since optimizers are simple, compressed policies) and pseudo-alignment (since simplicity and speed penalties will favor simpler, faster proxies). Furthermore, the arguments for the universal prior and minimal circuits being malign suggest that such strong simplicity and speed priors could also produce an incentive for deceptive alignment.
“Deep Double Descent”
Now we get to Preetum et al.'s new paper, “Deep Double Descent.” Here are just some of the things that Preetum et al. demonstrate in “Deep Double Descent:”
- double descent occurs across a wide variety of different model classes, including ResNets, standard CNNs, and Transformers, as well as a wide variety of different tasks, including image classification and language translation,
- double descent occurs not just as a function of model size, but also as a function of training time and dataset size, and
- since double descent can happen as a function of dataset size, more data can lead to worse test performance!
Crazy stuff. Let's try to walk through each of these results in detail and understand what's happening.
First, double descent is a highly universal phenomenon in modern deep learning. Here is double descent happening for ResNet18 on CIFAR-10 and CIFAR-100:
And again for a Transformer model on German-to-English and English-to-French translation:
All of these graphs, however, are just showcasing the standard Belkin et al.-style double descent over model size (what Preetum et al. call “model-wise double descent”). What's really interesting about “Deep Double Descent” is that Preetum et al. also demonstrate that the same thing can happen for training time (“epoch-wise double descent”) and a similar thing for dataset size (“sample-wise non-monotonicity”).
First, let's look at epoch-wise double descent. Take a look at these graphs for ResNet18 on CIFAR-10:
There are a bunch of crazy things happening here which are worth pointing out. First, the obvious: epoch-wise double descent is definitely a thing—holding model size fixed and training for longer exhibits the standard double descent behavior. Furthermore, the peak happens right at the interpolation threshold where you hit zero training error. Second, notice where you don't get epoch-wise double descent: if your model is too small to ever hit the interpolation threshold—as was the case in ye olden days of ML—you never get epoch-wise double descent. Third, notice the log scale on the y axis: you have to train for quite a while to start seeing this phenomenon.
Finally, sample-wise non-monotonicity—Preetum et al. find a regime where increasing the amount of training data by four and a half times actually increases test loss (!):
What's happening here is that more data increases the amount of model capacity/number of training epochs necessary to reach zero training error, which pushes out the interpolation threshold such that you can regress from the modern (interpolation) regime back into the classical (bias-variance tradeoff) regime, decreasing performance.
Another thing Preetum et al. point out which I think is worth talking about here is the impact of label noise. Preetum et al. find that increasing label noise significantly exaggerates the test error peak around the interpolation threshold. Why might this be the case? Well, if we think about the inductive biases story from earlier, greater label noise means that near the interpolation threshold SGD is forced to find the one model which fits all of the noise—which is likely to be pretty bad, since it has to model a bunch of noise. After the interpolation threshold, however, SGD is able to pick between many models which fit the noise and select one that does so in the simplest way, such that you get good test performance.
Final comments
I'm quite excited about “Deep Double Descent,” but it still leaves what is in my opinion the most important question unanswered, which is: what exactly are the magical inductive biases of modern ML that make interpolation work so well?
One proposal I am aware of is the work of Keskar et al., who argue that SGD gets its good generalization properties from the fact that it finds “flat” as opposed to “sharp” minima. The basic insight is that SGD tends to jump out of minima without broad basins around them and only really settle into minima with large attractors, which tend to be exactly the sort of minima that generalize. Keskar et al. use the following diagram to explain this phenomenon:
The more recent work of Dinh et al. in “Sharp Minima Can Generalize For Deep Nets,” however, calls the whole flat vs. sharp minima hypothesis into question, arguing that deep networks have really weird geometry that doesn't necessarily work the way Keskar et al. want it to. (EDIT: Maybe not. See this comment for an explanation of why Dinh et al. doesn't necessarily rule out the flat vs. sharp minima hypothesis.)
Another idea that might help here is Frankle and Carbin's “Lottery Ticket Hypothesis,” which postulates that large neural networks work well because they are likely to contain random subnetworks at initialization (what they call “winning tickets”) which are already quite close to the final policy (at least in terms of being highly amenable to particularly effective training). My guess as to how double descent works if the Lottery Ticket Hypothesis is true is that in the interpolation regime SGD gets to just focus on the winning tickets and ignore the others—since it doesn't have to use the full model capacity—whereas at the interpolation threshold SGD is forced to make use of the full network (to get the full model capacity), not just the winning tickets, which hurts generalization.
That's just speculation on my part, however—we still don't really understand the inductive biases of our models, despite the fact that, as double descent shows, inductive biases are the reason that modern ML (that is, the interpolation regime) works as well as it does. Furthermore, as I noted previously, inductive biases are highly relevant to the likelihood of possible dangerous phenomena such as mesa-optimization and pseudo-alignment. Thus, it seems quite important to me to do further work in this area and really understand our models' inductive biases, and I applaud Preetum et al. for their exciting work here.
EDIT: I have now written a follow-up to this post talking more about why I think double descent is important titled “Inductive biases stick around.”
If this post is selected, I'd like to see the followup made into an addendum—I think it adds a very important piece, and it should have been nominated itself.
I agree with this, and was indeed kind of thinking of them as one post together.
Nice survey. The result about double descent even occurring in dataset size is especially surprising.
Regarding the 'sharp minima can generalize' paper, they show that there exist sharp minima with good generalization, not flat minima with poor generalization, so they don't rule out flatness as an explanation for the success of SGD. The sharp minima they construct with this property are also rather unnatural: essentially they multiply the weights of layer 1 by a constant α and divide the weights of layer 2 by the same constant. The positive homogeneity of ReLU (relu(αz) = α·relu(z) for α > 0) means the output function is unchanged. For large α, the network is now highly sensitive to perturbations in layer 2. These solutions don't seem like they would be found by SGD, so it could still be that, for solutions found by SGD, flatness and generalization are correlated.
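To see the construction concretely, here's a small numerical sketch (mine, not from Dinh et al.; a two-layer ReLU net without biases, with α and the perturbation size chosen arbitrarily) showing that the rescaling leaves the function unchanged while making layer 2 much more sensitive to perturbation:

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(5, 20))   # layer 1 weights
W2 = rng.normal(size=(20, 1))   # layer 2 weights
x = rng.normal(size=(3, 5))     # a few input points

def net(x, W1, W2):
    # Two-layer ReLU network with no biases.
    return np.maximum(x @ W1, 0.0) @ W2

alpha = 1000.0
# Multiply layer 1 by alpha, divide layer 2 by alpha. Because
# relu(alpha * z) = alpha * relu(z) for alpha > 0, the outputs match.
print(np.allclose(net(x, W1, W2), net(x, alpha * W1, W2 / alpha)))  # True

# But the rescaled network is ~alpha times more sensitive to the same
# small perturbation of layer 2, i.e. its minimum looks much "sharper".
eps = 0.01 * rng.normal(size=W2.shape)
print(np.abs(net(x, W1, W2 + eps) - net(x, W1, W2)).max())
print(np.abs(net(x, alpha * W1, W2 / alpha + eps) - net(x, W1, W2)).max())
```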
Ah—thanks for the summary. I hadn't fully read that paper yet, though I knew it existed and so I figured I would link it, but that makes sense. Seems like in that case the flat vs. sharp minima hypothesis still has a lot going for it—not sure how that interacts with the lottery tickets hypothesis, though.
The decision tree result seemed counterintuitive to me, so I took a look at that section of the paper. I wasn't impressed. In order to create a double descent curve for decision trees, they change their notion of "complexity" (midway through the graph... check Figure 5) from the number of leaves in a tree to the number of trees in a forest. Turns out that right after they change their notion of "complexity" from number of leaves to number of trees, generalization starts to improve :)
I don't see this as evidence for double descent per se. Just that ensembling improves generalization. Which is something we've known for a long time. (And this fact doesn't seem like a big mystery to me. From a Bayesian perspective, ensembling is like using the posterior predictive distribution instead of the MAP estimate. BTW I think there's also a Bayesian story for why flat minima generalize better -- the peak of a flat minimum is a slightly better approximation for the posterior predictive distribution over the entire hypothesis class. Sometimes I even wonder if something like this explains why Occam's Razor works...)
Anyway, the authors' rationale seems to be that once your decision tree has memorized the training set, the only way to increase the complexity of your hypothesis class is by adding trees to the forest. I'd rather they had kept the number of trees constant and only modulated the number of leaves.
However, the decision tree discussion does point to a possible explanation of the double descent phenomenon for neural networks. Maybe once you've got enough complexity to memorize the training set, adding more complexity allows for a kind of "implicit ensembling" which leads to memorizing the training set in many different ways and averaging the results together like an ensemble does.
It's suspicious to me that every neural network case study in the paper modulates layer width. There's no discussion of modulating depth. My guess is they tried modulating depth but didn't get the double descent phenomenon and decided to leave those experiments out.
I think increased layer width fits pretty nicely with my implicit ensembling story. Taking a Bayesian perspective on the output neuron: After there are enough neurons to memorize the training set, adding more leads to more pieces of evidence regarding the final output, making estimates more robust. Which is more or less why ensembles work IMO.
Note that double descent also happens with polynomial regression—see here for an example.
I took a look at the Colab notebook linked from that blog post, and there are some subtleties the blog post doesn't discuss. First, the blog post says they're using gradient descent, but if you look at the notebook, they're actually computing a pseudoinverse. [EDIT: They state a justification in their notebook which I missed at first. See discussion below.] Second, the blog post talks about polynomial regression, but doesn't mention that the notebook uses Legendre polynomials. I'm pretty sure high degree Legendre polynomials look like a squiggle which closely follows the x-axis on [-1, 1] (the interval used in their demo). If you fork the notebook and switch from Legendre polynomials to classic polynomial regression (1, x, x^2, x^3, etc.), the degree 100,000 fit appears to be worse than the degree 20 fit. I searched on Google and Google Scholar, and use of Legendre polynomials doesn't seem to be common practice in ML.
Hi, I'm one of the authors:
Regarding gradient descent vs. pseudoinverse:
As we mention in the note at the top of the Colab, for the linear regression objective, running gradient descent to convergence (from 0 initialization) is *equivalent* to applying the pseudoinverse -- they both find the minimum-norm interpolating solution.
So, we use the pseudoinverse just for computational efficiency; you would get equivalent results by actually running GD.
You're also right that the choice of basis matters. The choice of basis changes the "implicit bias" of GD in linear regression, analogous to how the choice of architecture changes the implicit bias of GD on neural nets.
-- Preetum
Thanks Preetum. You're right, I missed that note the first time -- I edited my comment a bit.
It might be illuminating to say "the polynomial found by iterating weights starting at 0" instead of "the polynomial found with gradient descent", since in this case, the inductive bias comes from the initialization point, not necessarily gradient descent per se. Neural nets can't learn if all the weights are initialized to 0 at the start, of course :)
BTW, I tried switching from pseudoinverse to regularized linear regression, and the super high degree polynomials seemed more overfit to me.
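For anyone who wants to poke at the basis question discussed above without forking the Colab, here's a minimal sketch (my own target function, noise level, and degree; np.linalg.pinv stands in for gradient descent from zero initialization, since both give the minimum-norm solution for linear regression) comparing a high-degree minimum-norm fit in the Legendre basis against the same fit in the plain monomial basis:

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(0)

# A few noisy samples of a smooth target on [-1, 1].
n_train, degree = 20, 500
x_train = rng.uniform(-1, 1, size=n_train)
y_train = np.sin(3 * x_train) + 0.1 * rng.normal(size=n_train)
x_test = np.linspace(-1, 1, 500)
y_test = np.sin(3 * x_test)

def min_norm_fit(features_train, y, features_test):
    # Pseudoinverse = minimum-norm interpolating coefficients,
    # i.e. the limit of gradient descent started from zero.
    coeffs = np.linalg.pinv(features_train) @ y
    return features_test @ coeffs

# Legendre basis vs. plain monomials (1, x, x^2, ...).
pred_legendre = min_norm_fit(legendre.legvander(x_train, degree), y_train,
                             legendre.legvander(x_test, degree))
pred_monomial = min_norm_fit(np.vander(x_train, degree + 1, increasing=True), y_train,
                             np.vander(x_test, degree + 1, increasing=True))

print("Legendre basis test MSE:", np.mean((pred_legendre - y_test) ** 2))
print("Monomial basis test MSE:", np.mean((pred_monomial - y_test) ** 2))
```

Swapping the basis or the degree here is the quick way to see how much the "implicit bias" of the minimum-norm solution depends on the parameterization.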
That's exactly correct. You can prove it via the Laplace approximation: the "width" of the peak in each principal direction is the inverse of an eigenvalue of the Hessian, and each eigenvalue λi contributes −(1/2)log(λi) to the marginal log likelihood log P[data|model]. So, if a peak is twice as wide in one direction, its marginal log likelihood is higher by (1/2)log(2), or half a bit. For models in which the number of free parameters is large relative to the number of data points (i.e. the interesting part of the double-descent curve), this is the main term of interest in the marginal log likelihood.
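For readers who want to see where the −(1/2)log(λi) terms come from, here is the standard Laplace-approximation computation (a sketch; it assumes the posterior is dominated by a single smooth peak at θ*, with H the Hessian of the negative log posterior there—which coincides with the Hessian of the negative log likelihood when the prior is roughly flat):

```latex
% Laplace approximation around the peak \theta^*, with H the Hessian of
% the negative log posterior there and \lambda_i its eigenvalues:
\begin{aligned}
P[\mathrm{data} \mid \mathrm{model}]
  &= \int P[\mathrm{data} \mid \theta]\, P[\theta]\, d\theta
  \;\approx\; P[\mathrm{data} \mid \theta^{*}]\, P[\theta^{*}]\,
     (2\pi)^{d/2} \det(H)^{-1/2}, \\
\log P[\mathrm{data} \mid \mathrm{model}]
  &\approx \mathrm{const} - \tfrac{1}{2} \sum_i \log \lambda_i .
\end{aligned}
```

Under the comment's convention that the width in a principal direction is 1/λi, doubling the width halves λi and so adds (1/2)log(2) to the sum.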
In Jaynes' Logic of Science book, in the chapter on model comparison, I believe he directly claims that this is how/why Occam's Razor works.
That said, it's not clear that this particular proof would generalize properly to systems which perfectly fit the data. In that case, there's an entire surface on which P[data|θ] is constant, so Laplace needs some tweaking.
This is perhaps one of the most striking fundamental discoveries in machine learning of the past 20 years, and Evan's post is well-deserving of a nomination for explaining it to LW.
I ended up reading another paper on double descent:
More Data Can Hurt for Linear Regression: Sample-wise Double Descent (Preetum Nakkiran) (summarized by Rohin): This paper demonstrates the presence of double descent (in the size of the dataset) for unregularized linear regression. In particular, we assume that each data point x is a vector of independent samples from N(0, σ^2), and the output is y = βx + ε. Given a dataset of (x, y) pairs, we would like to estimate the unknown β, under the mean squared error loss, with no regularization.
In this setting, when the dimensionality d of the space (and thus number of parameters in β) is equal to the number of training points n, the training data points are linearly independent almost always / with probability 1, and so there will be exactly one β that solves the n linearly independent equalities of the form βx = y. However, such a β must also be fitting the noise variables ε, which means that it could be drastically overfitted, with very high norm. For example, imagine β = [1, 1], and the data points are x = (-1, 3) with y = 3 and x = (0, 1) with y = 0 (the data points had errors of +1 and -1 respectively). The estimate will be β = [-3, 0], which is going to generalize very poorly.
As we decrease the number of training points n, so that d > n, there are infinitely many settings of the d parameters of β that satisfy the n linearly independent equalities, and gradient descent naturally chooses the one with minimum norm (even without regularization). This limits how bad the test error can be. Similarly, as we increase the number of training points, so that d < n, there are too many constraints for β to satisfy, and so it ends up primarily modeling the signal rather than the noise, and so generalizing well.
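Here's a minimal sketch of that setup (the parameter choices are mine; np.linalg.lstsq returns the minimum-norm least-squares estimate, i.e. the one gradient descent from zero would find):

```python
import numpy as np

rng = np.random.default_rng(0)
d, noise_std, n_test, n_trials = 40, 0.5, 2000, 50

for n_train in [10, 20, 30, 38, 40, 42, 60, 100, 200]:
    test_mse = 0.0
    for _ in range(n_trials):
        beta = rng.normal(size=d)
        X = rng.normal(size=(n_train, d))
        y = X @ beta + noise_std * rng.normal(size=n_train)
        # Minimum-norm least-squares estimate of beta (no regularization).
        beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
        X_test = rng.normal(size=(n_test, d))
        test_mse += np.mean((X_test @ (beta_hat - beta)) ** 2) / n_trials
    print(f"n = {n_train:3d} training points: test MSE {test_mse:.2f}")
```

In runs like this the test error typically peaks sharply around n ≈ d and then falls as n grows further, which is the sample-wise non-monotonicity being described.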
Rohin's opinion: Basically what's happening here is that at the interpolation threshold, the model is forced to memorize noise, and it has only one way of doing so, which need not generalize well. However, past the interpolation threshold, when the model is overparameterized, there are many models that successfully memorize noise, and gradient descent "correctly" chooses one with minimum norm. This fits into the broader story being told in other papers that what's happening is that the data has noise and/or misspecification, and at the interpolation threshold it fits the noise in a way that doesn't generalize, and after the interpolation threshold it fits the noise in a way that does generalize. Here that's happening because gradient descent chooses the minimum norm estimator that fits the noise; perhaps something similar is happening with neural nets.
This explanation seems like it could explain double descent on model size and double descent on dataset size, but I don't see how it would explain double descent on training time. This would imply that gradient descent on neural nets first has to memorize noise in one particular way, and then further training "fixes" the weights to memorize noise in a different way that generalizes better. While I can't rule it out, this seems rather implausible to me. (Note that regularization is not such an explanation, because regularization applies throughout training, and doesn't "come into effect" after the interpolation threshold.)
The idea—at least as I see it—is that the set of possible models that you can choose between increases with training. That is, there are many more models reachable within n+1 steps of training than there are models reachable within n steps of training. The interpolation threshold is the point at which there are the fewest reachable models with zero training error, so your inductive biases have the fewest choices—past that point, there are many more reachable models with zero training error, which lets the inductive biases be much more pronounced. One way in which I've been thinking about this is that ML models overweight the likelihood and underweight the prior, since we train exclusively on loss and effectively only use our inductive biases as a tiebreaker. Thus, when there aren't many ties to break—that is, at the interpolation threshold—you get worse performance.
If that were true, I'd buy the story presented in double descent. But we don't do that; we regularize throughout training! The loss usually includes an explicit term that penalizes the L2 norm of the weights, and that loss is evaluated and trained against throughout training, and across models, and regardless of dataset size.
It might be that the inductive biases are coming from some other method besides regularization (especially since some of the experiments are done without regularization iirc). But even then, to be convinced of this story, I'd want to see some explanation of how in terms of the training dynamics the inductive biases act as a tiebreaker, and why that explanation doesn't do anything before the interpolation threshold.
Reading your comment again, the first three sentences seem different from the last two sentences. My response above is responding to the last two sentences; I'm not sure if you mean something different by the first three sentences.
The bottom left picture on page 21 in the paper shows that this is not just regularization coming through only after the error on the training set is ironed out: 0 regularization (1/lambda=inf) still shows the effect.
Can we switch to the interpolation regime early if we, before reaching the peak, tell it to keep the loss constant? Aka we are at loss l* and replace the loss function l(theta) with |l(theta)-l*| or (l(theta)-l*)^2.
Interesting! Given that stochastic gradient descent (SGD) does provide an inductive bias towards models that generalize better, it does seem like changing the loss function in this way could enhance generalization performance. Broadly speaking, SGD's bias only provides a benefit when it is searching over many possible models: it performs badly at the interpolation threshold because the lowish complexity limits convergence to a small number of overfitted models. Creating a loss function that gives SGD freer rein over the model it selects could therefore improve generalization.
If
#1 SGD is inductively biased to more generalizeable models in general
#2 an (l(θ)−l∗)^2 loss-function gives all models with loss near l∗ a wider local minimum
#3 there are many different models where l(θ)≈l∗ at a given level of complexity as long as l∗>0
then it's plausible that changing the loss-function in this way will help emphasize SGD's bias towards models that generalize better. Point #1 is an explanation for double-descent. Point #2 seems intuitive to me (it makes the loss-function more convex and flatter when models are better performing) and Point #3 does too: there are many different sets of prediction that will all partially fit the training-dataset and yield the same loss function value of l∗, which implies that there are also many different predictive models that yield such a loss function.
To illustrate point #3 above, imagine we're trying to fit the set of training observations {x₁, x₂, x₃, ..., xᵢ, ..., xₙ}. Fully overfitting this set (getting l(θ)≈0) requires us to get all xᵢ from 1 to n correct. However, we can partially overfit this set (getting l(θ)=l∗) in a variety of different ways. For instance, if we get all xᵢ correct except for xⱼ, we may have roughly n different ways we can pick xⱼ that could yield the same l(θ).[1] Consequently, our stochastic gradient descent process is free to apply its inductive bias to a broad set of models that have similar performances but make different predictions.
[1] This isn't exactly true because getting only one xⱼ wrong without changing the predictions for the other xᵢ might only be achievable by increasing complexity, since some predictions may be correlated with each other, but it demonstrates the basic idea.
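In case it's useful, here is what that modification looks like as code (a minimal PyTorch sketch of the commenter's |l(θ) − l*| idea; l_star is a hand-picked target loss, and nothing here comes from the paper):

```python
import torch

def loss_held_at_l_star(raw_loss: torch.Tensor, l_star: float) -> torch.Tensor:
    """Replace l(theta) with |l(theta) - l_star|.

    Gradients now push the training loss toward l_star from either side,
    rather than driving it all the way down to zero. The squared variant
    (l(theta) - l_star)^2 from the comment is the same one-line change.
    """
    return (raw_loss - l_star).abs()

# Usage inside an otherwise ordinary training step (criterion, model, x, y
# are whatever you already have):
#   loss = loss_held_at_l_star(criterion(model(x), y), l_star=0.1)
#   loss.backward()
```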
Great walkthrough! One nitpick:
Did you mean to say increases test loss?
Thanks! And good catch—should be fixed now.
Off the top of your head, do you know anything about/have any hypotheses about how double descent interacts with the gaussian processes interpretation of deep nets? It seems like the sort of theory which could potentially quantify the inductive bias of SGD.
The neural tangent kernel guys have a paper where they give a heuristic argument explaining the double descent curve (in number of parameters) using the NTK.
Fwiw, I really liked Rethinking Bias-Variance Trade-off for Generalization of Neural Networks (summarized in AN #129), and I think I'm now at "double descent is real and occurs when (empirical) bias is high but later overshadowed by (empirical) variance". (Part of it is that it explains a lot of existing evidence, but another part is that my prior on an explanation like that being true is much higher than almost anything else that's been proposed.)
I was pretty uncertain about the arguments in this post and the followup when they first came out. (More precisely, for any underlying claim; my opinions about the truth value of the claim seemed nearly independent of my beliefs in double descent.) I'd be interested in seeing a rewrite based on the bias-variance trade-off explanation; my current guess is that they won't hold up, but I haven't thought about it much.
Finally got around to that one, and am also pretty into that explanation for the cases of double descent we observe. It also tentatively makes me want to say that the decrease in variance with model size is the 'real story'/primary thing we should think about.
This post was on Hacker News for a while.
This post gave me a slightly better understanding of the dynamics happening inside SGD. I think deep double descent is strong evidence that something like a simplicity prior exists in SGD, which might have actively bad generalization properties, e.g. by incentivizing deceptive alignment. I remain cautiously optimistic that approaches like Learning the Prior can circumnavigate this problem.
I'm no ML expert, but thanks to this post I feel like I have a basic grasp of some important ML theory. (It's clearly written and has great graphs.) This is a big deal because this understanding of deep double descent has shaped my AI timelines to a noticeable degree.
https://arxiv.org/abs/1806.00952 gives a theoretical argument that suggests SGD will converge to a point that is very close in L2 norm to the initialization. Since NNs are often initialized with extremely small weights, this amounts to implicit L2 regularization.
I want to point out some recent work by Andrew Gordon Wilson's group - https://cims.nyu.edu/~andrewgw/#papers.
Particularly, https://arxiv.org/abs/2003.02139 takes a look at double descent from the perspective where they argue that parameters are a bad proxy of model complexity/capacity. Rather, effective dimensionality is what we should be plotting against, and double descent effectively vanishes (https://arxiv.org/abs/2002.08791) when we use Bayesian model averaging instead of point estimates.
Apologies if it's obvious, but why the focus on SGD? I'm assuming it's not meant as shorthand for other types of optimization algorithms given the emphasis on SGD's specific inductive bias, and the Deep Double Descent paper mentions that the phenomena hold across most natural choices in optimizers.
SGD is meant as a shorthand that includes other similar optimizers like Adam.
I found this post interesting and helpful, and have used it as a mental hook on which to hang other things. Interpreting what's going on with double descent, and what it implies, is tricky, and I'll probably write a proper review at some point talking about that.
I wonder if this is a neural network thing, an SGD thing, or a both thing? I would love to see what happens when you swap out SGD for something like HMC, NUTS or (if we're resource constrained) ATMC. If we still see the same effects, then that tells us that this is because of the distribution of functions that neural networks represent, since we're effectively drawing samples from an approximation to the posterior. Otherwise, it would mean that SGD plays a role.
Are you aware of this work and the papers they cite?
From the abstract:
I would float the hypothesis that large volumes of neural network space are devoted to functions that are similar to functions with low K-complexity, and small volumes of NN-space are devoted to functions that are similar to high K-complexity functions. Leading to a Solomonoff-like prior over functions.
Neither, actually—it's more general than that. Belkin et al. show that it happens even for simple models like decision trees. Also see here for an example with polynomial regression.
Yeah, I am. I definitely think that stuff is good, though ideally I want something more than just “approximately K-complexity.”