Same person as nostalgebraist2point0, but now I have my account back.
I agree with the critiques you make of specific papers (in section 2), but I'm less convinced by your diagnosis that these papers are attempting to manage/combat hype in a misguided way.
IMO, "underclaiming" is ubiquitous in academic papers across many fields -- including fields unrelated to NLP or ML, and fields where there's little to no hype to manage. Why do academics underclaim? Common reasons include:
1. Cautious, hedged claims are the safe default in academic writing: they draw fewer objections from reviewers.
2. Describing a flaw in prior work is the standard way to motivate one's own proposed fix.
3. Plain sloppiness: brief descriptions of prior results get copied from paper to paper without being re-checked against the original.
I suspect 1+2+3 above, rather than hype management, explains the specific mistakes you discuss.
For example, Zhang et al 2020 seems like a case of #2. They cite Jia and Liang as evidence about a problem with earlier models, a problem they are trying to solve with their new method. It would be strange to "manage hype" by saying NLP systems can't do X, and then in the same breath present a new system which you claim does X!
Jang and Lukasiewicz (2021) is also a case of #2, describing a flaw primarily in order to motivate their own proposed fix.
Meanwhile, Xu et al 2020 seems like #3: it's a broad review paper on "adversarial attacks" which gives a brief description of Jia and Liang 2017 alongside brief descriptions of many other results, many of them outside NLP. It's true that the authors should not have used the word "SOTA" here, but it seems more plausible that this is mere sloppiness (they copied other, years-old descriptions of the Jia and Liang result) rather than an attempt to push a specific perspective about NLP.
I think a more useful framing might go something like:
Most complexity measures give roughly similar values for the (relative) complexity of most objects
I'll write mostly about this statement, as I think it's the crux of our disagreement.
The statement may be true as long as we hold the meaning of "objects" constant as we vary the complexity measure.
However, if we translate objects from one mathematical space to another (say by discretizing, or adding/removing a metric structure), we can't simply say that the complexity measures for space A, applied to the original A-objects, will inevitably agree with those of space B applied to the translated B-objects. Whether this is true depends on our choice of translation.
(This is clear in the trivial cases of bad translation where we, say, map every A-object onto the same B-object. Now, obviously, no one would consider this a correct or adequate way to associate A-objects with B-objects. But the example shows that the claim about complexity measures will only hold if our translation is "good enough" in some sense. If we don't have any idea what "good enough" means, something is missing from the story.)
In the problem at hand, the worrying part of the translation from real to boolean inputs is the loss of metric structure. (More precisely, the hand-waviness about what metric structure survives the translation, if any.) If there's no metric, this destroys the information needed by complexity measures that care about how easy it is to reconstruct an object "close to" the specified one.
Basic information theory doesn't require a metric, only a measure. There's no sense of "getting an output approximately right," only of "getting the exactly right output with high probability." If you care about being approximately right according to some metric, this leads you to rate-distortion theory.
Both of these domains -- information theory without a metric, and with one -- define notions of incompressibility/complexity, but they're different. Consider two distributions on R:
1. A standard normal distribution.
2. The same distribution, but with part of its mass translated far away, leaving one piece in place near the origin and putting another out at [1e8, 1e8 + 0.1].
According to basic information theory, these are equally simple/compressible. (They have the same differential entropy, or the same K-L divergence from a uniform distribution if you want to be pedantic.)
But in rate-distortion theory, (1) is way more simple/compressible than (2). If you're coding (2) over a noisy channel, you have to distinguish really hard between (say) a piece that stayed in place at [0, 0.1] and another piece that got translated to [1e8, 1e8 + 0.1]. Whereas if you're coding a standard normal, with its light tails, a 1e8-magnitude mistake is effectively impossible.
If you do all your analysis in the metric-less space, hoping it will cleanly pass over to the metric space at the end, you have no way of distinguishing these two possibilities. When you remove the metric, they're identical. So you have limited power to predict what the rate-distortion theory notion of complexity is going to say, once you put the metric back in.
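To make this concrete, here's a small numerical sketch of my own (using uniform pieces rather than Gaussians, for simplicity): both sources get the same naive fixed-rate code, each bit gets flipped with small probability in transit, and we compare the resulting distortion.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p_flip, bits = 100_000, 0.01, 12

def channel_mse(samples, lo, hi):
    """Uniformly quantize samples in [lo, hi] to `bits` bits, flip each bit
    with probability p_flip (a noisy channel), decode, and return the MSE."""
    levels = 2 ** bits
    idx = np.clip(((samples - lo) / (hi - lo) * levels).astype(np.int64), 0, levels - 1)
    bit_vals = (idx[:, None] >> np.arange(bits)) & 1
    flips = rng.random((n, bits)) < p_flip
    noisy = np.where(flips, 1 - bit_vals, bit_vals)
    idx_noisy = (noisy << np.arange(bits)).sum(axis=1)
    decoded = lo + (idx_noisy + 0.5) / levels * (hi - lo)
    return float(np.mean((decoded - samples) ** 2))

# (1) all mass in one compact piece
x1 = rng.uniform(0.0, 0.1, n)
# (2) same local structure, but half the mass translated out to 1e8;
#     the translation leaves the differential entropy unchanged
x2 = x1 + rng.choice([0.0, 1e8], n)

mse_compact = channel_mse(x1, 0.0, 0.1)
mse_split = channel_mse(x2, 0.0, 1e8 + 0.1)
# bit flips now cost ~1e8-scale errors for (2), so its distortion explodes
```

The metric-free description can't see any difference between these two sources, but once you care about squared error, one of them is vastly harder to code.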
Like Rohin, I'm not impressed with the information theoretic side of this work.
Specifically, I'm wary of the focus on measuring complexity for functions between finite sets, such as binary functions.
Mostly, we care about NN generalization on problems where the input space is continuous, generally R^n. The authors argue that the finite-set results are relevant to these problems, because one can always discretize R^n to get a finite set. I don't think this captures the kinds of function complexity we care about for NNs.
This is much too coarse a lens for distinguishing NNs from other statistical learning techniques, since all of them are generally going to involve putting a metric on the input space.
Let's see how this goes wrong in the Shannon entropy argument from this paper.
Sort of similar remarks apply to the other complexity measure used by the authors, LZ complexity. Unlike the complexity measure discussed above, this one does implicitly put a structure on the input space (by fixing an enumeration of it, where the inputs are taken to be bit vectors and the enumeration reads them off in binary).
"Simple" functions in the LZ sense are thus ones that respond to binary vectors in a (roughly) predictable way. What does it mean for a function to respond to binary vectors predictably? It means that knowing the values of some of the bits provides information about the output, even if you don't know all of them. But since our models encode the inputs as binary vectors, we are already setting them up to have properties like this.
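As a quick illustration (using zlib's DEFLATE, whose LZ77 stage is a rough stand-in for the LZ complexity measure), enumerate all inputs as bit vectors in binary order, and compare the compressed truth table of a function that depends predictably on one bit against that of an arbitrary random function:

```python
import zlib
import numpy as np

n_bits = 12
xs = np.arange(2 ** n_bits)

# "Predictable" function: the output just copies one input bit
simple_f = ((xs >> (n_bits - 1)) & 1).astype(np.uint8)

# Arbitrary function: a random truth table
rng = np.random.default_rng(0)
random_f = rng.integers(0, 2, size=2 ** n_bits, dtype=np.uint8)

# In the binary enumeration order, the simple function's truth table is
# highly regular, so it compresses far better than the random one
len_simple = len(zlib.compress(simple_f.tobytes()))
len_random = len(zlib.compress(random_f.tobytes()))
```

The "simplicity" being measured here is entirely relative to the chosen enumeration of the input space; a function that looks simple in binary order can look arbitrary under a different ordering.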
I don't think this step makes sense:
Then we look at the scaling law chart you just provided us, and we look at those L-shaped indifference curves, and we think: OK, so a task which can't be done for less than 10e15 params is a task which requires 10e15 data points also.
In the picture, it looks like there's something special about having a 1:1 ratio of data to params. But this is a coincidence due to the authors' choice of units.
They define "one data point" as "one token," which is fine. But it seems equally defensible to define "one data point" as "what the model can process in one forward pass," which is ~1e3 tokens. If the authors had chosen that definition in their paper, I would be showing you a picture that looked identical except with different numbers on the data axis, and you would conclude from the picture that the brain should have around 1e12 data points to match its 1e15 params!
To state the point generally, the functional form of the scaling law says nothing about the actual ratio D/N where the indifference curves have their cusps. This depends on your choice of units. And, even if we were careful to use the same units, this ratio could be vastly different for different systems, and people would still say the systems "have the same scaling law." Scaling is about relationships between differences, not relationships between absolute magnitudes.
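A toy calculation of the units point, assuming a generic two-term scaling law L = (A/N)^a + (B/D)^b (all constants below are made up for illustration): the cusp of each indifference curve sits where the two terms are equal, and the D/N ratio there shifts by exactly the unit-conversion factor when we remeasure D.

```python
# Generic two-term scaling law: L = (A / N)**a + (B / D)**b.
# The indifference-curve cusp sits where the two terms are equal:
#   (A / N)**a == (B / D)**b  =>  D*(N) = B * (N / A)**(a / b)
# All constants are made up for illustration.
A, B, a, b = 1e14, 1e13, 0.076, 0.095

def d_star(N, tokens_per_point=1.0):
    """Cusp dataset size for model size N, with D measured in units
    of `tokens_per_point` tokens each."""
    return (B / tokens_per_point) * (N / A) ** (a / b)

N = 1e15
ratio_tokens = d_star(N) / N       # D measured in single tokens
ratio_passes = d_star(N, 1e3) / N  # D measured in ~1e3-token "forward passes"
# The cusp's D/N ratio moves by exactly the unit-conversion factor:
# nothing about the functional form privileges a 1:1 ratio.
```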
On the larger topic, I'm pessimistic about our ability to figure out how many parameters the brain has, and even more pessimistic about our ability to understand what a reasonable scale for "a data point" is. This is mostly for "Could a Neuroscientist Understand a Microprocessor?"-type reasons. I would be more interested in an argument that starts with upper/lower bounds that feel absurdly extreme but relatively certain, and then tries to understand if (even) these weak bounds imply anything interesting, rather than an argument that aims for a point estimate or a subjective distribution.
Actually, I think I spoke too soon about the visualization... I don't think your image of L(D) and L(N) is quite right.
Here is what the actual visualization looks like. More blue = lower loss, and I made it a contour plot so it's easy to see indifference curves of the loss.
In these coordinates, L(D) and L(N) are not really straight lines, but they are close to straight lines when we are far from the diagonal line:
To restate my earlier claims...
If either N or D is orders of magnitude larger than the other, then you get close to the same loss you would get from N ~ D ~ (whichever OOM is lower). So, setting eg (N, D) = (1e15, 1e12) would be sort of a waste of N, achieving only slightly lower loss than (N, D) = (1e12, 1e12).
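Plugging in the L(N, D) form from the Kaplan et al papers reproduces this (the constants are their fitted values as I recall them; treat the exact numbers as illustrative): going from (1e12, 1e12) to (1e15, 1e12) barely moves the loss, while scaling N and D together moves it a lot.

```python
# L(N, D) from Kaplan et al, with fitted constants as I recall them
# (exact values are illustrative; the qualitative point is what matters)
N_c, D_c = 8.8e13, 5.4e13
a_N, a_D = 0.076, 0.095

def loss(N, D):
    return ((N_c / N) ** (a_N / a_D) + D_c / D) ** a_D

small_balanced = loss(1e12, 1e12)        # N ~ D
big_model_same_data = loss(1e15, 1e12)   # 1000x params, same data
big_balanced = loss(1e15, 1e15)          # scale both together

# Extra params with fixed data barely help...
gain_params_only = small_balanced - big_model_same_data
# ...while scaling D along with N helps a lot
gain_balanced = small_balanced - big_balanced
```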
This is what motivates the heuristic that you scale D with N, to stay on the diagonal line.
On the other hand, if your goal is to reach some target loss and you have resource constraints, what matters is whichever resource constraint is more restrictive. For example, if we were never able to scale D above 1e12, then we would be stuck achieving a loss similar to GPT-3, never reaching the darkest colors on the graph.
When I said that it's intuitive to think about L(D) and L(N), I mean that I care about which target losses we can reach. And that's going to be set, more or less, by the highest N or the highest D we can reach, whichever is more restrictive.
Asking "what could we do with an N=1e15 model?" (or any other number) is kind of a weird question from the perspective of this plot. It could mean either of two very different situations: either we are in the top right corner with N and D scaled together, hitting the bluest region ... or we are just near the top somewhere, in which case our loss is entirely determined by D and could be almost anything.
In Ajeya's work, this question means "let's assume we're using an N=1e15 model, and then let's assume we actually need that many parameters, which must mean we want to reach the target losses in the upper right corner, and then let's figure out how big D has to be to get there."
So, the a priori choice of N=1e15 is driving the definition of sufficient performance, defined here as "the performance which you could only reach with N=1e15 params".
What feels weird to me -- which you touched on above -- is the way this lets the scaling relations "backseat-drive" the definition of sufficient quality for AGI. Instead of saying we want to achieve some specific thing, then deducing we would need N=1e15 params to do it... we start with an unspecified goal and the postulate that we need N=1e15 params to reach it, and then derive the goal from there.
You can't have more D than you have compute, in some sense, because D isn't the amount of training examples you've collected, it's the amount you actually use to train... right? So... isn't this a heuristic for managing compute? It sure seemed like it was presented that way.
This is a subtle and confusing thing about the Kaplan et al papers. (It's also the subject of my post that I linked earlier, so I recommend you check that out.)
There are two things in the papers that could be called "optimal compute budgeting" laws:
1. A law relating D to N: how large a dataset you need before it's worth scaling up to N params (the D ∝ N^0.74 relation).
2. A law relating training steps S to N: for a fixed compute budget, how large a model to train and for how many optimization steps.
I said the D vs N law was "not a heuristic for managing compute" because the S vs N law is more directly about compute, and is what the authors mean when they talk about compute optimal budgeting.
However, the D vs N law does tell you about how to spend compute in an indirect way, for the exact reason you say, that D is related to how long you train. Comparing the two laws yields the "breakdown" or "kink point."
Do you agree or disagree? ... I take [you] to mean that you think the human brain could have had almost identical performance with much fewer synapses, since it has much more N than is appropriate given its D?
Sorry, why do you expect I disagree? I think I agree. But also, I'm not really claiming the scaling laws say or don't say anything about the brain, I'm just trying to clarify what they say about (specific kinds of) neural nets (on specific kinds of problems). We have to first understand what they predict about neural nets before we can go on to ask whether those predictions generalize to explain some other area.
Perhaps it would help me if I could visualize it in two dimensions
This part is 100% qualitatively accurate, I think. The one exception is that there are two "optimal compute" lines on the plot with different slopes, for the two laws referred to above. But yeah, I'm saying we won't be on either of those lines, but on the L(N) or the L(D) line.
The scaling laws, IIRC, don't tell us how much data is needed to reach a useful level of performance.
The scaling laws from the Kaplan et al papers do tell you this.
The relevant law is L(N,D), for the early-stopped test loss given parameter count N and data size D. It has the functional form
L(N, D) = [(N_c / N)^(α_N / α_D) + D_c / D]^(α_D)

with fitted constants roughly α_N ≈ 0.076, α_D ≈ 0.095, N_c ≈ 8.8e13, D_c ≈ 5.4e13.
The result that you should scale D ∝ N^0.74 comes from trying to keep the two terms in this formula about the same size.
This is not exactly a heuristic for managing compute (since D is not dependent on compute, it's dependent on how much data you can source). It's more like a heuristic for ensuring that your problem is the right level of difficulty to show off the power of this model size, as compared to smaller models.
You always can train models that are "too large" on datasets that are "too small" according to the heuristic, and they won't diverge or do poorly or anything. They just won't improve much upon the results of smaller models.
In terms of the above, you are setting N ~ 1e15 and then asking what D ought to be. If the heuristic gives you an answer that seems very high, that doesn't mean the model is "not as data efficient as you expected." Rather, it means that you need a very large dataset if you want a good reason to push the parameter count up to N ~ 1e15 rather than using a smaller model to get almost identical performance.
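Concretely, using the dataset-size rule of thumb quoted in Kaplan et al (D of order 5e3 * N^0.74 tokens; treat the constant as approximate, from memory), a parameter count around 1e15 demands a dataset in the ~1e14-1e15 token range:

```python
# Kaplan et al's rule of thumb for the dataset size needed to avoid
# over-parameterization: D ~ 5e3 * N**0.74 tokens (constant approximate)
def d_needed(n_params):
    return 5e3 * n_params ** 0.74

D = d_needed(1e15)  # hundreds of trillions of tokens
```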
I find it more intuitive to think about the following, both discussed in the papers:
1. L(D): the loss you reach with a given dataset size, once the model is big enough that more capacity wouldn't help.
2. L(N): the loss you reach with a given model size, once the dataset is big enough that more data wouldn't help.
If the Kaplan et al scaling results are relevant for AGI, I expect one of these two limits to provide the relevant constraint, rather than a careful balance between N and D to ensure we are not in either limit.
Ultimately, we expect AGI to require some specific-if-unknown level of performance (ie crossing some loss threshold L_AGI). Ajeya's approach essentially assumes that we'll cross this threshold at a particular value of N, and then further assumes that this will happen in a regime where data and compute limitations are around the same order of magnitude.
I'm not sure why that ought to be true: it seems more likely that one side of the problem will become practically difficult to scale in proportion to the other, after a certain point, and we will essentially hug tight to either the L(N) or the L(D) curve until it hits L_AGI.
See also my post here.
I don't think you're completely missing something. This is the active learning approach, which gwern also suggested -- see that thread for more.
I disagree. Transfer learning is practically the entire point. 'Blessings of scale' etc.
Sure -- my point was to contrast two cases:
Many, including OpenAI, argue that general web crawls are a good way to get high domain diversity for free. This includes domains the researchers would never have come up with themselves.
If we switch to manually hunting down large specialized datasets, this will definitely help, but we're no longer getting broad domain coverage for free. At best we get broad domain coverage through manual researcher effort and luck, at worst we don't get it at all.
I see your point about active learning "telling us" when we need more data -- that's especially appealing if it can point us to specific domains where more coverage would help.
What scaling curve in L(D)/L(C) could we get with even a simple active learning approach like running a small GPT over Common Crawl and throwing out datapoints which are too easily predicted?
IIUC, this is trying to make L(D) faster by making every data point more impactful (at lowering test loss). This will help if
1. the small model's loss is actually a good signal for which datapoints would be informative to the big model, and
2. there is far more raw data available than you would otherwise train on, so you can afford to throw much of it away.
I can imagine this regime becoming the typical one for non-text modalities like video, where datasets are huge and full of complex redundancy (which the model will learn to compress).
With text data, though, I'm concerned that (2) will fail soon.
The number of train steps taken by GPT-3 was the same order of magnitude as the size of Common Crawl. I haven't seen convincing evidence that comparably good/diverse text datasets can be constructed which are 10x this size, 100x, etc. The Pile is an interesting experiment, but they're mostly adding large quantities of single-domain text like Github, which is great for those domains but won't help outside them.
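For what it's worth, here's a toy sketch of the "throw out easily predicted datapoints" filter, with a character bigram model standing in for the small GPT (the corpus and all details here are made up for illustration):

```python
import math
from collections import Counter

# Tiny stand-in corpus: one highly redundant string (repeated), one novel one
corpus = ["abababababab", "abababababab", "abababababab", "zq jx vk wq pf"]

# Fit a character bigram model on the pooled corpus (stand-in for a small LM)
pairs, unigrams = Counter(), Counter()
for text in corpus:
    for x, y in zip(text, text[1:]):
        pairs[(x, y)] += 1
        unigrams[x] += 1

def avg_nll(text):
    """Mean per-character negative log-likelihood under the bigram model,
    with add-one smoothing over a nominal 27-symbol alphabet."""
    nll = 0.0
    for x, y in zip(text, text[1:]):
        p = (pairs[(x, y)] + 1) / (unigrams[x] + 27)
        nll += -math.log(p)
    return nll / (len(text) - 1)

# Keep only the harder-to-predict half of the (unique) datapoints
scores = {t: avg_nll(t) for t in set(corpus)}
threshold = sorted(scores.values())[len(scores) // 2]
kept = [t for t in corpus if scores[t] >= threshold]
```

The redundant string is cheap to predict and gets filtered out; the novel one survives. Whether this buys you anything at scale depends exactly on the conditions above: the filter only helps if there's a large reservoir of redundant data to discard.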