People often say US-China deals to slow AI progress and develop AI more safely would be hard to enforce/verify.
However, there are easy-to-enforce deals: each side destroys a fraction of its chips once some level of AI capability is reached. This still seems like it could be helpful, and it's pretty easy to verify.
This is likely worse than a well-executed comprehensive deal, which would allow productive non-capabilities uses of the compute (e.g., safety work or even just economic activity). But it's hard to verify that chips aren't being used to advance capabilities, while it's easy to check that they've been destroyed.
See also: Daniel's post on "Cull the GPUs".
Apparently, similar work on prompt repetition came out a bit before I published this post (but after I ran my experiments) and reproduces the effect. It looks like this paper doesn't test the "no output / CoT prior to the answer" setting and instead tests the setting where you use non-reasoning LLMs (e.g., Opus 4.5 without extended thinking enabled).
The fit is notably better for "cumulative investment over time", though years still produces a slightly better fit.
I've cut off the fit at 2010, about when the original version of Moore's law stops holding. If you try to project out past 2010, I think cumulative investment would do better, but only because investment slowed in response to Moore's law dying.
(Doing the fit with an investment series lagged by 3 years doesn't make any important difference.)
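For concreteness, the comparison is roughly this (a minimal sketch with hypothetical input series, not the actual data or code I used): fit log transistor count against calendar year vs. against log cumulative investment, restricted to pre-2010, and compare R².

```python
# Rough sketch of the fit comparison (hypothetical inputs; not the actual data/code used).
import numpy as np

def r_squared(x, y):
    """R^2 of a simple linear fit of y on x."""
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    return 1 - residuals.var() / y.var()

def compare_fits(years, transistor_counts, annual_investment, cutoff=2010):
    years = np.asarray(years, dtype=float)
    log_t = np.log(np.asarray(transistor_counts, dtype=float))
    log_cum_inv = np.log(np.cumsum(np.asarray(annual_investment, dtype=float)))
    mask = years <= cutoff  # cut off around when the original Moore's law stops
    return {
        "calendar year": r_squared(years[mask], log_t[mask]),
        "log cumulative investment": r_squared(log_cum_inv[mask], log_t[mask]),
    }
```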
OK, so this works, but less well than I initially thought. The model will still reason even with a prefill, just at some low rate, and this rate depends on the prompt (sometimes the rate of reasoning massively spikes for some prompt change). I find that it is much less likely to reason with a 20-shot prompt than with a 5-shot prompt in my setup. I also worry that this prompt is now very OOD and thus getting worse performance, but probably this is fine.
(Aside: It's very strange that the model is even allowed to respond without reasoning when reasoning is enabled (but doesn't do so consistently???), and yet we're still forced to enable reasoning for these models.)
Regardless, this does let me get results for Gemini 2.5 Pro and Gemini 3 Pro in a less insane way (I consider the model incorrect if it reasoned and do up to 5 retries to find a response without reasoning). I find that both models benefit from repeats/filler. At repeats=5, Gemini 3 Pro has a time horizon of 3.8 minutes while Gemini 2.5 Pro has a time horizon of 2.7 minutes. (Without repeats, the time horizons are 2.8 minutes and 1.9 minutes respectively.)
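Concretely, the retry/scoring loop looks something like this (a minimal sketch; `query_model` is a hypothetical helper standing in for the actual API call plus prefill, not a real library function):

```python
from typing import Callable, Optional, Tuple

def answer_without_reasoning(
    query_model: Callable[[], Tuple[str, bool]],  # hypothetical: returns (text, did_reason)
    max_retries: int = 5,
) -> Optional[str]:
    """Retry until the model responds without reasoning; None is scored as incorrect."""
    for _ in range(max_retries):
        text, did_reason = query_model()
        if not did_reason:
            return text
    return None  # every attempt reasoned, so the sample counts as incorrect
```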
(Note that I'm treating the release date of 2.5 Pro as 2025/06/17 even though an experimental version was released on 2025/03/25; the model looks substantially above trend if you use the experimental release date, though plausibly that version is worse than the one I'm testing.)
Huh, that sure seems to work. Interesting.
I wonder if this is an intended feature...
By meta-cognition, in this context I mean "somewhat flexibly deciding what to think about / allocate cognition to in at least some cases". More generally, I meant "this probably shows some kind of internal cognitive sophistication you might not have thought LLMs had". I didn't really mean anything very precise or intend to make a very strong claim, and in retrospect I probably should have said something other than meta-cognition.
Also, I agree that it could be something else that isn't that interesting, doesn't correspond to "somewhat flexibly deciding what to think about", and isn't well described as even very basic metacognition; probably my language was insufficiently caveated.
Recall that without filler, Opus 4.5 performance is 45.2%. I tried the following experiments on Opus 4.5 with filler counting to 300:
So, it seems like the framing doesn't matter ~at all and actually having the filler tokens is the key thing (at least for Opus 4.5, though I strongly expect this would reproduce for Opus 4 and Sonnet 4).
Note that repeating the problem X times also works (and yields a similar performance increase to filler tokens, given the optimal number of repeats/filler). The boost is also similar across different types of filler (which you'd naively expect to result in different suggestions, etc.).
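For illustration, the two prompt variants look roughly like this (a sketch with assumed wording; the exact instructions and the placement of the filler in my setup may differ):

```python
def repeated_prompt(problem: str, repeats: int = 5) -> str:
    """Repeat the problem `repeats` times before asking for an immediate answer."""
    return "\n\n".join([problem] * repeats) + "\n\nGive only the final answer."

def counting_filler_prompt(problem: str, count_to: int = 300) -> str:
    """Pad the prompt with filler (counting to `count_to`) before the answer request."""
    filler = " ".join(str(i) for i in range(1, count_to + 1))
    return f"{problem}\n\n{filler}\n\nGive only the final answer."
```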
I can quickly test this though; will run it in one sec.
In my own words: the paper's story seems to involve a lot of symbol/referent confusions of the sort which are prototypical for LLM "alignment" experiments.
To be clear, we don't just ask the model what it would do; we see what it actually does in situations that the LLM hopefully "thinks" are real. It could be that our interpretations of motives and beliefs (e.g., "strategically pretend to comply", or that the model mostly thinks the setup is real / isn't a test) are wrong, but the actual output behavior of the model is at least different in a way that is very consistent with this story, and this matches the model's CoT. I agree that "the model says X in the CoT" is limited evidence that X is well described as the AI's belief or reason for taking some action (and there can be something like symbol/referent confusions with this). And it could also be that results on current LLMs have very limited transfer to the most concerning AIs. But, despite these likely points of agreement, I think you are making a stronger claim when you talk about symbol/referent confusions, and that claim isn't accurate.
I wonder what the natural way to apply Wright's law to AI is. Cumulative GPU-hours? Total quality-adjusted inference usage?
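For reference, Wright's law says unit cost falls as a power law in cumulative production, so the question is what plays the role of cumulative production $x$ here:

$$C(x) = C_1\, x^{-b},$$

i.e., each doubling of $x$ multiplies cost by $2^{-b}$. The candidates above (cumulative GPU-hours, total quality-adjusted inference usage) are different choices of $x$.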