One alternative way of thinking about this is to decompose plans by which actor the plan is for:
This isn't a perfect breakdown, e.g. Plan A might focus mostly on what the US should do, but it might still be helpful.
This decomposition was proposed by @Lukas Finnveden.
Anthropic, GDM, and xAI say nothing about whether they train against Chain-of-Thought (CoT) while OpenAI claims they don't[1].
I think AI companies should be transparent about whether (and how) they train against CoT. While OpenAI is doing a better job at this than other companies, I think all of these companies should provide more information about this.
It's particularly striking that Anthropic says nothing about whether they train against CoT given their system card (for 4.5 Sonnet) is very thorough and includes a section on "Reasoning faithfulness" (kudos to them for providing so much other information!). Naively, this seems like some evidence that they are training against CoT and didn't want to disclose this.[2]
I think we should try to create an incentive gradient that pushes AI companies to disclose information even if that information makes them look bad, so we should make companies feel some heat for not disclosing (reasonably) important information like how/whether they are training against CoT.
It's worth noting that some versions of training against CoT might be totally fine (as in, they don't incentivize obfuscation of misaligned reasoning) and that some particular approach for training against CoT could be worth it even if it does incentivize obfuscating misaligned reasoning. For instance, it could end up being the case that training against CoT works well and we end up with sufficient confidence that this doesn't result in substantial obfuscation of reasoning. The situation is also just generally complicated because things like "what SFT/prompting initialization do you use for your CoT" could have big effects on obfuscation and aren't exactly well described as applying optimization pressure to CoT. (But, if you start iterating on the SFT/prompting initialization to reduce misalignment observed in the CoT, this could cause problems.)
Given that the situation is complicated, ideally AI companies would publicly disclose enough information that other actors can come to their own conclusions about how worried they should be about CoT obfuscation.[3] If companies are strongly opposed because this would leak too much IP, they could instead disclose this information to some number of non-conflicted third parties (e.g. METR) who could then publicly share their conclusions.
Source: METR's evaluation of GPT-5 says that OpenAI asserts "There was no direct training pressure on GPT-5 to hide or obfuscate misaligned reasoning in reasoning traces" and "There's no particular reason to expect that GPT-5's reasoning traces are less monitorable than METR would assume." It's of course possible that OpenAI will train against CoT in future models, and it's possible that for GPT-5 they did train against CoT in some way that they think wouldn't directly apply pressure to hide/obfuscate misaligned reasoning (or they could be lying). ↩︎
Of course, it is totally possible that Anthropic just didn't get around to saying anything or just defaulted to secrecy rather than specifically going out of their way to not say anything in this case. I think the prior probability on this happening just because not saying anything is the default (or something like this) is high. ↩︎
The information that OpenAI has disclosed doesn't suffice for other actors (that don't have private information) to come to their own conclusions without trusting relatively general assurances from OpenAI that aren't backed by very specific claims. Of course, this is still much more information about training against CoT than other companies! ↩︎
The main class of projects that need granular model weight access to frontier models is model internals/interpretability.
You could potentially do a version of this sort of API which has some hooks for interacting with activations to capture a subset of these use cases (e.g. training probes). It would probably add a lot of complexity though and might only cover a small subset of research.
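To make this slightly more concrete, here is a minimal sketch (in Python) of what the probe-training use case might look like through an activation-hooks API. Everything here is hypothetical: `fetch_activations` stands in for whatever endpoint a company might expose and is stubbed with random vectors so the sketch runs; the point is just to illustrate the kind of surface area such an API would need, not any actual provider's interface.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression


def fetch_activations(prompts: list[str], layer: int) -> np.ndarray:
    """Hypothetical API call: return one activation vector per prompt at the given
    layer. A real provider endpoint would go here; this stub returns random vectors
    so the sketch runs end-to-end."""
    rng = np.random.default_rng(0)
    hidden_size = 4096  # assumed model width, purely illustrative
    return rng.normal(size=(len(prompts), hidden_size))


# Toy labeled prompts for some behavior we want to probe for.
prompts = [f"example with the behavior ({i})" for i in range(50)] + [
    f"example without the behavior ({i})" for i in range(50)
]
labels = np.array([1] * 50 + [0] * 50)

# Pull activations through the (hypothetical) API and fit a linear probe locally.
acts = fetch_activations(prompts, layer=20)
probe = LogisticRegression(max_iter=1000).fit(acts, labels)
print("train accuracy:", probe.score(acts, labels))
```

Even this simple case requires exposing per-layer activations, which already adds a lot of API complexity relative to standard inference endpoints, and it wouldn't cover most internals/interpretability work (e.g. anything that needs to edit weights or intervene on activations during a forward pass).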
I'm not trying to say "Plan A is doable and shut it all down is intractable".
My view is that "shut it all down" probably requires substantially more (but not a huge amount more) political will than Plan A such that it is maybe like 3x less likely to happen given similar amounts of effort from the safety community.
You started by saying:
My main question is "why do you think Shut Down actually costs more political will?".
So I was trying to respond to this. I think 3x less likely to happen is actually a pretty big deal; this isn't some tiny difference, but neither is it "Plan A is doable and shut it all down is intractable". (And I also think "shut it all down" has various important downsides relative to Plan A, maybe these downsides can be overcome, but by default this makes Plan A look more attractive to me even aside from the political will considerations.)
I think something like Plan A or "shut it all down" are both very unlikely to happen and I'd be pretty sympathetic to describing both as politically intractable (e.g., I think something as good/strong as Plan A is only 5% likely). "politically intractable" isn't very precise though, so I think we have to talk more quantitatively.
Note that my view is also that pushing for Plan A isn't the most leveraged thing for most people to do at the margin; I expect to focus on making Plans C/D go better (with some weight on things like Plan B).
But like, the last draft of Plan A I saw included "we relocate all the compute to centralized locations in third party countries" as an eventual goal. That seems pretty crazy?
Yes, this is much harder (from a political will perspective) than compute + fab monitoring which is part of my point? Like my view is that in terms of political will requirements:
compute + fab monitoring << Plan A < Shut it all down
I think compute + fab monitoring with potential for escalation requires much lower political will than shutting down AI development. I agree that both Plan A and shut it all down require something like this. Like I think this monitoring would plausibly not require much more political will than export controls.
Advanced bio AI seems pretty good for the world and to capture a lot of the benefits
Huh? No it doesn't capture much of the benefits. I would have guessed it captures a tiny fraction of the benefits of advanced AI, even for AIs around the level where you might want to pause (around human level).
But, it seems like a version of the treaty that doesn't at least have the capacity to shutdown compute temporarily is a kinda fake version of Plan A, and once you have that, "Shut down" vs "Controlled Takeoff" feels more like arguing details than fundamentals to me.
I agree you will have the capacity to shut down compute temporarily either way; I disagree that there isn't much of a difference between slowing down takeoff and shutting down all further non-narrow AI development.
Sure, I agree that Nate/Eliezer think we should eventually build superintelligence and don't want to cause a pause that lasts forever. In the comment you're responding to, I'm just talking about difficulty in getting people to buy the narrative.
More generally, what Nate/Eliezer think is best doesn't resolve concerns with the pause going poorly because something else happens in practice. This includes the pause going on too long or leading to a general anti-AI/anti-digital-minds/anti-progress view which is costly for the longer-run future. (This applies to the proposed Plan A as well, but I think poor implementation is less scary in various ways and the particular risk of ~anti-progress forever is less strong.)
I think Plan A and "Shut It Down" both require very similar opening steps that are the most politically challenging part AFAICT, and once the world is even remotely considering those steps, the somewhat different shut-it-down steps don't seem particularly hard sells.
I think shutting down all AI development is much more costly than not shutting down all AI development in a pretty straightforward sense that will in fact probably be priced into the required level of political will: Nvidia is in fact much worse off if all AI development shuts down versus if AI development proceeds, but with capabilities developing more slowly once they reach a high level of capabilities.
I would guess the stock market will react pretty differently to something like Plan A vs "shut it all down", for reasonable reasons.
I don't understand why you think the opening steps are the most politically challenging part given that the opening steps for Plan A plausibly don't require stopping AI development.
Another point is that many people have pretty reasonable existing objections to "shut it all down". Here are some example objections people might have that apply more to "shut it all down" than "Plan A":
I think this both factors into political will and makes me more reluctant to push for "shut it all down", because I partially buy these views and because I think it's good to be cooperative/robust under a variety of pretty reasonable views. Like I do really feel "delaying AI for 30 years results in ~1/4 of the population dying of old age when they otherwise wouldn't have" from a cooperativeness-with-other-moral-views perspective (I put most weight on longtermism myself).
I also think Plan A "bad implementation" is much more likely, and also much worse
Pausing for a long time at a low level of capability seems like it makes the risk of other actors overtaking and destabilizing the pause regime especially bad. E.g., just shutting down AI development in the US is much worse than just implementing Plan A in the US, but this generally applies to any sort of partial non-proliferation/pause. More capable AIs also can help make the regime more stable.
I agree that a bad implementation of Plan A can decay to something more like Plan C or worse where you don't actually spend that much of the lead time on safety and how you feel about this depends on how you feel about something like Plan C.
One way to put this is that a long pause is probably taking on a bunch more "some actor gets outside the agreement" or "the current regime collapses and you go quickly from here" risk (due to additional time and lower level of capability) with not that much benefit. E.g., like if we seemingly had the political will for a 30 year pause, I'd be pretty worried about this collapsing in 10 years and us doing something much worse than Plan A, while if we start with Plan A then we've already gotten a bunch done by the time the regime (potentially) collapses.
I also think that if you don't stop semiconductor progress (which, again, would make political will requirements substantially higher than under Plan A), then there is a real risk of the takeoff being much faster than it would have been by default due to overhang. It's unclear how bad this is, but I think it's really nice to have the singularity happen at a point where compute (and ideally fab capacity) is a big bottleneck. Note that under Plan A the compute overhang is less extreme and you are slowing the takeoff itself at the most leveraged point (such that even if you exit somewhat early, you still got most of what you could have hoped for).
(I'm noticing that we're calling one proposal "shut it all down" and the other "Plan A" (even though it's just my favorite proposal for what to do with this level of political will), which is pretty obviously biased naming as a side effect of how I've introduced this proposal. I'll keep using this naming, but readers should try to adjust for this bias as applicable.)
Again, see: "Shut It Down" is simpler than "Controlled Takeoff"
I agree "shut it all down" is a simpler proposal (in initial implementation) and this is a big advantage. If you think massively augmented humans are likely as a result of "shut it all down", then from our perspective it's overall simpler, not just in terms of initial implementation. Otherwise, someone still has to eventually handle the situation which is potentially complicated, especially if alignment moonshots don't work out.
I agree you do eventually want to go back to Plan A anyway, so I mostly am just not seeing why you really want to treat these as separate plans
Notably, the "shut it all down" plan proposed in If Anyone Builds It, Everyone Dies involves stopping AI progress for a long period at the current level of capability, so it really is a separate plan. I agree you sometimes want to prevent development beyond a capability cap and sometimes you want to proceed, but the question from my perspective is more like "at what level of capability do you want to spend this time" and "how much time do you realistically have".
I agree with some of the risks of "geopolitical situation might get harder to have control over" and "humanity generally becoming anti-progress" but these don't even seem strictly worse in Shutdown World vs Controlled Takeoff world.
I think "humanity generally becoming anti-progress" (and stopping AI development much longer term) seems much more likely if you're stopping all AI progress for decades (both evidentially and causally).
I think the geopolitical situation in 30 years naively looks scary due to the rise of China and the relative fall of Europe and I don't think general cultural/societal progress looks fast enough on that time frame to overcome this. I think the current CCP having control over most/all of the universe seems like 50% as bad as AI takeover in my lights, though I'm sympathetic to being more optimistic, especially about the version of the CCP that exists in 30 years.
Responding to your other comment
One way the geopolitical situation might get worse is "time passes, and, all kinds of stuff can change when time passes."
Another way it can get worse is "the current dynamics still involve a feeling of being rushed, and time pressure, and meanwhile the international agreements we have leave a lot more wiggle room and more confused spirit-of-the-law about how people are allowed to maneuver." This could cause the geopolitical situation to get worse faster than it would otherwise.
Which of those is worse? idk, I'm not a geopolitical expert. But, it's why it seems pretty obviously not 'strictly worse' (which is a high bar, with IMO a higher burden of proof) under Shut It Down.
I think China predictably getting relatively (much?) more powerful is pretty relevant. I agree it's not strictly worse, though I think "humanity generally becoming anti-progress" is ~strictly worse under "shut it all down".
I agree it's messy and the comparison is complicated.
Also, note "shut it all down" is not like it's actually going to be permanent.
Sure, the intention isn't that it is permanent, but I think there is a real risk of it lasting a long time until the agreement is suddenly exited in a very non-ideal way (and some small chance of this altering culture for the worse longer term and some smaller chance of this resulting in humanity never building powerful AI before it is too late).
I'm going to default to bowing out, but if you want to bid for me to engage a bunch, you can.
Maybe I should clarify my view a bit on Plan A vs "shut it all down":
This is my view after more seriously getting into some of the details of Plan A related to compute verification and avoiding blacksite projects, as well as trying to do a more precise comparison with "shut it all down".