Mmm nod. (I bucket this under "given this ratio of right/wrong responses, you think a smart alignment researcher who's paying attention can keep it in a corrigibility basin even as capability levels rise?". Does that feel inaccurate, or, just, not how you'd exactly put it?)
There's a version of Short Timeline World (which I think is more likely? but, not confidently) which is: "the current paradigm does basically work... but, the way we get to ASI, as opposed to AGI, routes through 'the current paradigm helps invent a new better paradigm, real fast'."
In that world, GPT-5 has the possibility-of-true-generality, but, not necessarily very efficiently, and once you get to the sharper part of the AI 2027 curve, the mechanism by which the next generation of improvement comes is via figuring out alternate algorithms.
> certainly if AI systems were only ever roughly this misaligned we'd be doing pretty well.
I think this is an important disagreement with the "alignment is hard" crowd. I particularly disagree with "certainly."
The question is "what exactly is the AI trying to do, and what happens if it magnified its capabilities a millionfold and it and its descendants were running open-endedly?", and are any of the instances catastrophically bad?
Some things you might mean that would raise your position to "certainly" (whereas I'd say "most likely not, or, it's too dumb to even count as 'aligned' or 'misaligned'"):
Were any of those what you meant? Or are you thinking about it in an entirely different way?
I would naively expect, if you took LLM agents' current degree of alignment, and ran a lotta copies trying to help you with end-to-end alignment research with dialed-up capabilities, at least a couple instances would end up trying to subtly sabotage you and/or escape.
This framing feels reasonable-ish, with some caveats.[1]
I am assuming we're starting the question at the first stage where either "shut it down" or "have a strong degree of control over global takeoff" becomes plausibly politically viable. (i.e. assume early stages of Shut It Down and Controlled Takeoff both include various partial measures that are more immediately viable and don't give you the ability to steer capability-growth that hard)
But, once it becomes a serious question "how quickly should we progress through capabilities", then one thing to flag is, it's not like you know "we get 5 years, therefore, we want to proceed through those years at X rate." It's "we seem to have this amount of buy-in currently..." and the amount of buy-in could change (positively or negatively).
Some random thoughts on things that seem important:
If it's not viable to do that, well, then we don't. (but, then we're not really having a real convo about how slow the takeoff should ideally be, just riding the same incentive wave we're currently riding with slightly more steering). ((We can instead have a convo about how to best steer given various murky conditions, which seems like a real important convo, I'm just responding here to this comment's framing))[3]
If we reach a point where humanity has demonstrated the capability of "stop training on purpose, stop uncontrolled compute production, and noticeably improve our ability to predict the next training run", then I'm not obviously opposed to doing relatively rapid advancement, but, it's not obviously better to do "rapid to the edge" than "do one round where there are predictions/incentives/prizes somehow for people to accurately predict how the next training rounds go, then evaluate that, then do it again."
I think there's at least some confusion where people are imagining the simplest/dumbest version of Shut It Down, and imagining "Plan A" is nuanced and complicated. I think the actual draft treaty has levers that are approximately the same levers you'd want to do this sort of controlled takeoff.
I'm not sure how powerful Nvidia is as an interest group. Maybe it is important to avoid them getting a deal like this, so they're less of an interest group with power at the negotiating table.
FYI my "Ray detects some political bs motivations in himself" alarm is tripping as I write this paragraph. It currently seems right to me but let me know if I'm missing something here.
(Having otherwise complained a bunch about some of the commentary/framing around Plan A vs Shut It Down, I do overall like this post and think having the lens of the different worlds is pretty good for planning).
(I am also appreciating how people are using inline reacts)
Nod.
FYI, I think Shut It Down is approximately as likely to happen as "full-fledged Plan A that is sufficiently careful to actually help much more than [the first several stages of Plan A that Plan A and Shut It Down share]", on account of being simple enough that it's even really possible to coordinate on it.
I agree they are both pretty unlikely to happen. (Regardless, I think the thing to do is probably "reach for whatever wins seem achievable near term and try to build coordination capital for more wins")
I think a major possible failure mode of Plan A is "it turns into a giant regulatory-capture molochian boondoggle that both slows things down for a long time in confused, bad ways and reads to the public as a somewhat weirdly cynical plot, which makes people turn against tech progress comparably or more than the average Shut It Down would." (I don't have a strong belief about the relative likelihoods of that.)
None of those beliefs are particularly strong and I could easily learn a lot that would change all my beliefs.
Seems fine to leave it here. I don't have more arguments I didn't already write up in "Shut It Down" is simpler than "Controlled Takeoff"; just stating for the record that I don't think you've put forth an argument that justifies the 3x increase in difficulty of Shut It Down over the fully-fledged version of Plan A. (We might still be imagining different things re: Shut It Down)
Nod, I agree the centralizing part is harder than non-centralized fab monitoring. But, I think a sufficient amount of "non-centralized" fab monitoring is still a much bigger ask than export controls, and, the centralization was part of at least one writeup of Plan A, and it seemed pretty weird to include that bit but write off "actual shutdown" as politically intractable.
> I think compute + fab monitoring with potential for escalation requires much lower political will than shutting down AI development. I agree that both Plan A and shut it all down require something like this. Like I think this monitoring would plausibly not require much more political will than export controls.
FYI this is cruxy. I don't have very strong political-viability-intuitions, but it seems like this requires export controls that several (sometimes rivalrous) major nations agree to simultaneously, with at least nontrivial trust for establishing the monitoring process together, which eventually gets pretty invasive.
(maybe you are imagining the monitoring is actually mostly done with spy satellites that don't require much trust or cooperation?)
But like, the last draft of Plan A I saw included "we relocate all the compute to centralized locations in third party countries" as an eventual goal. That seems pretty crazy?
Thanks. I'll leave some responses, but it feels fine to leave it here for now.
> I think shutting down all AI development is much more costly than not shutting down all AI development in a pretty straightforward sense that will in fact probably be priced into the required level of political will: Nvidia is in fact much worse off if all AI development shuts down versus if AI development proceeds, but with capabilities developing more slowly once they reach a high level of capabilities.
I would have guessed the stock market will react pretty differently to something like Plan A vs "shut it all down", for reasonable reasons.
> I don't understand why you think the opening steps are the most politically challenging part given that the opening steps for Plan A plausibly don't require stopping AI development.
First, slight clarification: the thing I had in mind isn't the opening step (which is presumably "do some ad hoc deals that build political momentum without too much cost").
The step I have in mind is "all global compute clusters and fab production are monitored, with buy-in from China, UK, Europe etc, with intent for major international escalation of some kind if someone violates the monitor-pact". This doesn't directly shut down Nvidia, but, it sure is putting some writing on the wall that I would expect Nvidian political interests to fight strongly, even if it doesn't immediately come with a shutdown.
I'm imagining a Plan A that doesn't include something like that is more like a Plan A / B hybrid or some other "not the full Plan A." (based on some other internal Plan A docs I've looked at that went into more detail as of a few weeks ago).
I don't think there's any way you get to that point without most major world leaders actually believing-in-their-heart "if anyone builds it, something real bad is dangerously likely to happen." And by the point where people are actually agreeing to have international inspection of some kind, I would expect people to be thinking more "okay, will this actually work?" than "what do we have buy-in for?".
(There is a version where the US enforces it at gunpoint, or at least economic-sanction-point, without everyone else's buy-in, but I both don't expect them to do that and don't really expect it to work?)
MIRI discusses in the IABIED resources that they would prefer carveouts for narrow bio AI, so it's not like they're even advocating that all progress stop. (Advanced bio AI seems pretty good for the world, and seems to capture a lot of the benefits.)
...
I certainly do expect you-et-al to disagree with MIRI-et-al on a bunch of implementation details of the treaty.
But, it seems like a version of the treaty that doesn't at least have the capacity to shut down compute temporarily is a kinda fake version of Plan A, and once you have that, "Shut It Down" vs "Controlled Takeoff" feels more like arguing details than fundamentals to me.
I dunno, this seems really important and I am really confused why y'all are oriented this way.
Yes, I very much would like responses on these and my other comment, although no worries if you want to take a bit more time to address more thoroughly.
It seems like the reason Claude's level of misalignment is fine is that its capabilities aren't very good, and there's not much/any reason to assume it'd be fine if you held alignment constant but dialed up capabilities.
Do you not think that?
(I don't really see why it's relevant how aligned Claude is if we're not thinking about that as part of it)