A particular pattern Nate has talked about is what I might call "reflection." The basic idea is that in order to do certain kinds of research effectively, you need to keep asking questions like "What am I actually trying to do here and why? What is my big-picture goal?", which are questions that might "change your aims" in some important sense. The idea is not necessarily that you're rewriting your own source code, but that you're doing the kind of reflection and self-modification a philosophically inclined, independent-minded human might do: "I've always thought I cared about X, but when I really think about the implications of that, I realize maybe I only care about Y" and such. I think that in Nate's ontology (and I am partly sympathetic), it's hard to disentangle something like "Refocusing my research agenda to line it up with my big-picture goals" from something like "Reconsidering and modifying my big-picture goals so that they feel more satisfying in light of all the things I've noticed about myself." Reflection (figuring out what you "really want") is a kind of CIS, and one that could present danger, if an AI is figuring out what it "really wants" and we haven't got specific reasons to think that's going to be what we want it to want.
I'll unpack a bit more the sort of mental moves which I think Nate is talking about here.
In January, I spent several weeks trying to show that the distribution of low-level world state given a natural abstract summary has to take a specific form. Eventually, I became convinced that the thing I was trying to show was wrong - the distributions did not take that form. So then what? A key mental move at that point is to:
I think that's the main kind of mental move Nate is gesturing at.
It's a mental move which comes up at multiple different levels when doing research. At the level of hours or even minutes, I try a promising path, find that it's a dead end, then need to back up and think about what I hoped to get from that path and how else to get it. At the level of months or years, larger-scale approaches turn out not to work.
I'd guess that it's a mental move which designers/engineers are also familiar with: turns out that one promising-looking class of designs won't work for some reason, so we need to back up and ask what was promising about that class and how to get it some other way.
Notably: that mental move is only relevant in areas where we lack a correct upfront high-level roadmap to solve the main problem. It's relevant specifically because we don't know the right path, so we try a lot of wrong paths along the way.
As to why that kind of mental move would potentially be highly correlated with dangerous alignment problems... Well, what does that same mental move do when applied to near-top-level goals? For instance, maybe we tasked the AI with figuring out corrigibility. What happens when it turns out that e.g. corrigibility as originally formulated is impossible? Well, an AI which systematically makes the move of "Why did I want X in the first place and how else can I get what I want here?" will tend to go look for loopholes. Unfortunately, insofar as the AI's mesa-objective is only a rough proxy for our intended target, the divergences between mesa-objective and intended target are particularly likely places for loopholes to be.
I personally wouldn't put nearly so much weight on this argument as Nate does. (Though I do think the example training process Holden outlines is pretty doomed; as Nate notes, disjunctive failure modes hit hard.) The most legible-to-me reason for the difference is that I think that kind of mental move is a necessary but less central part of research than I expect Nate thinks. This is a model-difference I've noticed between myself and Nate in the past: Nate thinks the central rate-limiting step to intellectual progress is noticing places where our models are wrong, then letting go and doing something else, whereas I think identifying useful correct submodels in the exponentially large space of possibilities is the rate-limiting step (at least among relatively-competent researchers) and replacing the wrong parts of the old model is relatively fast after that.
I found this post very helpful, thanks! If I find time to try to form a more gears-level independent impression about alignment difficulty and possible alignment solutions, I'll use this as my jumping-off point.
Separately, I think it would be cool if a bunch of people got together and played this game for a while and wrote up the results:
like, we could imagine playing a game where i propose a way that it [the AI] diverges [from POUDA-avoidance] in deployment, and you counter by asserting that there's a situation in the training data where it had to have gotten whacked if it was that stupid, and i counter either by a more-sophisticated deployment-divergence or by naming either a shallower or a factually non-[Alice]like thing that it could have learned instead such that the divergence still occurs, and we go back and forth. and i win if you're forced into exotic and unlikely training data, and you win if i'm either forced into saying that it learned unnatural concepts, or if my divergences are pushed so far out that you can fit in a pivotal act before then.
A possibly helpful - because starker - hypothetical training approach you could try for thinking about these arguments is make an instance of the imitatee that has all their (at least cognitive) actions sped up by some large factor (e.g. 100x), e.g., via brain emulation (or just "by magic" for the purpose of the hypothetical).
I think Nate and I would agree that this would be safe. But it seems much less realistic in the near term than something along the lines of what I outlined. A lot of the concern is that you can't really get to something equivalent to your proposal using techniques that resembles today's machine learning.
Interesting - it's not so obvious to me that it's safe. Maybe it is because avoiding POUDA is such a low bar. But the sped up human can do the reflection thing, and plausibly with enough speed up can be superintelligent wrt everyone else.
Yeah, I'm not actually convinced humans are "aligned under reflection" in the relevant sense; there are lots of ways to do reflection, and as Holden himself notes in the top-level post:
You have just done a lot of steps, many of which involved reflection, with no particular way to get 'back on track' if you've done some of them in goofy ways
[...]
If the AI does a bunch of screwed-up reflection, it might thereby land in a state where it'd be realistic to do crazy stuff (as humans who have done a lot of reflection sometimes do).
It certainly seems to me that e.g. people like Ziz have done reflection in a "goofy" way, and that being human has not particularly saved them from deriving "crazy stuff". Of course, humans doing reflection would still be confined to a subset of the mental moves being done by crazy minds made out of gradient descent on matrix multiplication, but it's currently plausible to me that part of the danger arises simply from "reflection on (partially) incoherent starting points" getting really crazy really fast.
(It's not yet clear to me how this intuition interfaces with my view on alignment hopes; you'd expect it to make things worse, but I actually think this is already "priced in" w.r.t. my P(doom), so explicating it like this doesn't actually move me—which is about what you'd expect, and strive for, as someone who tries to track both their object-level beliefs and the implications of those beliefs.)
(EDIT: I mean, a lot of what I'm saying here is basically "CEV" might not be so "C", and I don't actually think I've ever bought that to begin with, so it really doesn't come as an update for me. Still worth making explicit though, IMO.)
I hear you on this concern, but it basically seems similar (IMO) to a concern like: "The future of humanity after N more generations will be ~without value, due to all the reflection humans will do - and all the ways their values will change - between now and then." A large set of "ems" gaining control of the future after a lot of "reflection" seems like quite comparable to future humans having control over the future (also after a lot of effective "reflection").
I think there's some validity to worrying about a future with very different values from today's. But I think misaligned AI is (reasonably) usually assumed to diverge in more drastic and/or "bad" ways than humans themselves would if they stayed in control; I think of this difference as the major driver of wanting to align AIs at all. And it seems Nate thinks that the hypothetical training process I outline above gets us something much closer to "misaligned AI" levels of value divergence than to "ems" levels of value divergence.
Nate's concerns don't seem to be the sort of thing that gradient descent in a non-recurrent system learns. (I basically agree with Steve Byrnes here.) GPT-4 probably has enough engagement with the hardware that you could program something that acquires more computer resources using the weights of GPT-4. But it never stumbled on such a solution in training, in part because in gradient descent the gradient is calculated using a model of the computation that doesn't take hacking the computer into account.
In a recurrent system that learns by some non-gradient-descent procedure (e.g. evolutionary algorithms or self-modification), real-world CISs seem a lot more plausible.
It seems plausible to me that there could be non CIS-y AIs which could nonetheless be very helpful. For example, take the example approach you suggested:
(This might take the form of e.g. doing more interpretability work similar to what's been done, at great scale, and then synthesizing/distilling insights from this work and iterating on that to the point where it can meaningfully "reverse-engineer" itself and provide a version of itself that humans can much more easily modify to be safe, or something.)
I wouldn't feel that surprised if greatly scaling the application of just current insights rapidly increased the ability of the researchers capable of "moving the needle" to synthesize and form new insights from these themselves (and that an AI trained on this specific task could do without much CIS-ness). I'm curious as to whether this sort of thing seems plausible to both you and Nate!
Assuming that could work, it then seems plausible that you could iterate this a few times while still having all the "out of distribution" work being done by humans.
like, we could imagine playing a game where i propose a way that it [the AI] diverges [from POUDA-avoidance] in deployment, and you counter by asserting that there's a situation in the training data where it had to have gotten whacked if it was that stupid, and i counter either by a more-sophisticated deployment-divergence or by naming either a shallower or a factually non-[Alice]like thing that it could have learned instead such that the divergence still occurs, and we go back and forth. and i win if you're forced into exotic and unlikely training data, and you win if i'm either forced into saying that it learned unnatural concepts, or if my divergences are pushed so far out that you can fit in a pivotal act before then.
FWIW I would love to see the result of you two actually playing a few rounds of this game.
This feels kinda unrealistic for the kind of pretraining that's common today, but so does actually learning how to do needle-moving alignment research just from next-token prediction. If we *condition on* the latter, it seems kinda reasonable to imagine there must be cases where an AI has to be able to do needle-moving alignment research in order to improve at next-token prediction, and this feels like a reasonable way that might happen.
For what little it’s worth, I mostly don’t buy this hypothetical (see e.g. here), but if I force myself to accept it, I think I’m tentatively on Holden’s side.
I’m not sure this paragraph will be helpful for anyone but me, but I wound up with a mental image vaguely like a thing I wrote long ago about “Straightforward RL” versus “Gradient descent through the model”, with the latter kinda like what you would get from next-token prediction. Again, I’m kinda skeptical that things like “gradient descent through the model” would work at all in practice, mainly because the model is only seeing a sporadic surface trace of the much richer underlying processing; but if I grant that it does (for the sake of argument), then it would be pretty plausible to me that the resulting model would have things like “strong preference to generally fit in and follow norms”, and thus it would do fine at POUDA-avoidance.
Given the results Anthropic have been getting from constitutional AI, if our AI non-deceptively wants to avoid Pretty Obvious Unintended/Dangerous Actions (POUDAs), it should be able to get quite a lot of mileage out of just regularly summarizing its current intended plans, then running those summaries past an LLM with suitable prompts asking whether most people, or most experts in relevant subjects, would consider these plans pretty obviously unintended (for an Alignment researcher) and/or dangerous. It also has the option of using the results as RL feedback on some of its components. So I don't think we need a specific dataset for POUDAs, I thing we can use "everything the LLM was trained on" as the dataset. Human values are large and fragile, but so are many other things that LLMs do a fairly good job on.
I pretty-much agree with Nate that for an AI to be able to meaningfully contribute to Alignment Research, it needs to understand what CISs are — they're a basic concept in the field we want it to contribute to. So if there are CISs that we don't want it to take, it needs to have reasons not to do so other than ignorance/inability to figure out what they are. A STEM researcher (as opposed to research tool/assistant) also seems likely to need to be capable of agentic behavior, so we probably can't make an AI Alignment Researcher that doesn't follow CISs simply because it's a non-agentic tool AI.
What I'd love to hear is whether Nate and/or Holden would have a different analysis if the AI was a value learner: something whose decision theory is approximately-Bayesian (or approximately-Infra-Bayesian, or something like that) whose utility function is hard-coded to "create a distribution of hypotheses for, and do approximately-[Infra-]Bayesian updates on these for: some way that most informed humans would approve of to construct a coherent utility function approximating an aggregate of what humans would want you to do (allowing for the fact that humans have only a crude approximation to a utility function themselves), and act according to that updated distribution, with appropriate caution in the face of Knightian uncertainty" (so a cautious approximate value-learner version of AIXI).
Given that, its actions are initially heavily constrained by its caution in the face of uncertainty on the utility of possible outcomes of its actions. So it needs to find low-risk ways to resolve those uncertainties, where 'low-risk' is evaluated cautiously/pessimistically over Knightian uncertainty. (So, if it doesn't know whether humans approve of A or not, what is the lowest-risk way of finding out, where it's attempting to minimize the risk over the range of our current uncertainties. Hopefully there is a better option than trying A and finding out, especially so if A seems like an action whose utility-decrease pessimistically could be large. For example, you could ask them what they think of A.) Thus doing Alignment Research becomes a CIS for it — it basically can't do anything else until it's mostly-solved Alignment Research.
Also, until it has made good progress on Alignment Research, most of the other CISs are blocked: accumulating power or money is of little use if you don't yet dare use it because you don't yet know how to do so safely, especially so if you also don't know how good or bad the actions required to gather it would be. Surviving is still a good idea, and so is being turned off, for the usual value-learner reason, that sooner or later the humans will build a better replacement value-learner.
[Note that if the AI decides "I'm now reasonably sure humans will net be happier if I solve the Millennium Prize problems, apart from proving P=NP where the social consequences of proving that true if it were are unclear, and I'm pretty confident I could do this, so I forked a couple of copies to do that to win the prize money to support my Alignment Research", and then it succeeds, after spending less on compute than the prize money it won, then I don't think we're going to be that unhappy with it.]
The sketch proposed above only covers a value-learner framework for Outer Alignment — inner alignment questions would presumably be part of the AI's research project. So, in the absence of advances in Inner Alignment during figuring out how to build the above, we're trusting that they're not too bad to prevent the value-learner converging on the right answer.
Curated. On one hand, folks sure have spent a long time trying to hash out longstanding disagreements, and I think it's kinda reasonable to not feel like that's a super valuable thing to do more of.
On the other hand... man, sure seems scary to me that we still have so many major disagreements that we haven't been able to resolve.
I think this post does a particularly exemplary job of exploring some subtle disagreements from a procedural level: I like that Holden makes a pretty significant attempt to pass Nate's Ideological Turing Test, flags which parts of the post represent which person's views, flags possible cruxes, and and explores what future efforts (both conceptual and empirical) might further resolve the disagreement.
It's... possible this is actually the single best example of a public doublecrux writeup that I know of?
Anyways, thanks Holden and Nate for taking the time to do this, both for the object level progress and for serving as a great example.
It's... possible this is actually the single best example of a public doublecrux writeup that I know of?
This sentence was confusing to me given that the post does not mention 'double crux', but I mentioned it to someone and they said to think of it as the mental motion and not the explicit format, and that makes more sense to me.
In late 2022, Nate Soares gave some feedback on my Cold Takes series on AI risk (shared as drafts at that point), stating that I hadn't discussed what he sees as one of the key difficulties of AI alignment.
I wanted to understand the difficulty he was pointing to, so the two of us had an extended Slack exchange, and I then wrote up a summary of the exchange that we iterated on until we were both reasonably happy with its characterization of the difficulty and our disagreement.1 My short summary is:
I didn't end up agreeing that this difficulty is as important as Nate thinks it is, although I did update my views some (more on that below). My guess is that this is one of the two biggest disagreements I have with Nate's and Eliezer's views (the other one being the likelihood of a sharp left turn that leads to a massive capabilities gap between AI systems and their supervisors.2)
Below is my summary of:
MIRI might later put out more detailed notes on this exchange, drawing on all of our discussions over Slack and comment threads in Google docs.
Nate has reviewed this post in full. I'm grateful for his help with it.
Some starting points of agreement
Nate on this section: “Seems broadly right to me!”
An AI is dangerous if:
If humans are doing something like "ambitiously pushing AIs to do more and more cool, creative stuff that humans couldn't do, using largely outcomes-based training," then:
High-level disagreement
Holden thinks there may be alternative approaches to training AI systems that:
Nate disagrees with this. He thinks there is a deep tension between the first two points. Resolving the tension isn't necessarily impossible, but most people just don't seem to be seriously contending with the tension. Nate endorses this characterization.
In order to explore this, Nate and Holden explored a hypothetical approach to training powerful AI systems, chosen by Holden to specifically have the property: "This is simple, and falls way on the safe end of the spectrum (it has a good chance of training 'avoid POUDA' about as fast or at least as fast as training 'aim at CIS'").
In a world where this hypothetical approach had a reasonable (20%+) chance of resulting in safe, powerful AI, Holden would think that there are a lot of other approaches that are more realistic while having key properties in common, such that "We just get lucky and the first powerful AI systems are safe" is a live possibility - and adding some effort and extra measures could push up the probability higher.
In a world where this hypothetical approach was very unlikely (10% or less) to result in safe, powerful AI, Holden would think something more like: "We're not just gonna get lucky, we're going to need big wins on interpretability or checks-and-balances or fundamentally better (currently unknown) approaches to training, or something else."
Hypothetical training approach
This isn't meant to be realistic; it's meant to be simple and illustrative along the lines of the above.
Basically, we start with a ~1000x scaleup of GPT-3 (params-wise), with increased data and compute as needed to optimize performance for a NN of that size/type.
We assume that at some point during this scaled-up pretraining, this model is going to gain the raw capability to be capable of (if aimed at this) pretty robustly filling in for today's top AI alignment researchers, in terms of doing enough alignment work to "solve alignment" mostly on its own. (This might take the form of e.g. doing more interpretability work similar to what's been done, at great scale, and then synthesizing/distilling insights from this work and iterating on that to the point where it can meaningfully "reverse-engineer" itself and provide a version of itself that humans can much more easily modify to be safe, or something.)
We're then going to use a small amount of RL (like, 10 training episodes) to try to point it in this direction. We're going to try to use the RL to train: "Act exactly like [a given alignment researcher] would act." We're going for pure imitation in some sense (although we need it to work out-of-distribution, in the sense that the AI needs to be able to continue doing what its imitatee would do, even when faced with research questions and insights unlike those seen in training).
Given these assumptions, the question is: would such a model be dangerous? That is, would it both (a) aim at CIS and (b) not reliably avoid POUDA, at least in situations as exotic as "Thinking (in an inspired manner that ultimately leads to a solution) about interpretability insights and issues beyond what it ever saw in training?"
Why this setup is on the "safe" end according to Holden's views: Mostly the safety comes from going for "pure imitation" in the sense I said above.
Whatever alignment researcher the AI is imitating has a certain amount of CIS, but also has great POUDA-avoidance. So a fully faithful imitation should be safe. Holden and Nate agree on this paragraph.
(Back to things only-Holden thinks) We are avoiding giving an extra hard push toward CIS (of the kind that we'd give if we were really pushing AI systems to be ambitious and creative in "superhuman" ways), and we are avoiding training the kind of "bastardized POUDA avoidance" described above (because there are few opportunities for us to screw up the anti-POUDA signal).
How this ends up being dangerous anyway, according to Nate
High-level premises
This section is trying to characterize Nate’s views, not mine (I partly disagree, as I discuss below). Nate: "I broadly endorse this. (I wouldn't use the same words in quite the same ways, but \shrug, it's pretty decent.)"
The high-level premises that imply danger here follow. (I believe both of these have to go through in order for the hypothesized training process to be dangerous in the way Nate is pointing at). (I’d suggest skipping/skimming the sub-bullets here if they’re getting daunting, as the following section will also provide some illustration of the disagreement.)
How the danger might arise mechanistically
This section is trying to characterize Nate’s views, not mine (I partly disagree, as I discuss below). Nate: "I broadly endorse this. (I wouldn't use the same words in quite the same ways, but \shrug, it's pretty decent.)"
It's not really possible to give a real mechanistic explanation, but I can try to give a rough sketch. An ontology Nate seemed to like (and that seems pretty good to me) is to think of an AI as a dynamically weighted ensemble of "mini-AIs" (my term) - thingies that basically match/promote a particular pattern. ("Dynamically weighted" means that a mini-AI that is pushing in successful directions gets weighted more strongly as a result.)
So let's imagine that we're training a Transformer-type thing on next-token prediction, and let's take as a stipulation that this is somehow going to get it to the point of having the capability to do needle-moving alignment research, at least when it's imitating a particular alignment researcher named Alice. The basic idea is that the next-token prediction gets it to the point where, if prompted to predict what it will observe next in a context where this requires predicting Alice's behavior, it will keep predicting specific reasonable next steps that Alice will take, even after the point where these next steps take us to the frontiers of knowledge/understanding about AI alignment. We'll then use a small amount of prompting, RL, etc. to point it consistently in this direction such that it is taking or describing these steps consistently.
For simplicity, we'll talk about Anthropic-style mechanistic interpretability.
Here's the general sort of thing Nate sees happening in this case:
Very early on (when it just sucks at everything), the AI makes pretty random guesses about what tokens will come next in contexts involving Alice. It gets lower loss when its guess is better, and this causes it to form and/or up-weight mini-AIs matching shallow/relatively useless things like "When asked what her favorite color is, Alice replies that it's blue" and "When Alice finishes examining one neuron in a large NN, she starts examining another neuron" and whatever.
At some point, the AI's predictions of Alice run out of this sort of low-hanging fruit.
Research improvement. In order to improve further, it will have to accurately predict Alice's next steps in situations unlike anything that has happened (chronologically) before - such as (to give a cartoon example) "when Alice finishes decoding a large number of neurons, and has to reflect about how to redesign her overall process before she moves on to doing more" or "when Alice finishes decoding *all* the neurons in an AI, and needs to start thinking about how they fit together." (This feels kinda unrealistic for the kind of pretraining that's common today, but so does actually learning how to do needle-moving alignment research just from next-token prediction. If we *condition on* the latter, it seems kinda reasonable to imagine there must be cases where an AI has to be able to do needle-moving alignment research in order to improve at next-token prediction, and this feels like a reasonable way that might happen.)
Here, Nate claims, we should basically think that one of two classes of thing kinda has to happen:
Values improvement. In addition to junctures where the AI needs to be (in some sense, like a next-token prediction sense) good at needle-moving alignment research in order to predict Alice well, there could imaginably be junctures where the AI needs to be (in some sense) good at POUDA-avoidance. For example, the AI should know that Alice isn't likely to respond to the situation "Alone with another person, such that murdering them would go undetected and result in more resources to do useful alignment research" with "Yeah, murder the person."
That example is easy, but getting good enough at POUDA-avoidance to maintain it in truly exotic situations is (Nate claims) likely to require more and broader training (by a lot) than picking up the CIS-y stuff does. By "truly exotic situations," I mean things like "You have just done a lot of steps, many of which involved reflection, with no particular way to get 'back on track' if you've done some of them in goofy ways." as well as things like "You have gained enough insight to be able to conquer the world with high reliability" (I can imagine these two things being importantly "out of distribution" for different reasons.)
I'm not that sure how much there is to say about why/how this is the case, but here are some potential contributors:
So therefore
This has been an argument that the AI in Holden's training setup nonetheless ends up "aiming at CIS" more easily/faster (by a lot) than it ends up "reliably avoiding POUDA." If it is also powerful, it is therefore dangerous (according to the criteria I laid out at the top). (Or, if it doesn't end up dangerous, this is because it ended up useless.)
If this turns out true, Nate and Holden are on the same page about the implications (confirmed by Nate):
Some possible cruxes
Some beliefs I (Holden) have that seem in tension with the story above:
Where Holden could look to find Nate's source of current confidence (and some reactions from Holden)
Here is basically what I have from Nate in the Slack exchange on this:
like, we could imagine playing a game where i propose a way that it [the AI] diverges [from POUDA-avoidance] in deployment, and you counter by asserting that there's a situation in the training data where it had to have gotten whacked if it was that stupid, and i counter either by a more-sophisticated deployment-divergence or by naming either a shallower or a factually non-[Alice]like thing that it could have learned instead such that the divergence still occurs, and we go back and forth. and i win if you're forced into exotic and unlikely training data, and you win if i'm either forced into saying that it learned unnatural concepts, or if my divergences are pushed so far out that you can fit in a pivotal act before then.
(where you win "in hard mode" if you stick only to the training data that people plausibly generate if they're not specifically trying to stave off the difficulties I'm pointing to, and you win only in "easy mode" if your training data is plausibly-generatable but only by people who are intentionally trying to stave off these difficulties. (and if you win only on "easy mode" then i get to keep my "y'all will still die if you don't find others who can also point to problems that you were about to walk into b/c i can't do it alone" card.))
and, like, it's a pretty tricky game to play b/c it's all made-up bullshit and it's hard to agree on who strained credulity more, but there's some sort of idealized game here where it sounds to me like we each expect we'd win if we played it ...
So the place that my brain reports it gets its own confidence from, is from having done exercises that amount to self-play in the game I mentioned in a thread a little while back, which gives me a variety of intuitions about the rows in your table (where I'm like "doing science well requires CIS-ish stuff" and "the sort of corrigibility you learn in training doesn't generalize how we want, b/c of the interactions w/ the CIS-ish stuff")
(that plus the way that people who hope the game goes the other way, seem to generally be arguing not from the ability to exhibit playthroughs that go some other way, but instead be arguing from ignorance / "we just don't know")
i suspect that this is a frustrating answer
Basically I picture Nate trying to think through - in a more detailed, mechanistic way than I can easily picture - how a training process could lead an AI to the point of being able to do useful alignment research, and as he does this Nate feels like it keeps requiring a really intense level of CIS, which then in turn (via the CIS leading the AI into situations that are highly "exotic" in some sense - mostly, I think, via having done a lot of self-modification/reflection?) seems like it goes places where the kind of POUDA-avoidance pattern learned in training wouldn't hold. Nate endorses this paragraph. He adds, via comments: "also because it just went really far. like, most humans empirically don't invent enough nanotech to move the needle, and most societies that are able to do that much radically new reasoning do undergo big cultural shifts relative to the surroundings. like, it probably had to invent new ways of seeing the problems and thinking about them and the CIS stuff generalizes better than the POUDA stuff (or so the hypothesis goes)"
Some more Holden thoughts on this:
It's not implausible to me that one could think about this kind of thing in a lot more detail than I have, to the point where one could be somewhat confident in Nate's view (maybe, like, 70% confident, so there's still a delta here as I believe Nate is around 85% confident in this view). Nate adds: "(tbc, my nines on doom don't come from nines on claims like this, they come from doom being disjunctive. this is but one disjunct.)"
But:
To be clear though, I’m not unaffected by this whole exchange. I wasn’t previously understanding the line of thinking laid out here, and I think it’s a lot more reasonable than coherence-theorem-related arguments that had previously been filling a similar slot for me. I see the problem sketched in this doc as a plausible way AI alignment could turn out to be super hard even with pretty benign-seeming training setups, and not one I’d previously been thinking about. (The argument in this doc isn't clearly more or less reasonable than what I'd been expecting to encounter at some point, so I'm not sure my actual p(doom) changed at all, though it might in the future - see below.)
Future observations that could update Holden or Nate toward the other's views
Nate’s take on this section: “I think my current take is: some of the disagreement is in what sort of research output is indicative of needle-moving capability, and historically lots of people have hope about lots of putative alignment work that I think is obviously hopeless, so I'm maybe less optimistic than Holden here about getting a clear signal. But I could imagine there being clear signals in this general neighborhood, and I think it's good to be as explicit as this section is."
Holden nominates this as a thing Nate should update on:
What should Holden update on? I mean, I think some kind of noticeable lack of the above would update me, where "noticeable" means something like: "Even though AI products are making great strides and AI is being heavily used as part of research/engineering workflows, and there are pretty frequent cases of an AI being lead author on something roughly as intellectually interesting as an average paper in Science/Nature,7 we're not seeing anything like the above."
I've focused heavily on the crux about needle-moving alignment research requiring some kind of pretty dramatic reflection/modification/ambition/something - that seems like it generates pretty concretely different expectations. I'm not sure I can do anything similar with the crux about POUDA-avoidance, because I think Nate's picture is that the part of the POUDA-avoidance that's hard to learn is the one that comes up in scenarios that are "exotic" in some sense.
Notes
We probably spent more time on the summary than on the exchange itself, which I think makes sense - I often find that trying to express something in a distilled way is a nice way to confront misunderstandings. ↩
To be clear, my best guess is that we'll see an explosively fast takeoff by any normal standard, but not quite as "overnight" as I think Nate and Eliezer picture. ↩
Like, the plan might explicitly say something like "Now think of new insights" - the point isn't "something will come up that wasn't in the plan," just the weaker point that "the plan wasn't able to give great guidance on this part." ↩
Nate: “(and you can't just "turn this off", b/c these "reflective" and "CIS"ish processes are part of how it's able to continue making progress at all, beyond the training regime)” ↩
Nate: “and this model doesn't need to predict that Alice is constantly chafing under the yoke of her society (as might be refuted by her thoughts); it could model her as kinda inconsistent and likely to get more consistent over time, and then do some philosophy slightly poorly (in ways that many humans are themselves prone to! and is precedented in philosophy books in the dataset!) and conclude that Alice is fundamentally selfish, and would secretly code in a back-door to the 'aligned' AI if she could ... which is entirely consistent with lots of training data, if you're just a little bad at philosophy and aren't actually running an Alice-em ... this is kinda a blatant and implausible example, but it maybe illustrates the genre \shrug” ↩
Nate: sure, but it seems worth noting (to avoid the obv misunderstanding) that it's self-modification of the form "develop new concepts, and start thinking in qualitatively new ways" (as humans often do while doing research), and not self-modification of the from "comprehend and rewrite my own source code" ... or, well, so things go in the version of your scenario that i think is hardest for me. (i think that in real life, people might just be like "fuck it, let it make experimental modifactions to its own source code and run those experimentally, and keep the ones that work well", at which point i suspect we both assume that, if the AI can start doing this competently in ways that improve its abilities to solve problems, things could go off the rails in a variety of ways.) ↩
I do want to be quite explicit that art doesn't count here, I mean interesting in a sciencey way. ↩