I'm an AGI safety / AI alignment researcher in Boston with a particular focus on brain algorithms—see https://sjbyrnes.com/agi.html. Email: steven.byrnes@gmail.com. Twitter: @steve47285. Employer: https://astera.org/. Physicist by training.
I think the basic idea of instrumental convergence is just really blindingly obvious, and I think it is very annoying that there are people who will cluck their tongues and stroke their beards and say "Hmm, instrumental convergence you say? I won't believe it unless it is in a very prestigious journal with academic affiliations at the top and Computer Modern font and an impressive-looking methods section."
I am happy that your papers exist to throw at such people.
Anyway, if optimal policies tend to seek power, then I desire to believe that optimal policies tend to seek power :) :) And if optimal policies aren't too relevant to the alignment problem, well neither are 99.99999% of papers, but it would be pretty silly to retract all of those :)
So the basic idea is: the AI takes a bunch of steps, and gradient descent is performed based on audits of whether those steps seem reasonable, while blinded to what happened as a result.
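(To make sure I'm picturing the same thing, here's a minimal sketch of the kind of training loop I have in mind; names like `audit_step` are my own placeholders, not anything from your proposal.)

```python
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

def audit_step(state, action):
    """Stand-in for a blinded audit: scores whether the step itself seems
    reasonable, with no access to its downstream consequences."""
    return torch.rand(())  # placeholder score in [0, 1]

for episode in range(100):
    state = torch.randn(16)
    log_probs, audit_scores = [], []
    for t in range(10):
        dist = torch.distributions.Categorical(logits=policy(state))
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        audit_scores.append(audit_step(state, action))  # process-based signal
        state = torch.randn(16)  # environment moves on; the outcome never enters the loss
    # Gradient descent on the audit scores only -- outcomes are never observed.
    loss = -(torch.stack(log_probs) * torch.stack(audit_scores)).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```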
Ohh, sorry you had to tell me twice, but maybe I’m finally seeing where we’re talking past each other.
Back to the OP, you wrote:
- In training, an AI system gets tasks of the form “Produce a plan to accomplish X that looks good to humans” (not tasks of the form “accomplish X”).
- The AI system is rewarded based on whether the plan makes sense and looks good to humans - not how well it actually ends up working.
When I read that, I was thinking that you meant:
But your subsequent replies make me think that this isn’t what you meant, particularly the “brainstorm for an hour” part.
…But hold that thought while I explain why I don’t find the above plan very helpful (just so you understand my previous responses):
OK, so that’s where I was coming from in my previous replies.
But, now I no longer think that the above is what you meant in the first place. Instead I think you meant:
Is that right?
If so, that makes a lot more sense.
In my (non-LLM) context, I would re-formulate the above as something like:
If that’s a fair translation, then yeah, this seems like an excellent idea to me, I’m all for it.
I don’t see this working as “Plan A”, because I expect the interpretability step to be pretty labor intensive (untangling a “thought” made of lots of idiosyncratic concepts built on other concepts), and therefore only adequate to produce a quite sparse reward, whereas I think programmers will need a very dense reward signal for capabilities reasons.
(UPDATE: you're probably gonna say we can solve that with a reward model. Hmm, I guess I'm more thinking that we need a rich and high-dimensional ground truth for this to be sufficient, and I'm skeptical that we'll get enough samples, if each sample is a whole research project. It also requires that we never stop supervising, even in deployment, thanks to the AI's online learning. But I dunno, I guess I shouldn't rule out things like that.)
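For concreteness, the reward-model pattern I'm imagining is roughly the sketch below; the names and sizes are made up for illustration, not anyone's actual pipeline. The worry is the tiny number of expensive labeled samples relative to how rich the ground truth is.

```python
import torch
import torch.nn as nn

# Hypothetical reward model: fit to a handful of expensive audit labels,
# then used to supply a dense reward everywhere else.
reward_model = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

# Pretend each labeled sample is the product of a labor-intensive
# interpretability audit, so we only have a few of them.
features = torch.randn(20, 64)   # encodings of audited "thoughts" / steps
labels = torch.rand(20, 1)       # audit scores in [0, 1]

for _ in range(500):
    loss = nn.functional.mse_loss(reward_model(features), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Downstream, the dense reward for a new step would just be reward_model(encoding).
# My skepticism above: ~20 samples of a rich, high-dimensional ground truth
# probably won't pin this function down well enough.
```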
But even if we have some other “Plan A” for the rewards, I would still want to do this kind of thing as much as possible, as another layer of protection.
Thanks, that all makes sense.
I agree you can still get a problem from goal misgeneralization and instrumental reasoning, but this seems noticeably less likely (assuming process-based training) than getting a problem from reinforcing pursuit of unintended outcomes. (https://www.lesswrong.com/posts/iy2o4nQj9DnQD7Yhj/discussion-with-nate-soares-on-a-key-alignment-difficulty has some discussion.) I put significant credence on something like "Internals-based training doesn't pan out, but neither does the concern about goal misgeneralization and instrumental reasoning (in the context of process-based training, ie in the context of not reinforcing pursuit of unintended outcomes)."
I think an important thing behind the scenes here is that you’re hopeful that we can get to TAI using mostly self-supervised learning, whereas I’m expecting that people are eventually going to switch to AI approaches that involve RL in a much more central way than LLMs do today. (More like humans, where for every thought you think, you’re thinking it in part because it’s more rewarding than whatever alternate thoughts you might think instead.) In other words, I’m planning for futures where your “Hypothetical Training Approach” doesn’t work.
I think it’s a good rule-of-thumb to say that the more centrally RL is involved in TAI, as opposed to self-supervised learning, the harder safety & alignment will be in general.
(So, I’m pretty unhappy to hold the belief that RL will probably be very centrally involved in TAI!)
So anyway, in the scenarios I’m thinking about, I think “goal misgeneralization and instrumental reasoning” is far more likely and difficult to avoid, compared to what you’re thinking about. I suppose process-based optimization would help a nonzero amount even in my scenarios, but sufficiently little that it doesn’t really seem worth thinking about.
(Obviously, nobody knows how to make TAI, so I think it’s fine and healthy for different people to be thinking through different scenarios! We’ll find out sooner or later!)
That was helpful, thanks! I will try to rephrase. Let’s consider two failure modes. First, define some terms:
X is what we want the AI to be trying to do:
Y is something we want the AI to not try to do:
Now consider two failure modes.
FAILURE MODE 1:
FAILURE MODE 2:
My understanding is that you’re thinking about Failure Mode 1 here, and you’re saying that process-based training will help because there it’s easier to supervise really well, such that we’re not rewarding the AI for doing Y a little bit / incompetently / randomly.
If so—OK, fair enough.
However, we still need to deal with Failure Mode 2.
One might hope that Failure Mode 2 won’t happen because the AI won’t want to do Y in the first place, because after all it has never done Y and gotten rewarded for it. However, you can still get Y from goal misgeneralization and instrumental reasoning. (E.g., it’s possible for the AI to generalize from its reward history to “wanting to get reward [by any means necessary]”, and then it wants to hack out of the box for instrumental reasons, even if it’s never done anything like that before.)
So, I can vaguely imagine plans along the lines of:
If that’s what you have in mind, then yeah, I see why you’re interested in process-based optimization as a piece of that puzzle.
But for my part, I’m not crazy about any plans of that form. I’m currently more hopeful about plans along the lines of:
And then process-based training is not so relevant.
[I have two vague research paths along these lines (1,2), although I’m not sure you’d find those links useful in any detail because I’m assuming model-based RL rather than LLMs.]
I'd consider this to be one of the more convincing reasons to be hesitant about a pause (as opposed to the 'crying wolf' argument, which seems to me like a dangerous way to think about coordinating on AI safety?).
Can you elaborate on this? I think it’s incredibly stupid that people consider it to be super-blameworthy to overprepare for something that turned out not to be a huge deal—even if the expected value of the preparation was super-positive given what was known at the time. But, stupid as it may be, it does seem to be part of the situation we’re in. (What politician wants an article like this to be about them?) (Another example.) I’m in favor of interventions to try to change that aspect of our situation (e.g. widespread use and normalization of prediction markets??), but in the meantime, it seems to me that we should keep that dynamic in mind (among other considerations). Do you disagree with that in principle? Or think it’s overridden by other considerations? Or something else?
I think that’s one consideration, but I think there are a bunch of considerations pointing in both directions. For example:
Pause in scaling up LLMs → less algorithmic progress:
Pause in scaling up LLMs → more algorithmic progress:
Other things:
There are probably other things too. For me, the balance of considerations is that a pause in scaling up LLMs would probably lead to more algorithmic progress. But I don’t have great confidence.
(We might differ in how much of a difference we’re expecting LLM code-assistants and research-assistants to make. I put them in the same category as PyTorch and TensorFlow and IDEs and stackoverflow and other such productivity-enhancers that we’re already living with, as opposed to something wildly more impactful than that.)
The description doesn't seem so bad to me. Your post "Reward is not the optimization target" is about what actual RL algorithms actually do. The wiki descriptions here are a kind of normative motivation as to how people came to be looking into those algorithms in the first place. Like, if there's an RL algorithm that performs worse than chance at getting a high reward, then that ain't an RL algorithm. Right? Nobody would call it that.
I think lots of families of algorithms are likewise lumped together by a kind of normative "goal", even if any given algorithm in that family is doing something somewhat different and more complicated than “achieving that goal”, and even if, in any given application, the programmer might not want that goal to be perfectly achieved even if it could be. So by the same token, supervised learning algorithms are "supposed" to minimize a loss, compilers are "supposed" to create efficient and correct assembly code, word processors are "supposed" to process words, etc., but in all cases that's not a literal and complete description of what the algorithms in question actually do, right? It’s a pointer to a class of algorithms.
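To gesture at the distinction concretely: here's a minimal vanilla-policy-gradient step, my own illustrative code rather than anything from your post. The reward only ever shows up as a coefficient on a parameter update; "maximizing reward" is the normative gloss on the family, not a literal description of the mechanics.

```python
import torch
import torch.nn as nn

policy = nn.Linear(8, 3)
opt = torch.optim.SGD(policy.parameters(), lr=1e-2)

state = torch.randn(8)
dist = torch.distributions.Categorical(logits=policy(state))
action = dist.sample()
reward = 1.0  # whatever number the environment happens to emit

# The mechanical step: nudge up the log-probability of the sampled action,
# scaled by the reward. Nothing in here "wants" reward; reward is just a
# multiplier on the update.
loss = -reward * dist.log_prob(action)
opt.zero_grad()
loss.backward()
opt.step()
```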
Sorry if I'm misunderstanding.
Sure, we can take some particular model-based RL algorithm (MuZero, APTAMI, the human brain algorithm, whatever), but instead of “the reward function” we call it “function #5829”, and instead of “the value function” we call it “function #6241”, etc. If you insist that I use those terms, then I would still be perfectly capable of describing step-by-step why this algorithm would try to kill us. That would be pretty annoying though. I would rather use the normal terms.
I’m not quite sure what you’re talking about (“projected from the labeled world model”??), but I guess it’s off-topic here unless it specifically applies to APTAMI.
FWIW the problems addressed in this post involve the model-based RL system trying to kill us via using its model-based RL capabilities in the way we normally expect—where the planner plans, and the critic criticizes, and the world-model models the world, etc., and the result is that the system makes and executes a plan to kill us. I consider that the obvious, central type of alignment failure mode for model-based RL, and it remains an unsolved problem.
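For reference, the decomposition I have in mind when I use those standard terms is roughly the toy sketch below; the function names are my invented labels for the usual roles, not APTAMI's (or anyone's) actual code.

```python
# Toy model-based RL skeleton, just to label the standard roles.
def world_model(state, action):
    """The world-model models the world: predicts the next state."""
    return state + action  # placeholder dynamics

def reward_function(state):
    """The designer-specified reward signal."""
    return -abs(state - 10)

def critic(state):
    """The critic / value function: estimated long-run value of a state."""
    return reward_function(state)  # crude placeholder estimate

def planner(state, candidate_actions, depth=3):
    """The planner: rolls candidate actions forward through the world model
    and picks whichever one the critic scores best."""
    def rollout_value(s, d):
        if d == 0:
            return critic(s)
        return max(rollout_value(world_model(s, a), d - 1) for a in candidate_actions)
    return max(candidate_actions,
               key=lambda a: rollout_value(world_model(state, a), depth - 1))

state = 0
for _ in range(5):
    action = planner(state, candidate_actions=[-1, 0, 1])
    state = world_model(state, action)
print(state)  # the planner steers the state toward high predicted reward
```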
In addition, one might ask if there are other alignment failure modes too. E.g. people sometimes bring up more exotic things like the “mesa-optimizer” thing where the world-model is secretly harboring a full-fledged planning agent, or whatever. As it happens, I think those more exotic failure modes can be effectively mitigated, and are also quite unlikely to happen in the first place, in the particular context of model-based RL systems. But that depends a lot on how the model-based RL system in question is supposed to work, in detail, and I’m not sure I want to get into that topic here, it’s kinda off-topic. I talk about it a bit in the intro here.
I think I see where you’re coming from but I generally have mixed feelings, and am going back and forth but leaning towards sticking with textbook terminology for my part.
I’m a bit confused about what you’re proposing. AlphaZero has an input (board state) and an output (move). Are you proposing to call this input-output function “a policy”?
If so, sure we can say that, but I think people would find it confusing—because there’s a tree search in between the input and output, and one ingredient of the tree search is the “policy network” (or maybe just “policy head”, I forget), but here the relation between the “policy network” and the final input-output function is very indirect, such that it seems odd to use (almost) the same term for them.
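A toy illustration of what I mean, with invented names and nothing like AlphaZero's real code: the "policy network" is one ingredient feeding the search, while the overall board-state-to-move function is `choose_move`, a different and more indirect object.

```python
import random

def legal_moves(board_state):
    return [0, 1, 2]            # placeholder move list

def policy_network(board_state):
    """The thing usually called the "policy network": a prior over moves."""
    moves = legal_moves(board_state)
    return {m: 1.0 / len(moves) for m in moves}   # placeholder uniform prior

def value_network(board_state):
    return random.random()      # placeholder position evaluation

def simulate(board_state, move):
    return board_state          # placeholder transition

def choose_move(board_state, n_simulations=100):
    """The overall input->output function (board state -> move): a toy search
    guided by the policy network, not the policy network itself."""
    prior = policy_network(board_state)
    scores = {m: 0.0 for m in prior}
    for _ in range(n_simulations):
        m = random.choices(list(prior), weights=list(prior.values()))[0]
        scores[m] += value_network(simulate(board_state, m))
    return max(scores, key=scores.get)

print(choose_move(board_state=0))   # e.g. 2
```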
The word “agent” invokes a bundle of intuitions / associations, and you think many of those are misleading in general. So then one approach is to ask everyone to avoid the word “agent” in cases where those intuitions don’t apply, and the other is to ask everyone to constantly remind each other that the “agents” produced by RL don’t necessarily have thus-and-such properties.
Neither option is great; this is obviously a judgment call.
For my part, I think that if I say:
“An RL agent isn’t necessarily planning ahead towards goals, in many cases it’s better to think of it as a bundle of situation-dependent reactions…”
…then that strikes me as a normal kind of thing to say as part of a healthy & productive conversation.
So maybe I see pushing-back-on-the-intuitions-while-keeping-the-word as a more viable approach than you do.
(And separately, I see editing widely-used terminology as a very very big cost, probably moreso than you.)
Ditto for “reward”.
This kinda sounds slightly weird in my mind, because I seem to be intuitively associating “reinforcement” with “updates”, whereas the policy in question is a fixed point that stops getting updated altogether.
You mention that this is off-topic so maybe you don’t want to discuss it, but I probably disagree with that—with the caveat that it’s very difficult to do an other-things-equal comparison. (I.e., we’re presumably interested in RL-safety-versus-SSL-safety holding capabilities fixed, but switching from RL to SSL does have an effect on capabilities.)
Then later you say “only using unsupervised pretraining doesn't mean you're safe” which is a much weaker statement, and I agree with it.