AI ALIGNMENT FORUM
Rohin Shah

Research Scientist at Google DeepMind. Creator of the Alignment Newsletter. http://rohinshah.com/

Sequences: Value Learning, Alignment Newsletter

Comments
leogao's Shortform
Rohin Shah · 20d

it'd be fine if you held alignment constant but dialed up capabilities.

I don't know what this means so I can't give you a prediction about it.

I don't really see why it's relevant how aligned Claude is if we're not thinking about that as part of it

I just named three reasons:

  1. Current models do not provide much evidence one way or another for existential risk from misalignment (in contrast to frequent claims that "the doomers were right")
  2. Given tremendous uncertainty, our best guess should be that future models are like current models, and so future models will not try to take over, and so existential risk from misalignment is low
  3. Some particular threat model predicted that even at current capabilities we should see significant misalignment, but we don't see this, which is evidence against that particular threat model.

Is it relevant to the object-level question of "how hard is aligning a superintelligence"? No, not really. But people are often talking about many things other than that question.

For example, is it relevant to "how much should I defer to doomers"? Yes absolutely (see e.g. #1).

leogao's Shortform
Rohin Shah · 21d

I bucket this under "given this ratio of right/wrong responses, you think a smart alignment researcher who's paying attention can keep it in a corrigibility basin even as capability levels rise?". Does that feel inaccurate, or, just, not how you'd exactly put it?

I'm pretty sure it is not that. When people say this, they are usually just asking the question: "Will current models try to take over or otherwise subvert our control (including incompetently)?" and noticing that the answer is basically "no".[1] What they use this to argue for can then vary:

  1. Current models do not provide much evidence one way or another for existential risk from misalignment (in contrast to frequent claims that "the doomers were right")
  2. Given tremendous uncertainty, our best guess should be that future models are like current models, and so future models will not try to take over, and so existential risk from misalignment is low
  3. Some particular threat model predicted that even at current capabilities we should see significant misalignment, but we don't see this, which is evidence against that particular threat model.[2]

I agree with (1), disagree with (2) when (2) is applied to superintelligence, and for (3) it depends on details.

In Leo's case in particular, I don't think he's using the observation for much; it's mostly a throwaway claim that's part of the flow of the comment. Inasmuch as it is being used, it is to say something like "current AIs aren't trying to subvert our control, so it's not completely implausible on the face of it that the first automated alignment researcher to which we delegate won't try to subvert our control". That's a pretty weak claim, it seems fine, and it doesn't imply any kind of extrapolation to superintelligence. I'd be surprised if this were an important disagreement with the "alignment is hard" crowd.

  1. ^

    There are demos of models doing stuff like this (e.g. blackmail) but only under conditions selected highly adversarially. These look fragile enough that overall I'd still say current models are more aligned than e.g. rationalists (who under adversarially selected conditions have been known to intentionally murder people).

  2. ^

    E.g. One naive threat model says "Orthogonality says that an AI system's goals are completely independent of its capabilities, so we should expect that current AI systems have random goals, which by fragility of value will then be misaligned". Setting aside whether anyone ever believed in such a naive threat model, I think we can agree that current models are evidence against such a threat model.

Research Agenda: Synthesizing Standalone World-Models
Rohin Shah · 23d

Which, importantly, includes every fruit of our science and technology.

I don't think this is the right comparison, since modern science / technology is a collective effort and so can only accumulate thinking through mostly-interpretable steps. (This may also be true for AI, but if so then you get interpretability by default, at least interpretability-to-the-AIs, at which point you are very likely better off trying to build AIs that can explain that to humans.)

In contrast, I'd expect individual steps of scientific progress that happen within a single mind often are very uninterpretable (see e.g. "research taste").

If we understood an external superhuman world-model as well as a human understands their own world-model, I think that'd obviously get us access to tons of novel knowledge.

Sure, I agree with that, but "getting access to tons of novel knowledge" is nowhere close to "can compete with the current paradigm of building AI", which seems like the appropriate bar given you are trying to "produce a different tool powerful enough to get us out of the current mess".

Perhaps concretely I'd wildly guess, with huge uncertainty, that this would involve an alignment tax of ~4 GPTs, in the sense that if you had an interpretable world model from GPT-10, similar in quality to a human's understanding of their own world model, it would be about as useful as GPT-6.

Research Agenda: Synthesizing Standalone World-Models
Rohin Shah · 1mo

a human's world-model is symbolically interpretable by the human mind containing it.

Say what now? This seems very false:

  • See almost anything physical (riding a bike, picking things up, touch typing a keyboard, etc). If you have a dominant hand / leg, try doing some standard tasks with the non-dominant hand / leg. Seems like if the human mind could symbolically interpret its own world model this should be much easier to do.
  • Basically anything to do with vision / senses. Presumably if vision was symbolically interpretable to the mind then there wouldn't be much of a skill ladder to climb for things like painting.
  • Symbolic grammar usually has to be explicitly taught to people, even though ~everyone has a world model that clearly includes grammar (in the sense that they can generate grammatical sentences and identify errors in grammar)

Tbc I can believe it's true in some cases, e.g. I could believe that some humans' far-mode abstract world models are approximately symbolically interpretable to their mind (though I don't think mine is). But it seems false in the vast majority of domains (if you are measuring relative to competent, experienced people in those domains, as seems necessary if you are aiming for your system to outperform what humans can do).

Trust me bro, just one more RL scale up, this one will be the real scale up with the good environments, the actually legit one, trust me bro
Rohin Shah · 2mo

I agree with this directionally, and generally tend to believe that certain kinds of straight lines will continue to be straight. But also iirc you rely a lot on the METR trend line, which is (a) noisy, (b) over a short timescale (~2 years[1]), and (c) not a great measure of what people care about[2][3]. I don't think you should strongly expect that particular line to continue to be straight over a long time period.

(Though I tend to think that it's more likely to slow down by 2027 than to speed up, relative to what you'd predict by extrapolating the 2024-2025 trend, so I agree with the overall conclusion of the post. And obviously one should have non-trivial probability on each of {slow down, stay the same, speed up}, as you say.)
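
(To make the "noisy and short" point concrete, here's a toy sketch with hypothetical numbers rather than METR's actual data: fit a log-linear trend on a ~2-year window and see how much a 2027 extrapolation moves when a single point is perturbed. The dates and task lengths below are made up purely for illustration.)

```python
import numpy as np

def extrapolated_minutes(years, task_minutes, target_year):
    # Log-linear fit: log2(task length) ~ slope * year + intercept
    slope, intercept = np.polyfit(years, np.log2(task_minutes), 1)
    return 2 ** (slope * target_year + intercept)

years = np.array([2024.0, 2024.5, 2025.0, 2025.5])  # hypothetical release dates
minutes = np.array([8.0, 20.0, 55.0, 140.0])        # hypothetical 50%-success task lengths

print(extrapolated_minutes(years, minutes, 2027.0))

# Shrink the last point by 30%: the 2027 extrapolation drops substantially,
# illustrating how sensitive fits over short windows are to individual points.
minutes_alt = minutes.copy()
minutes_alt[-1] *= 0.7
print(extrapolated_minutes(years, minutes_alt, 2027.0))
```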

  1. ^

    I'm excluding GPT-2 and GPT-3 which seem very noisy.

  2. ^

    Compared to things like Moore's Law, or energy efficiency of solar, or pretraining perplexity, or other similar things where we observe straight lines.

  3. ^

    None of this is to say that the METR graph is bad! It is the best empirical evidence on timelines I'm aware of.

Optimally Combining Probe Monitors and Black Box Monitors
Rohin Shah · 3mo

From the paper:

If a non-monotonic relationship exists between monitor scores and likelihood ratios, auditing the highest-scoring outputs is suboptimal. [...] This means ΠNP will audit some moderately high-scoring outputs (e.g., those with a monitor score of 7.2) before auditing certain higher-scoring outputs

I agree this is clearly better if you trust your LLRs to be 100% accurate, but ultimately this has to be fit on some training set and then we are depending on it generalizing to deployment. However, that generalization could fail just for normal ML reasons (the deployment distribution is different, the LLR scores were noisily estimated) or because the model is scheming and adapts to your monitoring strategy.

Suppose I want to rule by fiat that for outputs O1 and O2 where O1 gets higher scores than O2, we are guaranteed to be at least as likely to audit O1 as O2. How would you suggest I go about doing that?

After writing the above I remembered you said the word "kernel", looked in the paper for that, and saw you did mention this limitation and suggested that practitioners "increase kernel width or enforce monotonicity". So I guess my question is: how should I go about enforcing monotonicity? I still have to map two scores (m1, m2) to a decision of whether or not to audit, and it's unclear to me that I can brute force that problem; it's much easier to brute force when I have a single score (the LLR).
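
(For what it's worth, here is a minimal sketch of one way I could imagine doing it, not something from the paper: fit each monitor's score-to-probability mapping with isotonic regression, so it is non-decreasing by construction, convert to log-odds, and combine the two monitors additively. The additive combination is exact if the monitors are conditionally independent given the label and a heuristic otherwise; either way the resulting audit rule is monotone in each individual score. The function names and calibration-set setup here are mine.)

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def fit_monotone_llr(scores, is_bad):
    """Non-decreasing map from a monitor score to a log-odds estimate.

    The log-odds of P(bad | score) differs from the true LLR only by a
    constant prior term, which does not affect the audit ranking.
    """
    iso = IsotonicRegression(increasing=True, out_of_bounds="clip",
                             y_min=1e-4, y_max=1 - 1e-4)
    iso.fit(scores, is_bad)  # monotone estimate of P(bad | score)
    def llr(s):
        p = iso.predict(np.asarray(s, dtype=float))
        return np.log(p / (1 - p))
    return llr

def combined_audit_score(m1, m2, llr1, llr2):
    # Additive combination: exact under conditional independence of the two
    # monitors given the label, a heuristic otherwise. Monotone in each
    # individual monitor score by construction.
    return llr1(m1) + llr2(m2)

# Usage: fit llr1/llr2 on a labelled calibration set, then audit the outputs
# whose combined score exceeds the (1 - audit_budget) quantile.
```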

Training a Reward Hacker Despite Perfect Labels
Rohin Shah · 3mo

I think Oliver's pushback in another comment is getting strongly upvoted because, given a description of your experimental setup, a bunch of people aside from you/Quintin/Steve would have assigned reasonable probability to the right answer. But I wanted to emphasize that I consider generating an experiment that turns out to be interesting (as your frame did) to be the thing that most of the points should be assigned for.

The experimental setup (in the sense of getting bad behavior despite perfect labels on the training set) was also done prior to the popularization of reward-as-chisel.

[Research Note] Optimizing The Final Output Can Obfuscate CoT
Rohin Shah · 4mo

Great research, thanks for doing it!

My sense is that there are a lot of caveats on this result:

  • The reward hack is easy to execute (likely doesn't require reasoning, or only very little reasoning)
  • There is a single task; the model can learn very specific heuristics to that task rather than being more general
  • The reward function is deliberately designed to cause obfuscation, and is about as close to process-based as you can get while not looking directly at the thoughts. (The reward literally strips out the actual command.)
  • The task is sabotaged, so that reward hacking is required 100% of the time to get rewards
  • The results don't seem that scary? For the CoT judge penalty to go down from ~0.32 to ~0.13, the pass rate goes down 11%. It's not quite clear how to interpret this given how artificial the setting is, but possible takeaways are (a) obfuscatory rewards are not competitive, or (b) safety rewards do actually succeed in reducing the bad behavior.

How hard did you have to try to get this result? Was this just the first thing you tried, where you made everything as easy as possible to get some result? Or did you try a bunch of things, and have to relax them to get obfuscation to arise? If it's the latter, I feel like this should be overall good news for CoT monitorability.

“Behaviorist” RL reward functions lead to scheming
Rohin Shah · 4mo

If we use Myopic Optimization from Non-myopic Approval (MONA), then the AI will not want to engage in sneaky power-seeking behaviors, because such behaviors will not help with the short-term tasks that the AI-in-a-lab wants to do

I mainly rule out MONA for capabilities reasons; see Thoughts on “Process-Based Supervision” §5.3.

I'd make this argument even for regular outcomes-based RL. Presumably, whenever you are providing rewards (whether in brain-like AGI or RLHF for LLMs), they are based on actions that the AI took over some bounded time in the past, let's call that time bound T. Then presumably sneaky power-seeking behaviors should be desired inasmuch as they pay out within time T. Currently T is pretty small so I don't expect it to pay out. Maybe you're imagining that T will increase a bunch?
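
(As a toy illustration of that point, my own sketch rather than anything from the post: if the credit assigned to an action only counts rewards arriving within the next T steps, a payoff that lands after T provides no incentive for the action that caused it.)

```python
import numpy as np

def credit_within_window(rewards, T):
    """Credit for the action at step t = sum of rewards over the next T steps."""
    n = len(rewards)
    return np.array([rewards[t:min(t + T, n)].sum() for t in range(n)])

# Hypothetical episode: a "sneaky" action at t=0 whose payoff only arrives at step 80.
rewards = np.zeros(100)
rewards[80] = 10.0

print(credit_within_window(rewards, T=10)[0])   # 0.0  -> short horizon: no incentive
print(credit_within_window(rewards, T=100)[0])  # 10.0 -> long horizon: incentivized
```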

Secondarily, I have a concern that, even if a setup seems like it should lead to an AI with exclusively short-term desires, it may in fact accidentally lead to an AI that also has long-term desires. See §5.2.1 of that same post for specifics.

Sure, I agree with this (and this is why I want to do MONA even though I expect outcomes-based RL will already be quite time-limited), but if this is your main argument I feel like your conclusion should be way more measured (more like "this might happen" rather than "this is doomed").

3.5 “The reward function will get smarter in parallel with the agent being trained”

This one comes from @paulfchristiano (2022): “I’m most optimistic about approaches where RL is only performed on a reward function that gets smarter in parallel with the agent being trained.”

I, umm, don’t really know what he’s getting at here. Maybe something like: use AI assistants to better monitor and catch sneaky behavior? If so, (1) I don’t expect the AI assistants to be perfect, so my story still goes through, and (2) there’s a chicken-and-egg problem that will lead to the AI assistants being misaligned and scheming too.

Re: (1), the AI assistants don't need to be perfect. They need to be good enough that the agent being trained cannot systematically exploit them, which is a much lower bar than "perfect" (while still being quite a high bar). In general I feel like you're implicitly treating an adversarial AI system as magically having perfect capabilities while any AI systems that are supposed to be helpful are of course flawed and have weaknesses. Pick an assumption and stick with it!

Re: (2), you start with a non-scheming human, and bootstrap up from there, at each iteration using the previous aligned assistant to help oversee the next AI. See e.g. IDA.

There are lots of ways this could go wrong, I'm not claiming it will clearly work, but I don't think either of your objections matters much.
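
(Schematically, the bootstrapping in (2) looks like the loop below. This is a toy stand-in, not a real training API: "quality" is just a number, and train_with_oversight / amplify are placeholders for "train a new assistant under the current overseer" and "let the human plus the latest assistant oversee the next round".)

```python
def train_with_oversight(overseer_quality: float) -> float:
    # Placeholder: a stronger overseer yields a better-overseen assistant.
    return 0.9 * overseer_quality + 1.0

def amplify(human_quality: float, assistant_quality: float) -> float:
    # Placeholder: the human and the latest aligned assistant oversee together.
    return human_quality + assistant_quality

def bootstrap(human_quality: float = 1.0, n_rounds: int = 5) -> float:
    overseer = human_quality  # start from a trusted, non-scheming human
    assistant_quality = 0.0
    for _ in range(n_rounds):
        assistant_quality = train_with_oversight(overseer)
        overseer = amplify(human_quality, assistant_quality)
    return assistant_quality

print(bootstrap())
```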

Chain of Thought Monitorability: A New and Fragile Opportunity for AI Safety
Rohin Shah · 4mo

A couple of years ago, the consensus in the safety community was that process-based RL should be prioritized over outcome-based RL [...]

I don't think using outcome-based RL was ever a hard red line, but it was definitely a line of some kind.

I think many people at OpenAI were pretty explicitly keen on avoiding (what you are calling) process-based RL in favor of (what you are calling) outcome-based RL for safety reasons, specifically to avoid putting optimization pressure on the chain of thought. E.g. I argued with @Daniel Kokotajlo about this, I forget whether before or after he left OpenAI.

There were maybe like 5-10 people at GDM who were keen on process-based RL for safety reasons, out of thousands of employees.

I don't know what was happening at Anthropic, though I'd be surprised to learn that this was central to their thinking.

Overall I feel like it's not correct to say that there was a line of some kind, except under really trivial / vacuous interpretations of that. At best it might apply to Anthropic (since it was in the Core Views post).


Separately, I am still personally keen on process-based RL and don't think it's irrelevant -- and indeed we recently published MONA which imo is the most direct experimental paper on the safety idea behind process-based RL.

In general there is a spectrum between process- and outcome-based RL, and I don't think I would have ever said that we shouldn't do outcome-based RL on short-horizon tasks with verifiable rewards; I care much more about the distinction between the two for long-horizon fuzzy tasks.

I do agree that there are some signs that people will continue with outcome-based RL anyway, all the way into long-horizon fuzzy tasks. I don't think this is a settled question -- reasoning models have only been around a year, things can change quite quickly.


None of this is to disagree with your takeaways, I roughly agree with all of them (maybe I'd have some quibbles about #2).

Posts

GDM: Consistency Training Helps Limit Sycophancy and Jailbreaks in Gemini 2.5 Flash (12d)
Chain of Thought Monitorability: A New and Fragile Opportunity for AI Safety (4mo)
Evaluating and monitoring for AI scheming (4mo)
Google DeepMind: An Approach to Technical AGI Safety and Security (7mo)
Negative Results for SAEs On Downstream Tasks and Deprioritising SAE Research (GDM Mech Interp Team Progress Update #2) (7mo)
AGI Safety & Alignment @ Google DeepMind is hiring (9mo)
A short course on AGI safety from the GDM Alignment team (9mo)
MONA: Managed Myopia with Approval Feedback (10mo)
AGI Safety and Alignment at Google DeepMind: A Summary of Recent Work (1y)
On scalable oversight with weak LLMs judging strong LLMs (1y)