AI ALIGNMENT FORUM

Rohin Shah

Research Scientist at Google DeepMind. Creator of the Alignment Newsletter. http://rohinshah.com/

Sequences

  • Value Learning
  • Alignment Newsletter

10 · rohinmshah's Shortform · 6y · 44

Comments (sorted by newest)
Trust me bro, just one more RL scale up, this one will be the real scale up with the good environments, the actually legit one, trust me bro
Rohin Shah · 8d

I agree with this directionally, and generally tend to believe that certain kinds of straight lines will continue to be straight. But also iirc you rely a lot on the METR trend line, which is (a) noisy, (b) over a short timescale (~2 years[1]), and (c) not a great measure of what people care about[2][3]. I don't think you should strongly expect that particular line to continue to be straight over a long time period.

(Though I tend to think that it's more likely to slow down by 2027 than to speed up, relative to what you'd predict by extrapolating the 2024-2025 trend, so I agree with the overall conclusion of the post. And obviously one should have non-trivial probability on each of {slow down, stay the same, speed up}, as you say.)

  [1] I'm excluding GPT-2 and GPT-3, which seem very noisy.

  [2] Compared to things like Moore's Law, or energy efficiency of solar, or pretraining perplexity, or other similar things where we observe straight lines.

  [3] None of this is to say that the METR graph is bad! It is the best empirical evidence on timelines I'm aware of.
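To gesture at why points (a) and (b) above matter quantitatively, here is a toy simulation (purely synthetic numbers I made up, not METR's data) of how much a multi-year extrapolation can swing when a log-linear trend is fit on only ~2 years of noisy quarterly points:

```python
import numpy as np

rng = np.random.default_rng(0)

# Entirely synthetic stand-in for a short trend (none of these numbers are
# METR's): 8 quarterly measurements of log2(task length), with a true slope
# of 0.5 doublings per quarter and some measurement noise.
true_slope, noise_sd, n_points = 0.5, 0.4, 8
quarters = np.arange(n_points)
future_q = 20  # extrapolate ~3 years past the end of the data

def extrapolated_log2_length(y):
    slope, intercept = np.polyfit(quarters, y, 1)
    return intercept + slope * future_q

# Refit the short noisy series many times; the extrapolated task length three
# years out varies by a large multiplicative factor across refits.
preds = [extrapolated_log2_length(true_slope * quarters +
                                  rng.normal(0, noise_sd, n_points))
         for _ in range(2000)]
lo, hi = np.percentile(preds, [10, 90])
print(f"10th-90th percentile spread: ~{2 ** (hi - lo):.1f}x in task length")
```

Even before asking whether the trend itself will hold, the statistical uncertainty from a short noisy baseline is already substantial.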

Optimally Combining Probe Monitors and Black Box Monitors
Rohin Shah · 17d

From the paper:

If a non-monotonic relationship exists between monitor scores and likelihood ratios, auditing the highest-scoring outputs is suboptimal. [...] This means ΠNP will audit some moderately high-scoring outputs (e.g., those with a monitor score of 7.2) before auditing certain higher-scoring outputs

I agree this is clearly better if you trust your LLRs to be 100% accurate, but ultimately the LLR has to be fit on some training set, and then we are depending on it generalizing to deployment. That generalization could fail just for normal ML reasons (the deployment distribution is different, the LLR scores were noisily estimated) or because the model is scheming and adapts to your monitoring strategy.

Suppose I want to rule by fiat that for outputs O1 and O2 where O1 gets higher scores than O2, we are guaranteed to be at least as likely to audit O1 as O2. How would you suggest I go about doing that?

After writing the above I remembered you said the word "kernel" and looked in the paper for that and saw you did mention this limitation, and suggested that practitioners "increase kernel width or enforce monotonicity". So I guess my question is, how should I go about enforcing monotonicity? I still have to map two scores (m1, m2) to a decision of whether or not to audit; it's unclear to me that I can brute force that problem, whereas it's much easier to brute force when I have a single score (the LLR).
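To make the "enforce monotonicity" option concrete, here is a minimal sketch of one way I could imagine doing it (my own construction, not something from the paper; the gridded LLR estimate and all names are assumptions). The idea is to project the estimated LLR surface onto functions that are nondecreasing in both monitor scores, after which any single threshold gives the guarantee I want:

```python
import numpy as np

def monotone_envelope(llr_grid: np.ndarray) -> np.ndarray:
    """Smallest pointwise-larger surface that is nondecreasing in both scores.

    llr_grid[i, j] is a (hypothetical) estimated log-likelihood ratio for
    monitor scores (m1_bins[i], m2_bins[j]). After this projection, if O1
    scores at least as high as O2 on both monitors, O1's adjusted LLR is at
    least O2's, so a threshold rule audits O1 whenever it audits O2.
    """
    env = np.maximum.accumulate(llr_grid, axis=0)  # nondecreasing in m1
    env = np.maximum.accumulate(env, axis=1)       # nondecreasing in m2
    return env

def audit(m1, m2, m1_bins, m2_bins, env, threshold):
    """Audit decision for a single output, using the monotonized surface."""
    i = np.clip(np.searchsorted(m1_bins, m1), 0, len(m1_bins) - 1)
    j = np.clip(np.searchsorted(m2_bins, m2), 0, len(m2_bins) - 1)
    return env[i, j] >= threshold
```

(There are other reasonable projections, e.g. a two-dimensional isotonic regression fit by least squares; the point is just that the monotonicity constraint is easy to impose once you have a gridded estimate.)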

Training a Reward Hacker Despite Perfect Labels
Rohin Shah · 1mo

I think Oliver's pushback in another comment is getting strongly upvoted because, given a description of your experimental setup, a bunch of people aside from you/Quintin/Steve would have assigned reasonable probability to the right answer. But I wanted to emphasize that I consider generating an experiment that turns out to be interesting (as your frame did) to be the thing that most of the points should be assigned for.

The experimental setup (in the sense of getting bad behavior despite perfect labels on the training set) was also done prior to the popularization of reward-as-chisel.

Optimizing The Final Output Can Obfuscate CoT (Research Note)
Rohin Shah · 1mo

Great research, thanks for doing it!

My sense is that there are a lot of caveats on this result:

  • The reward hack is easy to execute (likely doesn't require reasoning, or only very little reasoning)
  • There is a single task; the model can learn heuristics very specific to that task rather than something more general
  • The reward function is deliberately designed to cause obfuscation, and is about as close to process-based as you can get while not looking directly at the thoughts. (The reward literally strips out the actual command.)
  • The task is sabotaged, so that reward hacking is required 100% of the time to get rewards
  • The results don't seem that scary? For the CoT judge penalty to go down from ~0.32 to ~0.13, the pass rate goes down 11%. It's not quite clear how to interpret this given how artificial the setting is, but possible takeaways are that (a) obfuscatory rewards are not competitive, or (b) safety rewards do actually succeed in reducing the bad behavior.

How hard did you have to try to get this result? Was this just the first thing you tried, where you made everything as easy as possible to get some result? Or did you try a bunch of things, and have to relax them to get obfuscation to arise? If it's the latter, I feel like this should be overall good news for CoT monitorability.

“Behaviorist” RL reward functions lead to scheming
Rohin Shah · 2mo

If we use Myopic Optimization from Non-myopic Approval (MONA), then the AI will not want to engage in sneaky power-seeking behaviors, because such behaviors will not help with the short-term tasks that the AI-in-a-lab wants to do

I mainly rule out MONA for capabilities reasons; see Thoughts on “Process-Based Supervision” §5.3.

I'd make this argument even for regular outcomes-based RL. Presumably, whenever you are providing rewards (whether in brain-like AGI or RLHF for LLMs), they are based on actions that the AI took over some bounded time in the past; call that time bound T. Then presumably sneaky power-seeking behaviors should be desired inasmuch as they pay out within time T. Currently T is pretty small, so I don't expect such behaviors to pay out. Maybe you're imagining that T will increase a bunch?
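As a toy illustration of what I mean by T (my own framing and made-up numbers, not anything from your post): with a credit-assignment horizon of T steps, an action only gets reinforced by reward that arrives within T steps of it, so a sneaky behavior whose payoff lands later contributes nothing to that action's return.

```python
import numpy as np

def truncated_returns(rewards: np.ndarray, T: int, gamma: float = 1.0) -> np.ndarray:
    """Return target for each timestep, using only the next T rewards."""
    return np.array([sum(gamma ** k * r for k, r in enumerate(rewards[t:t + T]))
                     for t in range(len(rewards))])

# A sneaky action at t=0 whose payoff only arrives at t=10:
rewards = np.zeros(11)
rewards[10] = 1.0
print(truncated_returns(rewards, T=5)[0])   # 0.0 -> not reinforced
print(truncated_returns(rewards, T=20)[0])  # 1.0 -> reinforced only if T is large
```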

Secondarily, I have a concern that, even if a setup seems like it should lead to an AI with exclusively short-term desires, it may in fact accidentally lead to an AI that also has long-term desires. See §5.2.1 of that same post for specifics.

Sure, I agree with this (and this is why I want to do MONA even though I expect outcomes-based RL will already be quite time-limited), but if this is your main argument I feel like your conclusion should be way more measured (more like "this might happen" rather than "this is doomed").

3.5 “The reward function will get smarter in parallel with the agent being trained”

This one comes from @paulfchristiano (2022): “I’m most optimistic about approaches where RL is only performed on a reward function that gets smarter in parallel with the agent being trained.”

I, umm, don’t really know what he’s getting at here. Maybe something like: use AI assistants to better monitor and catch sneaky behavior? If so, (1) I don’t expect the AI assistants to be perfect, so my story still goes through, and (2) there’s a chicken-and-egg problem that will lead to the AI assistants being misaligned and scheming too.

Re: (1), the AI assistants don't need to be perfect. They need to be good enough that the agent being trained cannot systematically exploit them, which is a much lower bar than "perfect" (while still being quite a high bar). In general I feel like you're implicitly treating an adversarial AI system as magically having perfect capabilities while any AI systems that are supposed to be helpful are of course flawed and have weaknesses. Pick an assumption and stick with it!

Re: (2), you start with a non-scheming human, and bootstrap up from there, at each iteration using the previous aligned assistant to help oversee the next AI. See e.g. IDA.
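For concreteness, the loop I have in mind looks roughly like this (a toy schematic in the spirit of IDA, with every function a hypothetical stand-in rather than a real training setup):

```python
def train_agent(reward_fn):
    """Stand-in for RL training of the next, more capable agent against the
    current overseer's reward signal."""
    return {"overseen_by": reward_fn}

def amplified_overseer(human_judgment, assistant):
    """The human, helped by the previously trained (hopefully aligned) agent,
    acts as a smarter overseer for the next round."""
    return lambda behavior: human_judgment(behavior, assistant_notes=assistant)

def bootstrap(n_rounds, human_judgment):
    # Round 0: oversight comes from a trusted, non-scheming human alone.
    overseer = lambda behavior: human_judgment(behavior, assistant_notes=None)
    agent = None
    for _ in range(n_rounds):
        agent = train_agent(overseer)                         # train the next AI
        overseer = amplified_overseer(human_judgment, agent)  # oversight improves in parallel
    return agent
```

The point is just that each round's reward signal comes from the human plus the previously trained (and hopefully aligned) assistant, bottoming out in a non-scheming human at round 0.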

There are lots of ways this could go wrong, and I'm not claiming it will clearly work, but I don't think either of your objections matters much.

Chain of Thought Monitorability: A New and Fragile Opportunity for AI Safety
Rohin Shah · 2mo

A couple of years ago, the consensus in the safety community was that process-based RL should be prioritized over outcome-based RL [...]

I don't think using outcome-based RL was ever a hard red line, but it was definitely a line of some kind.

I think many people at OpenAI were pretty explicitly keen on avoiding (what you are calling) process-based RL in favor of (what you are calling) outcome-based RL for safety reasons, specifically to avoid putting optimization pressure on the chain of thought. E.g. I argued with @Daniel Kokotajlo about this, I forget whether before or after he left OpenAI.

There were maybe like 5-10 people at GDM who were keen on process-based RL for safety reasons, out of thousands of employees.

I don't know what was happening at Anthropic, though I'd be surprised to learn that this was central to their thinking.

Overall I feel like it's not correct to say that there was a line of some kind, except under really trivial / vacuous interpretations of that. At best it might apply to Anthropic (since it was in the Core Views post).


Separately, I am still personally keen on process-based RL and don't think it's irrelevant -- and indeed we recently published MONA which imo is the most direct experimental paper on the safety idea behind process-based RL.

In general there is a spectrum between process- and outcome-based RL, and I don't think I would have ever said that we shouldn't do outcome-based RL on short-horizon tasks with verifiable rewards; I care much more about the distinction between the two for long-horizon fuzzy tasks.

I do agree that there are some signs that people will continue with outcome-based RL anyway, all the way into long-horizon fuzzy tasks. I don't think this is a settled question -- reasoning models have only been around for a year, and things can change quite quickly.


None of this is to disagree with your takeaways; I roughly agree with all of them (maybe I'd have some quibbles about #2).

The bitter lesson of misuse detection
Rohin Shah · 2mo

Nice work, it's great to have more ways to evaluate monitoring systems!

A few points for context:

  • The most obvious application for misuse monitoring is to run it at deployment on every query to block harmful ones. Since deployments are so large scale, the expense of the monitoring system matters a ton. If you use a model to monitor itself, that is roughly doubling the compute cost of your deployment. It's not that people don't realize that spending more compute works better; it's that this is not an expense worth paying at current margins.
    • You mention this, but say it should be considered because "frontier models are relatively inexpensive" -- I don't see why the cost per query matters; it's the total cost that matters, and that is large.
    • You also suggest that robustness is important for "safety-critical contexts" -- I agree with this, and would be on board if the claim were that users should apply strong monitoring for safety-critical use cases -- but elsewhere you seem to be suggesting that this should be applied to very broad and general deployments, which do not seem to me to be "safety-critical".
  • Prompted LLMs are often too conservative about what counts as a harmful request, coming up with relatively stretched ways in which some query could cause some small harm. Looking at the examples of benign prompts in the appendix, I think they are too benign to notice this problem (if it exists). You could look at performance on OverRefusalBench to assess this.
    • Note that blocking / refusing benign user requests is a really terrible experience for users. Basically any misuse detection system is going to flag things often enough that this is a very noticeable effect (even at 99% accuracy people will be annoyed; see the quick arithmetic after this list). So this is a pretty important axis to look at.
      • One might wonder how this can be the case when often AIs will refuse fairly obviously benign requests -- my sense is that usually in those cases, it's because of a detection system that was targeted at something much more serious (e.g. CSAM) that is over-triggering, but where the companies accept that cost because of the seriousness of the thing they're detecting.
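To make the 99% point concrete, here's the back-of-the-envelope version (the traffic figure is a made-up assumption, not any real deployment's number):

```python
# Hypothetical numbers, purely for illustration.
benign_queries_per_day = 10_000_000   # assumed deployment-scale benign traffic
false_positive_rate = 0.01            # "99% accurate" on benign queries

wrongly_blocked = benign_queries_per_day * false_positive_rate
print(f"{wrongly_blocked:,.0f} benign requests refused per day")  # 100,000
```

Even an accuracy that sounds high turns into a very large absolute number of annoyed users at deployment scale.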
How much novel security-critical infrastructure do you need during the singularity?
Rohin Shah · 2mo

I agree that fine-grained configuration mechanisms probably increase security, and this might net increase security, as I say in the post.

You mention permission systems, which is certainly a big deal, but I didn't see anything about broader configuration mechanisms, many of which can be motivated solely by efficiency and incidentally help with security. (I was disputing your efficiency -> less security claim; permissions mechanisms aren't a valid counterargument since they aren't motivated by efficiency.)

One reason I'm skeptical is that you can get almost all that value by having AIs voluntarily adopt mechanisms that generally isolate their jobs from the other jobs that are running (e.g. having naming conventions about who is allowed to read or write what) that get you the reliability benefits without getting any security.

Hmm, I think this assumes the AIs will be more reliable than I expect them to be. You would want this to be reliable enough that your (AI or human) software engineers ~never have to consider non-isolation as a possible cause of bugs (unless explicitly flagged otherwise); I'd guess that corresponds to about 6 nines of reliability at following the naming conventions? (Which might get you to 8 nines of reliability at avoiding non-isolation bugs, since most naming convention errors won't lead to bugs.)

Some complete guesses: human software engineers might get 0.5-2 nines of reliability at following the naming conventions; a superhuman coder model might hit ~4 nines of reliability if you made this a medium priority; and if you add a review step that explicitly focuses just on checking adherence to naming conventions, you might hit 6 nines of reliability, provided the critique uses a model somewhat more powerful than a current frontier model. But this is now a non-trivial expense; it's not clear to me it would be worth it.
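To spell out the arithmetic behind those nines (my own made-up illustrative rates, matching the rough claim above that most convention slips don't cause bugs):

```python
import math

def nines(error_rate: float) -> float:
    """'Nines of reliability' corresponding to a per-action error rate."""
    return -math.log10(error_rate)

convention_error_rate = 1e-6   # assumed: ~6 nines at following naming conventions
bug_given_slip = 0.01          # assumed: ~1% of convention slips cause a non-isolation bug

bug_rate = convention_error_rate * bug_given_slip
print(round(nines(convention_error_rate), 2), "->", round(nines(bug_rate), 2))  # ~6.0 -> ~8.0 nines
```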

How much novel security-critical infrastructure do you need during the singularity?
Rohin Shah · 2mo

Technologies that allow workers to be more isolated from each other gain you both convenience (because your coworkers no longer accidentally mess up what you’re doing) and also security (because you can remove your coworker’s permission to affect the code you’re running), but generally reduce efficiency. When we try to buy efficiency at the cost of convenience, we might lose security too.

Hmm, this feels less likely to me. Isolation can often be an efficiency benefit because one employee's mistake doesn't propagate to screwing up the work of everyone else, and this benefit scales as the number of employees increases. This assumes that labor is at least somewhat unreliable, but I do expect that AI labor will be overall unreliable, just because reliable AI labor will likely be much more expensive and won't look cost-effective to deploy in most situations.

I'd expect that you buy efficiency by building more fine-grained configuration mechanisms and requiring that employees use them. This will also become easier with AI, since you can have specialized AI assistants that understand the configuration mechanisms well and can set them for all experiments (unlike the current status quo where all your employees have to be separately trained on the use of the configurations and many of them forget it and just copy configurations from elsewhere).

In general it seems like this should improve security by creating more info and legibility around what is happening (which in turn gives you more affordances to enforce security). Some examples (not necessarily about AI):

  1. For code that is costly to run (e.g. uses lots of compute, causes physical-world effects, affects production systems), spending labor to run automated bug finding tools that generate lots of false positives, and addressing the results. This directly increases security by having fewer bugs.
  2. Breaking up big jobs into many smaller ones that are easier to schedule across your compute fleet. This can enable you to isolate small jobs from each other which reduces the attack surface.
  3. Requiring detailed analysis and justification before using significant amounts of compute for experiments. This should increase security against rogue internal deployments, as it becomes easier to catch the AI in a lie.

That said, I did think of one example that worsens security -- switching from a memory-safe language like Python to one that requires manual memory management like C. Though in this case, it's that the "security infra" (automatic memory management) itself is too compute-expensive to run, which doesn't seem like it will apply in the AI case.

Overall my very weak guess is that the net effect will be to increase security. Of course, the part where you build more fine-grained configuration mechanisms is "novel (potentially) security-critical infrastructure", so this does support your broader point. (When I say it will increase security, this is under the assumption that the novel infra isn't sabotaged by scheming AIs.)

Unfaithful Reasoning Can Fool Chain-of-Thought Monitoring
Rohin Shah · 3mo

Great work!

We've also been seeing a similar vulnerability in our debate work: when we use a weaker LLM as the judge, it is very hard (using only prompting) to get the judge to take seriously the possibility that the debaters might be lying to it. As a result it often just believes the dishonest debater despite clear evidence to the contrary and gets the judgment wrong.

This problem mostly goes away if we switch to a more capable LLM as the judge model. My sense is that there's some baseline level of capability that is required to be a good monitor (similarly to discussions of universality thresholds at the human level), which includes the capacity to consider the hypothesis that the information you've been given is false, and that non-frontier LLMs often don't meet it.

(I think we haven't tried finetuning the judge, at least we haven't tried very hard, so that might fix it as well)

So I'd be interested if you see the same problem if you switch to a frontier model and do a little bit of iteration on prompting the monitor. This will of course mean that you no longer have a weaker monitor and a stronger policy, so it loses that aspect of the analogy to the situation we will face in the future. But if in fact the problem is more that you need to pass some absolute threshold of capability, rather than have some relative level of capability, then it's most important to ensure that the monitor is past that threshold, rather than to maintain the weak/strong gap.

Posts

  • 80 · Chain of Thought Monitorability: A New and Fragile Opportunity for AI Safety · 2mo · 7
  • 34 · Evaluating and monitoring for AI scheming · 2mo · 0
  • 41 · Google DeepMind: An Approach to Technical AGI Safety and Security · 5mo · 6
  • 58 · Negative Results for SAEs On Downstream Tasks and Deprioritising SAE Research (GDM Mech Interp Team Progress Update #2) · 5mo · 6
  • 52 · AGI Safety & Alignment @ Google DeepMind is hiring · 7mo · 1
  • 48 · A short course on AGI safety from the GDM Alignment team · 7mo · 0
  • 44 · MONA: Managed Myopia with Approval Feedback · 8mo · 7
  • 97 · AGI Safety and Alignment at Google DeepMind: A Summary of Recent Work · 1y · 17
  • 22 · On scalable oversight with weak LLMs judging strong LLMs · 1y · 18
  • 39 · Improving Dictionary Learning with Gated Sparse Autoencoders · 1y · 32
Wikitag Contributions

  • Newsletters · 5y · (+17/-12)