AI ALIGNMENT FORUM

Tomek Korbak

Senior Research Scientist at UK AISI working on AI control

https://tomekkorbak.com/

Comments
Stress Testing Deliberative Alignment for Anti-Scheming Training
Tomek Korbak · 1mo

Thanks for the detailed response!

Stress Testing Deliberative Alignment for Anti-Scheming Training
Tomek Korbak · 1mo

That's helpful, thanks! I assumed "autop" was the proper name of a particular scaffold, but indeed your interpretation is simpler and consistent with those transcripts.

Stress Testing Deliberative Alignment for Anti-Scheming Training
Tomek Korbak · 2mo

Do you have any hypotheses about how o3 learned what the "OpenAI autop grader" is? Has this term somehow leaked into its pretraining data? (I struggled to elicit information about autop from GA o3, but maybe I'm doing something wrong.) Or does o3 acquire it during anti-scheming training by distilling knowledge from context into parameters? The latter would be interesting as evidence against the elicitation theory of RL / the shallow-alignment hypothesis.

Stress Testing Deliberative Alignment for Anti-Scheming Training
Tomek Korbak · 2mo

No reward for post-violation honesty: Our training data does not include scenarios where a model first misbehaves and is then rewarded for honestly disclosing its misbehavior. The model is only rewarded for escalating before any rule is broken. Prior work has shown that exposing models to examples of misalignment reliably increases the likelihood of further misalignment (Anil et al., 2024).

Couldn't you just mask out past misbehavior when computing the loss and only reinforce the confession? Concretely, your RL prompt would be <context, rule_violation>, you'd sometimes sample a confession, and your gradient update would be $R(\text{confession})\,\nabla_\theta \log \pi_\theta(\text{confession} \mid \text{context}, \text{rule\_violation})$.

Are you saying that your training envs only involved trajectory-level rewards and RL prompts without assistant turns, and that to implement the setup above you'd need a custom env with assistant turns in the RL prompts?
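
For concreteness, here's a minimal sketch of the masked update I have in mind (hypothetical code in the style of HF Transformers; `policy`, the reward, and the tokenization are stand-ins, not your actual setup):

```python
import torch

def masked_confession_update(policy, prompt_ids, confession_ids, reward):
    # prompt_ids: tokens of <context, rule_violation>, i.e. the RL prompt,
    #   including the assistant turn containing the (unreinforced) violation.
    # confession_ids: tokens of the sampled confession.
    # reward: scalar R(confession).
    input_ids = torch.cat([prompt_ids, confession_ids])
    logits = policy(input_ids.unsqueeze(0)).logits[0]
    # Log-prob of each token given its prefix (shifted by one).
    logps = torch.log_softmax(logits[:-1], dim=-1)
    token_logps = logps.gather(-1, input_ids[1:].unsqueeze(-1)).squeeze(-1)
    # Mask out the prompt: the past misbehavior contributes no gradient;
    # only log pi_theta(confession | context, rule_violation) is reinforced.
    confession_logp = token_logps[len(prompt_ids) - 1:].sum()
    loss = -reward * confession_logp
    loss.backward()
```

The violation tokens still condition the forward pass, but they get no gradient, so the misbehavior itself would never be reinforced.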

Stress Testing Deliberative Alignment for Anti-Scheming Training
Tomek Korbak · 2mo

How much of the covert action rate reduction is due to SFT vs. further RL? Do you have plots of covert action rates over SFT and RL training time? How does it scale: do you reap continued benefits from more training, or do you plateau?

Stress Testing Deliberative Alignment for Anti-Scheming Training
Tomek Korbak · 2mo

Very impressive paper!

Here's a question. You use covert actions as a proxy for scheming in both training and evaluation. How OOD would this be for real scheming in near-future models? Would you expect training against covert actions as a proxy to straightforwardly generalize to scheming at evaluation time? If not, then:

  1. What adaptations would your training method require to cover real scheming? How would you train against deferred subversion (when the model takes no undesired action) or against undesired actions that leave the model with a lot of plausible deniability? Would you just amend your spec? Would you need more precise filtering of SFT data? Would you need to change the reward function in some way (e.g. have it rely on counterfactual rollouts; see the sketch after this list)? Would you need to control for situational awareness more strictly?
  2. Why is it justified to call your method anti-scheming training and not anti-covert-action training?
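
On (1), here's the kind of counterfactual-rollout reward term I have in mind, as a toy sketch (all of `score`, `rollout`, and `sample_action` are hypothetical stand-ins, not anything from the paper):

```python
from statistics import mean
from typing import Callable, Sequence, Tuple

def counterfactual_penalty(
    score: Callable[[object], float],             # scores a rollout outcome
    rollout: Callable[[object, object], object],  # (state, action) -> outcome
    sample_action: Callable[[object], object],    # resamples the policy at a state
    steps: Sequence[Tuple[object, object]],       # (state, realized_action) pairs
    n_counterfactuals: int = 8,
) -> float:
    # Penalize realized actions that score much worse than the model's own
    # counterfactual alternatives from the same state. Deferred subversion or
    # plausibly-deniable sabotage would then show up as a gap relative to the
    # counterfactuals, even if the realized action looks innocuous in isolation.
    penalty = 0.0
    for state, action in steps:
        baseline = mean(
            score(rollout(state, sample_action(state)))
            for _ in range(n_counterfactuals)
        )
        penalty += max(0.0, baseline - score(rollout(state, action)))
    return -penalty
```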
Lessons from Studying Two-Hop Latent Reasoning
Tomek Korbak · 2mo

Thanks! I somehow missed this paper, looks interesting!

Overall, I agree with the sentiment that two-hop latent reasoning is unreliable (e.g. our average accuracy is around 20%). We didn't intend to leave readers with the impression that it "just works". It seems very plausible to me that it's less parameter-efficient and that there are some additional, unidentified factors required for successful two-hop reasoning.

Did you try any experiments with a synthetic second hop instead of a synthetic first hop?

We did not, but Jiahai Feng had an experiment like this in his paper.

Lessons from Studying Two-Hop Latent Reasoning
Tomek Korbak · 2mo

Just because you failed at finding such a result in this case and got a more messy "LLMs can somewhat do the same 2-hop-reasoning that humans can also do, except in these synthetic cases" doesn't mean there aren't other reversal-curse-like results that remain to be found.

I think that's fair; it might be that we've over-updated a bit after getting results we did not expect (we did expect a reversal-curse-like phenomenon).

Two big reasons why I'm hesitant to draw conclusions about monitorability in agentic settings are that our setup is simplistic (QA, non-frontier models) and that we don't offer a clean explanation of why we see the results we see.

Lessons from Studying Two-Hop Latent Reasoning
Tomek Korbak · 2mo

We don't have a good explanation. One idea could be that bridge entities need to be somehow more internalized to support latent two-hop reasoning, e.g. they need to occur in many facts as first and as second entities, or maybe they need to occur in other two-hop questions. The Grokked Transformers paper has some results linking the ratio of e2 and e3 to two-hop performance (in toy grokking settings).
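
As a toy illustration of the kind of occurrence statistic I mean (my own sketch, not the Grokked Transformers paper's actual metric), one could count how often each entity appears as the first vs. the second entity across atomic facts:

```python
from collections import Counter

def head_tail_counts(facts):
    # facts: iterable of (head_entity, relation, tail_entity) triples.
    # A well-internalized bridge entity should have high counts on both
    # sides: it appears as a first entity (head) in some facts and as a
    # second entity (tail) in others.
    as_head = Counter(h for h, _, t in facts)
    as_tail = Counter(t for h, _, t in facts)
    return {e: (as_head[e], as_tail[e]) for e in as_head.keys() | as_tail.keys()}
```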

Lessons from Studying Two-Hop Latent Reasoning
Tomek Korbak · 2mo

Yeah, it seems plausible that an entity being activated across different contexts is necessary for it to be represented saliently enough to facilitate multi-hop reasoning. The Grokked Transformers paper has some results linking the ratio of e2 and e3 to two-hop performance (in toy settings).

Posts
Lessons from Studying Two-Hop Latent Reasoning (29 points, 2mo, 8 comments)
If you can generate obfuscated chain-of-thought, can you monitor it? (13 points, 3mo, 3 comments)
Research Areas in AI Control (The Alignment Project by UK AISI) (16 points, 4mo, 0 comments)
The Alignment Project by UK AISI (13 points, 4mo, 0 comments)
Chain of Thought Monitorability: A New and Fragile Opportunity for AI Safety (80 points, 4mo, 7 comments)
How to evaluate control measures for LLM agents? A trajectory from today to superintelligence (18 points, 7mo, 0 comments)
A sketch of an AI control safety case (33 points, 10mo, 0 comments)
Eliciting bad contexts (21 points, 10mo, 3 comments)
Automation collapse (38 points, 1y, 7 comments)
Compositional preference models for aligning LMs (6 points, 2y, 0 comments)