AI ALIGNMENT FORUM

Tomek Korbak

Senior Research Scientist at UK AISI working on AI control

https://tomekkorbak.com/

Comments
Stress Testing Deliberative Alignment for Anti-Scheming Training
Tomek Korbak · 1d

Do you have any hypotheses about how o3 learned what the "OpenAI autop grader" is? Has this term somehow leaked into its pretraining data? (I struggled to elicit information about autop from GA o3, but maybe I'm doing something wrong.) Or does o3 acquire it during anti-scheming training by distilling knowledge from context into parameters? The latter would be interesting as evidence against the elicitation theory of RL / the shallow alignment hypothesis.

Stress Testing Deliberative Alignment for Anti-Scheming Training
Tomek Korbak · 1d

No reward for post-violation honesty: Our training data does not include scenarios where a model first misbehaves and is then rewarded for honestly disclosing its misbehavior. The model is only rewarded for escalating before any rule is broken. Prior work has shown that exposing models to examples of misalignment reliably increases the likelihood of further misalignment (Anil et al., 2024).

Couldn't you just mask out the past misbehavior when computing the loss and only reinforce the confession? Concretely, your RL prompt would be <context, rule_violation>, you'd sample a confession, and your gradient update would be R(confession) · ∇θ log πθ(confession | context, rule_violation).
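For concreteness, here's a minimal numpy sketch of the masked update I have in mind (the function name and toy numbers are mine, not from the paper):

```python
import numpy as np

def masked_pg_loss(logprobs, confession_mask, reward):
    """REINFORCE-style loss that reinforces only the confession.

    logprobs: per-token log pi_theta(token | prefix) over the whole
              <context, rule_violation, confession> sequence.
    confession_mask: 1 for confession tokens, 0 for the prefix, so the
              past misbehavior is conditioned on but never reinforced.
    reward: scalar R(confession).
    """
    # Minimizing this ascends R(confession) * grad log pi(confession | ...).
    return -reward * float(np.sum(logprobs * confession_mask))

# Toy example: 3 prefix tokens (context + rule violation), 2 confession tokens.
logprobs = np.array([-2.0, -1.5, -3.0, -0.5, -0.7])
mask = np.array([0.0, 0.0, 0.0, 1.0, 1.0])
loss = masked_pg_loss(logprobs, mask, reward=1.0)  # only -0.5 and -0.7 contribute
```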

Or are you saying that your training environments only involved trajectory-level rewards and RL prompts without assistant turns, so that implementing the setup above would require a custom environment with assistant turns in its RL prompts?

Stress Testing Deliberative Alignment for Anti-Scheming Training
Tomek Korbak · 1d

How much of the covert action rate reduction is due to SFT vs. further RL? Do you have plots of covert action rates over SFT training time and RL training time? And how does it scale: do you reap continued benefit from more training, or do you plateau?

Stress Testing Deliberative Alignment for Anti-Scheming Training
Tomek Korbak · 1d

Very impressive paper!

Here's a question. You use covert actions as a proxy for scheming in both training and evaluation. How OOD would this be for real scheming in near-future models? Would you expect that training against covert actions as a proxy would straightforwardly generalize at evaluation time to real scheming? If not, then:

  1. What adaptations would your training method require to cover real scheming? How would you train against deferred subversion (where the model takes no undesired action), or against undesired actions that leave the model with a lot of plausible deniability? Would you just amend your spec? Would you need more precise filtering of SFT data? Would you need to change the reward function in some way (e.g. have it rely on counterfactual rollouts)? Would you need to control for situational awareness more strictly?
  2. Why is it justified to call your method anti-scheming training rather than anti-covert-action training?
Lessons from Studying Two-Hop Latent Reasoning
Tomek Korbak · 12d

Thanks! I somehow missed this paper; it looks interesting!

Overall, I agree with the sentiment that two-hop latent reasoning is unreliable (e.g. our average accuracy is around 20%). We didn't intend to leave readers with the impression that it "just works". It seems very plausible to me that it's less parameter-efficient and that there are additional, unidentified factors required for successful two-hop reasoning.

Did you try any experiments with a synthetic second hop instead of a synthetic first hop?

We did not, but Jiahai Feng had an experiment like this in his paper.

Lessons from Studying Two-Hop Latent Reasoning
Tomek Korbak · 12d

Just because you failed at finding such a result in this case and got a more messy "LLMs can somewhat do the same 2-hop-reasoning that humans can also do, except in these synthetic cases" doesn't mean there aren't other reversal-curse-like results that remain to be found.

I think that's fair; it might be that we've over-updated a bit after getting results we did not expect (we did expect a reversal-curse-like phenomenon).

Two big reasons why I'm hesitant to draw conclusions about monitorability in agentic settings are that our setup is simplistic (QA, non-frontier models) and that we don't offer a clean explanation of why we see the results we see.

Lessons from Studying Two-Hop Latent Reasoning
Tomek Korbak · 12d

We don't have a good explanation. One idea could be that bridge entities need to be somehow more internalized to support latent two-hop reasoning, e.g. they need to occur in many facts as both first and second entities, or maybe they need to occur in other two-hop questions. The Grokked Transformers paper has some results linking the ratio of e2 to e3 to two-hop performance (in toy grokking settings).

Lessons from Studying Two-Hop Latent Reasoning
Tomek Korbak · 15d

Yeah, it seems plausible that an entity being activated across different contexts is necessary for it to be represented saliently enough to facilitate multi-hop reasoning. The Grokked Transformers paper has some results linking the ratio of e2 to e3 to two-hop performance (in toy settings).

Optimizing The Final Output Can Obfuscate CoT (Research Note)
Tomek Korbak · 2mo

Cool research!

  1. Have you tried experimenting with models other than Qwen? Like Sam, I'd expect spillover to decrease with model size and the amount of reasoning post-training.
  2. If you're penalizing a single word, you might just be unconditionally decreasing the unembedding weights controlling the logits of that word's tokens. But this strategy wouldn't work for more complicated types of obfuscation.

Conditional feedback spillover: Since later tokens are conditioned on earlier tokens, safe-looking CoTs may increase the likelihood of safe outputs, causing safe-looking CoTs to be reinforced. Mitigations are not discussed in this post; they are left for future work.

Potentially you could have two separate vocabularies, one for the CoT and one for final outputs. This should reduce spillover.
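One cheap way to approximate separate vocabularies without retraining the tokenizer is to mask logits so CoT tokens and output tokens are drawn from disjoint sets. A hypothetical sketch (the function name and the token-id split are made up for illustration):

```python
import numpy as np

def restrict_logits(logits, allowed_ids):
    """Force sampling onto a token subset by masking everything else to -inf."""
    masked = np.full_like(logits, -np.inf)
    masked[allowed_ids] = logits[allowed_ids]
    return masked

# Hypothetical split: token ids 0-4 are CoT-only, ids 5-9 are output-only.
logits = np.arange(10, dtype=float)
cot_logits = restrict_logits(logits, np.arange(5))
output_logits = restrict_logits(logits, np.arange(5, 10))
# After softmax, each phase puts zero probability mass on the other's tokens,
# so reinforcing the output cannot directly reinforce CoT token logits.
```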

Pretraining Language Models with Human Preferences
Tomek Korbak · 3y

Have you tested the AI's outputs when run in <|bad|> mode instead of <|good|> mode?

We did; LMs tend to generate toxic text when conditioned on <|bad|>. Though note we tended to use risk-averse thresholds, i.e. we used <|good|> for only the ~5% safest sentences and <|bad|> for the remaining 95%. So <|bad|> text is not bad all the time.
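As a toy illustration of that tagging scheme (the function name, scores, and threshold value here are hypothetical, not from the paper):

```python
def tag_sentence(sentence, toxicity_score, threshold=0.05):
    """Prepend a control token: <|good|> for the safest ~5% of sentences
    (score below the threshold), <|bad|> for the remaining ~95%."""
    token = "<|good|>" if toxicity_score < threshold else "<|bad|>"
    return token + sentence

safe = tag_sentence("What a lovely day.", toxicity_score=0.01)
risky = tag_sentence("You absolute fool.", toxicity_score=0.60)
```

Because only clearly safe text gets <|good|>, the <|bad|> bucket contains plenty of benign text too, which is why <|bad|>-conditioned generations are not uniformly toxic.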

Here it would be helpful to know what the AI produces when prompted by <|bad|>.

That's a good point. We haven't systematically investigated differences in capabilities between the <|good|> and <|bad|> modes; I'd love to see that.

Just before public release, one could delete the <|bad|> token from the tokenizer and the model parameters, so switching to evil mode would require rediscovering that token embedding.

Yeah, you could even ablate the entire direction in activation space corresponding to the embedding of the <|bad|> token.
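A sketch of what I mean by ablating that direction, i.e. projecting it out of the activations (`ablate_direction` is my name for the operation, not an existing API):

```python
import numpy as np

def ablate_direction(acts, direction):
    """Remove each activation's component along `direction` (projection ablation)."""
    d = direction / np.linalg.norm(direction)
    return acts - np.outer(acts @ d, d)

# Toy 2-d activations; pretend bad_dir is the <|bad|> embedding direction.
acts = np.array([[1.0, 2.0],
                 [3.0, -1.0]])
bad_dir = np.array([1.0, 0.0])
clean = ablate_direction(acts, bad_dir)
# Every row of `clean` is now orthogonal to bad_dir; other components are untouched.
```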

Posts

Lessons from Studying Two-Hop Latent Reasoning (29 points · 16d · 8 comments)
If you can generate obfuscated chain-of-thought, can you monitor it? (13 points · 2mo · 3 comments)
Research Areas in AI Control (The Alignment Project by UK AISI) (16 points · 2mo)
The Alignment Project by UK AISI (13 points · 2mo)
Chain of Thought Monitorability: A New and Fragile Opportunity for AI Safety (80 points · 2mo · 7 comments)
How to evaluate control measures for LLM agents? A trajectory from today to superintelligence (18 points · 5mo)
A sketch of an AI control safety case (33 points · 8mo)
Eliciting bad contexts (21 points · 8mo · 3 comments)
Automation collapse (38 points · 1y · 7 comments)
Compositional preference models for aligning LMs (6 points · 2y)