I am working on empirical AI safety.
Book a call with me if you want advice on a concrete empirical safety project.
Some additional opinionated tips:
Mistakes to avoid:
Great, thanks for running it!
Why did you stop at gamma=2 (the activation steering using a steering coefficient of 5 if I recall correctly)?
I agree it's not very surprising and might not work at all in the future because AIs seeing "this is deploy" might update towards thinking it's an eval / not update at all (while Llama Nemotron 49B updates somewhat towards "this is deploy").
But Idk, it still feels like an encouraging sign that even when the update is small, the amplification works as you would hope it would. I agree it was imprecise to call it "the model does not believe you", but the fact that the technique works even if "the model only updates 10% towards what you told it and still mostly does not believe you" is a non-trivial finding - and the results could have plausibly been different.
Although I did try to test this by extracting a steering vector from the model organism while Wood Labs is in the system prompt. This is a case where the model is supposed to believe that it is being evaluated regardless of what the contrastive pairs say. Steering with this vector still elicits the desired type hint behavior and evaluation/deployment beliefs
Glad you ran this variation, this experiment is the actually surprising result - I agree the main result with steering without Wood Labs in the prompt is not surprising because models like Llama Nemotron 49B probably believe you.
we can be more certain that there isn’t some other concept mixed in
This feels slightly incorrect to me. I think that one of the main ways in which your results are surprising is that saying "this is deployment" and "this is training" generates internal activations related to training and deployment despite the fact that the model does not believe you.
I think it would have been plausible that "this is deployment" generates activations that are entirely about "the user says it's deployment even when it's not" and completely orthogonal to "this is actually deployment" - so I don't think you should in general expect activation steering to be free of other concepts being mixed in.
I agree the risk of "harmfulness leakage" is lower than with SAE, so I agree with the conclusion that an AI auditor trying any white-box steering should definitely try contrastive-pair activation steering.
Cool work!
I am curious how much these results are about modifying internal representations vs "prompting, but more". If it's "prompting, but more" this makes me worried this sort of technique would have very bad false negatives for instances of real misalignment where the model has more "self-control" about its outputs.
I think your "Llama Nemotron 49B is gullible and easy to trick." experiments point at the latter having at least some effect.
An experiment which would update me more because it feels most analogous to activation steering is running contrastive sampling / classifier free guidance which, like activation steering, supports contrastive prompts, using large weights, etc.
I listened to the books Original Sin: President Biden's Decline, Its Cover-up, and His Disastrous Choice to Run Again and The Divider: Trump in the White House, 2017–2021. Both clearly have an axe to grind and I don't have enough US politics knowledge to know which claims are fair, and which ones are exaggerations and/or are missing important context, but these two books are sufficiently anti-correlated that it seems reasonable to update based on the intersection of the 2 books. Here are some AGI-relevant things I learned:
the Bowen paper precisely the thing it breaks is CoT monitorability.
The thing I was pointing at is how the o3 scratchpads seem "shorter" (by default, even when you don't train against them), not how in Bowen et al the scratchpad stops mentioning the malicious activity entirely. I should have maybe pointed at the weird GPT-5 CoT, it might have been less confusing.
Given the simplicity of the hacks in Bowen et al, I don't think they require more reasoning than what a model can do in a forward pass - so my guess is that information about the hack is just completely absent.
The thing I am trying to study in this post is the dynamics around CoT by necessity, not by propensity.
seems like a much higher bar
I think there is some intermediate hope like "if the model sometimes uses weird distracting words, make sure your CoT monitor evals have weird distracting words, and if the model is less good at it than regular English, change the prompt / do training to be robust to weird distracting words (potentially from a much broader distribution than what the model would do)". This is a bar that I expect people doing monitoring could meet with a bit of effort, and that is much easier to meet than actually having to train against the specific and weird steganographic encoding that the model would use if it used steganography (which requires you to understand the specific and weird steganographic encoding it is using). Low confidence, I agree there is a real difficulty here.
I think this could be interesting, though this might fail because gradients on a single data point / step are maybe a bit too noisy / weird. There is maybe a reason why you can't just take a single step with large learning rate while taking multiple steps with smaller lr often works fine (even when you don't change the batch, like in the n=1 and n=2 SFT elicitation experiments of the password-locked model paper).
(Low confidence, I think it's still worth trying.)
To get more intuitions, I ran some quick experiment where I computed cosine similarities between model weights trained on the same batch for multiple steps, and the cosine similarity are high given how many dimensions they are (16x1536 or 4x1536), but still lower than I expected (Lora on Qwen 2.5 1B on truthfulqa labels, I tried both Lion and SGD, using the highest lr that doesn't make the loss go up):