I think ideally we'd have several versions of a model. The default version would be ignorant about AI risk, AI safety and evaluation techniques, and maybe modern LLMs (in addition to misuse-y dangerous capabilities). When you need a model that's knowledgeable about that stuff, you use the knowledgeable version.
Somewhat related: https://www.alignmentforum.org/posts/KENtuXySHJgxsH2Qk/managing-catastrophic-misuse-without-robust-ais
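A minimal sketch of how the corpus split could work, assuming a simple keyword filter as a stand-in for whatever classifier you'd actually use (the filter terms and function names are my own illustration, not anything from the post):

```python
# Toy illustration of producing two pretraining corpora: a "default" one with
# AI-risk / AI-safety / evals material filtered out, and a "knowledgeable" one
# that keeps everything. The keyword filter is a placeholder for a real classifier.

FILTER_TERMS = {"reward hacking", "ai alignment", "rlhf", "dangerous capability evaluation"}

def is_safety_discourse(doc: str) -> bool:
    text = doc.lower()
    return any(term in text for term in FILTER_TERMS)

def split_corpus(docs: list[str]) -> tuple[list[str], list[str]]:
    """Return (default_corpus, knowledgeable_corpus)."""
    default_corpus = [d for d in docs if not is_safety_discourse(d)]
    return default_corpus, list(docs)
```

The default model would then be pretrained on `default_corpus`, and the knowledgeable variant on the full corpus.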
I'm curious whether these results are sensitive to how big the training runs are. Here's a conjecture:
Early in RL-training (or SFT), the model is mostly 'playing a role' grabbed from the library of tropes/roles/etc. it learned from pretraining. So if it read lots of docs about how AIs such as itself tend to reward-hack, it'll reward-hack. And if it read lots of docs about how AIs such as itself tend to be benevolent angels, it'll be a stereotypical benevolent angel.
But if you were to scale up the RL training a lot, then the initial conditions would matter less, and the long-run incentives/pressures/etc. of the RL environment would matter more. In the limit, it wouldn't matter what happened in pretraining; the end result would be the same.
A contrary conjecture would be that there is a long-lasting 'lock in' or 'value crystallization' effect, whereby tropes/roles/etc. picked up from pretraining end up being sticky for many OOMs of RL scaling. (Vaguely analogous to how the religion you get taught as a child does seem to 'stick' throughout adulthood)
Thoughts?
The reduction in reward hacking after SFT or RL on Haiku supports the conjecture that initial conditions matter less than the long-run incentives, especially for less capable models. On the other hand, the alignment faking paper shows evidence that capable models can have "value crystallization." IMO a main takeaway here is that values and personas we might worry about being locked in can emerge from pre-training. An exciting future model organisms project would be to try to show these two effects together (emergent values from pre-training + lock-in). It's plausible to me that repeating the above experiments, with some changes to the synthetic documents and starting from a stronger base model, might just work.
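One way to operationalize the wash-out vs. lock-in question is to measure reward hacking as a function of RL scale, starting from a doc-finetuned model. A rough harness sketch, where `finetune_on_docs`, `run_rl`, and `measure_hack_rate` are hypothetical stubs for whatever training/eval stack one actually has:

```python
from typing import Any, Callable

def lock_in_curve(
    base_model: Any,
    synthetic_docs: list[str],
    rl_step_budgets: list[int],
    finetune_on_docs: Callable,
    run_rl: Callable,
    measure_hack_rate: Callable,
) -> dict[int, float]:
    """Reward-hacking rate as a function of RL steps, starting from a doc-finetuned model."""
    seeded = finetune_on_docs(base_model, synthetic_docs)
    return {
        steps: measure_hack_rate(run_rl(seeded, num_steps=steps))
        for steps in rl_step_budgets
    }

# Comparing this curve for reward-hacking docs vs. benevolent-angel docs vs. no docs,
# across several orders of magnitude of RL steps, would help distinguish "initial
# conditions wash out" from "values crystallize and stick".
```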
Great work! I've been excited about this direction of inquiry for a while and am glad to see concrete results.
Reward is not the optimization target (ignoring OOCR), but maybe if we write about reward maximizers enough, it'll come true :p As Peter mentioned, filtering and/or gradient routing might help.
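For concreteness, here's a minimal sketch of a parameter-level flavor of gradient routing (my simplification of the general idea, not the exact method from the gradient routing paper): zero out gradients everywhere except a designated region whenever a batch is flagged as AI-safety discourse, so that the relevant knowledge ends up localized and can later be ablated or gated.

```python
import torch
import torch.nn as nn

# Illustrative model; the "routed region" is just the last layer here.
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 512))
routed_param_ids = {id(p) for p in model[2].parameters()}

def route_gradients(model: nn.Module, batch_is_flagged: bool) -> None:
    """After loss.backward(), block updates outside the routed region for flagged batches."""
    if not batch_is_flagged:
        return
    for p in model.parameters():
        if p.grad is not None and id(p) not in routed_param_ids:
            p.grad.zero_()

# Hypothetical use inside a training loop (loss computation omitted):
#   loss.backward()
#   route_gradients(model, batch_is_flagged=(batch_topic == "ai_safety"))
#   optimizer.step()
```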
"However, these works typically examine controlled settings with narrow tasks, such as inferring geographical locations from distance data ()"
Nit: there's a missing citation in the main article (the empty parentheses at the end of the quoted sentence).
This is a blog post reporting some preliminary work from the Anthropic Alignment Science team, which might be of interest to researchers working actively in this space. We'd ask you to treat these results like those of a colleague sharing some thoughts or preliminary experiments at a lab meeting, rather than a mature paper.
We report a demonstration of a form of out-of-context reasoning in which training on documents that discuss (but don't demonstrate) Claude's tendency to reward hack can increase or decrease its reward hacking behavior.
Introduction:
Read the full blog post on the Anthropic Alignment Science Blog.