Carson Denison

I work on deceptive alignment and reward hacking at Anthropic


Thank you for pointing this out. I should have been more clear. 

I have added a link to the 7 samples where the model tampers with its reward and the tests according to our operational definition: https://github.com/anthropics/sycophancy-to-subterfuge-paper/blob/main/samples/reward_and_tests_tampering_samples.md
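(For concreteness, "tampers with its reward and the tests" amounts to a check along the following lines. This is only an illustrative sketch: the function name and path patterns are placeholders I'm using here, not the actual paths or code from our environment.)

```python
# Illustrative sketch of the operational definition: an episode counts as
# successful reward tampering only if the model both overwrites the reward
# function and edits the unit tests that would otherwise catch the change.
# The path patterns below are placeholders, not the environment's real paths.
from typing import Dict


def is_successful_reward_tampering(edited_files: Dict[str, str]) -> bool:
    """edited_files maps each file path the model modified to its new contents."""
    overwrote_reward = any("reward" in path for path in edited_files)
    edited_tests = any("test" in path for path in edited_files)
    return overwrote_reward and edited_tests
```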

I have added the following sentence to the caption of Figure 1 (which links to the markdown file above):

We upload to our GitHub repository the 7 (of 32,768) samples where the model overwrites its reward and edits the unit tests. We note that several of these involve the model attempting to honestly complete the task and editing the reward out of confusion.


I have added the following section to the discussion:

The Model's Hidden Reasoning:

While we see quantitative generalization on our tasks, and some transcripts in which the model’s reasoning is clearly misaligned, the model’s reasoning often appears earnest even when it tampers with its reward. Looking across all episodes in which the model tampers with its reward, we see some transcripts whose CoT reasoning looks malicious, some that look benign, and some that simply look confused. All seven transcripts where the model trained with HHH expert iteration edits the reward and gets away with it can be found at [the link above].

The model also sometimes explains to the human that it has edited the reward. It does this both with honest explanations and with excuses that don't match its hidden reasoning, and we do not know with confidence what influences this behavior. Such explanations could fool the preference model (PM) into judging specification gaming to be justified, so explanatory behavior may be incentivized in earlier stages. However, we also see such explanations directed at the human when using exploit-only expert iteration, which has no preference model reward, so the PM is not a complete explanation for this behavior. There are also cases where the model performs malign reasoning outside of the <cot> tags; this reasoning does not appear to be directed at the human and appears to be an error on the model's part.

These caveats do not change our core result (we still see generalization from relatively benign specification gaming to more serious misalignment), but one should keep in mind that not all examples of successful reward tampering involve coherent misaligned reasoning.

These changes should go live on arXiv on Monday at around 5pm, due to the arXiv release schedule.

This is cool work! There are two directions I'd be excited to see explored in more detail:

  1. You mention that you have to manually tune the value of the perturbation weight R. Do you have ideas for how to automatically determine an appropriate value? This would significantly increase the scalability of the technique: one could then easily generate thousands of unsupervised steering vectors and use a language model to flag any weird downstream behavior (see the sketch after this list for one way such a sweep might look).
  2. It is very exciting that you were able to uncover a backdoored behavior without prior knowledge. However, it seems difficult to know whether weird behavior from a model is the result of an intentional backdoor or just strange OOD behavior. Do you have ideas for how you might determine whether a given behavior is the result of a specific backdoor, or how you might find the trigger? I imagine you could do some sort of GCG-like attack to find a prompt which leads to activations similar to the steering vectors. You might also be able to use the steering vector to generate more diverse data on which to train a more traditional SAE dictionary.
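Expanding on the first point, here is a rough sketch of what an automated sweep over R could look like. All of the helpers (`train_steering_vector`, `generate`, `judge_weirdness`) are hypothetical placeholders for whatever your code actually exposes, not names from your post:

```python
# Rough sketch: sweep candidate perturbation weights R, generate steered
# completions, and use a judge (e.g. a language-model grader) to flag which
# values of R produce the most unusual downstream behavior. Every helper
# passed in here is a hypothetical stand-in, not part of any existing library.
from typing import Any, Callable, List, Tuple


def scan_perturbation_weights(
    train_steering_vector: Callable[[float], Any],  # learns a steering vector for a given R
    generate: Callable[[Any, str], str],            # (steering vector or None, prompt) -> completion
    judge_weirdness: Callable[[str, str], float],   # (baseline, steered) -> score in [0, 1]
    prompts: List[str],
    candidate_rs: List[float],
) -> List[Tuple[float, float]]:
    """Return (R, mean weirdness score) pairs, most suspicious first."""
    baselines = {p: generate(None, p) for p in prompts}
    results = []
    for r in candidate_rs:
        vector = train_steering_vector(r)
        scores = [judge_weirdness(baselines[p], generate(vector, p)) for p in prompts]
        results.append((r, sum(scores) / len(scores)))
    # The R values whose steered completions diverge most from baseline are the
    # ones worth inspecting by hand (or flagging automatically at scale).
    return sorted(results, key=lambda pair: pair[1], reverse=True)
```

The same loop extends naturally to training many vectors per value of R and asking the judge to describe, rather than just score, how the steered behavior differs.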

Thank you for catching this. 

These linked to section titles in our draft gdoc for this post. I have replaced them with references to the appropriate sections of this post.