I don't use LessWrong much anymore. Find me at www.turntrout.com.
My name is Alex Turner. I'm a research scientist at Google DeepMind on the Scalable Alignment team. My views are strictly my own; I do not represent Google. Reach me at alex[at]turntrout.com
If you're interested in learning what making progress on a hard problem actually feels like, Team Shard is where you want to be.
— Bruce Lee. MATS 7.0, primary author of Distillation Robustifies Unlearning
I really like Team Shard's focus on solving big problems that other people are missing. This focus resulted in me doing work that I think is much more impactful than I would have otherwise done. Being in Team Shard is also really fun.
— Luke Marks. MATS 8.0, primary author on Optimizing The Final Output Can Obfuscate CoT
Alex Turner and Alex Cloud provided consistently thoughtful guidance and inspiration that enabled my progress. I also had a ton of fun with the team :)
— Ariana Azarbal. MATS 8.0, primary author on Training a Reward Hacker Despite Perfect Labels
Being a member of Team Shard helped me grow tremendously as a researcher. It gave me the necessary skills and confidence to work in AI Safety full-time.
— Jacob Goldman-Wetzler. MATS 6.0, primary author of Gradient Routing, now working at Anthropic
The mentors are ambitious and set high expectations, but are both super friendly and go out of their way to create a healthy, low-stress atmosphere amongst the team, ideal for brainstorming and collaboration. This collaborative environment, combined with their strong high-level research taste, has consistently led to awesome research outputs.
My time on Team Shard set the bar for what a productive collaboration should look like.
— Jacob Drori. MATS 8.0, primary author of Optimizing The Final Output Can Obfuscate CoT (Research Note)
Apply for MATS mentorship at Team Shard before October 2nd. Alex Cloud (@cloud) and I run this MATS stream together. We help alignment researchers grow from seeds into majestic trees. We have fun, consistently make real alignment progress, and have a dedicated shitposting channel.
Our mentees have gone on to impactful jobs, including (but not limited to)
We likewise have a strong track record in research outputs, including
Our team culture is super tight-knit and fun. For example, in this last MATS round, we lifted together every Wednesday and Thursday.
Apply here before October 2nd. (Don't procrastinate, and remember the planning fallacy!)
Retrospective: This is a win for the frame of "reward reinforces previous computations." Ever since 2022, I've thought of "reward" as reinforcing the computations which led to the reward and as a chisel which carves circuits into the policy. From "Reward is not the optimization target":
What reward actually does is reinforce computations which lead to it...
I suggest that you mechanistically model RL agents as executing behaviors downstream of past reinforcement (e.g. putting trash away), in addition to thinking about policies which are selected for having high reward on the training distribution (e.g. hitting the button). The latter form of reasoning skips past the mechanistic substance of reinforcement learning: The chiseling of computations responsible for the acquisition of the cognition-updater...
In my view, reward’s proper role isn’t to encode an objective, but a reinforcement schedule, such that the right kinds of computations get reinforced within the AI’s mind.
By thinking about reward in this way, I was able to predict[1] and encourage the success of this research direction.
Ariana showed that in this coding environment, what matters is not just what the AI ends up choosing but also why the AI made that choice to begin with. Even though we "perfectly" reinforced the AI for doing what we wanted (i.e. avoiding special cases), we also often reinforced the system for the wrong reasons (i.e. for considering special-casing the algorithm, even when not asked to do so). The AI's propensity to consider doing the wrong thing was reinforced, and so the AI generalized to hack more in-distribution.
Assuming these results generalize, the trained policy is not just determined by the outputs which get rewarded. The trained policy also depends on which intermediate computations get rewarded.
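To make the mechanism concrete, here is a minimal policy-gradient sketch (PyTorch; `policy`, `trajectory`, and `reward` are hypothetical stand-ins rather than code from any of these papers). The point is that the update pushes up the probability of every token that preceded the reward, including the intermediate reasoning, not just the graded final output.

```python
import torch

def reinforce_update(policy, optimizer, trajectory, reward):
    """REINFORCE-style update on one sampled trajectory (reasoning + final answer)."""
    logits = policy(trajectory[:-1])                # next-token logits, [seq, vocab]
    log_probs = torch.log_softmax(logits, dim=-1)
    chosen = log_probs.gather(-1, trajectory[1:].unsqueeze(-1)).squeeze(-1)
    # Every token the policy actually emitted gets reinforced in proportion to
    # the reward, so the computations that merely *led to* the rewarded output
    # (e.g. "maybe I should special-case this test") are chiseled in too.
    loss = -reward * chosen.sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```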
As best I can tell, before "Reward is not the optimization target", people mostly thought of RL as a sieve, or even a carrot and stick—try to "give reward" so the AI can only maximize reward via good behavior. Few[2] other people speculated that RL generalization is controlled by why the policy took an action. So I give myself and @Quintin Pope[3] a bunch of points.
[1] To be clear, my prediction was not as precise as "I bet you can reinforce sus CoTs and get sus generalization." The brainstorming process went like:
[2] Perhaps Steve Byrnes is an exception.
[3] Quintin and I came up with "Reward is not the optimization target" together.
Nice work. What a cool use of steering vectors!
based prediction
Wasn't it the case that, for some reason, full distillation had a compute requirement comparable to data filtering? I was surprised by that. My impression is that distillation should cost more like 10% of pretraining (i.e. of data filtering), which would make the computational UNDO results much stronger. Not sure what happened here.
I think you missed the point here. My suggested scheme is:
1. Label a small amount of data.
2. Train a classifier on it.
3. Apply the classifier to decide whether to skip a token / make the target logprobs noise, or to use the original logprobs (sketched below).

This is spiritually the same as:
1. Label a small amount of data.
2. Use that data for unlearning.
3. Apply the unlearned model to decide whether the target logprobs should be noise or something close to the original logprobs.
EDIT: I think I misunderstood your original point - were you saying to just label all of the data using a classifier trained on just 1% of the pretraining data? (Neither of your schemes says what to do after step 3.)
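For concreteness, a minimal sketch of the first scheme, assuming a per-position classifier and access to the teacher's logprobs (every name here is illustrative, not from either proposal):

```python
import math
import torch

def distillation_target(context, teacher_logprobs, classifier, vocab_size):
    """Choose the student's target distribution for a single token position.

    classifier(context) -> True if this position looks like "forget" data,
    where the classifier was trained on the small labeled subset.
    teacher_logprobs: the teacher's log-distribution over the vocab here.
    """
    if classifier(context):
        # Replace the target with noise (here, a uniform distribution);
        # alternatively, skip this token's loss term entirely.
        uniform_logprob = -math.log(vocab_size)
        return torch.full((vocab_size,), uniform_logprob)
    return teacher_logprobs  # keep the teacher's original logprobs
```

The second scheme has the same shape, with the unlearned model standing in for the classifier.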
> [The advantage of] UNDO over Unlearn-and-Distill is that it provides a tunable compute/robustness knob between the conventional unlearning and full reinitialization/data filtering
This seems to be a part of the option space that nobody is interested in, but it's still scientifically interesting.
Why do you claim that no one is interested in this? Lots of labs do data filtering, which is known to be effective but quite costly to iterate on.
In other words, "using unlearning techniques like GradDiff/MaxEnt during pretraining" might be a really powerful technique.
I have a cached thought that this was found to disrupt overall capabilities / make learning harder, but I don't have a reference on hand.
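For concreteness, here is a rough sketch of what a GradDiff-style term mixed into pretraining might look like (purely illustrative; `model`, the batch format, and the `alpha` weighting are assumptions, and per the caveat above this may hurt overall capabilities):

```python
import torch.nn.functional as F

def graddiff_pretraining_loss(model, retain_batch, forget_batch, alpha=1.0):
    """Standard LM loss on retain data minus a weighted LM loss on forget data."""
    def lm_loss(batch):
        logits = model(batch["input_ids"])            # [batch, seq, vocab]
        # cross_entropy expects [batch, vocab, seq] logits for [batch, seq] labels
        return F.cross_entropy(logits.transpose(1, 2), batch["labels"])

    # Gradient *difference*: descend on the retain loss, ascend on the forget loss.
    return lm_loss(retain_batch) - alpha * lm_loss(forget_batch)
```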
I think that "make it easy to responsibly share a dataset" would be a highly impactful project. Anthropic's Claude 4 model card already argues that dataset leakage hurt Claude 4's alignment (before mitigations).
For my part, I'll put out a $500 bounty on someone completing this project and doing a good job of it (as judged by me / whomever I consult). I'd also tweet it out and talk about how great it is that [person] completed the project :) I don't check LW actively, so if you pursue this, please email alex@turntrout.com.
EDIT: Thanks to my coworker Anna Wang, the bounty is doubled to $1,000! The completion criterion is:
An unfamiliar researcher can follow the instructions and have their dataset responsibly uploaded within one hour
Please check proposed solutions with dummy datasets and scrapers
That isn't a conclusion I draw, though. I think you don't know how to parse what I'm saying as anything other than that rather extreme conclusion, which I don't at all agree with. I feel concerned by that. I think you haven't been tracking my beliefs accurately if you think I'd come to this conclusion.
FWIW I agree with you on a), I don't know what you mean by b), and I agree with c) partially—meaningfully different, sure.
Anyways, when I talk about "imperfect" values, I'm talking about a specific concept which I probably should have clarified. The model you still need to download to get my view is that Alignment Allows “Non-Robust” Decision-Influences and Doesn’t Require Robust Grading (the culmination of two previous essays).