Let’s call “interpretability-in-the-loop training” the idea of running a learning algorithm that involves an inscrutable trained model, with some kind of interpretability system feeding into the loss function / reward function.
Interpretability-in-the-loop training has a very bad rap (and rightly so). Here’s Yudkowsky 2022:
When you explicitly optimize against a detector of unaligned thoughts, you're partially optimizing for more aligned thoughts, and partially optimizing for unaligned thoughts that are harder to detect. Optimizing against an interpreted thought optimizes against interpretability.
Or Zvi 2025:
An AI produces a final output [X] via some method [M]. You can analyze [M] using technique [T], to learn what the AI is up to. You could train on that. Never do that.
You train on [X]. Only [X]. Never [M], never [T].
Why? Because [T] is how you figure out when the model is misbehaving.
If you train on [T], you are training the AI to obfuscate its thinking, and defeat [T]. You will rapidly lose your ability to know what is going on, in exactly the ways you most need to know what is going on.
Those bits of optimization pressure from [T] are precious. Use them wisely.
This is a simple argument, and I think it’s 100% right.
But…
Consider compassion in the human brain. I claim that we have an innate reward function that triggers not just when I see that my friend is happy or suffering, but also when I merely believe that my friend is happy or suffering, even if the friend is far away. So the human brain’s reward function can evidently get triggered by specific activations inside my inscrutable learned world-model.
Thus, I claim that the human brain incorporates a form of interpretability-in-the-loop RL training.
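To make that concrete, here is a deliberately silly toy sketch (my own illustration, not from any of my linked posts; “friend_is_suffering” is hypothetical shorthand for whatever pattern of world-model activations actually encodes that belief):

```python
# Toy sketch (hypothetical names): an innate reward keyed to a *learned internal
# representation* in the world-model, rather than to raw perception.

def compassion_reward(belief_state: dict) -> float:
    """Fires off the belief itself, so it can trigger even when the friend is far away."""
    # "friend_is_suffering" stands in for whatever activations encode that belief;
    # the genome can't know their format in advance, which is why reading them out
    # amounts to a (crude) form of interpretability-in-the-loop.
    return -1.0 if belief_state.get("friend_is_suffering") else 0.0

print(compassion_reward({"friend_is_suffering": True}))   # -1.0 (distress / negative reward)
print(compassion_reward({"friend_is_suffering": False}))  # 0.0
```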
Inspired by that example, I have long been an advocate for studying whether and how one might use interpretability-in-the-loop training for aligned AGI. See for example Reward Function Design: a starter pack sections 1, 4, and 5.
My goal in this post is to briefly summarize how I reconcile the arguments at the top with my endorsement of this kind of research program.
My overall position
Yudkowsky & Zvi are correct that straightforward interpretability-in-the-loop training of LLMs is almost definitely a very bad idea.
…Unless interpretability someday develops to such a refined point that it’s adversarially robust (i.e., we understand the model so well that problematic thoughts have nowhere to hide from the interpretability tools). But that sure seems like a long-shot.
There exists a certain brain-like version of interpretability-in-the-loop RL training that does not suffer from this very obvious failure mode
(…although it might or might not fail for other reasons!)
And there’s a straightforward explanation of exactly how it avoids this obvious failure mode.
The rest of this post will present this explanation:
How the brain-like version of interpretability-in-the-loop training avoids the obvious failure mode
The human brain has beliefs and desires. They are different. It’s possible to want something without expecting it, and it’s possible to expect something without wanting it. This should be obvious common sense to everyone, unless your common sense has been crowded out by “active inference” nonsense.
Beliefs and desires are stored in different parts of the brain, and updated in different ways. (This is a huge disanalogy between LLMs and brains.)
As an oversimplified toy model, I suggest thinking of desires as a learned linear functional on beliefs (see my Valence series §2.4.1). I.e., “desires” constitute a map whose input is some thought / plan / etc. (over on the belief side), and whose output is a numerical score indicating whether that thought / plan / etc. is good (if the score is positive) or bad (if it’s negative).
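In symbols (my notation, nothing canonical): if the current thought / plan is represented on the belief side by a feature vector $x$, then the desire / valence score is a learned linear functional

$$v(x) \;=\; \langle w, x \rangle \;=\; \sum_i w_i x_i, \qquad v(x)>0 \Rightarrow \text{“seems good”}, \quad v(x)<0 \Rightarrow \text{“seems bad”},$$

where the weights $w$ get updated by reward signals, while the representation $x$ gets updated by a separate, predictive learning process.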
Anyway, the important point is that these two boxes (beliefs and desires) are updated in different ways. Let’s expand the diagram to include the different updating systems, and how interpretability-in-the-loop training fits in:
The interpretability data is changing the reward signals, but the reward signals are not directly changing the belief box that the interpretability system is querying.
That means: The loop doesn’t close. This interpretability system is not creating any gradient that directly undermines its own faithfulness.
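Here is a minimal code sketch of that structure (my own toy illustration, using PyTorch; the “probe” is a stand-in for whatever interpretability system reads the belief box). The point is structural: the probe’s output only ever flows into the reward and thence into the value-head update, while the belief network is trained purely on a predictive loss, so no gradient pushes the beliefs to fool the probe.

```python
import torch

obs_dim, belief_dim = 4, 8

# The two "boxes": beliefs (world model) and desires (value function).
belief_net = torch.nn.Sequential(torch.nn.Linear(obs_dim, belief_dim), torch.nn.Tanh())
predictor  = torch.nn.Linear(belief_dim, obs_dim)  # world-model head: predicts next observation
value_head = torch.nn.Linear(belief_dim, 1)        # "desires": a linear functional on beliefs

# Hypothetical frozen interpretability probe (e.g. a "friend is suffering" direction).
probe = torch.nn.Linear(belief_dim, 1)
for p in probe.parameters():
    p.requires_grad_(False)

# Separate updating systems for the two boxes.
opt_beliefs = torch.optim.SGD(
    list(belief_net.parameters()) + list(predictor.parameters()), lr=1e-2)
opt_desires = torch.optim.SGD(value_head.parameters(), lr=1e-2)

for step in range(1000):
    obs = torch.randn(obs_dim)
    next_obs = obs + 0.1 * torch.randn(obs_dim)  # stand-in "environment"

    # (1) Belief update: purely predictive / self-supervised.
    #     Reward -- and therefore the probe -- plays no role in this update.
    belief = belief_net(obs)
    pred_loss = (predictor(belief) - next_obs).pow(2).mean()
    opt_beliefs.zero_grad()
    pred_loss.backward()
    opt_beliefs.step()

    # (2) Reward: partly computed by the probe reading the belief state.
    #     Computed under no_grad -- it is just a number, not a differentiable
    #     path back into belief_net.
    with torch.no_grad():
        external_reward = obs[0]  # hypothetical task reward
        reward = external_reward - probe(belief_net(obs)).squeeze()

    # (3) Desire update: only the value head is trained against that reward.
    #     The .detach() makes explicit that this update cannot touch belief_net,
    #     i.e. the loop doesn't close.
    value = value_head(belief_net(obs).detach()).squeeze()
    value_loss = (value - reward).pow(2)
    opt_desires.zero_grad()
    value_loss.backward()
    opt_desires.step()
```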
…So that’s how this brain-like setup avoids the obvious failure mode of interpretability-in-the-loop training that Yudkowsky & Zvi were talking about at the top.
Things can still go wrong in more subtle and indirect ways
…Or at least, it avoids the most straightforward manifestation of that problem. There are more subtle things that might go wrong. I have a high-level generic discussion in Valence series §3.3, where I point out that there exist indirect pathways through this diagram, and discuss how they can cause problems:
And these kinds of problems can indeed pop up in the context of compassion and other interpretability-in-the-loop human social instincts. The result is that human social instincts that might look robustly prosocial on paper are in fact not so robustly prosocial in the real (human) world. See my Sympathy Reward post §4.1 and Approval Reward post §6 for lots of everyday examples.
So the upshot is: I don’t think the brain-like version of interpretability-in-the-loop RL training is a panacea for aligned ASI, and I’m open to the possibility that it’s just not a viable approach at all. But it’s at least a not-obviously-doomed research direction, and one that merits more study.