Weight-sparse transformers have interpretable circuits

by leogao
13th Nov 2025

Interpretability (ML & AI) · Transformers · AI

TL;DR: We develop a novel method for finding interpretable circuits in Transformers by training them to have sparse weights. This results in models that contain very high-quality circuits: our circuits are global rather than datapoint-dependent; we explain each circuit down to very granular objects, like individual neurons and attention channels, rather than entire MLP layers, attention heads, or groups of nodes; and the circuits are often simple enough to draw in their entirety on a whiteboard. The downside is that our method produces de novo sparse language models, which are extremely expensive to train and deploy, making it unlikely that we will ever be able to use this method to directly pretrain frontier models. We share preliminary results on using sparse models to explain an existing dense model, but our main theory of impact is to eventually scale our method to train a fully interpretable moderate-sized model. If we could fully interpret even (say) a GPT-3-level intelligence, it could aid dramatically in developing a theory of cognition in general.
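For intuition, here is a minimal PyTorch-style sketch of one generic way to train with this kind of weight sparsity: after every optimizer step, each neuron's incoming weights are projected back onto their k largest-magnitude entries, so every neuron keeps only a few connections. The model, hyperparameters, and projection schedule here are illustrative assumptions, not the paper's released implementation.

```python
import torch
import torch.nn as nn


@torch.no_grad()
def project_topk_per_neuron(weight: torch.Tensor, k: int) -> None:
    """Keep only the k largest-magnitude incoming weights of each neuron (row); zero the rest."""
    if k >= weight.shape[1]:
        return
    topk_idx = weight.abs().topk(k, dim=1).indices  # per-row indices of weights to keep
    mask = torch.zeros_like(weight)
    mask.scatter_(1, topk_idx, 1.0)                 # 1.0 at kept positions, 0.0 elsewhere
    weight.mul_(mask)                               # zero out all other connections

# Toy usage: train a small MLP while re-projecting onto the sparsity
# constraint after every optimizer step (hypothetical task and sizes).
mlp = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 64))
opt = torch.optim.Adam(mlp.parameters(), lr=1e-3)
x, target = torch.randn(512, 64), torch.randn(512, 64)

for step in range(100):
    loss = (mlp(x) - target).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    for module in mlp:
        if isinstance(module, nn.Linear):
            project_topk_per_neuron(module.weight, k=8)  # each neuron keeps 8 inputs
```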

[Blog] [Paper] [Code]

Abstract

Finding human-understandable circuits in language models is a central goal of the field of mechanistic interpretability. We train models to have more understandable circuits by constraining most of their weights to be zeros, so that each neuron only has a few connections. To recover fine-grained circuits underlying each of several hand-crafted tasks, we prune the models to isolate the part responsible for the task.
These circuits often contain neurons and residual channels that correspond to natural concepts, with a small number of straightforwardly interpretable connections between them. We study how these models scale and find that making weights sparser trades off capability for interpretability, and scaling model size improves the capability-interpretability frontier. However, scaling sparse models beyond tens of millions of nonzero parameters while preserving interpretability remains a challenge. In addition to training weight-sparse models de novo, we show preliminary results suggesting that our method can also be adapted to explain existing dense models. Our work produces circuits that achieve an unprecedented level of human understandability and validates them with considerable rigor.
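As a rough illustration of the pruning step described in the abstract, the sketch below freezes a trained model, attaches a learnable gate to each hidden neuron, and optimizes the gates to preserve task loss while an L1-style penalty pushes most gates to zero; the surviving neurons form a candidate circuit. This is a common mask-learning recipe written with assumed names, not necessarily the paper's exact pruning procedure.

```python
import torch
import torch.nn as nn


class GatedReLU(nn.Module):
    """ReLU followed by a learnable per-neuron gate, used only for pruning."""

    def __init__(self, width: int):
        super().__init__()
        self.gate_logits = nn.Parameter(torch.full((width,), 3.0))  # sigmoid(3) ~ 0.95: start as "kept"

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(x) * torch.sigmoid(self.gate_logits)

# Stand-in for an already-trained (sparse) model; only the gates are optimized.
model = nn.Sequential(nn.Linear(64, 256), GatedReLU(256), nn.Linear(256, 64))
for name, param in model.named_parameters():
    param.requires_grad = "gate_logits" in name

gates = model[1].gate_logits
opt = torch.optim.Adam([gates], lr=1e-2)
x, target = torch.randn(512, 64), torch.randn(512, 64)  # stands in for the task dataset

for step in range(300):
    task_loss = (model(x) - target).pow(2).mean()   # keep behaviour on the task
    size_penalty = torch.sigmoid(gates).sum()       # roughly the number of neurons kept
    loss = task_loss + 1e-3 * size_penalty
    opt.zero_grad()
    loss.backward()
    opt.step()

kept = (torch.sigmoid(gates) > 0.5).nonzero().squeeze(-1)
print(f"{kept.numel()} neurons kept in the candidate circuit")
```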

Comments

Daniel Kokotajlo:

Nice! Do you have thoughts on how to scale this to larger circuits? Presumably circuitry like "the high-level goals and principles used to make important decisions" involves a lot more than just two neurons and two attention channels.