TL;DR
We apply the method of SelfIE/Patchscopes to explain SAE features – we give the model a prompt like “What does X mean?”, replace the residual stream on X with the decoder direction times some scale, and have it generate an explanation. We call this self-explanation.
The natural alternative is auto-interp, using a larger LLM to spot patterns in max activating examples. We show that our method is effective, and comparable with Neuronpedia’s auto-interp labels (with the caveat that Neuronpedia’s auto-interp used the comparatively weak GPT-3.5 so this is not a fully fair comparison).
We aren’t confident you should use our method over auto-interp, but we think in some situations it has advantages: no max activating dataset examples are needed, and it’s cheaper as you just run the model being studied (eg Gemma 2B) not a larger model like GPT-4.
Further, it has different errors to auto-interp, so finding and reading both may be valuable for researchers in practice.
We provide advice for using self-explanation in practice, in particular for the challenge of automatically choosing the right scale, which significantly affects explanation quality.
We also release a tool for you to work with self-explanation.
We hope the technique is useful to the community as is, but expect there are many optimizations and improvements to be made on top of what is in this post.
Introduction
This work was produced as part of the ML Alignment & Theory Scholars Program - Summer 24 Cohort, under mentorship from Neel Nanda and Arthur Conmy.
SAE features promise a flexible and extensive framework for interpretation of LLM internals. Recent work (like Scaling Monosemanticity) has shown that they are capable of capturing even high-level abstract concepts inside the model. Compared to MLP neurons, they can capture many more interesting concepts.
Unfortunately, in order to learn things with SAE features and interpret what the SAE tells us, one needs to first interpret these features on their own. The current mainstream method for their interpretation requires storing the feature’s activations on millions of tokens, filtering for the prompts that activate it the most, and looking for a pattern connecting them. This is typically done by a human, or sometimes somewhat automated with the use of larger LLMs like ChatGPT, aka auto-interp. Auto-interp is a useful and somewhat effective method, but requires an extensive amount of data and expensive closed-source language model API calls (for researchers outside scaling labs).
Recent papers like SelfIE or Patchscopes have proposed a mechanistic method of directly using the model in question to explain its own internal activations in natural language. The approach replaces an activation during the forward pass (e.g. some of the token embeddings in the prompt) with a new activation and then makes the model generate explanations using this modified prompt. It’s a variant of activation patching, with the notable differences that it generates a many-token output (rather than a single token), and that the patched-in activation need not be the same type as the activation it overrides (it can be an arbitrary vector of the same dimension). We study how this approach can be applied to SAE feature interpretation, since it:
Is potentially cheaper and does not require large closed-model inference
Can be viewed as more truthful to the source, since it uses the SAE feature vectors directly to generate explanations instead of looking at the max activating examples
How to use
Basic method
We ask the model to explain the meaning of a residual stream direction as if it literally was a word or phrase:
Prompt 1 (<user>/<assistant> replaced according to model input format): <user> What is the meaning of the word “X”? <assistant> The meaning of the word “X” is “<model completes>
We follow SelfIE most closely. We replace the residual stream at the positions corresponding to the token “X” at Layer 2 (though Layer 0 or the input embedding works as well) and let the model generate as usual. The method can be implemented in a few minutes with most mechanistic interpretability frameworks.
We provide a simple NNSight implementation in this Colab notebook: Gemma SAE self-explanation
TransformerLens implementation (slightly worse): TL Gemma self-explanation
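For concreteness, here is a minimal sketch of the basic method in TransformerLens (the notebooks above use NNSight/TransformerLens directly). The Gemma chat template, the hook point, the way the placeholder token is located, and the example scale are all illustrative assumptions; `feature_dir` stands for the SAE decoder direction of the feature being explained.

```python
# Minimal sketch of self-explanation via activation patching (assumptions noted inline).
import torch
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gemma-2b")

# Illustrative chat-formatted prompt; "X" is the placeholder whose residual stream we overwrite.
EXPLAIN_PROMPT = ('<start_of_turn>user\nWhat is the meaning of the word "X"?<end_of_turn>\n'
                  '<start_of_turn>model\nThe meaning of the word "X" is "')

def find_placeholder_positions(tokens):
    # Crude search for the placeholder "X"; adjust if your tokenizer splits it differently.
    return torch.tensor([i for i, t in enumerate(tokens[0].tolist())
                         if "X" in model.to_single_str_token(t)])

def patch_hook(direction, scale, positions):
    def hook(resid, hook):
        # Only patch on the full-prompt pass (later generation steps only see the new token).
        if resid.shape[1] > int(positions.max()):
            vec = (scale * direction / direction.norm()).to(resid.device, resid.dtype)
            resid[:, positions, :] = vec
        return resid
    return hook

def self_explain(direction, scale, prompt=EXPLAIN_PROMPT, layer=2, max_new_tokens=30):
    tokens = model.to_tokens(prompt)
    positions = find_placeholder_positions(tokens)
    hooks = [(f"blocks.{layer}.hook_resid_post", patch_hook(direction, scale, positions))]
    with model.hooks(fwd_hooks=hooks):
        out = model.generate(tokens, max_new_tokens=max_new_tokens, do_sample=False)
    return model.to_string(out[0, tokens.shape[1]:])

# e.g. explanation = self_explain(feature_dir, scale=40.0)
```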
Metrics for choosing scale and evaluating
We find that the magnitude (or scale) of the embedding we insert is a very sensitive hyperparameter. The optimal scale can also vary with residual stream norm. The method will produce fake or overly vague explanations when the scale is too low or too high:
When the scale is too low, we often encounter a wide range of contradictory explanations. Conversely, when the scale is too high, the model tends to generate overly generic explanations, such as "a word, phrase, or sequence of words used to refer to something specific," as shown in the previous example.
This issue significantly hinders the method's usefulness for feature interpretation, as users must examine generations at all scales to grasp the feature's meaning. Furthermore, it impedes any automatic scheme for generating explanations.
To address these issues, we explore several metrics that analyze the model's internal activations and output distributions during generation to heuristically find the best scale for a selected feature.
Self-Similarity
We face two main difficulties when using the method without scale optimization:
Low scales produce incorrect explanations
High scales produce generic explanations
The simplest explanation for why low scales result in incorrect interpretations is that the signal may be too weak for the model to detect. If the model does pick up the signal, we expect it to store this information somewhere in the context used to generate the explanation.
The first place to look for this context would be the last layer residual stream of the last prompt token, as it should contain much of the information the model will use to predict the answer. This leads us to our first metric, which we call self-similarity.
Self-similarity measures the cosine similarity between the 16th layer residual stream of the last prompt token and the original SAE feature.
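A sketch of this metric under the same assumptions as the earlier snippet (it reuses `model`, `EXPLAIN_PROMPT`, `find_placeholder_positions`, and `patch_hook` from above; mapping “the 16th layer” to `blocks.16.hook_resid_post` is an indexing assumption):

```python
# Sketch of self-similarity: cosine similarity between the residual stream at the last
# prompt position (probe layer 16 here) and the SAE feature direction.
import torch.nn.functional as F

def self_similarity(direction, scale, prompt=EXPLAIN_PROMPT, patch_layer=2, probe_layer=16):
    tokens = model.to_tokens(prompt)
    positions = find_placeholder_positions(tokens)
    captured = {}
    def grab(resid, hook):
        captured["resid"] = resid[0, -1].detach().float().cpu()  # last prompt token
    hooks = [
        (f"blocks.{patch_layer}.hook_resid_post", patch_hook(direction, scale, positions)),
        (f"blocks.{probe_layer}.hook_resid_post", grab),
    ]
    with model.hooks(fwd_hooks=hooks):
        model(tokens)
    return F.cosine_similarity(captured["resid"], direction.float().cpu(), dim=0).item()
```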
Self-similarity alone isn't reliable for choosing the best scale for different features. One problem is that self-similarity continues to increase beyond the optimal scale, even when the model starts giving generic explanations. This growth makes it challenging to use self-similarity as a simple way to find the optimal scale. Despite these limitations, low peak self-similarity often indicates a feature that's difficult to explain using this method.
Entropy
The second metric we explore to evaluate explanations is based on the predicted distribution of the answer's first token. Initially, this distribution is P(t_n | t_{1..n−1}). However, since we insert the SAE feature direction into one of the prompt tokens, the distribution becomes P(t_n | t_{1..n−1}, f), where f is the inserted SAE feature.
This metric uses entropy, which measures uncertainty: the entropy decreases as the mutual information between the random variable representing the feature and the first answer token increases. We calculate this metric by measuring the entropy of the predicted distribution for the answer's first token.
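As a sketch, under the same assumptions and reusing the helpers from the earlier snippets:

```python
# Sketch of the entropy metric: entropy of the predicted distribution over the first
# answer token, with the feature direction patched in.
import torch

def first_token_entropy(direction, scale, prompt=EXPLAIN_PROMPT, patch_layer=2):
    tokens = model.to_tokens(prompt)
    positions = find_placeholder_positions(tokens)
    hooks = [(f"blocks.{patch_layer}.hook_resid_post", patch_hook(direction, scale, positions))]
    with model.hooks(fwd_hooks=hooks):
        logits = model(tokens)                            # [1, seq, d_vocab]
    log_probs = torch.log_softmax(logits[0, -1], dim=-1)  # first answer token's distribution
    return -(log_probs.exp() * log_probs).sum().item()    # H(t_n | t_{1..n-1}, f)
```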
This approach provides another way to detect whether the model will produce a generic or random explanation.
Further (speculative) justification can be found in the appendix.
Examples above show that this metric tends to:
Grow with higher scales
Often have a local minimum at scales ranging from 30-50
This local minimum frequently coincides with the optimal explanation scale. The growth of the entropy metric at higher scales also helps detect cases where the model loses feature meaning and starts generating generic text.
However, entropy alone is insufficient for ranking generations because:
Its global minimum is usually at lower scales
Sometimes it's at scale 0
Composite
The entropy metric's ability to identify explanation degradation at higher scales allows us to combine it with the self-similarity metric. This composite ranks the optimal scale in the top-3 list for a much larger percentage of cases compared to either self-similarity or entropy alone.
To calculate this composite metric:
Normalize both self-similarity and entropy (due to their noticeably different magnitude ranges)
Take their weighted difference using a hyperparameter alpha
Note: While we didn't extensively tune this parameter, this approach has produced a decent baseline. We expect there may be better ways to combine these metrics, such as using peak-finding algorithms.
composite(x) = α · self-similarity(x) − (1 − α) · entropy(x)
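A minimal sketch of the combination; the min-max normalization over the scale sweep and the default α = 0.5 are assumptions, since the post does not fix them exactly:

```python
# Sketch of the composite score over a sweep of scales (higher is better).
import numpy as np

def composite_scores(self_sims, entropies, alpha=0.5):
    def minmax(x):
        x = np.asarray(x, dtype=float)
        return (x - x.min()) / (x.max() - x.min() + 1e-9)
    return alpha * minmax(self_sims) - (1 - alpha) * minmax(entropies)

# e.g. top-3 scales: scales[np.argsort(-composite_scores(sims, ents))[:3]]
```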
Other experiments
We explored two additional approaches in our experiments:
Classifier-free guidance for self-explanation: We applied this using activation patching with the normalized mean of word embeddings as our unconditional prompt. This approach didn't significantly improve results, and performance degraded at scales above 2.
Cross-entropy metric: This metric used the first answer token predicted distribution, similar to entropy calculation, but compared it with the distribution for an average token embedding. Unfortunately, it behaved similarly to entropy in most cases and wasn't particularly useful.
Evaluation
Our evaluation process consisted of two main steps to measure agreement between our method and auto-interp, the current standard for SAE feature interpretation.
Step 1: Automated Evaluation
We used half of the Gemma 2B layer 6 residual stream SAE features from Neuronpedia. For each feature, we did the following (sketched in code after the list):
Generated 32 explanations at scales from 0 to 200
Removed those with low maximum self-similarity values
Used Llama 70B to score agreement with auto-interp explanations
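Roughly how the per-feature sweep fits together, reusing the earlier sketches (the 32-point scale grid matches the description above; the Llama 70B scoring step is not shown, and the cutoff is the rough self-similarity threshold quoted later in the post):

```python
# Sketch of the per-feature generation + filtering pipeline (LLM judging not included).
import numpy as np

scales = np.linspace(0, 200, 32)

def explain_feature(direction, min_peak_self_sim=0.01, top_k=3):
    sims = [self_similarity(direction, float(s)) for s in scales]
    if max(sims) < min_peak_self_sim:            # drop features with low max self-similarity
        return None
    ents = [first_token_entropy(direction, float(s)) for s in scales]
    best = np.argsort(-composite_scores(sims, ents))[:top_k]
    return [(float(scales[i]), self_explain(direction, float(scales[i]))) for i in best]
```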
We tried two setups:
All 32 explanations: achieved about 51% accuracy
Top-3 explanations by composite metric: reached approximately 54% accuracy
Step 2: Manual Evaluation
We randomly sampled 100 features where the explanations disagreed in the first setup and manually evaluated them.
Comparison Challenges:
Comparing auto-interp and self-explanation is tricky due to their different approaches:
Auto-interp examines maximum activating examples to extract patterns and summarize them
Self-explanation prompts the model to extract "meaning" from the SAE feature direction
This difference can lead to self-explanation capturing the meaning of the concept a feature represents while struggling to present the concept itself. The "compile" feature (11866) serves as a good example of this distinction.
In the "compile" feature examples we can notice a common pattern found in many studied SAE features. The feature has all of its strongest activations specifically on the word "compile", while weakly activating on some synonyms. Auto-interp accurately identifies this as a "compile" feature, while self-explanation conveys the broader meaning of the word. This discrepancy led us to conduct two separate manual evaluations, each with a different level of interpretation flexibility for self-explanation results.
To address this issue, we explored additional explanation generation using a different prompt style, which we'll discuss in the next section.
Evaluation Results:
More Forgiving Setup (out of 100 explanation pairs):
Self-explanation is correct, auto-interp is not: 33
Both methods are correct: 30 (contradicting Llama 70B's judgment)
Both methods are incorrect: 15
Feature is difficult to interpret: 8
auto-interp is correct, self-explanation is not: 9
Both are correct, but correct answer not in top 3: 5
Stricter Setup (the same 100 pairs):
Both methods are correct: 29
Auto-interp is correct, self-explanation is not: 21
Self-explanation is correct, auto-interp is not: 20
Both methods are incorrect: 15
Both are correct, but correct answer is not in top 3: 10
Feature is difficult to interpret: 5
During manual evaluation, we noticed that auto-interp explanations from neuronpedia often seemed to use only a limited number of maximum activations. The authors later confirmed that these explanations were created using a simplified auto-interp setup that only considered a few maximum activating examples. They also used just a GPT-3.5 model to analyze them.
This observation highlights an advantage of our method: it doesn't require an expensive model to function effectively. The evaluation results suggest that self-explanation for SAE feature explanation may be at least as accurate as auto-interp, if not more so. Moreover, our method provides a more direct way to extract feature meaning while covering the full range of its activations simultaneously.
Limitations and improvements
Recovering the activating token
As discussed above, the self-explanation method still has several limitations. The most prominent issue is that the model currently struggles to produce the exact activating token, rather than just its meaning, in the explanation. This becomes very apparent when the explained feature does not correspond to some meaning, but activates mostly on grammatical patterns (e.g. a single-token feature, or a ‘word with “j” in it’ feature). A good example of such a feature is the 14054 “Char” feature that activates just on words starting with “Char”.
While trying to explain this feature, the model sometimes generates explanations related to some word starting with “char”, for example “a give or gift" (charity) or “something that is beautiful, pleasant, or charming, especially”. However, it is practically impossible to determine from these explanations alone that this is actually just a “char” feature.
To handle this issue, we additionally experimented with prompts similar to:
<user> Repeat "X" four times exactly as it is written. <assistant> \n1. "X"\n2. "X"\n3. "X"\n4. “<model completes>
Prompts like this aim to make the model repeat the tokens that activate the feature, without trying to uncover the actual meaning behind the feature. This method does work to some extent. For example, it gives these explanations for the “Char” feature at different scales:
The metrics in this case look like this:
This prompt style allows us to generate complementary feature explanations to handle the cases when the feature represents some token rather than the meaning behind it. Our scale optimization techniques also work with this form of the prompt, and usually show a slightly higher self-similarity value, although the self-similarity charts are often similar for both prompts. This means that this method will not help in cases when the self-explanation fails due to low self-similarity.
Failure detection
While self-explanation is effective for many features, it doesn't perfectly explain every given feature. In some cases, it fails completely, though most of these instances were challenging to interpret even for the authors. Here's an overview of our current failure detection methods and their limitations:
Maximum Self-Similarity Thresholding
This is our primary method for detecting failing cases. It effectively identifies many model mistakes but has both false positives and false negatives. For example:
False Positive: Feature L12/2060 (which we explain as "a reference from 'the first' to 'the second'") has low self-similarity, yet both prompt styles capture some of its meaning.
False Negative: The "Sara"/"trace" 8912 feature has high self-similarity, but both prompts fail to interpret it correctly.
"Repeat" Prompt Failure Detection
Failures in the "repeat" prompt are generally easier to identify:
Most failures result in a " " or "repeat" prediction, with the latter likely copied from the prompt.
Complete failures typically produce " " or "repeat" across most scales, with disconnected tokens on the few remaining scales.
Layer-Specific Thresholds
An additional complication is that features from different layers seem to have different average self-similarity scores. This means:
Self-similarity thresholds need to be optimized separately for different layer SAEs and probing layers.
This complexity makes the use of self-similarity thresholding more challenging.
However, a self-similarity score higher than ~0.01 generally indicates a successful explanation generation.
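As a tiny sketch of what this thresholding looks like in practice (the 0.01 value is the rough figure above; the per-layer entries are hypothetical placeholders you would tune per SAE and probe layer):

```python
# Hedged sketch of peak self-similarity thresholding with per-layer cutoffs.
LAYER_THRESHOLDS = {6: 0.01, 12: 0.01}   # hypothetical per-SAE-layer values

def likely_explained(self_sims, sae_layer):
    return max(self_sims) > LAYER_THRESHOLDS.get(sae_layer, 0.01)
```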
While our current failure detection methods are useful, they have limitations. Improving these methods to reduce false positives and negatives, and developing more robust layer-specific thresholds, remain areas for future refinement in the self-explanation approach.
Prior work
As discussed in the introduction, auto-interp is a key method for interpreting the linear features in Sparse Autoencoders. The state-of-the-art auto-interp approach requires extensive large-model inference, so in our current scope we only use a simpler version already available at Neuronpedia for comparison.
Our method is very similar to SelfIE, Patchscopes and mwatkins’s earlier explanation method, but we apply these techniques to SAE features. We choose to focus on the explanation aspect and evaluate self-explanation as an alternative to auto-interp. We discover the importance of scale and develop metrics for tuning it and discovering when explanation does not work. Our entropy metric is similar to Surprisal from Patchscopes, but we apply it in a different context. We also apply the method to mass-explaining SAE features and discover that it can produce explanations for abstract features.
Examples
Gemma 2B
Random simple features
L12/2079, interjection feature
L12/2086, “phone” feature
L6/4075, “inf” feature
More complex features
L12/5373, same word repetition feature
L12/8361, female + male "romantic" pair feature
L12/5324, pair of names feature
L12/330, analogy/connected entities feature
L12/3079, Spanish language feature (possibly connected to some particular concept)
L12/12017, opposites feature
Phi-3 Mini
We trained our own Phi-3 Mini SAEs using the setup discussed in a separate post. Self-explanation was also able to explain Phi-3 Mini features, although we did not do thorough scale tuning. Some of the interesting features are presented below.
Random features
R5/L20/21147, opening bracket + math feature
R5/L20/247, “nth” feature
R5/L20/22700, “For … to … “ feature
Refusal features
R6/L16/39432
Acknowledgements
This work was produced during the research sprint of Neel Nanda’s MATS training program. We are grateful to Google for providing us with computing resources through the TPU Research Cloud.
Appendix
Entropy justification
Epistemic status: highly speculative; authors think the theoretical argument is correct but preliminary experiments show this argument is weak and you can safely ignore it.
Another way to detect whether the model is going to write a generic or random explanation is to look at the predicted distribution of the answer’s first token, P(t_n | t_{1..n−1}). Since we also insert the SAE feature direction in one of the prompt tokens, this distribution becomes P(t_n | t_{1..n−1}, f), where f is the SAE feature.
In the case where the model is going to output the correct explanation of f, we expect the mutual information of t_n and f to be non-zero. On the contrary, if the model ignores f and tries to generate a random or generic explanation, this mutual information should be closer to zero.
If we have a set of features for which a particular scale works well, we can assume a uniform prior on features and calculate the entropy H(t_n | t_{1..n−1}), unconditional on f. Then, the mutual information between the set of features f and t_n will be reflected in the conditional entropy H(t_n | t_{1..n−1}, f), and this conditional entropy is expected to be lower for sets of features where the model decides to output the relevant explanation. If we additionally assume that conditional entropies have low variance for those sets of features, then using the conditional entropy is a reasonable way to predict mutual information for the set of features at this scale.
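Written out, the argument uses the standard identity relating mutual information and conditional entropy; this is only a restatement of the paragraph above, not an additional claim:

```latex
% Writing c = t_{1..n-1} for the prompt context and f for the inserted feature:
I(t_n ; f \mid c) \;=\; H(t_n \mid c) \;-\; H(t_n \mid c, f)
% At a fixed scale, H(t_n | c) does not depend on which feature was inserted, so a lower
% measured conditional entropy H(t_n | c, f) corresponds to higher mutual information.
```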