AI ALIGNMENT FORUM

Lennart Buerger

I studied Physics in Heidelberg and Oxford and am now doing research on AI Alignment and LLM Interpretability, currently as part of my Master's thesis in Fred Hamprecht's SciAI Lab (Heidelberg, Germany). If you want to discuss something, have questions, or would like to collaborate, feel free to drop me a message!

Comments
Truth is Universal: Robust Detection of Lies in LLMs
Lennart Buerger · 1y* · 10

This is an excellent question! Indeed, we cannot rule out that tG is a linear combination or Boolean function of other features, since we cannot investigate every possible distribution shift. However, we showed in the paper that tG generalizes robustly under several significant distribution shifts. Specifically, tG is learned from a limited training set consisting of simple affirmative and negated statements on a restricted number of topics, all ending with a "." token. Despite this limited training data, tG generalizes reasonably well to (i) unseen topics, (ii) unseen statement types, (iii) real-world scenarios, and (iv) statements ending with other tokens such as "!" or ".'". I think the real-world scenarios (iii) constitute a particularly significant distribution shift. However, I agree with you that tests on many more distribution shifts are needed to be highly confident that tG is indeed an elementary feature (if such a thing even exists).
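
To make the kind of check I have in mind concrete, here is a minimal sketch of such a distribution-shift test: fit a linear probe on the training statements, take its weight vector as a stand-in for tG, and evaluate it on held-out shifted data. The file names are hypothetical placeholders, and the plain logistic-regression fit is only an illustration; the fitting procedure in the paper differs in detail.

```python
# Illustrative sketch only: a linear probe as a stand-in for the
# truth direction tG, evaluated under a distribution shift.
# All file names below are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Activations at the final token of simple affirmative/negated
# training statements, with 0/1 truth labels.
train_acts = np.load("train_activations.npy")    # (n_train, d_model)
train_labels = np.load("train_labels.npy")       # (n_train,)

probe = LogisticRegression(max_iter=1000)
probe.fit(train_acts, train_labels)
t_G = probe.coef_[0] / np.linalg.norm(probe.coef_[0])  # candidate direction

# Shifted data: unseen topics, unseen statement types, or
# real-world scenarios.
shift_acts = np.load("shifted_activations.npy")  # (n_shift, d_model)
shift_labels = np.load("shifted_labels.npy")     # (n_shift,)

proj = shift_acts @ t_G  # projection of shifted activations onto tG
# Median training projection as threshold (assumes balanced classes).
thresh = np.median(train_acts @ t_G)
# If tG is a robust feature, accuracy along it should survive the shift.
acc = np.mean((proj > thresh) == shift_labels)
print(f"OOD accuracy along tG: {acc:.3f}")
```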

JumpReLU SAEs + Early Access to Gemma 2 SAEs
Lennart Buerger · 1y · 10

Nice work! I was wondering what context length you used when extracting the LLM activations to train the SAE. I could not find it in the paper, though I may have missed it. I know that OpenAI used a context length of 64 tokens in all their experiments, which is probably not sufficient to elicit many interesting features. Do you use a variable context length, or also a fixed value?
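
To make the question concrete, here is a rough sketch of the extraction step I mean. The model, layer index, and 64-token context are placeholders (the 64 comes from OpenAI's setup, not from your paper):

```python
# Illustrative sketch: extracting hidden-state activations at a fixed
# context length as SAE training inputs. Model and layer are
# placeholders, not the configuration from the paper.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

CTX_LEN = 64  # OpenAI's fixed context length; likely too short for many features

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2", output_hidden_states=True)

text = "Some document from the SAE training corpus ..."
ids = tok(text, return_tensors="pt", truncation=True, max_length=CTX_LEN)

with torch.no_grad():
    out = model(**ids)

# Hidden states from one layer; each token position yields one SAE
# training example of dimension d_model.
acts = out.hidden_states[6][0]  # (seq_len <= CTX_LEN, d_model)
print(acts.shape)
```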

Posts

Truth is Universal: Robust Detection of Lies in LLMs · 1y · 14 karma · 2 comments