A Rocket–Interpretability Analogy — AI Alignment Forum