How Can Interpretability Researchers Help AGI Go Well?