Towards Causal Foundations of Safe AGI — AI Alignment Forum