Ideas for studies on AGI risk — AI Alignment Forum