JanBrauner's Comments

Disentangling arguments for the importance of AI safety

I struggle to understand the difference between #2 and #3. The prosaic AI alignment problem only exists because we don't know how to make an agent that tries to do what we want. Would you say that #3 is a concrete scenario for how #2 could lead to a catastrophe?