I frequently see arguments of this form:

> We have two choices:
>
> 1. accept the current rate of AI progress and a very large risk[1] of existential catastrophe,
>
> or
>
> 2. slow things down, greatly reducing the risk of existential catastrophe, in exchange for a...