Alternate titles: Deconfusing Take-off; Taboo “Fast” and “Slow” Take-off.

This post has two main purposes: 

  1. To suggest that we taboo the terms "fast" and "slow" take-off in favour of "gradual" and "sudden" take-off, respectively (in order to avoid confusion with “short” and “long” timelines). I also encourage people to discuss take-off "dynamics" instead of take-off "speeds". 
  2. To highlight a point made by Buck Shlegeris at EAG London and by Paul Christiano here: For every choice of "AGI difficulty", gradual take-off implies shorter timelines. 

Although these points have been made before, I expect some people will be deconfused by this post.
 

To clarify terms:

Short vs long timelines is the question of when (i.e. by which year) we will develop AGI/TAI/Superintelligence.

Paul Christiano operationalizes a gradual take-off as: 

There will be a complete 4 year interval in which world output doubles, before the first 1 year interval in which world output doubles. (Similarly, we’ll see an 8 year doubling before a 2 year doubling, etc.)

Sudden take-off is just the negation of this statement. 

Instead of considering economic output, we can also characterize take-off in terms of AI capabilities. For some measure of AI capabilities, we could say a gradual take-off is a situation in which:

There will be a complete 4 year interval in which AI capabilities double, before the first 1 year interval in which AI capabilities double. (Similarly, we’ll see an 8 year doubling before a 2 year doubling, etc.)
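
To make this operationalization concrete, here is a minimal Python sketch that tests whether a yearly time series of world output (or of some capabilities measure) counts as gradual in this sense. The function names and toy trajectories are my own illustrative assumptions:

```python
# A minimal sketch of the doubling-interval test above, applied to a yearly
# time series of world output (or an AI-capabilities measure). Function
# names and toy trajectories are illustrative assumptions, not from the post.

def first_doubling_end(series: list[float], window: int) -> int | None:
    """Index at which the first complete `window`-year doubling ends (None if never)."""
    for start in range(len(series) - window):
        if series[start + window] >= 2 * series[start]:
            return start + window
    return None

def is_gradual(series: list[float]) -> bool:
    """Gradual take-off: a complete 4-year doubling finishes before the first 1-year doubling."""
    four_year = first_doubling_end(series, 4)
    one_year = first_doubling_end(series, 1)
    if one_year is None:
        return four_year is not None  # doublings happen, but never explosively fast
    return four_year is not None and four_year < one_year

steady = [1.2 ** t for t in range(30)]               # ~20%/yr growth: 4-year doublings, never 1-year ones
abrupt = [1.0] * 20 + [2.0 ** t for t in range(10)]  # flat for decades, then yearly doublings

print(is_gradual(steady))  # True  -> gradual take-off
print(is_gradual(abrupt))  # False -> sudden take-off
```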

Daniel Kokotajlo also argues against GDP as a metric for AI timelines and take-off dynamics, pointing to other factors we actually care about (e.g. warning shots and multipolarity).

Take-off is therefore not a question about timelines – it’s a question about how capable (or economically useful) AI systems are before they reach some threshold level sufficient for explosive growth (e.g. the level of intelligence needed for recursive self-improvement, or some discontinuous phase-change in capabilities). 

Here is a graph of gradual vs sudden take-off (for completeness I’ve included a “no take-off” curve):

As you can see, in the gradual take-off scenario the threshold for explosive growth is crossed at an earlier time. 
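
For readers who want to reproduce the picture, here is a rough matplotlib sketch. The functional forms and constants are arbitrary choices of mine that merely have the right shapes:

```python
# Rough reproduction of the graph: three stylised capability trajectories.
# The functional forms and constants are arbitrary; only the shapes matter.
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 10, 500)
threshold = 5.0  # capability level sufficient for explosive growth (arbitrary units)

gradual = 0.05 * np.exp(0.77 * t)                                    # smooth acceleration; crosses early
sudden = np.where(t < 9, 0.4 + 0.02 * t, 0.58 + 50 * (t - 9) ** 2)   # near-flat, then a late spike
no_takeoff = 1.8 * np.log1p(t)                                       # levels off below the threshold

plt.plot(t, gradual, label="gradual take-off")
plt.plot(t, sudden, label="sudden take-off")
plt.plot(t, no_takeoff, label="no take-off")
plt.axhline(threshold, linestyle="--", color="grey", label="explosive-growth threshold")
plt.xlabel("time")
plt.ylabel("AI capabilities")
plt.legend()
plt.show()
```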

In a gradual take-off world, AI systems become increasingly capable and useful before reaching the threshold for explosive growth. In this world, pre-AGI systems can fully automate or greatly speed up many tasks. They provide lots of economic value and transform the world in many ways. One effect of these systems is to speed up AI research itself – implying we get to AGI sooner than in a sudden take-off world (conditional on a given level of “AGI difficulty”). Furthermore, the increased value and usefulness of these systems attracts more funding, talent, and compute into AI, leading to further improvements (leading to more funding/talent/compute…). In addition, AI replaces an increasing number of jobs, freeing up more people to work on AI (as researchers, engineers, overseers, data-generators, etc.). The interplay of all these factors is a virtuous circle that ultimately produces an increasing rate of AI capability growth. 

In a sudden take-off world, AIs are not very valuable until some threshold of capability is reached (at which point something like recursive self-improvement kicks in). Before that point they haven’t changed the world much, and in particular haven’t sped up AI research, so it probably takes us longer to get to AGI. 

In some very loose sense, if the y-axis measures how fast AI research is improving, the area under the take-off curve (cut off at the point where the curve crosses the threshold) has to be similar in both scenarios: rate of AI research improvement × time (the x-axis) = total progress towards the AGI/explosive-growth/TAI threshold, which is the same in both. Hence sudden take-off, whose curve sits lower, has to wait longer until that progress is made. (Thanks to Rob Kirk for this picture.)
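
Here is a toy numerical version of that intuition (all quantities arbitrary): fix the total progress needed to reach the threshold, and integrate each scenario’s rate-of-progress curve until it accumulates that much.

```python
# Toy version of the equal-area intuition. Both worlds must accumulate the same
# total progress (the area under the rate-of-research-progress curve) before the
# threshold is reached, so the lower curve needs more time. Numbers are arbitrary.
import numpy as np

dt = 0.01
t = np.arange(0, 120, dt)
progress_needed = 100.0  # same required "area" in both worlds

gradual_rate = 1.0 + 0.5 * t     # research speeds up steadily as AI helps AI research
sudden_rate = np.ones_like(t)    # research stays at baseline speed until the threshold

def years_to_threshold(rate: np.ndarray) -> float:
    progress = np.cumsum(rate) * dt  # numerically integrate the rate curve
    return t[np.argmax(progress >= progress_needed)]

print(years_to_threshold(gradual_rate))  # ~18 years
print(years_to_threshold(sudden_rate))   # ~100 years
```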

Is it inconsistent, then, to think both that take-off will be gradual and that timelines will be long? No – people who hold this view probably do so because they think that marginal improvements in AI capabilities are hard. That belief implies both a gradual take-off and long timelines. (Thanks to Sammy Martin for this point.)

Similarly, it is not inconsistent to think we will have a sudden take-off soon. This view would stem from a belief that the threshold level of capabilities needed for explosive growth is very low – which would imply both that we hit explosive growth before AIs are useful enough to be integrated into the economy (i.e. a sudden take-off) and that we get AGI on short timelines. 

For every choice of AGI difficulty, conditioning on gradual take-off implies shorter timelines.

 

Acks. Thanks to Rob Kirk, Sammy Martin, Matt MacDermott, and Daniel Kokotajlo for helpful comments.








 
