AI Takeoff refers to “the process of an Artificial General Intelligence (AGI) going from a certain threshold of capability (often discussed as ‘human-level’) to being super-intelligent and capable enough to control the fate of civilization”.

Whether this process will be “soft” (i.e. manageably slow) or “hard” (i.e. fast) is key to determining whether AGI will be survivable, so estimating take-off speed is important for designing strategies to manage the advent of AGI. Current methods to predict this speed rely on qualitative arguments, on averaging the opinions of experts, or on very fundamental principles such as bioanchors (summarized here).

This essay makes the following points:

  • Considering limiting factors may help predict AI take-off speed.
  • The speed of the first phase of the take-off process (developing superintelligence) may not be informative about the speed of the overall process (which ends when the AGI becomes capable enough to control the fate of civilization).
  • The methodologies that we should employ to control and survive the take-off process may have nothing to do with AI research.

What are limiting factors, and why do they matter?

The rate of any process is determined by its limiting factors. One can think of a limiting factor as the slowest step in a chain of production. To illustrate this concept, imagine a population of bacteria growing in a test tube. Their growth is the product of many chemical reactions happening inside the cells. Now consider a situation in which the slowest of these reactions is the uptake of glucose from the medium. What happens if we massively speed up this one reaction, for example by adding a ton of glucose to our test tube, or by using genetic engineering to fill the bacterial membrane with glucose pumps? Will growth become infinite? No. Growth will increase until glucose uptake becomes non-limiting, which means that some other reaction down the chain is unable to process the large amounts of glucose being sent its way. If we want to predict how quickly our bacteria will grow after removing one limiting factor (glucose uptake), we must figure out which factor will dominate the rate of the process next.
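
To make this intuition concrete, here is a minimal sketch of how the throughput of a chain is set by its slowest step, and of how boosting one step only helps until another step becomes limiting. The step names and rates are made-up illustrations, not measurements.

```python
# Throughput of a sequential chain is set by its slowest step (the limiting factor).
# Step names and rates are invented for illustration.

def chain_throughput(rates):
    """Overall rate of a chain of steps, each with a maximum rate (units/hour)."""
    return min(rates.values())

steps = {
    "glucose_uptake": 10,      # currently the slowest step -> the limiting factor
    "glycolysis": 40,
    "protein_synthesis": 25,
}

print(chain_throughput(steps))   # 10: growth is limited by glucose uptake

# Massively speeding up glucose uptake does not make growth infinite:
steps["glucose_uptake"] = 1000
print(chain_throughput(steps))   # 25: protein synthesis is now the limiting factor
```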

The speed of development of superintelligent AI won’t tell us much about the whole take-off process

The full take-off process requires an AGI to become superintelligent and capable enough to control the fate of civilization. Intelligence itself is one component of this capability to control our fate, but other components are also necessary. In particular, the AGI must acquire enough control over the physical world and/or over our minds. There are good arguments suggesting that the transition from human-level AI to superintelligent AI can be very fast, but the speed of the subsequent process of power acquisition will be limited by factors other than intelligence. We can analyze this process by considering two possible scenarios:

Scenario 1: Current technology dominates

In this scenario, the AGI becomes an extremely competent player in our current world, but it does not develop any disruptive technology that fundamentally changes the rules of the game. Therefore, we can imagine what paths will be taken by a misaligned AGI that seeks to maximize its power. For example, it can attempt to seize control of automated systems via hacking, it can make successful investments and amass wealth, it can try to influence public opinion, etc. 

These possibilities are often considered in narratives about AI risk, but they should be studied more systematically and in detail by the relevant experts, in order to identify their rate-limiting factors and to design strategies that block or slow down an AGI’s exploitation of any of these power-seeking mechanisms. There seems to be little interest in the AI safety community in wargaming these specific scenarios, perhaps because of the belief that the discovery of disruptive technologies will render such analyses useless. I disagree with this belief; as I will argue in the remainder of this document, technological progress has its own rate-limiting factors, and may be too slow to contribute significantly to the take-off process.

Scenario 2: Disruptive technologies

In this scenario, the AGI develops disruptive technologies that upend the balance of power. For example, one narrative about AI takeover considers a superintelligent AI that finds a way to manufacture self-replicating killer nanobots and sends the instructions to a factory that produces the first batch. Once the first replicators exist, humanity is doomed. This narrative illustrates how a disruptive technology may cause humans to lose control of a superintelligent AI arbitrarily fast, and in extremely unpredictable ways.

Indeed, once such a disruptive technology is developed, it may speed up the take-off process beyond any possibility of control. Some critics argue that such disruptive technologies are probably not possible, but we cannot bet our future on that hope. Therefore, I will assume here that such disruptive technologies do exist, but I will argue that the rate of their development is limited by factors that are to some extent predictable.

Intelligence (defined as the ability to reason about facts and decide the best actions to achieve goals) is currently a limiting factor of technological progress. It’s likely that sufficiently advanced AIs will be able to perform most of the intellectual work of current scientists extremely fast, to the point where intelligence is no longer a limiting factor in technological progress. From that point onwards, the rate of technological progress will be determined by other factors.

The speed of acquisition of “superknowledge” will dominate the next step of the process

Scientific and technological progress requires two fundamental elements: 

  • Intelligence, understood as the ability to reason about facts and decide the best actions to achieve goals.
  • Knowledge, understood as the accumulation of true facts about the world that are needed to reach useful decisions. 

I define “superknowledge” as the accumulation of true facts about the world that would be necessary for a superintelligent AI to devise disruptive technologies.

A key question is whether our current knowledge is enough to produce radically new technologies. In general, I think the answer is no. Let’s consider the example of self-replicating killer nanobots. One may believe that a superintelligent AI could perform a very complex simulation with all the variables involved, and use its results to design the nanobots. However, this is very unlikely to work, because the universe is extremely complex and chaotic. Anything one wants to do depends critically on many unpredictable parameters. After learning all the chemistry in the world and performing initial simulations, the AI would realize that the stability of killer nanobots depends critically on, say, the exact ratio between carbon and calcium in human cell membranes. Nobody had thought of measuring this ratio with enough accuracy, so the AI must kindly ask humans to measure it before it can manufacture its killer nanobots.

Many people in the community already realize this, and several articles describe iterative cycles of AI-directed research as part of the take-off process. But I think it’s important to make explicit the following points:

  • The need to amass new knowledge through contact with the physical world applies to most scientific disciplines, but not all. Mathematics, for example, is an exception. AI development is also an exception, since most of its progress can probably be made inside computers with little contact with the external world. However, these are exceptions, not the rule.
  • The speed at which knowledge can be acquired is limited even for an infinitely intelligent AI.
  • Even if extremely dangerous and unpredictable technologies exist, amassing superknowledge is a precondition for reaching them.

Can we compute the speed of acquisition of superknowledge?

This is hard, and obtaining any serious estimate will require dedicated research. Here are two possible directions:

A conservative way of answering this question is to look at the relative effort dedicated today to theoretical vs. experimental science. Let’s say that we currently dedicate 50% of our resources to experimental work. A naïve calculation would say that superintelligent AI would allow us to compress the other 50% (the theoretical work) to essentially nothing, giving us a 2-fold speed-up. This is extremely conservative, and it neglects the fact that superintelligent AI would also improve experimental design and prevent us from performing many unnecessary experiments. Still, it may be a valid way of estimating a lower bound for the expected speed-up.
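
This lower bound follows an Amdahl’s-law-style argument: if a fraction of the research effort is experimental and cannot be compressed, the overall speed-up is bounded by the inverse of that fraction. A minimal sketch, where the 50% split and the compression factor are assumptions chosen only for illustration:

```python
# Amdahl's-law-style lower bound on the research speed-up from superintelligent AI,
# assuming only the theoretical fraction of the work can be compressed.
# The numbers are illustrative assumptions, not measurements.

def research_speedup(experimental_fraction, theory_compression):
    """Overall speed-up when theoretical work is accelerated by `theory_compression`
    while experimental work proceeds at its current pace."""
    theoretical_fraction = 1.0 - experimental_fraction
    new_time = experimental_fraction + theoretical_fraction / theory_compression
    return 1.0 / new_time

print(research_speedup(0.5, 1e9))   # ~2.0: the naive 2-fold speed-up
print(research_speedup(0.3, 1e9))   # ~3.3: the bound if experiments are only 30% of the effort
```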

A more accurate picture might be obtained by looking at the past. We can take a given field (say, Biochemistry) and make a list of the minimal set of experiments that would have been required to develop it. In an extreme version, we can use the full benefit of hindsight: take only the experiments needed to prove the axioms of the field, and count the minimum number of experimental cycles needed. This approach neglects the fact that alternative hypotheses had to be discarded before finding the true ones, but it would at least give us an upper bound on the speed at which a superintelligence could have developed the field.
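
One way to make “minimum number of experimental cycles” precise is to treat the required experiments as a dependency graph: experiments that do not depend on each other can run in parallel within a single cycle, so the minimum number of cycles equals the length of the longest dependency chain. Below is a minimal sketch on a toy graph; the experiment names and dependencies are purely hypothetical.

```python
# Minimum number of experimental cycles, assuming independent experiments can run
# in parallel within one cycle. The dependency graph is a made-up toy example.
from functools import lru_cache

dependencies = {
    "measure_membrane_composition": [],
    "characterize_enzyme_kinetics": [],
    "synthesize_candidate_molecule": ["measure_membrane_composition"],
    "test_stability_in_vivo": ["synthesize_candidate_molecule",
                               "characterize_enzyme_kinetics"],
}

@lru_cache(maxsize=None)
def earliest_cycle(experiment):
    """Earliest cycle (1-indexed) in which `experiment` can be completed."""
    return 1 + max((earliest_cycle(dep) for dep in dependencies[experiment]), default=0)

print(max(earliest_cycle(e) for e in dependencies))  # 3 cycles for this toy graph
```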

This type of analysis will only provide approximate estimates, but it may be a good research direction to help us envision a future with superintelligent AI. 

Controlling the take-off

An important characteristic of the acquisition of knowledge is its observability. The development of superintelligence is largely opaque, as it happens inside computer servers running models that surpass human comprehension. In contrast, the acquisition of new knowledge happens in the physical world, and is therefore more observable and controllable.

Therefore, policies to control the take-off have a better chance of success if they focus on the acquisition of superknowledge rather than on the acquisition of superintelligence. These policies could be enforced early; for example, funding agencies could mandate that all AI-directed research be designed to be human-interpretable, and that it actually be interpreted by humans before the following experimental cycle.

Mixtures of the two scenarios

Reality may be a mixture of both scenarios. For example, there may be an initial period without disruptive technologies, in which the rate of progress is limited both by the ability of the AGI to gather resources, and by the rate of knowledge acquisition.

Conclusions

  • Acquisition of superintelligence will happen inside computers and will not be easily observable. It is well described in the literature by relevant experts. It may be fast or slow, but arguments that it can be very fast are strong enough to be taken seriously.
  • The speed of acquisition of superintelligence will tell us little about the speed of the rest of the process, which will be determined by other limiting factors.
  • In the absence of disruptive technologies, the rate of power acquisition by an AI can probably be estimated by wargaming foreseeable scenarios. Experts in relevant disciplines (Economics, public safety, etc.) should be involved in this exercise.
  • Disruptive technologies would speed up the acquisition of power, but their development will be delayed by the need to first acquire superknowledge.
  • The rate of acquisition of superknowledge may be estimated, and it may be possible to slow it down given its partial observability.
