Stephen Bennett

I'm a bit confused by your response. First, the meat of the argument:

You are implicitly comparing two models, call them $M_{\text{fast}}$ and $M_{\text{slow}}$, which make predictions about the world. Each model makes several claims, including claims about the shape of the function governing AI improvement and about how that shape comes about[1]. So far as I can tell, a typical central claim of people who endorse $M_{\text{fast}}$ is that AIs working on their own improvement will allow their capabilities to grow hyper-exponentially. Those who endorse $M_{\text{slow}}$ don't seem to dispute that self-improvement will occur, but expect it to be par for the course for a new technology and to remain well modeled by exponential growth.
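
(As an aside, one way to make the exponential vs. hyper-exponential distinction precise, in my own framing rather than anything you've committed to: exponential growth solves $\dot{x} = kx$, while a hyper-exponential model could be $\dot{x} = kx^{1+\epsilon}$ for some $\epsilon > 0$:

$$\dot{x} = kx \;\Rightarrow\; x(t) = x_0 e^{kt} \quad \text{(finite for all } t\text{)}, \qquad \dot{x} = kx^{1+\epsilon} \;\Rightarrow\; x(t) = \bigl(x_0^{-\epsilon} - \epsilon k t\bigr)^{-1/\epsilon},$$

where the latter diverges at the finite time $t^{*} = x_0^{-\epsilon}/(\epsilon k)$. Finite-time divergence is the formal version of "takeoff" in this framing.)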

So, it seems to me that the existence of recursive self-improvement without an observed fast takeoff is evidence against $M_{\text{fast}}$. I presume you disagree, but I don't see how from a model selection framework. $M_{\text{fast}}$ predicts either the data we observe now or a fast takeoff, whereas $M_{\text{slow}}$ predicts only the exponential growth we are currently observing (do you disagree that we're in a time of exponential growth?). By the laws of probability, $M_{\text{slow}}$ therefore places higher probability on the current data than $M_{\text{fast}}$ does. By Bayes' rule, $M_{\text{slow}}$ is favored by the existing evidence (i.e. the Bayes factor indicates that you should update towards $M_{\text{slow}}$). Now, you might have a strong enough prior that you still favor $M_{\text{fast}}$, but if your model places less probability mass on the current data than another model does, you should update towards that other model.
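
To make that update concrete, here is a toy version of the calculation; the likelihood numbers are purely illustrative assumptions on my part, not anything you've stated. Let $D$ be "recursive self-improvement exists, but capability growth to date still looks exponential." If $M_{\text{fast}}$ spreads its probability over $D$ and "a fast takeoff has already happened," say half and half, while $M_{\text{slow}}$ puts essentially all of its probability on $D$, then

$$\frac{P(M_{\text{slow}} \mid D)}{P(M_{\text{fast}} \mid D)} = \underbrace{\frac{P(D \mid M_{\text{slow}})}{P(D \mid M_{\text{fast}})}}_{\text{Bayes factor}} \cdot \frac{P(M_{\text{slow}})}{P(M_{\text{fast}})} \approx \frac{1.0}{0.5} \cdot \frac{P(M_{\text{slow}})}{P(M_{\text{fast}})},$$

so whatever your prior odds were, they shift towards $M_{\text{slow}}$ by a factor of about 2. A strong enough prior on $M_{\text{fast}}$ can survive that, but the direction of the update is fixed.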

Second (and lastly), a quibble:

Yitz's response uses the terms hard/soft takeoff; was that edited? Otherwise your argument against "continuous" (as opposed to slow or soft) comes off as a non sequitur: you're battling for terminological ground that isn't even under contention.

  1. ^

    Different people will have different versions of each of these models. Some may even oscillate between them as is convenient for argumentative purposes (à la motte and bailey).