All of KatjaGrace's Comments + Replies

I wrote an AI Impacts page summary of the situation as I understand it. If anyone feels like looking, I'm interested in corrections/suggestions (either here or in the AI Impacts feedback box).  

Looks good to me :)

A few quick thoughts on reasons for confusion:

I think maybe one thing going on is that I already took the coherence arguments to apply only in getting you from weakly having goals to strongly having goals, so since you were arguing against their applicability, I thought you were talking about the step from weaker to stronger goal direction. (I’m not sure what arguments people use to get from 1 to 2 though, so maybe you are right that it is also something to do with coherence, at least implicitly.)

It also seems natural to think of ‘weakly has goals’ as some...

Rohin Shah · 3y
Thanks, that's helpful. I'll think about how to clarify this in the original post.

Thanks. Let me check if I understand you correctly:

You think I take the original argument to be arguing from ‘has goals’ to ‘has goals’, essentially, and agree that that holds, but don’t find it very interesting/relevant.

What you disagree with is an argument from ‘anything smart’ to ‘has goals’, which seems to be what is needed for the AI risk argument to apply to any superintelligent agent.

Is that right?

If so, I think it’s helpful to distinguish between ‘weakly has goals’ and ‘strongly has goals’:

  1. Weakly has goals: ‘has some sort of drive toward something,
...

Yes, that's basically right.

You think I take the original argument to be arguing from ‘has goals’ to ‘has goals’, essentially, and agree that that holds, but don’t find it very interesting/relevant.

Well, I do think it is an interesting/relevant argument (because as you say it explains how you get from "weakly has goals" to "strongly has goals"). I just wanted to correct the misconception about what I was arguing against, and I wanted to highlight the "intelligent" --> "weakly has goals" step as a relatively weak step in our current arguments. (In my ori...

I meant: conditional on growth accelerating, why expect the acceleration to be attributable to a small number of technologies, given that (if I understand correctly) previous accelerations were not like that?

Daniel Kokotajlo · 3y
Another, somewhat different reply occurs to me: plausibly the reason why growth rates have been roughly steady for the past sixty years or so is that world population growth has slowed down (thanks to education, birth control, etc.). On this view, there's technological growth and there's population growth; combined they equal GWP growth, and combined they've been on a hyperbolic trajectory for most of history, but recently are merely on an exponential trajectory thanks to faltering population growth.

If this story is right, then in order for GWP growth to accelerate again, we either need to boost population growth, or boost technological growth even more than usual, i.e. even more than was the case in previous periods of GWP acceleration like the industrial revolution or the agricultural revolution or, well, literally any period. So I'd conclude that it's unlikely for GWP growth to accelerate again, absent specific reasons to think this time will be different. AGI is one such reason. The other answers people are giving are other reasons (though I don't find them plausible).
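The hyperbolic-vs-exponential distinction here can be made concrete with a toy simulation (my own illustrative sketch, not Daniel's model; the rate constants are arbitrary): under exponential growth, doublings arrive at a constant interval, while under hyperbolic growth (growth rate rising with size), the intervals between doublings shrink toward a finite-time singularity.

```python
# Toy illustration: constant doubling times (exponential) vs shrinking
# doubling times (hyperbolic). Rate constants are arbitrary choices.

def simulate(growth_rate, y0=1.0, steps=190, dt=0.01):
    """Euler-integrate dY/dt = growth_rate(Y) * Y."""
    ys = [y0]
    for _ in range(steps):
        y = ys[-1]
        ys.append(y + growth_rate(y) * y * dt)
    return ys

def doubling_steps(ys):
    """Steps at which the series first reaches 2x, 4x, 8x, ... its start."""
    out, target = [], ys[0] * 2
    for t, y in enumerate(ys):
        if y >= target:
            out.append(t)
            target *= 2
    return out

exponential = simulate(lambda y: 1.0)      # constant growth rate
hyperbolic = simulate(lambda y: 0.5 * y)   # growth rate proportional to size

print(doubling_steps(exponential))  # roughly evenly spaced doublings
print(doubling_steps(hyperbolic))   # intervals between doublings shrink
```

On this toy model, "merely exponential" growth shows up as evenly spaced doublings, while the hyperbolic series doubles faster and faster, which is the pattern long-run GWP is claimed to have followed historically.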
Daniel Kokotajlo · 3y
Ah, OK. Good point. I think when it accelerated previously, it was the result of a small number of technologies, so long as we are careful to define our technologies broadly enough. For example, we can say the acceleration due to the agricultural revolution was due to agriculture + a few other things maybe. And we can say the acceleration due to the industrial revolution was due to engines + international trade + mass-production methods + scientific institutions + capitalist institutions + a few other things I'm forgetting. I'm looking for something similar here; e.g. Abram's answer "We automate everything, but without using AGI" is acceptable to me, even though it's only a single technology if we define our tech extremely broadly.

If growth rates have been gradually increasing throughout most of history, I don't follow why, if growth goes back to accelerating, you would expect a single technology to cause it to grow much faster.

Daniel Kokotajlo · 3y
I currently don't expect it to grow much faster, at least not until we have AGI. Is your question why I think AGI would make it grow much faster? Roughly, my answer is "Because singularity." But if you think not even AGI would make it grow much faster -- which in this context means >9% per year -- then that's all the more reason to think "Economy doubles in 4 years before the first 1-year doubling" is a bad metric for what we care about. (To clarify though, I don't intend for the question to be only answerable by single technologies. Answers can list several technologies, e.g. all the ones on my list.)
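For reference, the arithmetic behind these thresholds (my own back-of-the-envelope check, not from the comment): an annual growth rate r implies a doubling time of ln(2)/ln(1+r) years, so the two metrics in play can be compared directly.

```python
import math

def doubling_time_years(annual_growth):
    """Years to double at a constant annual growth rate."""
    return math.log(2) / math.log(1 + annual_growth)

# A 4-year doubling corresponds to roughly 19% annual growth...
print(round(2 ** (1 / 4) - 1, 3))
# ...while growth at 9%/year still takes about 8 years to double.
print(round(doubling_time_years(0.09), 1))
```

So ">9% per year" is a noticeably weaker threshold than "economy doubles in 4 years", which is part of why the choice of metric matters here.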

1) Even if it counts as a DSA, I claim that it is not very interesting in the context of AI. DSAs of something already almost as large as the world are commonplace. For instance, in the extreme, the world minus any particular person could take over the world if they wanted to. The concern with AI is that an initially tiny entity might take over the world.

2) My important point is rather that your '30 year' number is specific to the starting size of the thing, and not just a general number for getting a DSA. In particular, it does not apply to sma...

Daniel Kokotajlo · 4y
I like your point #2; I should think more about how the 30-year number changes with size. Obviously it's smaller for bigger entities and bigger for smaller entities, but how much? E.g. if we teleported 2020 Estonia back into 1920, would it be able to take over the world? Probably. What about 1970 though? Less clear.

Military power isn't what I'm getting at either, at least not if measured in the way that would result in AI companies having little of it. Cortez had, maybe, 1/10,000th of the military power of Mexico when he got started, at least if you measure in ways like "what would happen if X fought Y." Probably 1/10,000th of Mexico's military could have defeated Cortez' initial band.

If we try to model Cortez' takeover as him having more of some metric than all of Mexico had, then presumably Spain had several orders of magnitude more of that metric than Cortez did, and Western Europe as a whole had at least an order of magnitude more than that. So Western Europe had *many* orders of magnitude more of this stuff, whatever it is, than Mexico, even though Mexico had a similar population and GDP. They must have been growing much faster than Mexico for quite some time to build up such a lead--and this was before the industrial revolution!

More generally, the metric used for predicting takeovers seems to be the sort of thing that can grow and/or shrink orders of magnitude very quickly, as illustrated by the various cases throughout history of small groups from backwater regions taking over rich empires. (Warning: I'm pulling these claims out of my ass; I'm not a historian, I might be totally wrong. I should look up these numbers.)
Matthew Barnett · 4y
This is a concern with AI, but why is it *the* concern? If, e.g., the United States could take over the world because it had some AI-enabled growth, why would that not be a big deal? I'm imagining you saying, "It's not unique to AI"--but why does it need to be unique? If AI is the root cause of something on the order of Britain colonizing the world in the 19th century, this still seems like it could be concerning if there weren't any good governing principles established beforehand.

The time it takes to get a DSA by growing bigger depends on how big you are to begin with. If I understand, you take your 30 years from considering the largest countries, which are not far from being the size of the world, and then use it when talking about AI projects that are much smaller (e.g. a billion dollars a year suggests about 1/100,000 of the world). If you start from a situation of an AI project being, say, three doublings from taking over the world, then most of the question of how it came to have a DSA seems to be the question of how it grew the...
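The dependence on starting size can be made concrete (a rough illustration of mine, using the shares mentioned above): the number of doublings needed to reach world scale is just log2 of the inverse of the initial share, so a small project and an almost-world-sized entity face very different growth problems.

```python
import math

def doublings_to_world_scale(initial_share):
    """Doublings needed to grow from initial_share of the world to all of it."""
    return math.log2(1 / initial_share)

# A project at 1/100,000 of the world needs ~16.6 doublings...
print(round(doublings_to_world_scale(1 / 100_000), 1))
# ...whereas an entity already at 1/8 of the world needs only 3.
print(round(doublings_to_world_scale(1 / 8), 1))
```

At a fixed doubling time, the small project's road to a DSA is more than five times longer, which is the sense in which a single '30 year' figure can't cover both cases.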

Daniel Kokotajlo · 4y
I was thinking of an initially large country growing fast via AI, yes. Still counts; it is soft takeoff leading to DSA. However, I am also making much stronger claims than that--I think it could happen with a corporation or rogue AGI. I don't think annual income is at all a good measure of how close an entity is to taking over the world. When Cortez landed in Mexico he had less than 1/100,000th of the income, population, etc. of the region, yet he ruled the whole place three years later. Then a few years after that, Pizarro repeated the feat in Peru--good evidence that it wasn't just an amazing streak of luck.