Daniel Kokotajlo

Philosophy PhD student, worked at AI Impacts, now works at Center on Long-Term Risk. Research interests include acausal trade, timelines, takeoff speeds & scenarios, decision theory, history, and a bunch of other stuff. I subscribe to Crocker's Rules and am especially interested to hear unsolicited constructive criticism. http://sl4.org/crocker.html

Sequences

AI Timelines
Takeoff and Takeover in the Past and Future

Comments

Another (outer) alignment failure story

Thanks for this, this is awesome! I'm hopeful in the next few years for there to be a collection of stories like this.

This is a story where the alignment problem is somewhat harder than I expect, society handles AI more competently than I expect, and the outcome is worse than I expect. It also involves inner alignment turning out to be a surprisingly small problem. Maybe the story is 10-20th percentile on each of those axes.

I'm a bit surprised that the outcome is worse than you expect, considering that this scenario is "easy mode" for societal competence and inner alignment, which seem to me to be very important parts of the overall problem. Am I right to infer that you think outer alignment is the bulk of the alignment problem, more difficult than inner alignment and societal competence?

Some other threads to pull on:

--In this story, there aren't any major actual wars, just simulated wars / war games. Right? Why is that? I look at the historical base rate of wars, and my intuitive model says that during times of rapid technological change it's more likely that various factions will gain advantages (or even just think they have advantages) that make them want to try something risky. OTOH we haven't had a major war for seventy years, and maybe that's because of nukes + other factors, and maybe nukes + other factors will persist through the period of takeoff? IDK. I worry that the reasons we haven't had war for seventy years may be largely luck / observer selection effects, and separately, even if that's wrong, I worry that those reasons won't persist through takeoff (e.g. some factions may develop ways to shoot down ICBMs, or prevent their launch in the first place, or may not care so much if there is nuclear winter).

--Relatedly, in this story the AIs seem to be mostly on the same team? What do you think is going on "under the hood" so to speak: Have they all coordinated (perhaps without even causally communicating) to cut the humans out of control of the future? Why aren't they fighting each other as well as the humans? Or maybe they do fight each other but you didn't focus on that aspect of the story because it's less relevant to us?

--Yeah, society will very likely not be that competent IMO. I think that's the biggest implausibility of this story so far.

--(Perhaps relatedly) I feel like when takeoff is that distributed, there will be at least some people/factions who create agenty AI systems that aren't even as superficially aligned as the unaligned benchmark. They won't even be trying to make things look good according to human judgment, much less augmented human judgment! For example, some AI scientists today seem to think that all we need to do is make our AI curious and then everything will work out fine. Others seem to think that it's right and proper for humans to be killed and replaced by machines. Others will try strategies even more naive than the unaligned benchmark, such as putting their AI through some "ethics training" dataset, or warning their AI "If you try anything I'll unplug you." (I'm optimistic that these particular failure modes will have been mostly prevented via awareness-raising before takeoff, but I do a pessimistic meta-induction and infer there will be other failure modes that are not prevented in time.)

--Can you say more about how "the failure modes in this story are an important input into treachery?"

How do scaling laws work for fine-tuning?

Not according to this paper! They were able to get performance comparable to full-size networks, it seems. IDK.

How do scaling laws work for fine-tuning?

I totally agree that you still have to do all the matrix multiplications of the original model etc. etc. I'm saying that you'll need to do them fewer times, because you'll be training on less data.

Each step costs, say, 6*N FLOP, where N is the parameter count. And then you do D steps, where D is how many data points you train on. So the total FLOP cost is 6*N*D. When you fine-tune, you still spend 6*N for each data point, but you only need to train on 0.001D data points, at least according to the scaling laws as orthodoxly interpreted around here.
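To make that arithmetic concrete, here's a toy sketch (the parameter count, data size, and the 0.001 fraction below are illustrative placeholders, not figures from any particular paper):

```python
# Toy comparison of pretraining vs. fine-tuning cost, assuming the
# "roughly 6 FLOP per parameter per data point" rule of thumb above.
# N, D, and the 0.001 fine-tuning fraction are illustrative, not measured.

def training_flop(n_params: float, n_datapoints: float) -> float:
    """Total training cost: ~6 FLOP per parameter per data point."""
    return 6 * n_params * n_datapoints

N = 1e11          # parameter count (illustrative)
D = 3e11          # pretraining data points (illustrative)

pretrain_cost = training_flop(N, D)
finetune_cost = training_flop(N, 0.001 * D)  # same per-step cost, 1000x fewer steps

print(f"pretraining: {pretrain_cost:.1e} FLOP")
print(f"fine-tuning: {finetune_cost:.1e} FLOP")
print(f"fine-tuning is {pretrain_cost / finetune_cost:.0f}x cheaper")
```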

I'd recommend reading Ajeya's report (found here) for more on the scaling laws. There's also this comment thread.

How do scaling laws work for fine-tuning?

I think compute cost scales as data × parameters, so even if the parameter count is the same, if the data is 3 OOM smaller, then the compute cost will be 3 OOM smaller.

I'm not sure I understand your edit question. I'm referring to the scaling laws as discussed and interpreted by Ajeya. Perhaps part of what's going on is that at the model sizes we've explored so far, bigger models only need a little bit more data, because bigger models are more data-efficient. But it is prophesied that very soon this will stop, and we will transition to a slower scaling law according to which we need to increase data by almost as much as we increase parameter count. That's the relevant regime I'm thinking about when thinking about TAI/AGI/etc.
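Here's a rough sketch of the two regimes I have in mind, using the 6*N*D cost formula; the data-scaling exponents and base values are made-up placeholders for the shape of the argument, not the actual values from Ajeya's report or the scaling-law papers:

```python
# Two hypothetical data-scaling regimes. In the "data-efficient" regime,
# data grows slowly with parameter count; in the "slow" regime it grows
# almost proportionally. Exponents and base values are illustrative only.

def total_flop(n_params, data_exponent, base_params=1e9, base_data=2e10):
    """Compute cost 6*N*D, where D scales as (N / base_params)**data_exponent."""
    data = base_data * (n_params / base_params) ** data_exponent
    return 6 * n_params * data

for n_params in [1e9, 1e11, 1e13]:
    efficient = total_flop(n_params, data_exponent=0.3)  # illustrative
    slow = total_flop(n_params, data_exponent=0.9)       # illustrative
    print(f"N={n_params:.0e}: {efficient:.1e} FLOP vs {slow:.1e} FLOP")
```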

Review of "Fun with +12 OOMs of Compute"

OK, thanks.

1. I concede that we're not in a position of complete ignorance w.r.t. the new evidence's impact on alternative hypotheses. However, the same goes for pretty much any argument anyone could make about anything. In my particular case I think that, plausibly, for most underlying views on timelines people might have, my post should cause an update more or less along the lines I described (see below).

2. Even if I'm wrong about that, I can roll out the anti-spikiness argument to argue in favor of <7 OOMs, though to be fair I don't make this argument in the post. (The argument goes: If 60%+ of your probability mass is between 7 and 12 OOMs, you are being overconfident.)


Argument that for most underlying views on timelines people will have, my post should cause an update more or less along the lines I described:

--The only way for your credence in <7 to go down relative to your credence in 7-12 after reading my post and (mostly) ruling out the >12 hypotheses is for the stuff you learn to also disproportionately rule out sub-hypotheses in the <7 range compared to sub-hypotheses in the 7-12 range. But this is a bit weird; my post didn't talk about the <7 range at all, so why would it disproportionately rule out stuff in that range? Like I said, it seems like (to a first approximation) the information content of my post was "12 OOMs is probably enough" and not something fancier like "12 OOMs is probably enough BUT 6 is probably not enough." I feel unsure about this and would like to hear you describe the information content of the post in your terms.

--I actually gave an argument that this should increase your relative credence in <7 compared to 7-12, and I think it's a good one: the arguments that 12 OOMs are probably enough are pretty obviously almost as strong for 11 OOMs, and almost as strong as that for 10 OOMs, and so on. To put it another way, our distribution shouldn't have a sharp cliff at 12 OOMs; it should start descending several OOMs prior. What this means is that stuff in the 7-12 OOM range is actually disproportionately ruled out compared to stuff in the <7 OOM range, so we should be more confident in <7 OOMs than we would be if we just threw out the >12 OOM mass and renormalized.
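To make the structure of that update concrete, here's a toy calculation; the prior buckets and likelihoods are made-up numbers chosen only to illustrate the no-sharp-cliff point:

```python
# Toy illustration of the renormalization argument. All numbers are made up;
# the point is the structure of the update, not the specific values.

prior = {"<7": 0.30, "7-9": 0.20, "10-12": 0.20, ">12": 0.30}

def normalize(d):
    z = sum(d.values())
    return {k: v / z for k, v in d.items()}

# Naive update: throw out the >12 mass entirely and renormalize.
naive = normalize({k: v for k, v in prior.items() if k != ">12"})

# "No sharp cliff" update: the argument that +12 OOMs is probably enough is
# almost as strong at +11, +10, +9..., so the higher buckets get discounted
# too (likelihoods below are illustrative).
likelihood = {"<7": 1.0, "7-9": 0.9, "10-12": 0.7, ">12": 0.1}
posterior = normalize({k: prior[k] * likelihood[k] for k in prior})

print("naive renormalization:", naive)
print("no-sharp-cliff update:", posterior)
# The <7 bucket ends up with more relative mass under the second update
# than under naive renormalization.
```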

How do scaling laws work for fine-tuning?

Thanks! Your answer no. 2 is especially convincing to me; I didn't realize the authors compared against smaller models--that seems like an unfair comparison! I would like to see how well these 0.1%-tuned transformers do compared to similarly-sized transformers trained from scratch.

Review of "Fun with +12 OOMs of Compute"

I think I'm just not seeing why you think the >12 OOM mass must all go somewhere other than the <4 OOM (or really, I would argue, <7 OOM) case. Can you explain more?

Maybe the idea is something like: there are two underlying variables, 'we'll soon get more ideas' and 'current methods scale.' If we get new ideas soon, then <7 OOMs are needed. If we don't, but 'current methods scale' is true, 7-12 are needed. If neither is true, then >12 are needed. So then we read my +12 OOMs post and become convinced that 'current methods scale.' That rules out the >12 hypothesis, but the renormalized mass doesn't go to <7 at all, because the update also rules out a similar-sized chunk of the <7 hypothesis (the chunk that involved 'current methods don't scale'). This has the same structure as your 1, 2, 3 example above.

Is this roughly your view? If so, nice, that makes a fair amount of sense to me. I guess I just don't think that the "current methods scale" hypothesis is confined to 7-12 OOMs; I think it is a probability distribution that spans many OOMs starting with mere +1, and my post can be seen as an attempt to upper-bound how high the distribution goes--which then has implications for how low it goes also, if you want to avoid the anti-spikiness objection.

Another angle: I could have made a similar post for +9 OOMs, and a similar one for +6 OOMs, and each would have been somewhat less plausible than the previous. But (IMO) not that much less plausible; if you have 80% credence in +12 then I feel like you should have at least 50% by +9 and at least, idk, 25% by +6. If your credence drops faster than that, you seem overconfident in your ability to extrapolate from current data IMO (or maybe not, I'd certainly love to hear your arguments!)
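For concreteness, here's a toy version of the two-variable reading above, under which ruling out >12 OOMs indeed leaves the <7 share unchanged (the probabilities are made up; my objection is precisely that 'current methods scale' isn't confined to 7-12 OOMs the way this model assumes):

```python
# Toy version of the two-variable reading above: "we'll soon get new ideas"
# and "current methods scale". All probabilities are made up for illustration.

from itertools import product

p_new_ideas = 0.4       # illustrative prior
p_methods_scale = 0.5   # illustrative prior

def bucket(new_ideas: bool, methods_scale: bool) -> str:
    """Map the two underlying variables onto compute requirements."""
    if new_ideas:
        return "<7 OOMs"
    if methods_scale:
        return "7-12 OOMs"
    return ">12 OOMs"

def distribution(condition_on_scaling: bool) -> dict:
    dist = {"<7 OOMs": 0.0, "7-12 OOMs": 0.0, ">12 OOMs": 0.0}
    for ideas, scales in product([True, False], repeat=2):
        if condition_on_scaling and not scales:
            continue  # the post's evidence rules these worlds out
        p = ((p_new_ideas if ideas else 1 - p_new_ideas)
             * (p_methods_scale if scales else 1 - p_methods_scale))
        dist[bucket(ideas, scales)] += p
    z = sum(dist.values())
    return {k: v / z for k, v in dist.items()}

print("prior:    ", distribution(condition_on_scaling=False))
print("posterior:", distribution(condition_on_scaling=True))
# Under this model the <7 share stays at 0.4 before and after: conditioning
# on "current methods scale" also removes the "<7 and methods don't scale"
# worlds, so none of the >12 mass flows to <7.
```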

How do we prepare for final crunch time?

Hmmm, if this is the most it's been done, then that counts as a No in my book. I was thinking something like "Ah yes, the Viet Cong did this for most of the war, and it's now standard in both the Vietnamese and Chinese armies." Or at least "Some military somewhere has officially decided that this is a good idea and they've rolled it out across a large portion of their force."

Review of "Fun with +12 OOMs of Compute"

Interesting, hmm.

In the 1-2-3 coin case, seeing that y is heads rules out 3, but it also rules out half of 1. (There are two '1' hypotheses: the y-heads version and the y-tails version.) To put it another way, in likelihood terms, P(y = heads | 1) = 0.5. So we are ruling-out-and-renormalizing after all, even though it may not appear that way at first glance.

The question is, is something similar happening with the AI OOMs?

I think if the evidence leads us to think things like "This doesn't say anything about TAI at +4 OOM, since my prediction is based on orthogonal variables," then that's a point in my favor, right? Or is the idea that the hypotheses ruled out by the evidence presented in the post include all the >12 OOM hypotheses, but also a decent chunk of the <6 OOM hypotheses but not of the 7-12 OOM hypotheses, such that overall the ratio of (our credence in 7-12 OOMs)/(our credence in 0-6 OOMs) increases?

"This makes me near-certain that TAI will happen by +10 OOM, since the +12 OOM argument didn't require more than that" also seems like a point in my favor. FWIW I also had the sense that the +12OOM argument didn't really require 12 OOMs, it would have worked almost as well with 10.

How do we prepare for final crunch time?

Thanks, this is a great thing to be thinking about and a good list of ideas!

Do other subjects come to mind?

Public speaking skills, persuasion skills, debate skills, etc.

Practice no-cost-too-large productive periods

I like this idea. At AI Impacts we were discussing something similar: having "fire drills" where we spend a week (or even just a day) pretending that a certain scenario has happened, e.g. "DeepMind just announced they have a Turing-test-passing system and will demo it a week from now; we've got two journalists asking us for interviews and need to prep for the emergency meeting with the AI safety community tonight at 5." We never got around to testing out such a drill, but I think variants on this idea are worth exploring. Inspired by what you said, perhaps we could have "snap drills" where we suddenly take our goals for the next two months, imagine that they need to be accomplished in a week instead, and see how much we can do. (Additionally, ideas like this seem like they would have bonus effects on morale, teamwork, etc.)

I don’t know what is entailed in cultivating that virtue. Perhaps meditation? Maybe testing one’s self at literal risk to one’s life?

This virtue is extremely important to militaries. Does any military use meditation as part of its training? I would guess that the training given to medics and officers (soldiers for whom clear thinking is especially important) might have some relevant lessons. Then again, maybe the military deals with this primarily by selecting the right sort of people rather than taking arbitrary people and training them. If so, perhaps we should look into applying similar selection methods in our own organizations to identify people to put in charge when the time comes.

Any more ideas?

In this post I discuss some:

Perhaps it would be good to have an Official List of all the AI safety strategies, so that whatever rationale people give for why this AI is safe can be compared to the list. (See this prototype list.)
Perhaps it would be good to have an Official List of all the AI safety problems, so that whatever rationale people give for why this AI is safe can be compared to the list, e.g. "OK, so how does it solve outer alignment? What about mesa-optimizers? What about the malignity of the universal prior? I see here that your design involves X; according to the Official List, that puts it at risk of developing problems Y and Z..." (See this prototype list.)
Perhaps it would be good to have various important concepts and arguments re-written with an audience of skeptical and impatient AI researchers in mind, rather than the current audience of friends and LessWrong readers.

Thinking afresh, here's another idea: I have a sketch of a blog post titled "What Failure Feels Like." The idea is to portray a scenario of doom in general, abstract terms (like Paul's post does, as opposed to writing a specific, detailed story) but with a focus on how it feels to us AI-risk-reducers, rather than focusing on what the world looks like in general or what's going on inside the AIs. I decided it would be depressing and not valuable to write. However, maybe it would be valuable as a thing people could read to help emotionally prepare/steel themselves for the time when they "are confronted with the stark reality of how doomed we are." IDK.

I guess overall my favorite idea is to just periodically spend time thinking about what you'd do if you found out that takeoff was happening soon. E.g. "DeepMind announces a Turing-test-passing system," or "We learn of a convincing roadmap to AGI involving only 3 OOMs more compute," or "China unveils a project to spend +7 OOMs on a single training run by 2030, with lesser training runs along the way." I think that the exercise of thinking about near-term scenarios and then imagining what we'd do in response will be beneficial even on long timelines, but certainly super beneficial on short timelines (even if, as is likely, none of the scenarios we imagine come to pass).
