All of Ajeya Cotra's Comments + Replies

Draft report on AI timelines

David Roodman put together a Guesstimate model that some people might find helpful: https://www.getguesstimate.com/models/18944

Draft report on AI timelines

There is some limited sensitivity analysis in the "Conservative and aggressive estimates" section of Part 4.

Reply from romeostevensit: Appreciated
The case for aligning narrowly superhuman models

In my head the point of this proposal is very much about practicing what we eventually want to do, and seeing what comes out of that; I wasn't trying here to make something different sound like it's about practice. I don't think that a framing which moved away from that would better get at the point I was making, though I totally think there could be other lines of empirical research under other framings that I'd be similarly excited about or maybe more excited about.

In my mind, the "better than evaluators" part is kind of self-evidently intriguing for the…

The case for aligning narrowly superhuman models

I don't feel confident enough in the frame of "inaccessible information" to say that the whole agenda is about it. It feels like a fit for "advice", but not a fit for "writing stories" or "solving programming puzzles" (at least not an intuitive fit -- you could frame it as "the model has inaccessible information about [story-writing, programming]" but it feels more awkward to me). I do agree it's about "strongly suspecting it has the potential to do better than humans" rather than about "already being better than humans." Basically, it's about trying to find areas where lackluster performance seems to mostly be about "misalignment" rather than "capabilities" (recognizing those are both fuzzy terms).

Reply from Abram Demski: Right, ok, I like that framing better (it obviously fits, but I didn't generate it as a description before).
The case for aligning narrowly superhuman models

Yeah, you're definitely pointing at an important way the framing is awkward. I think the real thing I want to say is "Try to use some humans to align a model in a domain where the model is better than the humans at the task", and it'd be nice to have a catchy term for that. Probably a model which is better than some humans (e.g. MTurkers) at one task (e.g. medical advice) will also be better than those same humans at many other tasks (e.g. writing horror stories); but at the same time for each task, there's some set of humans (e.g. doctors in the first cas…

Reply from Abram Demski: One response I generated was, "maybe it's just not so much about practicing what we eventually want to do, and that part is an illusion of the poor framing. We should figure out the right framing first and then ask whether it seems like practice, not optimize the framing to make it sound like practice."

But I think my real response is: why is the superhuman part important, here? Maybe what's really important is being able to get answers (e.g. medical advice) without putting them in (e.g. without fine-tuning on medical advice filtered for high quality), and asking for superhuman ability is just a way of helping ensure that? Or perhaps more generally, there are other things like this which you expect people to do wrong if they're not dealing with a superhuman case, because you want the technology to eventually work for superhuman cases.
Reply from Abram Demski: I might be on board if "narrowly superhuman" were simply defined differently. Isn't it something more like "the model has information sufficient to do better"? E.g., in the GPT example, you can't reliably get good medical advice from it right now, but you strongly suspect it's possible. That's a key feature of the whole idea, right?

Is your suggested research program better described as: find (highly capable) models with inaccessible information [https://www.alignmentforum.org/posts/ZyWyAJbedvEgRT2uF/inaccessible-information] and get them to reveal that information? (Especially: get them to reveal the inaccessible information without using domain expertise to do so?)
MIRI comments on Cotra's "Case for Aligning Narrowly Superhuman Models"

The conceptual work I was gesturing at here is more Paul's work, since MIRI's work (afaik) is not really neural net-focused. It's true that Paul's work also doesn't assume a literal worst case; it's a very fuzzy concept I'm gesturing at here. It's more like, Paul's research process is to a) come up with some procedure, b) try to think of any "plausible" set of empirical outcomes that cause the procedure to fail, and c) modify the procedure to try to address that case. (The slipperiness comes in at the definition of "plausible" here, but the basic spirit of it is to "solve for every case" in the way theoretical CS typically aims to do in algorithm design, rather than "solve for the case we'll in fact encounter.")

MIRI comments on Cotra's "Case for Aligning Narrowly Superhuman Models"

This was a really helpful articulation, thanks! I like "frankness", "forthrightness", "openness", etc. (These are all terms I was brainstorming to get at the "ascription universality" concept at one point.)

I expect there to be a massive and important distinction between "passive transparency" and "active transparency", with the latter being much more shaky and potentially concealing of fatality, and the former being cruder as tech at the present rate, which is unfortunate because it has so many fewer ways to go wrong. I hope any terminology chosen continues to make the distinction clear.

The case for aligning narrowly superhuman models

The case in my mind for preferring to elicit and solve problems at scale rather than in toy demos (when that's possible) is pretty broad and outside-view, but I'd nonetheless bet on it: I think a general bias toward wanting to "practice something as close to the real thing as possible" is likely to be productive. In terms of the more specific benefits I laid out in this section, I think that toy demos are less likely to have the first and second benefits ("Practical know-how and infrastructure" and "Better AI situation in the run-up to superintelligence"),…

Reply from David Krueger: Thanks for the response! I see the approaches as more complementary. Again, I think this is in keeping with standard/good ML practice. A prototypical ML paper might first describe a motivating intuition, then formalize it via a formal model and demonstrate the intuition in that model (empirically or theoretically), then finally show the effect on real data.

The problem with only doing the real-data (i.e. at-scale) experiments is that it can be hard to isolate the phenomena you wish to study. And so a positive result does less to confirm the motivating intuition, as there are many other factors at play that might be responsible. We've seen this happen rather a lot in Deep Learning and Deep RL, in part because of the focus on empirical performance over a more scientific approach.
The case for aligning narrowly superhuman models

Yeah, in the context of a larger alignment scheme, it's assuming in particular that the problem of answering the question "How good is the AI's proposed action?" will factor down into sub-questions of manageable size.

The case for aligning narrowly superhuman models

The intuition for it is something like this: suppose I'm trying to make a difficult decision, like where to buy a house. There are hundreds of cities I'd be open to, each one has dozens of neighborhoods, and each neighborhood has dozens of important features, like safety, fun things to do, walkability, price per square foot, etc. If I had a long time, I would check out each neighborhood in each city in turn and examine how it does on each dimension, and pick the best neighborhood.

If I instead had an army of clones of myself, I could send many of them to ea…

Reply from johnswentworth: I see, so it's basically assuming that problems factor.
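To make that kind of factoring concrete, here is a minimal sketch of the house-buying example above. The feature names and the `score_feature` callback are made up for illustration; each call to `score_feature` stands in for one clone's small, independent job.

```python
# Hypothetical factored version of the house-buying example: each "clone"
# scores one neighborhood on one feature, and the results are combined
# mechanically.

FEATURES = ["safety", "fun_things_to_do", "walkability", "price_per_sqft"]

def evaluate_neighborhood(neighborhood, score_feature):
    # One small, independent job per (neighborhood, feature) pair.
    return sum(score_feature(neighborhood, feature) for feature in FEATURES)

def pick_best_neighborhood(cities, score_feature):
    # `cities` maps a city name to its list of neighborhoods. Because every
    # evaluation is independent, the work spreads across as many "clones"
    # as are available.
    scores = {(city, n): evaluate_neighborhood(n, score_feature)
              for city, neighborhoods in cities.items()
              for n in neighborhoods}
    return max(scores, key=scores.get)
```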
The case for aligning narrowly superhuman models

Yes sorry — I'm aware that in the HCH procedure no one human thinks for a long time. I'm generally used to mentally abstracting HCH (or whatever scheme fits that slot) as something that could "effectively replicate the benefits you could get from having a human thinking a long time," in terms of the role that it plays in an overall scheme for alignment. This isn't guaranteed to work out, of course. My position is similar to Rohin's above:

I just personally find it easier to think about "benefits of a human thinking for a long time" and then "does HCH get the same benefits as humans thinking for a long time" and then "does iterated amplification get the same benefits as HCH".

The case for aligning narrowly superhuman models

My understanding is that HCH is a proposed quasi-algorithm for replicating the effects of a human thinking for a long time.

HCH is more like an infinite bureaucracy. You have some underlings who you can ask to think for a short time, and those underlings have underlings of their own who they can ask to think for a short time, and so on. Nobody in HCH thinks for a long time, though the total thinking time of one person and their recursive-underlings may be long.

(This is exactly why factored cognition is so important for HCH & co: the thinking all has to be broken into bite-size pieces, which can be spread across people.)
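A toy sketch of that recursive structure, purely illustrative: `answer_with_short_thought` is a made-up stand-in for a human who only thinks briefly, and `decompose` for how they split a question among their underlings.

```python
def hch(question, depth, answer_with_short_thought, decompose):
    """Answer `question` with a tree of brief thinkers.

    No single call thinks for long; the total thinking time is spread across
    the whole tree of recursive "underlings".
    """
    if depth == 0:
        return answer_with_short_thought(question, subanswers=[])
    subquestions = decompose(question)
    subanswers = [hch(q, depth - 1, answer_with_short_thought, decompose)
                  for q in subquestions]
    return answer_with_short_thought(question, subanswers=subanswers)
```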

The case for aligning narrowly superhuman models

My biggest concern is actually that the problem is going to be too easy for supervised learning. Need GPT-3 to dispense expert medical advice? Fine-tune it on a corpus of expert medical advice! Or for slightly more sophistication, fine-tune it to predict advice plus a score for how good the advice was, then condition on the score being high!
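For concreteness, a minimal sketch of that score-conditioning idea, assuming a dataset of (question, advice, rating) triples and any off-the-shelf causal-LM fine-tuning pipeline that consumes plain text; the field names and rating scale here are made up.

```python
def format_training_example(question, advice, rating):
    # Trained on (question, rating, advice) so the model learns what advice
    # at each quality level looks like.
    return f"QUESTION: {question}\nRATING: {rating}/10\nADVICE: {advice}"

def format_inference_prompt(question, desired_rating=10):
    # At inference time, condition on the rating being high and let the
    # model complete the advice.
    return f"QUESTION: {question}\nRATING: {desired_rating}/10\nADVICE:"
```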

I don't think you can get away with supervised learning if you're holding yourself to the standard of finding fuzzy tasks where the model is narrowly superhuman. E.g. the Stiennon et al., 2020 paper involved using RL…

Reply from Charlie Steiner: Re: part 1 - Good points, I agree. Though I think you could broadly replicate the summarization result using supervised learning - the hope for using supervised learning in superhuman domains is that your model learns a dimension of variation for "goodness" that can generalize well even if you condition on "goodness" being slightly outside any of the training examples.

Re: part 2 - What it boils down to is that my standards (and I think the practical standards) for medical advice are low, while my standards for moral advice are high (as in, you could use this to align AGI). I agree that there's no magic property a moral question has that no medical question could have. But there are non-magical properties I expect to be relevant.

With medical advice from a text model, I'm not expecting it to learn a detailed model of the human body and be able to infer new medical conditions and treatments that human experts haven't figured out yet. I'm just expecting it to do verbal reasoning to arrive at the same substantive advice a human expert would give, maybe packaged in a slightly superhuman good explanation. With moral advice, though, ask 3 human experts and you'll get 4 opinions. This is made worse by the fact that I've sneakily increased the size of the problem - "moral advice" can be about almost anything. Was it bad to pull the plug on Terri Schiavo? How much of your income should you give to charity? Is it okay to kiss your cousin twice removed? Is it a good future if all the humans are destructively copied to computers? Should we run human challenge trials for covid-19 vaccines?

Medical advice seems to be in the "supervisable regime," where it's fulfilled its promise by merely telling us things that human experts know. Moral advice is very not, because humans aren't consistent about morality in the same way they can be about medicine. If MTurkers are on average anti-abortion and your experts are on average pro-choice, what the hell will your MTurkers think abou…
The case for aligning narrowly superhuman models

We're simply not sure where "proactively pushing to make more of this type of research happen" should rank relative to other ways we could spend our time and money right now, and determining that will involve thinking about a lot of things that are not covered in this post (most importantly what the other opportunities are for our time and money).

already seen as a standard way to make progress on the full alignment problem

It might be a standard way to make progress, but I don't feel that this work has been the default so far — the other three types of…

The case for aligning narrowly superhuman models

I guess the crux here is "And if the Hard problem is indeed hard enough to not be solved by anyone," — I don't think that's the default/expected outcome. There hasn't been that much effort on this problem in the scheme of things, and I think we don't know where it ranges from "pretty easy" to "very hard" right now.

Ah... I think we have an enormous amount of evidence on very similar problems.

For instance: consider a lawyer and a business owner putting together a contract. The business owner has a rough intuitive idea of what they want, but lacks expertise on contracts/law. The lawyer has lots of knowledge about contracts/law, but doesn't know what the business owner wants. The business owner is like our non-expert humans; the lawyer is like GPT.

In this analogy, the analogue of an expert human would be a business owner who is also an expert in contracts/law. The analo…

The case for aligning narrowly superhuman models

Thanks for the comment! Just want to explicitly pull out and endorse this part:

the experts be completely and totally absent from the training process, and in particular no data from the experts should be involved in the training process

I should have emphasized that more in the original post as a major goal. I think you might be right that it will be hard to solve the "sandwich" problem without conceptual progress, but I also think that attempts to solve the sandwich problem could directly spur that progress (not just reveal the need for it, but also ta…

Reply from johnswentworth: I partially agree with this; alignment is a bottleneck to value for GPT [https://www.lesswrong.com/posts/BnDF5kejzQLqd5cjH/alignment-as-a-bottleneck-to-usefulness-of-gpt-3], and actually aligning it would likely produce some very impressive stuff. My disagreement is that it's a lot easier to make something which looks impressive than something which solves a Hard problem (like the sandwich problem), and therefore most impressive-looking "solutions" will probably circumvent the key part of the problem. And if the Hard problem is indeed hard enough to not be solved by anyone, the most impressive-looking results will be those which look good without actually solving it.
How does bee learning compare with machine learning?

I mostly agree with your comment, but I'm actually very unsure about 2 here: I think I recall bees seeming surprisingly narrow and bad at abstract shapes. Guille would know more here.

Reply from guicosta: I think Rohin's second point makes sense. Bees are actually pretty good at classifying abstract shapes (I mention a couple of studies that refer to this in the appendix about my choice of benchmark, such as Giurfa (1996)), so they might plausibly be able to generalize to stylized images.
Draft report on AI timelines

Hi John, I think I remember that presentation. The reason the graph there was quite bimodal is that the Lifetime Anchor I was using at the time was simply assuming ~1x human lifetime levels of computation. In the current model, I'm assuming ~1000x human lifetime levels of computation, because ~1x seemed like a much less likely version of that anchor. The code in the quantitative model will let you see the untruncated version of the distribution, and it looks a lot smoother now, though there is still a modest bump.
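Purely as an illustration of the change (placeholder numbers, not the report's actual code): shifting the Lifetime Anchor from ~1x to ~1000x human-lifetime compute just moves the anchor's median up by three orders of magnitude before it is mixed with the other hypotheses.

```python
import numpy as np

HUMAN_LIFETIME_FLOP = 1e24   # assumed order-of-magnitude placeholder
SPREAD_IN_OOMS = 1.5         # assumed lognormal spread, in orders of magnitude

def lifetime_anchor_samples(multiplier, n=100_000, seed=0):
    # Samples from a lognormal anchor centered at `multiplier` times the
    # assumed human-lifetime compute.
    rng = np.random.default_rng(seed)
    median = multiplier * HUMAN_LIFETIME_FLOP
    return 10 ** rng.normal(np.log10(median), SPREAD_IN_OOMS, size=n)

old_anchor = lifetime_anchor_samples(1)     # the ~1x version used previously
new_anchor = lifetime_anchor_samples(1000)  # the ~1000x version used now
```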

Also, apologies for such a late reply, I don't get email notifications for comments and haven't been checking regularly!

Reply from johnswentworth: Thanks, that makes sense.
Draft report on AI timelines

Thanks! No need to wait for a more official release (that could take a long time since I'm prioritizing other projects).

Draft report on AI timelines

Yeah, I agree there is room for spending to be "irrational", though I would guess this is more likely in the direction of spending less than the "rational" amount rather than more, because developing TAI could be unprecedentedly profitable and companies' spending may be limited by capital constraints.

Draft report on AI timelines

Yeah, I considered pegging spending to a fraction of GWP instead of a fraction of GDP, but found that when I did this I wanted to push the fraction down because I felt that even though companies are getting increasingly globalized, coordination at the world-scale would probably still be thinner than coordination at the scale of something nation-sized (even if it's not actually a literal nation). Ultimately, I just went with GDP because there are more reference points for it.

I feel pretty uncertain about this though, and think there's a lot of room for a mo…

1Matthew "Vaniver" Graves1yThis makes sense, altho I note that I expect the funding here to quite plausibly be 'irrational.' For example, some substantial fraction of Microsoft's value captured is going to global development in a way that seems unlikely to make sense from Microsoft's bottom line (because Microsoft enriched one of its owners, who then decided to deploy those riches for global development). If building TAI comes out of the 'altruism' or 'exploration' budget instead of the 'we expect this to pay back on schedule' budget, you could see more investment than that last category would justify.
Draft report on AI timelines

Yes, it's assuming the scaling behavior follows the probability distributions laid out in Part 2, and then asking whether, conditional on that, the model size requirements could be off by a large amount.

Draft report on AI timelines

Thanks! Agree that functional form uncertainty is a big deal here; I think that implicitly this uncertainty is causing me to up-weight Short Horizon Neural Network more than I otherwise would, and also up-weight "Larger than all hypotheses" more than I otherwise would.

With that said, I do predict that in clean artificial cases (which may or may not be relevant), we could demonstrate linear scaling. E.g., consider the case of inserting a frame of static or a blank screen in between every normal frame of an Atari game or StarCraft game -- I'd expect that modifying the games in this way would straightforwardly double training computation requirements.
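As a sketch of what that modification might look like in practice, here is an illustrative environment wrapper using the classic (pre-gymnasium) `gym` API; this is not something from the report, just one way to construct the "interleaved blank frames" case described above.

```python
import numpy as np
import gym

class InterleaveBlankFrames(gym.Wrapper):
    """Insert a blank observation between every real frame.

    The agent must now take roughly twice as many environment steps to see
    the same amount of real game content, which is the kind of horizon
    stretching that would roughly double training computation requirements.
    """

    def reset(self, **kwargs):
        obs = self.env.reset(**kwargs)
        self._blank = np.zeros_like(obs)
        self._show_blank_next = True
        return obs

    def step(self, action):
        if self._show_blank_next:
            # Blank filler step: the action is ignored, no reward is given,
            # and the episode continues.
            self._show_blank_next = False
            return self._blank, 0.0, False, {"blank_frame": True}
        obs, reward, done, info = self.env.step(action)
        self._show_blank_next = not done
        return obs, reward, done, info
```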

Draft report on AI timelines

Thanks! 

I agree that full distribution information is very valuable, although I consider medians to be important as well. The spreadsheet linked in the report provides the full distribution implied by my views for the probability that the amount of computation required to train a transformative model is affordable, although it requires some judgment to translate that into P(TAI), because there may be other bottlenecks besides computation and there may be other paths to TAI besides training a transformative model. I'd say it implies somewhere between 2…

Draft report on AI timelines

Thanks! I definitely agree that the proper modeling technique would involve introducing uncertainty on algorithmic progress, and that this uncertainty would be pretty wide; this is one of the few most important directions of future research (the others being better understanding effective horizon length and better narrowing down model size).

In terms of uncertainty in model size, I personally find it somewhat easier to think about what the final spread should be in the training FLOP requirements distribution, since there's a fair amount of arbitrariness in how t…

Draft report on AI timelines

Thanks Daniel! Quick replies:

  • On down-weighting low-end vs high-end compute levels: The reason that the down-weighting for low-end compute levels was done in a separate and explicit way was just that I think there's a structural difference between the two updates. When updating against low-end compute levels, I think it makes more sense to do that update within each hypothesis, because only some orders of magnitude are affected. To implement an "update against high-end compute levels", we can simply lower the probability we assign to high-compute hypothe
…
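A toy numerical sketch of the distinction drawn in the bullet above (made-up arrays, not the report's code): updating against low-end compute happens inside a hypothesis's own distribution, while updating against high-end compute just lowers the mixture weight on high-compute hypotheses.

```python
import numpy as np

def update_against_low_compute(hypothesis_probs, too_low_mask):
    # Within-hypothesis update: zero out the affected (low) orders of
    # magnitude of this hypothesis's own distribution and renormalize it.
    p = np.where(too_low_mask, 0.0, hypothesis_probs)
    return p / p.sum()

def update_against_high_compute(hypothesis_weights, high_compute_indices, factor=0.5):
    # Across-hypothesis update: simply lower the mixture weight assigned to
    # high-compute hypotheses and renormalize the weights.
    w = np.asarray(hypothesis_weights, dtype=float).copy()
    w[high_compute_indices] *= factor
    return w / w.sum()
```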

Thanks! Just as a heads up, I have now read it thoroughly enough that I've collected quite a few thoughts about it, and so I intend to make a post sometime in the next week or so giving my various points of disagreement and confusion, including my response to your response here. If you'd rather I do this sooner, I can hustle, and if you'd rather I wait till after the report is out, I can do that too.