Summary: AGI isn't super likely to come super soon. People should be working on stuff that saves humanity in worlds where AGI comes in 20 or 50 years, in addition to stuff that saves humanity in worlds where AGI comes in the next 10 years.

Thanks to Alexander Gietelink Oldenziel, Abram Demski, Daniel Kokotajlo, Cleo Nardo, Alex Zhu, and Sam Eisenstat for related conversations.

My views on when AGI comes

AGI

By "AGI" I mean the thing that has very large effects on the world (e.g., it kills everyone) via the same sort of route that humanity has large effects on the world. The route is where you figure out how to figure stuff out, and you figure a lot of stuff out using your figure-outers, and then the stuff you figured out says how to make powerful artifacts that move many atoms into very specific arrangements.

This isn't the only thing to worry about. There could be transformative AI that isn't AGI in this sense. E.g. a fairly-narrow AI that just searches configurations of atoms and finds ways to do atomically precise manufacturing would also be an existential threat and a possibility for an existential win.

Conceptual capabilities progress

The "conceptual AGI" view:

The first way humanity makes AGI is by combining some set of significant ideas about intelligence. Significant ideas are things like (the ideas of) gradient descent, recombination, probability distributions, universal computation, search, world-optimization. Significant ideas are to a significant extent bottlenecked on great natural philosophers doing great natural philosophy about intelligence, with sequential bottlenecks between many insights.

The conceptual AGI view doesn't claim that humanity doesn't already have enough ideas to make AGI. I claim that, though not super strongly.

Timelines

Giving probabilities here doesn't feel great. For one thing, it seems to contribute to information cascades and to shallow coalition-forming. For another, it hides the useful models. For yet another thing: A probability bundles together a bunch of stuff I have models about, with a bunch of stuff I don't have models about. For example, how many people will be doing original AGI-relevant research in 15 years? I have no idea, and it seems like largely a social question. The answer to that question does affect when AGI comes, though, so a probability about when AGI comes would have to depend on that answer.

But ok. Here's some butt-numbers:

  • 3%-10% probability of AGI in the next 10-15ish years. This would be lower, but I'm putting a bit of model uncertainty here.
  • 40%-45% probability of AGI in the subsequent 45ish years. This is denser than the above because, eyeballing the current state of the art, it seems like we currently lack some ideas we'd need——but I don't know how many insights would be needed, so the remaining ones could be only a couple of decades away. It also seems like people are distracted now.
  • Median 2075ish. IDK. This would be further out if an AI winter seemed more likely, but LLMs seem like they should already be able to make a lot of money.
  • A long tail. It's long because of stuff like civilizational collapse, and because AGI might be really really hard to make. There's also a sliver of a possibility of coordinating for a long time to not make AGI.

If I were trying to make a model with parts, I might try starting with a mixture of Erlang distributions of different shapes, and then stretching that according to some distribution about the number of people doing original AI research over time.
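For concreteness, here's a minimal Monte Carlo sketch of that kind of model. Every number in it (the candidate shapes, the mixture weights, the base rate, the stretch distribution) is invented purely for illustration, not a claim about the world:

```python
# Hypothetical sketch of the "mixture of Erlangs, stretched by research effort"
# model gestured at above. All parameters are made up for illustration.
import random

def sample_agi_year(rng: random.Random) -> float:
    # Uncertain number of remaining serial insights: a mixture over Erlang shapes.
    # An Erlang(k, rate) variable is the sum of k independent exponentials,
    # modeling k sequential insights each arriving at some base rate.
    shapes = [2, 5, 10]            # candidate numbers of remaining insights
    weights = [0.3, 0.4, 0.3]      # made-up mixture weights
    k = rng.choices(shapes, weights)[0]
    base_rate = 0.15               # insights per year at current effort (made up)
    years = sum(rng.expovariate(base_rate) for _ in range(k))
    # Stretch by an uncertain factor for how much original research effort
    # scales up over time (speedup > 1 compresses the timeline).
    speedup = rng.lognormvariate(0.0, 0.5)
    return 2023 + years / speedup

rng = random.Random(0)
samples = sorted(sample_agi_year(rng) for _ in range(10_000))
median = samples[len(samples) // 2]
print(f"median AGI year under these made-up parameters: {median:.0f}")
```

The point of the structure, not the numbers: the Erlang shape captures "several sequential insights, each of uncertain arrival time", and the stretch factor captures uncertainty about how many people will be doing original research.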

Again, this is all butt-numbers. I have almost no idea about how much more understanding is needed to make AGI, except that it doesn't seem like we're there yet.

Responses to some arguments for AGI soon

The "inputs" argument

At about 1:15 in this interview, Carl Shulman argues (quoting from the transcript):

We've been scaling [compute expended on ML] up four times as fast as was the case for most of the history of AI. We're running through the orders of magnitude of possible resource inputs you could need for AI much much more quickly than we were for most of the history of AI. That's why this is a period with a very elevated chance of AI per year because we're moving through so much of the space of inputs per year [...].

This isn't the complete argument Shulman gives, but it's interesting on its own. It's valid, but only if we're actually scaling up all the needed inputs.

On the conceptual AGI view, this isn't the case, because we aren't very greatly increasing the number of great natural philosophers doing great natural philosophy about intelligence. That's a necessary input, and it's only being somewhat scaled up. For one thing, many new AI researchers are correlated with each other, and many are focused on scaling up, applying, and varying existing ideas. For another thing, sequential progress can barely be sped up with more bodies.

The "big evolution" argument

Carl goes on to argue that eventually, when we have enough compute, we'll be able to run a really big evolutionary process that finds AGIs (if we haven't already made AGI). This idea also appears in Ajeya Cotra's report on the compute needed to create AGI.

I broadly agree with this. But I have two reasons that this argument doesn't make AGI seem very likely very soon.

The first reason is that running a big evolution actually seems kind of hard; it seems to take significant conceptual progress and massive engineering effort to make the big evolution work. What I'd expect to see when this is tried, is basically nothing; life doesn't get started, nothing interesting happens, the entities don't get far (beyond whatever primitives were built in). You can get around this by invoking more compute, e.g. by simulating physics more accurately at a more detailed level, or by doing hyperparameter search to find worlds that lead to cool stuff. But then you're invoking more compute. (I'd also expect a lot of the hacks that supposedly make our version of evolution much more efficient than real evolution, to actually result in our version being circumscribed, i.e. it peters out because the shortcut that saved compute also cut off some important dimensions of search.)

The second reason is that evolution seems to take a lot of serial time. There's probably lots of clever things one can do to shortcut this, but these would be significant conceptual progress.

"I see how to do it"

My (limited / filtered) experience with these ideas leads me to think that [ideas knowably sufficient to make an AGI in practice] aren't widespread or obvious. (Obviously it is somehow feasible to make an AGI, because evolution did it.)

The "no blockers" intuition

An intuition that I often encounter is something like this:

Previously, there were blockers to current systems being developed into AGI. But now those blockers have been solved, so AGI could happen any time now.

This sounds to my ears like: "I saw how to make AGI, but my design required X. Then someone made X, so now I have a design for an AGI that will work.". But I don't think that's what they think. I think they don't think they have to have a design for an AGI in order to make an AGI.

I kind of agree with some version of this——there's a lot of stuff you don't have to understand, in order to make something that can do some task. We observe this in modern ML. But current systems, though they impressively saturate some lower-dimensional submanifold of capability-space, don't permeate a full-dimensional submanifold. Intelligence is a positive thing. Most computer code doesn't put itself on an unbounded trajectory of gaining capabilities. To make it work you have to do engineering and science, at some level. Bridges don't hold weight just because there's nothing blocking them from holding weight.

Daniel Kokotajlo points out that for things that grow, it's kind of true that they'll succeed as long as there aren't blockers——and for example animal husbandry kind of just works, without the breeders understanding much of anything about the internals of why their selection pressures are met with adequate options to select. This is true, but it doesn't seem very relevant to AGI because we're not selecting from an existing pool of highly optimized "genomic" (that is, mental) content. If instead of tinkering with de novo gradient-searched circuits, we were tinkering with remixing and mutating whole-brain emulations, then I would think AGI comes substantially sooner.

Another regime where "things just work" is many mental contexts where a task is familiar enough in some way that you can expect to succeed at the task by default. For example, if you're designing a wadget, and you've previously designed similar wadgets to similar specifications, then it makes sense to treat a design idea as though it's going to work out——as though it can be fully fleshed out into a satisfactory, functioning design——unless you see something clearly wrong with it, a clear blocker like a demand for a metal with unphysical properties. Again, like the case of animal husbandry, the "things just work" comes from the (perhaps out of sight) preexisting store of optimized content that's competent to succeed at the task given a bit of selection and arrangement. In the case of AGI, no one's ever built anything like that, so the store of knowledge that would automatically flesh out blockerless AGI ideas is just not there.

Yet another such regime is markets, where the crowd of many agents can be expected to figure out how to do something as long as it's feasible. So, a version of this intuition goes:

There are a lot of people trying to make AGI. So either there's some strong blocker that makes it so that no one can make AGI, or else someone will make AGI.

This is kind of true, but it just goes back to the question of how much conceptual progress people will make towards AGI. It's not an argument that we already have the understanding needed to make AGI. If it's used as an argument that we already have the understanding, then it's an accounting mistake: it says "We already have the understanding. The reason we don't need more understanding is that if more understanding were needed, someone else would figure it out, and then we'd have it. Therefore no one needs to figure anything else out.".

Finally: I also see a fair number of specific "blockers", as well as some indications that existing things don't have properties that would scare me.

"We just need X" intuitions

Another intuition that I often encounter is something like this:

We just need X to get AGI. Once we have X, in combination with Y it will go all the way.

Some examples of Xs: memory, self-play, continual learning, curricula, AIs doing AI research, learning to learn, neural nets modifying their own weights, sparsity, learning with long time horizons.

For example: "Today's algorithms can learn anything given enough data. So far, data is limited, and we're using up what's available. But self-play generates infinite data, so our systems will be able to learn unboundedly. So we'll get AGI soon.".

This intuition is similar to the "no blockers" intuition, and my main response is the same: the reason bridges stand isn't that you don't see a blocker to them standing. See above.

A "we just need X" intuition can become a "no blockers" intuition if someone puts out an AI research paper that works out some version of X. That leads to another response: just because an idea is, at a high level, some kind of X, doesn't mean the idea is anything like the fully-fledged, generally applicable version of X that one imagines when describing X.

For example, suppose that X is "self-play". One important thing about self-play is that it's an infinite source of data, provided in a sort of curriculum of increasing difficulty and complexity. Since we have the idea of self-play, and we have some examples of self-play that are successful (e.g. AlphaZero), aren't we most of the way to having the full power of self-play? And isn't the full power of self-play quite powerful, since it's how evolution made AGI? I would say "doubtful". The self-play that evolution uses (and the self-play that human children use) is much richer, containing more structural ideas, than the idea of having an agent play a game against a copy of itself.

Most instances of a category are not the most powerful, most general instances of that category. So just because we have, or will soon have, some useful instances of a category, doesn't strongly imply that we can or will soon be able to harness most of the power of stuff in that category. I'm reminded of the politician's syllogism: "We must do something. This is something. Therefore, we must do this.".

The bitter lesson and the success of scaling

Sutton's bitter lesson, paraphrased:

AI researchers used to focus on coming up with complicated ideas for AI algorithms. They weren't very successful. Then we learned that what's successful is to leverage computation via general methods, as in deep learning and massive tree search.

Some add on:

And therefore what matters in AI is computing power, not clever algorithms.

This conclusion doesn't follow. Sutton's bitter lesson is that figuring out how to leverage computation using general methods that scale with more computation beats trying to perform a task by encoding human-learned specific knowledge about the task domain. You still have to come up with the general methods. It's a different sort of problem——trying to aim computing power at a task, rather than trying to work with limited computing power or trying to "do the task yourself"——but it's still a problem. To modify a famous quote: "In some ways we feel we are as bottlenecked on algorithmic ideas as ever, but we believe we are bottlenecked on a higher level and about more important things."

Large language models

Some say:

LLMs are already near-human and in many ways super-human general intelligences. There's very little left that they can't do, and they'll keep getting better. So AGI is near.

This is a hairy topic, and my conversations about it have often seemed not very productive. I'll just try to sketch my view:

  • The existence of today's LLMs is scary and should somewhat shorten people's expectations about when AGI comes.
  • LLMs have fixed, partial concepts with fixed, partial understanding. An LLM's concepts are like human concepts in that they can be combined in new ways and used to make new deductions, in some scope. They are unlike human concepts in that they won't grow or be reforged to fit new contexts. So for example there will be some boundary beyond which a trained LLM will not recognize or be able to use a new analogy; and this boundary is well within what humans can do.
  • An LLM's concepts are mostly "in the data". This is pretty vague, but I still think it. A number of people who think that LLMs are basically already AGI have seemed to agree with some version of this, in that when I describe something LLMs can't do, they say "well, it wasn't in the data". Though maybe I misunderstand them.
  • When an LLM is trained more, it gains more partial concepts.
  • However, it gains more partial concepts with poor sample efficiency; it mostly only gains what's in the data.
  • In particular, even if the LLM were being continually trained (in a way that's similar to how LLMs are already trained, with similar architecture), it still wouldn't do the thing humans do with quickly picking up new analogies, quickly creating new concepts, and generally reforging concepts.
  • LLMs don't have generators that are nearly as powerful as the generators of human understanding. The stuff in LLMs that seems like it comes in a way that's similar to how stuff in humans comes, actually comes from a lot more data. So LLMs aren't that much of an indication that we've figured out how to make things that are on an unbounded trajectory of improvement.
  • LLMs have a weird, non-human shaped set of capabilities. They go much further than humans on some submanifold, and they barely touch some of the full manifold of capabilities. (They're "unbalanced" in Cotra's terminology.)
  • There is a broken inference. When talking to a human, if the human emits certain sentences about (say) category theory, that strongly implies that they have "intuitive physics" about the underlying mathematical objects. They can recognize the presence of the mathematical structure in new contexts, they can modify the idea of the object by adding or subtracting properties and have some sense of what facts hold of the new object, and so on. This inference——emitting certain sentences implies intuitive physics——doesn't work for LLMs.
  • The broken inference is broken because these systems are optimized for being able to perform all the tasks that don't take a long time, are clearly scorable, and have lots of data showing performance. There's a bunch of stuff that's really important——and is a key indicator of having underlying generators of understanding——but takes a long time, isn't clearly scorable, and doesn't have a lot of demonstration data. But that stuff is harder to talk about and isn't as intuitively salient as the short, clear, demonstrated stuff.
  • Vaguely speaking, I think stable diffusion image generation is comparably impressive to LLMs, but LLMs seem even more impressive to some people because LLMs break the performance -> generator inference more. We're used to the world (and computers) creating intricate images, but not creating intricate texts.
  • There is a missing update. We see impressive behavior by LLMs. We rightly update that we've invented a surprisingly generally intelligent thing. But we should also update that this behavior surprisingly turns out to not require as much general intelligence as we thought.

Other comments on AGI soon

  • There's a seemingly wide variety of reasons that people I talk to think AGI comes soon. This seems like evidence for each of these hypotheses: that AGI comes soon is overdetermined; that there's one underlying crux (e.g.: algorithmic progress isn't needed to make AGI) that I haven't understood yet; that I talked to a heavily selected group of people (true); that people have some other reason for saying that AGI comes soon, and then rationalize that proposition.
  • I'm somewhat concerned that people are being somewhat taken in by hype (experiments systematically misinterpreted by some; the truth takes too long to put on its pants, and the shared narrative is already altered).
  • I'm kind of baffled that people are so willing to say that LLMs understand X, for various X. LLMs do not behave with respect to X like a person who understands X, for many X.
  • I'm pretty concerned that many people are fairly strongly deferring to others, in a general sense that includes updating off of other people's actions and vibes. Widespread deference has many dangers, which I list in "Dangers of deference".
  • I'm worried that there's a bucket error where "I think AGI comes soon." isn't separated from "We're going to be motivated to work together to prevent existential risk from AGI.".

My views on strategy

  • Alignment is really hard. No one has good reason to think any current ideas would work to make an aligned / corrigible AGI. If AGI comes, everyone dies.

  • If AGI comes in five years, everyone dies. We won't solve alignment well enough by then. This of course doesn't imply that AGI coming soon is less likely. However, it does mean that some people should focus on somewhat different things. Most people trying to make the world safe by solving AGI alignment should be open to trains of thought that likely will only be helpful in twenty years. There will be a lot of people who can't help the world if AGI comes in five years; if those people are going to stress out about how they can't help, instead they should work on stuff that helps in twenty or fifty years.

  • A consensus belief is often inaccurate, e.g. because of deference and information cascades. In that case, the consensus portfolio of strategies will be incorrect.

  • Not only that, but furthermore: Suppose there is a consensus belief, and suppose that it's totally correct. If funders, and more generally anyone who can make stuff happen (e.g. builders and thinkers), use this totally correct consensus belief to make local decisions about where to allocate resources, and they don't check the global margin, then they will in aggregate follow a portfolio of strategies that is incorrect. The make-stuff-happeners will each make happen the top few things on their list, and leave the rest undone. The top few things will be what the consensus says is most important——in our case, projects that help if AGI comes within 10 years. If a project helps in 30 years, but not 10 years, then it doesn't get any funding at all. This is not the right global portfolio; it oversaturates fast interventions and leaves slow interventions undone.

  • Because the shared narrative says AGI comes soon, there's less shared will for projects that take a long time to help. People don't come up with such projects, because they don't expect to get funding; and funders go on not funding such projects, because they don't see good ones, and they don't particularly mind because they think AGI comes soon.

Things that might actually work

Besides the standard stuff (AGI alignment research, moratoria on capabilities research, explaining why AGI is an existential risk), here are two key interventions:

  • Human intelligence enhancement. Important, tractable, and neglected. Note that if alignment is hard enough that we can't solve it in time, but enhanced humans could solve it, then making enhanced humans one year sooner is almost as valuable as making AGI come one year later.
  • Confrontation-worthy empathy. Important, probably tractable, and neglected.
    • I suspect there's a type of deep, thorough, precise understanding that one person (the intervener) can have of another person (the intervened), which makes it so that the intervener can confront the intervened with something like "If you and people you know succeed at what you're trying to do, everyone will die.", and the intervened can hear this.
    • This is an extremely high bar. It may go beyond what's normally called empathy, understanding, gentleness, wisdom, trustworthiness, neutrality, justness, relatedness, and so on. It may have to incorporate a lot of different, almost contradictory properties; for example, the intervener might have to at the same time be present and active in the most oppositional way (e.g., saying: I'm here, and when all is said and done you're threatening the lives of everyone I love, and they have a right to exist) while also being almost totally diaphanous (e.g., in fact not interfering with the intervened's own reflective processes). It may involve irreversible changes, e.g. risking inoculation effects and unilateralist commons-burning. It may require incorporating very distinct skills; e.g. being able to make clear, correct, compelling technical arguments, and also being able to hold emotional space in difficult reflections, and also being interesting and socially competent enough to get the appropriate audiences in the first place. It probably requires seeing the intervened's animal, and the intervened's animal's situation, so that the intervener can avoid being a threat to the intervened's animal, and can help the intervened reflect on other threats to their animal. Developing this ability probably requires recursing on developing difficult subskills. It probably requires to some extent thinking like a cultural-rationalist and to some extent thinking very much not like a cultural-rationalist. It is likely to have discontinuous difficulty——easy for some sorts of people, and then very difficult in new ways for other sorts of people.
    • Some people are working on related abilities. E.g. Circlers, authentic relaters, therapists. As far as I know (at least having some substantial experience with Circlers), these groups aren't challenging themselves enough. Mathematicians constantly challenge themselves: when they answer one sort of question, that sort of question becomes less interesting, and they move on to thinking about more difficult questions. In that way, they encounter each fundamental difficulty eventually, and thus have likely already grappled with the mathematical aspect of a fundamental difficulty that another science encounters.
    • Critch talks about empathy here, though maybe with a different emphasis.
Comments

Is there a specific thing you think LLMs won't be able to do soon, such that you would make a substantial update toward shorter timelines if there was an LLM able to do it within 3 years from now?

Well, making it pass people's "specific" bar seems frustrating, as I mentioned in the post, but: understand stuff deeply--such that it can find new analogies / instances of the thing, reshape its idea of the thing when given propositions about the thing taken as constraints, draw out relevant implications of new evidence for the ideas.

Like, someone's going to show me an example of an LLM applying modus ponens, or making an analogy. And I'm not going to care, unless there's more context; what I'm interested in is [that phenomenon which I understand at most pre-theoretically, certainly not explicitly, which I call "understanding", and which has as one of its sense-experience emanations the behavior of making certain "relevant" applications of modus ponens, and as another sense-experience emanation the behavior of making analogies in previously unseen domains that bring over rich stuff from the metaphier].

Alright, to check if I understand, would these be the sorts of things that your model is surprised by?

  1. An LLM solves a mathematical problem by introducing a novel definition which humans can interpret as a compelling and useful concept.
  2. An LLM which can be introduced to a wide variety of new concepts not in its training data, and after a few examples and/or clarifying questions is able to correctly use the concept to reason about something.
  3. An image diffusion model which is shown to have a detailed understanding of anatomy and 3D space, such that you can use it to transform a photo of a person into an image of the same person in a novel pose (not in its training data) and from a novel angle, with correct proportions and realistic joint angles for the person in the input photo.

Unfortunately, more context is needed.

An LLM solves a mathematical problem by introducing a novel definition which humans can interpret as a compelling and useful concept.

I mean, I could just write a python script that prints out a big list of definitions of the form

"A topological space where every subset with property P also has property Q"

and having P and Q be anything from a big list of properties of subsets of topological spaces. I'd guess some of these will be novel and useful. I'd guess LLMs + some scripting could already take advantage of some of this. I wouldn't be very impressed by that (though I think I would be pretty impressed by the LLM being able to actually tell the difference between valid proofs in reasonable generality). There are some versions of this I'd be impressed by, though. Like if an LLM had been the first to come up with one of the standard notions of curvature, or something, that would be pretty crazy.
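For concreteness, a script of the sort described might look like this. The property list is a placeholder (and the generated statements aren't checked for truth or novelty); the point is just that such definitions are cheap to enumerate:

```python
# Sketch of the definition-generating script described above. The properties
# are placeholders; any list of properties of subsets of a space would do.
# Nothing here checks whether a generated definition is true, novel, or useful.
from itertools import permutations

properties = ["compact", "connected", "open", "closed", "dense", "discrete"]

definitions = [
    f"A topological space where every {p} subset is also {q}"
    for p, q in permutations(properties, 2)
]

for d in definitions[:3]:
    print(d)
print(f"... {len(definitions)} candidate definitions total")
```

Mechanically enumerating candidate definitions is trivial; what would be impressive is the judgment to pick out the one in a thousand that's a compelling concept.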

An LLM which can be introduced to a wide variety of new concepts not in its training data, and after a few examples and/or clarifying questions is able to correctly use the concept to reason about something.

I haven't tried this, but I'd guess if you give an LLM two lists of things where list 1 is [things that are smaller than a microwave and also red] and list 2 is [things that are either bigger than a microwave, or not red], or something like that, it would (maybe with some prompt engineering to get it to reason things out?) pick up that "concept" and then use it, e.g. sorting a new item, or deducing from "X is in list 1" to "X is red". That's impressive (assuming it's true), but not that impressive.

On the other hand, if it hasn't been trained on a bunch of statements about angular momentum, and then it can--given some examples and time to think--correctly answer questions about angular momentum, that would be surprising and impressive. Maybe this could be experimentally tested, though I guess at great cost, by training a LLM on a dataset that's been scrubbed of all mention of stuff related to angular momentum (disallowing math about angular momentum, but allowing math and discussion about momentum and about rotation), and then trying to prompt it so that it can correctly answer questions about angular momentum. Like, the point here is that angular momentum is a "new thing under the sun" in a way that "red and smaller than microwave" is not a new thing under the sun.
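A crude sketch of the scrubbing step in that experiment, assuming a simple keyword/pattern filter (the patterns here are illustrative; a real scrub would also have to catch formulas, diagrams, and paraphrases of angular momentum):

```python
# Crude sketch of the dataset-scrubbing filter proposed above: drop any
# training document that mentions angular momentum (by name, by a related
# term, or by a common formula), while keeping plain momentum and rotation.
import re

BLOCKLIST = re.compile(
    r"angular\s+momentum|moment\s+of\s+inertia|L\s*=\s*r\s*[x×]\s*p",
    re.IGNORECASE,
)

def keep_document(text: str) -> bool:
    # Keep the document only if no blocked pattern appears in it.
    return BLOCKLIST.search(text) is None

docs = [
    "Momentum is conserved in collisions.",
    "The angular momentum of the wheel stays constant.",
    "A spinning top rotates about its axis.",
]
kept = [d for d in docs if keep_document(d)]
print(kept)  # the angular-momentum sentence is filtered out
```

Even this toy version shows why the experiment would be costly: the filter has to remove every trace of the concept while leaving its ingredients (momentum, rotation) intact.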

I think the argument here basically implies that language models will not, in the next 3 years, produce any novel, useful concepts that get substantial adoption in any existing industry or research field (e.g. >10% of people use it, or a widely cited paper), and that if one did, then the end would be nigh (or much nigher).

To be clear, you might get new concepts from language models about language if you nail some Chris Olah style transparency work, but the language model itself will not output ones that aren't about language in the text.

I roughly agree. As I mentioned to Adele, I think you could get sort of lame edge cases where the LLM kinda helped find a new concept. The thing that would make me think the end is substantially nigher is if you get a model that's making new concepts of comparable quality at a comparable rate to a human scientist in a domain in need of concepts.

if you nail some Chris Olah style transparency work

Yeah that seems right. I'm not sure what you mean by "about language". Sorta plausibly you could learn a little something new about some non-language domain that the LLM has seen a bunch of data about, if you got interpretability going pretty well. In other words, I would guess that LLMs already do lots of interesting compression in a different way than humans do it, and maybe you could extract some of that. My quasi-prediction would be that those concepts

  1. are created using way more data than humans use for many of their important concepts; and
  2. are weirdly flat, and aren't suitable out of the box for a big swath of the things that human concepts are suitable for.
Max H:

(Obviously it is somehow feasible to make an AGI, because evolution did it.)

This parenthetical is one of the reasons why I think AGI is likely to come soon.

The example of human evolution provides a strict upper bound on the difficulty of creating (true, lethally dangerous) AGI, and of packing it into a 10 W, 1000 cm³ box.

That doesn't mean that recreating the method used by evolution (iterative mutation over millions of years at planet scale) is the only way to discover and learn general-purpose reasoning algorithms. Evolution had a lot of time and resources to run, but it is an extremely dumb optimization process that is subject to a bunch of constraints and quirks of biology, which human designers are already free of.

To me, LLMs and other recent AI capabilities breakthroughs are evidence that methods other than planet-scale iterative mutation can get you something, even if it's still pretty far from AGI. And I think it is likely that capabilities research will continue to lead to scaling and algorithms progress that will get you more and more something. But progress of this kind can't go on forever - eventually it will hit on human-level (or better) reasoning ability. 

The inference I make from observing both the history of human evolution and the spate of recent AI capabilities progress is that human-level intelligence can't be that special or difficult to create in an absolute sense, and that while evolutionary methods (or something isomorphic to them) at planet scale are sufficient to get to general intelligence, they're probably not necessary.

Finally: I also see a fair number of specific "blockers", as well as some indications that existing things don't have properties that would scare me.


I mostly agree with the point about existing systems, but I think there are only so many independent high-difficulty blockers which can "fit" inside the AGI-invention problem, since evolution somehow managed to solve them all through inefficient brute force. LLMs are evidence that at least some of the (perhaps easier) blockers can be solved via methods that are tractable to run on current-day hardware on far shorter timescales than evolution.

 

When there is a simple enlightening experiment that can be constructed out of available parts (including theories that inform construction), it can be found by expert intuition, without clear understanding. When there are no new parts for a while, and many experiments have been tried, this is evidence that further blind search is less likely to produce results, and that more complicated experiments, which can only be designed with stronger understanding, are necessary.

Recently, there are many new parts available for AI tinkering, some themselves obtained from blind experimentation (scaling gives new capabilities that couldn't have been predicted to result from particular scaling experiments). Not enough time has passed, or effort been spent, to rule out further significant advancement from simple tinkering with these new parts; and scaling itself hasn't run out of steam yet, so it might by itself deliver even more new parts for further tinkering.

So while it's true that there is no reason to expect specific advancements, there is still reason to expect advancements of unspecified character for at least a few years, more of them than usual. This wave of progress might run out of steam before AGI, or it might not; there is no clear theory to say which. Current capabilities seem sufficiently impressive that even modest unpredictable advancement might prove sufficient, which is an observation that distinguishes the current wave of AI progress from previous ones.

I think the current wave is special, but that's a very far cry from being clearly on the ramp up to AGI.

The point is, it's still a matter of intuitively converting the impressiveness of current capabilities, and the new parts available for tinkering that hasn't yet been done, into a probability of this wave petering out before AGI. The arguments for AGI "being overdetermined" can be amended into arguments for particular (kinds of) sequences of experiments looking promising, shifting the estimate once they are taken into account. Since failures of such experiments are not independent, the estimate can start going down as soon as scaling stops producing novel capabilities, or reaches the limits of economic feasibility, or a year or two passes without significant breakthroughs.

Right now, it's looking grim, but a claim I agree with is that planning for the possibility of AGI taking 20+ years is still relevant; nobody actually knows that AGI coming soon is inevitable. I think the following few years will change this estimate significantly either way.

I'm not really sure whether or not we disagree. I did put "3%-10% probability of AGI in the next 10-15ish years".

"I think the following few years will change this estimate significantly either way."

Well, I hope that this is a one-time thing. I hope that if in a few years we're still around, people go "Damn! We maybe should have been putting a bit more juice into decades-long plans! And we should do so now, though a couple more years belatedly!", rather than going "This time for sure!" and continuing to not invest in the decades-long plans. My impression is that a lot of people used to work on decades-long plans and then shifted recently to 3-10 year plans, so it's not like everyone's being obviously incoherent. But I also have an impression that the investment in decades-plans is mistakenly low; when I propose decades-plans, pretty nearly everyone isn't interested, with their cited reason being that AGI comes within a decade.

FWIW I think that confrontation-worthy empathy and use of the phrase "everyone will die" to describe AI risk are approximately mutually exclusive with each other, because communication using the latter phrase results from a failure to understand communication norms.

(Separately I also think that "if we build AGI, everyone will die" is epistemically unjustifiable given current knowledge. But the point above still stands even if you disagree with that bit.)

What I mean by confrontation-worthy empathy is about that sort of phrase being usable. I mean, I'm not saying it's the best phrase, or a good phrase to start with, or whatever. I don't think inserting Knightian uncertainty is that helpful; the object-level stuff is usually the most important thing to be communicating.

This maybe isn't so related to what you're saying here, but I'd follow the policy of first making it common knowledge that you're reporting your inside views (which implies that you're not assuming that the other person shares those views), and then stating your inside views. In some scenarios you describe, I get the sense that Person 2 doesn't actually want Person 1 to state more modest models; they want common knowledge that they won't already share those views / won't already have the evidence that would make them share those views.

"I don't think inserting Knightian uncertainty is that helpful; the object-level stuff is usually the most important thing to be communicating."

The main point of my post is that accounting for disagreements about Knightian uncertainly is the best way to actually communicate object level things, since otherwise people get sidetracked by epistemological disagreements.

"I'd follow the policy of first making it common knowledge that you're reporting your inside views"

This is a good step, but one part of the epistemological disagreements I mention above is that most people consider inside views to be a much less coherent category, and much less separable from other views, than most rationalists do. So I expect that more such steps are typically necessary.

"they're wanting common knowledge that they won't already share those views"

I think this is plausibly true for laypeople/non-ML-researchers, but for ML researchers it's much more jarring when someone is making very confident claims about their field of expertise that they themselves strongly disagree with.