I also asked this question on the Effective Altruism Forum. One informative answer there, from Nicole Nohemi, has already summarized a range of relevant forecasts from Metaculus, as well as relevant predictions or models from Shane Legg and Demis Hassabis of the leading AI research company DeepMind, and from AI alignment researchers Eliezer Yudkowsky, Paul Christiano and Ajeya Cotra. This question has been cross-posted to LessWrong to broaden the conversation to the rationality community as well.

Over the last year, a rapidly growing proportion of long-termists have been expressing confident convictions that the timeline to smarter-than-human AI is much shorter than previously predicted. This appears to be a shift in perspective among several hundred long-termists, yet among the dozens whose opinions I've read, specific numbers are almost never provided.

Based on the way different people are talking about it, I wouldn't be surprised if they thought the timeline is 10-20 years, or 5-10 years, or even 2-3 years. I've talked with others who are also concerned and open-minded about one short AI timeline or another, but who haven't done the research themselves yet, or haven't had much opportunity to learn from those who have. We want to understand better, but basic information crucial to that understanding, like the numbers behind different models or timelines, isn't being presented. We want and need to know: what are the numbers?

5 Answers

calef

Apr 22, 2022

160

I think the difficulty with answering this question is that many of the disagreements boil down to differences in estimates for how long it will take to operationalize lab-grade capabilities. Say we have intelligences that are narrowly human / superhuman on every task you can think of (which, for what it’s worth, I think will happen within 5-10 years). How long before we have self-replicating factories? Until foom? Until things are dangerously out of our control? Until GDP doubles within one year? In what order do these things happen? Etc. etc.

If I got anything out of the thousands of words of debate on the site in the last couple of months, it’s the answers to these questions that folks seem to disagree about (though I think I only actually have a good sense of Paul’s answers to these). Also curious to see more specific answers / timelines.

I think the difficulty with answering this question is that many of the disagreements boil down to differences in estimates for how long it will take to operationalize lab-grade capabilities.

The same point was made on the Effective Altruism Forum, and it's a considerable one, though it's one I expected.

The problem frustrating me is that the number of individuals who have volunteered their own numbers is so low as to be an insignificant minority. A single person doesn't disagree with themselves unless there is model uncertainty or the like. Unless individ... (read more)

Evan R. Murphy

Apr 22, 2022

100

GovAI is working on a survey of ML experts about timelines (among other things). It looks like this is supposed to replicate the influential Grace et al. (2017) survey.

They are asking various questions about human-level AI forecasts and predicted AI progress milestones. You can see the questions and other info about it here: https://osf.io/b239h/

It looks like the survey may have actually been conducted back in 2019, but the results aren't published yet. (Update: The results are published now.) So it's moving slowly, but once out it may be a good formal resource on timelines to complement the many informal conversations scattered across the forums.

I'm not sure of the status of this, but I would be interested to know if someone has a chance to reach out to Baobao Zhang, who I believe is leading the effort.

I just sent Baobao an email to see if she has any information about when the results might be available.

Evan_Gaensbauer
Thank you!

This is great news! This could even be a topic for one of our meetups!

Evan R. Murphy
Indeed! :)

habryka

Apr 22, 2022

100

The shortest ones I've heard have been median estimates of 7 years.

Thanks. Do you have a sense of what proportion of the long-termists you know are forecasting that way? Or do you know of some way one might learn more about forecasts like this and the reasoning or models behind them?

AnthonyRepetto

Apr 24, 2022

80

For an intelligence capable of continually adding new domains of knowledge, generating its own schemas, and making accurate predictions out-of-distribution? 5 to 10 years.

Why: AlphaGo was only six years ago. The global effort per year toward development is still increasing rapidly, and chips are not a bottleneck. We know for sure that human brains, with our 100T synapses, do not need to be replicated in every detail before comparable performance is reached. As proof: Google's RETRO language model outperforms the average human while using only one 13,000th of a human brain's synapses. Our Broca's area alone is many times larger; our brains are clearly bloatware.
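As a rough sanity check of that ratio, here is a minimal sketch, assuming ~100 trillion synapses per human brain and RETRO's reported ~7.5 billion parameters, and treating one parameter as loosely comparable to one synapse (itself a contestable assumption):

```python
# Back-of-the-envelope check of the "one 13,000th" figure above.
# Assumptions (both contestable): the human brain has ~100 trillion
# synapses, and each of RETRO's ~7.5 billion parameters is loosely
# comparable to one synapse.
human_synapses = 100e12   # ~100T synapses
retro_params = 7.5e9      # RETRO's reported parameter count

ratio = human_synapses / retro_params
print(f"Brain-to-RETRO ratio: ~{ratio:,.0f}x")  # -> ~13,333x
```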

So, AI performance will not be hindered by hardware... the limitation is its capacity to learn from few examples, and to generalize. Looking at how image recognition once required 10 million examples to learn a cat, and comparing that to the ten photos now needed to construct a high-res 3D render, I don't see 'few examples' being a problem. Generalization is more difficult, because we must expand the search of architectures to include the more neglected and less performant models, like Numenta's. I still don't expect that stumbling block to take long, and once a model can generalize out-of-distribution from conceptualizations even a little bit, then we have a matter of months, a year at most, before it's overwhelmingly good.

When FOOM? We're late to that party: it's already happening. If we had told any futurist in 2010 that we were getting 10x, 25x, and more, regularly, across numerous performance metrics, for various applications... they would have called that pace a FOOM. It just doesn't need artificial general intelligence to happen... human + machine, and AutoML, seem to be plenty for self-acceleration. By the time AGI is entrusted with anything, we're likely to have only a few multiples of algorithmic performance remaining! Most of the FOOM, on a log chart, will happen before AGI, and that's a very specific prediction.
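To make that prediction concrete, here is a purely illustrative sketch; the growth rate, dates, and remaining multiple below are hypothetical assumptions of mine, not figures from the answer above:

```python
import math

# Purely illustrative: suppose algorithmic performance improves ~10x
# every 2 years from 2010, AGI arrives in 2030, and only "a few
# multiples" (say 5x) remain at that point. All numbers hypothetical.
tenfold_every_years = 2
start_year, agi_year = 2010, 2030
remaining_multiple_at_agi = 5

pre_agi_ooms = (agi_year - start_year) / tenfold_every_years  # 10 OOMs
post_agi_ooms = math.log10(remaining_multiple_at_agi)         # ~0.7 OOM

share_before = pre_agi_ooms / (pre_agi_ooms + post_agi_ooms)
print(f"Share of log-scale progress before AGI: {share_before:.0%}")  # ~93%
```

On these made-up numbers, over 90% of the log-scale gains land before AGI, which is the shape of the claim.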

Flaglandbase

Apr 23, 2022

10

My research toward writing a novel about this subject seems to indicate the experts think chip speeds and densities are improving so slowly that there is no chance of building an AI smart enough to begin improving itself for at least fifteen to twenty years, if even then. The only way it might happen sooner is if there were an emergent worldwide effort composed of many separate researchers focusing on different sub-problems of the same highly focused goal, like trying to invent the simplest possible self-replicating nanobot or the smallest computing element.

Can you help me understand why chip speeds are a bottleneck? Why couldn't an AI continue to advance by harnessing increased parallelized cloud compute?

Flaglandbase
This is due to the vast difference in capabilities between ICs and brains, which still has a long way to go before it's overcome. A computer merely as powerful as a human brain would be the size of a large room and use far more energy.