Again, I invite your feedback on this snippet from an intelligence explosion analysis Anna Salamon and I have been working on. This section is less complete than the others; missing text is indicated with brackets: [].

_____

 

We do not know what it takes to build a digital intelligence. Because of this, we do not know what groundwork will be needed to understand intelligence, nor how long it may take to get there.

Worse, it’s easy to think we do know. Studies show that except for weather forecasters (Murphy and Winkler 1984), nearly all of us give inaccurate probability estimates when we try, and in particular we are overconfident in our predictions (Lichtenstein, Fischhoff, and Phillips 1982; Griffin and Tversky 1992; Yates et al. 2002). Experts, too, often do little better than chance (Tetlock 2005), and are outperformed by crude computer algorithms (Grove and Meehl 1996; Grove et al. 2000; Tetlock 2005). So if you have a gut feeling about when digital intelligence will arrive, it is probably wrong.

But uncertainty is not a “get out of prediction free” card. You either will or will not save for retirement or support AI risk mitigation. The outcomes of these choices will depend, among other things, on whether digital intelligence arrives in the near future. Should you plan as though there are 50/50 odds of reaching digital intelligence in the next 30 years? Are you 99% confident that digital intelligence won’t arrive in the next 30 years? Or is it somewhere in between?
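
To make this concrete, here is a toy expected-value sketch in Python. The two stylized plans and all payoff numbers are placeholders we have invented for illustration, not estimates from this analysis; the only point is that the best choice flips as one’s probability of near-term digital intelligence changes.

```python
# Toy expected-value comparison: how the best plan changes with one's
# probability that digital intelligence arrives within 30 years.
# All payoffs below are hypothetical placeholders, not estimates.

def expected_value(p_soon, value_if_soon, value_if_not):
    """Expected value of a plan given P(digital intelligence within 30 years)."""
    return p_soon * value_if_soon + (1 - p_soon) * value_if_not

# Two stylized plans with invented payoffs (arbitrary units).
plans = {
    "save conventionally for retirement":   {"if_soon": 20, "if_not": 100},
    "shift some resources to AI risk work": {"if_soon": 80, "if_not": 70},
}

for p in (0.01, 0.2, 0.5):
    best = max(plans, key=lambda name: expected_value(
        p, plans[name]["if_soon"], plans[name]["if_not"]))
    print(f"P(AI within 30 years) = {p:.2f}: best plan -> {best}")
```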

Other than using one’s gut for prediction or deferring to an expert, how might one estimate the time until digital intelligence? We consider several strategies below.

Time since Dartmouth. We have now seen 60 years of work toward digital intelligence since the seminal Dartmouth conference on AI, but digital intelligence has not yet arrived. This seems, intuitively, like strong evidence that digital intelligence won’t arrive in the next minute, good evidence it won’t arrive in the next year, and significant but far from airtight evidence that it won’t arrive in the next few decades. Such intuitions can be formalized into models that, while simplistic, can form a useful starting point for estimating the time to digital intelligence.1
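
As a rough illustration of how the toy model in footnote 1 behaves, the Python sketch below assumes a uniform prior over the coin’s annual weighting (one of many defensible priors, and the answer is sensitive to this choice), updates on 60 years without AI, and reads off arrival probabilities at a few horizons.

```python
import numpy as np

# Toy version of the coin model in footnote 1: each year an "AI coin" with
# unknown annual success probability q is tossed; we have seen 60 years of
# "no AI". The uniform grid prior over q below is an assumption, and the
# results are quite sensitive to that choice.
years_without_ai = 60
q = np.linspace(1e-7, 0.99, 10_000)          # candidate annual probabilities
prior = np.ones_like(q) / len(q)             # uniform prior (an assumption)
likelihood = (1 - q) ** years_without_ai     # P(60 failures | q)
posterior = prior * likelihood
posterior /= posterior.sum()

# Posterior probability of AI arriving within a few horizons.
for horizon in (1, 30, 200):
    p_arrival = np.sum(posterior * (1 - (1 - q) ** horizon))
    print(f"P(AI within {horizon:>3} years | 60 years without AI) ~ {p_arrival:.3f}")
```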

Simple hardware extrapolation. Vinge (1993) wrote: “Based on [hardware trends], I believe that the creation of greater-than-human intelligence will occur [between 2005 and 2030].” Vinge seems to base his prediction on estimates of the “raw hardware power that is present in organic brains.” In a 2003 reprint of his article, Vinge notes the insufficiency of this reasoning: even if we have hardware sufficient for AI, we may not have solved the software problem.
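
For illustration, the hardware side of such an extrapolation fits in a few lines. The compute figures and doubling time below are assumptions inserted for the sketch; published estimates of the brain’s raw computing power span several orders of magnitude.

```python
import math

# Sketch of a Vinge-style hardware extrapolation. Both compute figures and
# the doubling time are assumptions inserted for illustration.
current_flops = 1e16          # assumed compute available today (FLOPS)
brain_flops = 1e18            # assumed brain-equivalent compute (FLOPS)
doubling_time_years = 1.5     # assumed hardware doubling time

doublings_needed = math.log2(brain_flops / current_flops)
years_to_parity = doublings_needed * doubling_time_years
print(f"Doublings needed: {doublings_needed:.1f}")
print(f"Years to hardware parity on these assumptions: {years_to_parity:.1f}")
# As Vinge himself notes, this says nothing about when the software problem
# will be solved.
```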

Extrapolating the requirements for whole brain emulation. One way to solve the software problem is to scan and emulate the human brain. Thus Ray Kurzweil (2005) extrapolates progress in hardware, brain scanning, and our understanding of the brain to predict that (low-resolution) whole brain emulation can be achieved by 2029. Many neuroscientists think this estimate is too optimistic, but the basic approach has promise.
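
A minimal sketch of this style of extrapolation, under invented numbers: treat each requirement (compute, scanning throughput, brain models) as improving exponentially and take the latest crossing time as the earliest plausible date for whole brain emulation. Every figure below is a placeholder, not a claim about the actual state of these fields.

```python
import math

# Minimal sketch of Kurzweil-style extrapolation for whole brain emulation:
# assume each requirement improves exponentially and take the latest
# crossing time as the earliest plausible date. All numbers are invented.
requirements = {
    # name: (current level, required level, assumed doubling time in years)
    "compute (FLOPS)":        (1e16, 1e18, 1.5),
    "scan throughput":        (1e3,  1e6,  2.0),   # arbitrary units
    "validated brain models": (1e2,  1e5,  3.0),   # arbitrary units
}

def years_to_cross(current, required, doubling_time):
    return math.log2(required / current) * doubling_time

crossings = {name: years_to_cross(*spec) for name, spec in requirements.items()}
bottleneck = max(crossings, key=crossings.get)
for name, years in sorted(crossings.items(), key=lambda kv: kv[1]):
    print(f"{name:<24} ~{years:5.1f} years")
print(f"Earliest WBE date is set by the bottleneck: {bottleneck}")
```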

Tracking progress in machine intelligence. Many folks intuitively estimate the time until digital intelligence by asking what proportion of human abilities today’s software can match, and how quickly machines are catching up. However, it is not clear how to divide up the space of “human abilities,” nor how much each one matters. We also don’t know whether machine progress will be linear or will include sudden jumps. Watching an infant’s progress in learning calculus might lead one to conclude that the child will not learn it until the year 3000, until the child suddenly learns it in a spurt at age 17. Still, machine progress in chess performance has been regular,2 and it may be worth checking whether a measure can be found for which both: (a) progress is smooth enough to extrapolate; and (b) when performance rises to a certain level, we can expect digital intelligence.3
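
To show the mechanics of (a) and (b), the sketch below fits a linear trend to a performance measure and extrapolates to a threshold. The years, scores, and “human-level” target are made up; real data, such as the chess ratings in footnote 2, would be substituted in, and the caveat about sudden jumps still applies.

```python
import numpy as np

# Sketch of strategies (a) and (b): fit a trend to a smooth performance
# measure and extrapolate to a level at which we would expect digital
# intelligence. The data below are made up purely to show the mechanics.
years  = np.array([1990, 1995, 2000, 2005, 2010])
scores = np.array([40.0, 48.0, 55.0, 63.0, 70.0])   # hypothetical benchmark
target = 100.0                                       # hypothetical "human level"

slope, intercept = np.polyfit(years, scores, 1)      # simple linear fit
year_at_target = (target - intercept) / slope
print(f"Trend: {slope:.2f} points/year; target reached around {year_at_target:.0f}")
# Caveat from the text: progress may jump rather than stay linear, in which
# case extrapolation like this can be badly off in either direction.
```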

Estimating progress in scientific research output. Imagine a man digging a ten-kilometer ditch. If he digs 100 meters in one day, you might predict the ditch will be finished in 100 days. But what if 20 more diggers join him, and they are all given steroids? Now the ditch might not take so long. Analogously, when predicting progress toward digital intelligence it may be useful to consider not how much progress is made per year, but instead how much progress is made per unit of research effort. Thus, if we expect jumps in the amount of effective research effort (for reasons given in section 2.2), we should expect analogous jumps in progress toward digital intelligence.
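
The ditch-digging arithmetic can be made explicit: the completion date is remaining work divided by effective effort, so a jump in the number of diggers or their productivity produces a corresponding jump in the schedule. The quantities below are illustrative placeholders.

```python
# Toy version of the ditch-digging analogy: the completion date depends on
# remaining work divided by effective effort, not on calendar-years of past
# progress. All quantities are illustrative placeholders.
remaining_meters = 9_900         # ditch left to dig after the first day
meters_per_digger_day = 100      # one digger's daily progress

def days_remaining(diggers, productivity_multiplier=1.0):
    daily_progress = diggers * productivity_multiplier * meters_per_digger_day
    return remaining_meters / daily_progress

print(f"1 digger, no steroids:  {days_remaining(1):.0f} more days")
print(f"21 diggers on steroids: {days_remaining(21, productivity_multiplier=2.0):.1f} more days")
```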

Given the long history of confident false predictions within AI, and the human tendency toward overconfidence in general, it would seem misguided to be 90% confident that AI will succeed in the coming decade.4 But 90% confidence that digital intelligence will not arrive before the end of the century also seems wrong, given that (a) many seemingly difficult AI benchmarks have been reached, (b) many factors, such as more hardware and automated science, may well accelerate progress toward digital intelligence, and (c) whole brain emulation may well be a relatively straightforward engineering problem that will succeed by 2070 if not 2030. There is a significant probability that digital intelligence will arrive within a century, and additional research can improve our estimates (as we discuss in section 5).


________
1 We can make a simple formal model of this by assuming (with much simplification) that every year a coin is tossed to determine whether we will get AI that year, and that we are initially unsure of the weighting on that coin. The 60 years without AI that we have seen so far are then highly unlikely under weightings where the coin comes up “AI” in 90% of years (the probability of 60 such failures would be 10^-60), and unlikely even if it comes up “AI” in 10% of years (probability 0.18%, or about one time in 500), whereas they are the expected outcome if the coin comes up “AI” in, say, 1% of years, or for that matter in 0.0001% of years. Thus, depending on one’s prior over coin weightings, in this toy model one should update strongly against weightings under which AI would have been likely within the next minute, or even the next year, while leaving the relative probabilities of “AI in 200 years” and “AI in 2 million years” more or less untouched.
2 See http://lukeprog.com/special/chess.pdf.
3 It is probably also worth keeping crude track of “progress in AI” as a whole, even though there is no guarantee that progress would be linear. [It would be nice to add a crude attempt to nevertheless quantify which areas of human intelligence have been substantially matched by machines. Ideal would be to take some canonical-sounding article from some decades ago that listed domains we hadn’t yet matched with computers, and then to note something like: of these domains, (3) has been solved, and (1) and (4) have seen substantial progress. GEB has a suitable listing, but it might be better to use a more canonical article if we can find one.]
4 Unless, that is, you have a kind of evidence that is strongly different from the kinds of evidence possessed by the many researchers since Dartmouth who incorrectly predicted that their particular AI paradigm, or human-level AI in general, was about to succeed.

________

20 comments

but instead how much progress is made per unit of research effort. Thus, if we expect jumps in the amount of effective research effort (for reasons given in section 2.2.),

Where is this 2.2? This statement contradicts my current understanding, which is that research is getting more expensive on a per-discovery basis year over year.

It cringes me slightly to see the verb "arrive" used to denote implementation of AGI.


I don't think the implications are as extreme as you might think. At least, I generally hear "arrive" used in reference to any radical technological development, including such mundanities as LCD displays (back when CRTs were the norm).

Of course, one might still avoid the word when making press releases.


I voted you up, because I agree with your feeling.

But I have to assume English is not your native language, since "It cringes me ..." is seriously ungrammatical. It should be "I cringe slightly ..."


It would be nice if the Time Since Dartmouth analysis wasn't so simple. Instead of statistically independent weighted trials, maybe take some inspiration from the hope function discussion here.

But what if 20 more diggers join him, and they are all given steroids?

Amphetamines would be better.

We already have a lot of digital intelligence greater than human intelligence, and have for a long time, in particular areas. Every time that computers beat us at some task, we pretend that task doesn't really count.

Digital intelligence becomes greater than ours when it can solve problems better. And when it does, it usually solves them in different ways than ours does.

One of the most interesting results I’ve seen in a long time was these robotic fingers dribbling a ping pong ball. As it turned out, the secret wasn’t fancy algorithms, but fast electronics. If your feedback loops are fast enough, simple error feedback is enough. The same thing happened with Deep Blue against Kasparov and with Watson on Jeopardy. They didn’t use our algorithms, but algorithms that leverage the advantages of digital hardware.
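
(A sketch, with invented plant and gain values rather than anything from the actual dribbling demo, of why fast loops let simple error feedback work: the same naive proportional controller with the same aggressive gain diverges at a slow loop rate but settles quickly at a fast one.)

```python
# Sketch of "fast electronics beat fancy algorithms": a plain proportional
# controller with a fixed, aggressive gain, run at two loop rates. The
# plant, gain, and rates are invented for illustration only.
def final_error(loop_hz, gain=50.0, seconds=2.0, target=1.0):
    dt = 1.0 / loop_hz
    position = 0.0
    for _ in range(int(seconds * loop_hz)):
        error = target - position
        position += gain * error * dt      # simple error feedback, nothing fancy
    return abs(target - position)

for hz in (10, 1000):
    print(f"{hz:>4} Hz loop: |error| after 2 s = {final_error(hz):.3g}")
```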

We've been trying to build our intelligence in machines, but machines beat us with algorithms that aren't our algorithms, and leverage the advantages that machines have over us in speed, computation, and memory.

When computers are more intelligent than us, there will probably still be areas where humans 1.0 are still better. Likely the real winner will merge the two technologies.

The question isn’t so much how to make programs that exceed human performance at any particular cognitive task; it is how to make programs that can reach or exceed human performance across the entire range of cognitive tasks we can deal with, and that we can expect to handle challenges we haven’t yet encountered at least as well as we would.

In fewer words, mastering the trick of cross-domain optimization.

I don't think that's a good question at all.

My question is how to create more value, and trying to be better than humans in all things likely yields a suboptimal result for creating value for me; it force-feeds problems computers aren’t good at, while starving problems computers are good at.

The context of the OP, the hypothetical intelligence explosion, pretty much assumes this interpretation.

At the very least, it assumes that an AGI will be G enough to take a look at its own "code" (whatever symbolic substrate it uses for encoding the computations that define it, which may not necessarily look like the "source code" we are familiar with, though it may well start off being a human invention) and figure out how to change that code so as to become an even more effective optimizer.

"Create more value" doesn't in and of itself lead to an intelligence explosion. It's something that would be nice to have, but not a game-changer.

That cross-domain thing, which is where we still have the lead, is a game-changer. (Dribbling a ping-pong ball is cute, but I want to know what the thing will do with an egg. Dribbling the egg is right out. Figuring out that the egg is food, that’s the kind of thing you want an AGI to be capable of.)

I think the real question is: when will computers be good enough that humans don’t need to work? They’re currently better than us at specialized tasks, but they are completely and utterly incapable of operating without us to guide them. People, on the other hand, have lived without computers for millennia.

You could try taking an outside view: go through historical predictions of technological development, and compare how long it actually took to realize a technology to how long the experts predicted it would take.

The FHI in one of its status reports listed a paper being worked on that was similar to that idea:

  • Meyer, A., Hillerbrand, R., and Bostrom, N. (2007). “Predicting technological progress: The predictions of the 1960s revisited,” in preparation.

After heavily googling around, I couldn't find out anything about what happened, so I emailed Hillerbrand (no longer with the FHI) back on 9 September and asked. No reply.

Meyer is Andrew Meyer; I just pinged him as well.

Bostrom said the paper never even reached draft form, and the analysis didn't turn out to be all that productive. The main thing he learned was that predictions from the 1960s weren't even specified well enough to be able to tell whether they had come true or not.

Meyer just replied; I've forwarded it to you.

Please do reply to my comment here or email me if you hear back from them!

If we're estimating time to AI, aren't expert surveys of any interest? http://www.aleph.se/andart/archives/2011/04/when_will_we_get_our_robot_overlords.html

Also, thanks to HamletHenna for linking the hope function - I was going to, but then I thought 'I am the only one who seems to find it even a little bit interesting or enlightening; perhaps something is wrong with me'.

how might one estimate the time until digital intelligence? We consider several strategies below.

A good start - but you should probably give an estimate, or refer to a few others who have done so besides Vinge and Kurzweil. A probability density function would be neatest.

We do not know what it takes to build a digital intelligence. Because of this, we do not know what groundwork will be needed to understand intelligence, nor how long it may take to get there.

This sentence doesn’t cover all evolution-based approaches, but that’s not necessarily critically important.

Should you plan as though there are 50/50 odds of reaching digital intelligence in the next 30 years? Are you 99% confident that digital intelligence won’t arrive in the next 30 years? Or is it somewhere in between?

People don’t understand VoI, opportunity cost, etc. You need to be more explicit about how different beliefs about what is likely imply different optimal actions.

guts

"gut"

Many neuroscientists think this estimate is too optimistic, but the basic approach has promise.

Many neuroscientists think this estimate is far too optimistic [cite], but the basic approach has promise [cite].

But what if 20 more diggers join him, and they are all given steroids?

Or food, even.

Include an alternative analogy, such as a shovel breaking, to correspond to difficulty writing threaded programs, etc.