A few months ago, Anna Salamon and I began to write an academic overview of intelligence explosion scenarios — something we could hand to people to explain all our major points in one brief article.

We encountered two major problems.

First: The Summit happened, taking all of our time. Then I was made Executive Director, taking all of my time in a more persistent way.

Second: Being thorough and rigorous in an overview of intelligence explosion requires deep knowledge of a huge spectrum of science and philosophy: the history of AI progress, the track record of planning for the future, AI architectures, hardware progress, algorithmic progress, massive datasets, neuroscience, factors in the speed of scientific progress, embryo selection, whole brain emulation, properties of digital minds, AI convergent instrumental values, self-improvement dynamics, takeoff scenarios, heuristics and biases, unipolar and multipolar intelligence explosion scenarios, human values and value extrapolation, decision theory, arms races, human dynamics of technological development, technological forecasting, the economics of machine intelligence, anthropics, evolution, AI-boxing, and much more. Because we were trying to write a short article, we kept having to consume and compress an entire field of knowledge into a single paragraph (or even a single sentence!) with the perfect 2-8 citations, which occasionally meant several days of work for a single paragraph. (This is an extreme example, but it's the kind of problem we often encountered, to varying degrees.)

So, we've decided to take a different approach and involve the broader community.

We'll be posting short snippets, short pieces of the puzzle, for feedback from the community. Sometimes we'll pose questions, or ask for references about a given topic, or ask for suggested additions to the dialectic we present.

In the end, we hope to collect and remix the best and most essential snippets, incorporate the feedback and additions provided by the community, and write up the final article.

Think of it as a Polymath Project for intelligence explosion analysis. It's collaborative science and philosophy. Members of Less Wrong tend to be smart, and each one has deep knowledge of one or a few fields that we may not have. We hope you'll join us, and contribute your expertise to this project.

I'll keep a table of contents of all the snippets here, as they are published.

Draft #1:

  1. Introduction
  2. Types of digital intelligence
  3. Why designing digital intelligence gets easier over time
  4. How long before digital intelligence?
  5. From digital intelligence to intelligence explosion
  6. [not finished]

Draft #2:

  1. Snippet 1
  2. ...

15 comments

I don't think you can use "explosion" as a mass noun. I would recommend saying "explosions" or "explosion scenarios" instead, as you have done in some places.

I agree with the second sentence (the recommendation, in particular, of "explosion scenarios").

As for the first, my agreement is partial. One could say "overview of intelligence explosion", but only if one is thinking of "explosion" specifically as a nominalization of the verb "explode". (Example: "Fireworks commonly explode" -> "The explosion of fireworks is a common occurrence".) In this case the phrase "intelligence explosion" would be analogous to a phrase like "product development" (and one can certainly speak of an "overview of product development"). This is, however, a fairly rare (and rather technical-sounding) usage. The more common meaning of "explosion" is "instance of something exploding", as in "I just saw a huge explosion!". With this meaning, "overview of intelligence explosion" is completely wrong, just like "overview of extinction event" would be.

Like you (I suspect), I initially read the phrase with the second meaning, and it sounded awful at first, before the other interpretation occurred to me a second or two later. This suggests sticking to the second sense of "explosion", and writing "overview of intelligence explosion scenarios".

Thanks. I did consider the other meaning. Note that in "product development", a product is being developed, while in "intelligence explosion", intelligence is exploding. That doesn't by itself make the interpretation impossible; "intelligence explosion" could instead be analogous to "product evolution", say. But the distracting interpretation where intelligence is being exploded made the interpretation of "explosion" as a nominalization of "explode" seem even more unnatural, as did the use elsewhere of "an intelligence explosion" and "the intelligence explosion". (Likewise, in the case of "the explosion of fireworks", at first I read that as fireworks being exploded and not fireworks exploding, but there it doesn't matter, because fireworks being exploded makes sense.)

[anonymous]:

Thanks. Note that in "product development", the product is being developed, whereas in "intelligence explosion", intelligence is not being exploded.

[This comment is no longer endorsed by its author]

...with the perfect 2-8 citations...

What is important about citations is that they actually provide the suggested information. The cited work should give evidence for the claim in question, or convincingly demonstrate its truth. If the cited work merely cites yet another paper that is only loosely related to the original thesis, the claim may end up unfounded.

Take AI-boxing, for example: what are you going to cite? Eliezer's experiment relies solely on trusting that he and the participants are telling the truth; it doesn't meet even the most basic scientific standards, like peer review or reproducibility.

A calibration question: Are you open to the possibility that, after going through all this "deep knowledge of a huge spectrum of science and philosophy" you will find that there is a heretofore undetected fault in the concept?

Of course.

But this is harder to do than to say. Part of my brain would of course be relieved to discover that, say, I have less reason to expect human extinction this century than I had previously thought. On the other hand, I know that as crazy as it sounds, part of my brain is probably committed to my past stated beliefs and to my current mission, and would prefer to operate under confirmation bias.

Which means I need to do all kinds of things to be genuinely curious about the truth, like leaving a line of retreat for myself by visualizing what it would be like to make such a discovery and how the world would be okay with that result.

Which, since you gave me the opportunity, I just spent 20 seconds doing. :)

Are you open to the possibility that, after going through all this "deep knowledge of a huge spectrum of science and philosophy" you will find that there is a heretofore undetected fault in the concept?

I bet they are open to the possibility, but it will never arise because they don't know how their beliefs are supposed to pay rent in future anticipations, short of having an AI take over the universe.

Do their hypotheses make any testable predictions? I don't see any reason why the SIAI couldn't sustain its position even a thousand years from now, because their arguments will still work.

The reason for this is that their concepts are simply too vague to be updated on new evidence.

their concepts are simply too vague to be updated on new evidence.

Surely a computer simulation of the intelligence explosion can be created, given some simplified assumptions.

Surely a computer simulation of the intelligence explosion can be created, given some simplified assumptions.

This will either require so many simplifying assumptions that it effectively proves nothing, or be as difficult to do as creating the intelligence explosion itself.

How do you know? Have you done any? Gotta start somewhere.
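For what it's worth, here is a minimal sketch of what such a toy simulation might look like, assuming (very strongly) that "intelligence" can be collapsed into a single number and that each round of self-improvement feeds back into the next with a fixed returns exponent. Every function name and parameter below is invented for illustration; the only point is that, under these simplifying assumptions, the hard-vs-soft takeoff question reduces to a question about that exponent.

```python
# Toy model of recursive self-improvement, offered only as an illustration
# of a "simulation under simplified assumptions". All parameters are made up.

def simulate(initial_intelligence=1.0, returns_exponent=1.1,
             improvement_rate=0.01, steps=2000, cap=1e12):
    """Iterate I <- I + rate * I**exponent.

    If returns_exponent > 1 (each improvement makes the next one easier),
    growth accelerates sharply; if it is <= 1, growth stays tame.
    """
    intelligence = initial_intelligence
    history = [intelligence]
    for _ in range(steps):
        intelligence += improvement_rate * intelligence ** returns_exponent
        history.append(intelligence)
        if intelligence > cap:  # treat exceeding the cap as "takeoff" in this toy model
            break
    return history

if __name__ == "__main__":
    for exponent in (0.9, 1.0, 1.1):
        history = simulate(returns_exponent=exponent)
        print(f"exponent={exponent}: steps={len(history) - 1}, "
              f"final intelligence={history[-1]:.3g}")
```

Of course, everything interesting is hidden in the choice of returns exponent and in the assumption that intelligence is one-dimensional; a simulation that actually settled anything would have to justify those choices, which is exactly where the disagreement lies.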

Where are you hoping to have this published?

Will this have anything to do with rationality? Do people come to LessWrong for this kind of stuff, and will it attract new members?

Quite a lot of LWers care about singularity issues, probably because most of this site's top authors are Singularitarians. Also, I'm putting this in Discussion, not Main.

[anonymous]:

I would say that it does, to the extent that anything having to do with the history of the external world has something to do with rationality. It may not be directly related, but Less Wrong isn't limited to just discussions of abstract reasoning devoid of concrete context or implications.

Of course, whether or not to take the Singularity seriously is an open question of a sort. That doesn't preclude discussing the subject (it does just the opposite, in fact).