Short summary: Biological anchors are a bad way to predict AGI. It’s a case of “argument from comparable resource consumption.” Analogy: human brains use 20 watts. Therefore, when we have computers that run on 20 watts, we’ll have AGI! The 2020 OpenPhil estimate of 2050 is based on a biological anchor, so we should ignore it.
Lots of folks made bad AGI predictions by asking: (1) how much computing power does AGI require, and (2) when will that much computing power be available?
To find (1), they use a “biological anchor,” like the computing power of the human brain, or the total compute used by evolution to produce human brains.
Hans Moravec, 1988: the human brain performs ~10^13 ops/s, and computers with this power will be available by 2010.
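Moravec’s argument is just a Moore’s-law extrapolation. A minimal sketch of that arithmetic, where the 1988 baseline and the doubling time are round numbers I’m assuming for illustration, not Moravec’s actual inputs:

```python
import math

# Illustrative assumptions for the style of argument, not Moravec's figures.
BRAIN_OPS_PER_SEC = 1e13       # the biological anchor
OPS_PER_SEC_1988 = 1e8         # assumed affordable compute in 1988
DOUBLING_TIME_YEARS = 1.5      # Moore's-law-style doubling assumption

doublings_needed = math.log2(BRAIN_OPS_PER_SEC / OPS_PER_SEC_1988)
arrival_year = 1988 + doublings_needed * DOUBLING_TIME_YEARS
print(round(arrival_year))     # lands in the early 2010s with these inputs
```

The arithmetic itself is trivially valid; the objection below is to the leap from “hardware parity with the brain” to “AGI.”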
Eliezer objects that this is a non-sequitur: knowing when brain-equivalent hardware will exist tells you nothing about when anyone will know how to build brain-equivalent software.
Eliezer, 1999: he notes that he too made bad AGI predictions as a teenager.
Ray Kurzweil, 2001: same idea as Moravec, but with 10^16 ops/s; the same objection applies.
Someone, 2006: it took ~10^43 ops for evolution to create human brains, and it’ll be a very long time before a computer can perform 10^43 ops, so AGI is very far away.
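As a sanity check on why 10^43 looked hopeless, here is the same style of arithmetic; the machine throughput is a round number I’m assuming, not a figure from the 2006 argument:

```python
# How long a fixed machine would take to accumulate ~1e43 ops.
# The throughput is an illustrative assumption.
EVOLUTION_OPS = 1e43
MACHINE_OPS_PER_SEC = 1e15     # an assumed petascale machine
SECONDS_PER_YEAR = 3.15e7

years = EVOLUTION_OPS / (MACHINE_OPS_PER_SEC * SECONDS_PER_YEAR)
print(f"{years:.0e} years")    # on the order of 1e20 years at fixed throughput
```

At a fixed throughput the wait is astronomical, which is why the estimate reads as “very far away.”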
Eliezer objects that the use of a biological anchor is by itself sufficient to make this estimate useless. It’s an instance of a more general “argument from comparable resource consumption.”
Analogy: human brains use 20 watts. Therefore, when we have computers that run on 20 watts, we’ll have AGI!
OpenPhil, 2020: A much more sophisticated estimate, but still based on a biological anchor. They predict AGI in 2050.
How the new model works:
Demand side: estimate how many neural-network parameters it would take to emulate a brain, then use this to find the computational cost of training such a model. (I think this part mischaracterizes OpenPhil’s work; see my comments at the bottom.)
Supply side: Moore’s law, assuming hardware costs keep falling and spending on training keeps growing.
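The two sides above can be sketched in a few lines. Every number here is an illustrative assumption of mine (parameter count, data requirements, budget, price trend), not OpenPhil’s actual anchors; the point is only the shape of the calculation:

```python
import math

# Demand side: compute needed to train a brain-scale model.
PARAMS = 1e14                   # assumed parameters to emulate a brain
FLOP_PER_PARAM_PER_SAMPLE = 6   # rough training-cost rule of thumb
TRAINING_SAMPLES = 1e16         # assumed number of training examples
training_flop = PARAMS * FLOP_PER_PARAM_PER_SAMPLE * TRAINING_SAMPLES

# Supply side: when does that much compute become affordable,
# assuming the cost per FLOP keeps halving on a fixed schedule?
BUDGET_DOLLARS = 1e9            # assumed training budget
COST_PER_FLOP_2020 = 1e-17      # assumed 2020 price of compute
HALVING_TIME_YEARS = 2.5        # assumed price-performance halving time

affordable_flop = BUDGET_DOLLARS / COST_PER_FLOP_2020
halvings_needed = math.log2(training_flop / affordable_flop)
agi_year = 2020 + halvings_needed * HALVING_TIME_YEARS
print(round(agi_year))
```

Note that the output depends entirely on the assumed inputs; the structure, not any particular date, is what the model shares with Moravec’s argument.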
Can’t we use this as an upper bound? Maybe AGI will come sooner, but surely it won’t take longer than this estimate.
Eliezer thinks this is the same non-sequitur as Moravec’s. If you train a model big enough to emulate a brain, that doesn’t mean AGI will pop out at the end.
Other commentary: Eliezer mentions several times that he’s feeling old, tired, and unhealthy. He’s frustrated that researchers today repeat decades-old bad arguments, and it takes him a lot of energy to rebut these claims.
I found this persuasive, but I also think it mischaracterized the OpenPhil model.
My understanding is that OpenPhil didn’t just estimate the number of neural-network parameters needed to emulate a human brain. They used six different biological anchors, including the “evolution anchor,” which I find very useful for an upper bound.
Holden Karnofsky, who seems to put much more stock in the Bio Anchors model than Eliezer does, explains the model really well here. But I was frustrated to see that the write-up on Holden’s blog gives a 50% chance by 2090 (first graph) using the evolution anchor, while the same graph in the old calcs gives only 11%. Was the model tuned after seeing the results?
My conclusion: Bio Anchors is a terrible way to model when AGI will actually arrive. But I don’t agree with Eliezer’s dismissal of using Bio Anchors to get an upper bound, because I think the evolution anchor achieves that.