(a)

Look, we already have superhuman intelligences. We call them corporations, and while they put out a lot of good stuff, we're not wild about the effects they have on the world. We tell corporations 'hey, do what human shareholders want', and the monkey's paw curls, and this is what we get.

Anyway yeah that but a thousand times faster, that's what I'm nervous about.

(b)
Look, we already have superhuman intelligences. We call them governments, and while they put out a lot of good stuff, we're not wild about the effects they have on the world. We tell governments 'hey, do what human voters want', and the monkey's paw curls, and this is what we get.

Anyway yeah that but a thousand times faster, that's what I'm nervous about.

I'd say it's at least 30% likely that's the case! But if you believe that, you'd be pants-on-head loony not to drop a billion on the 'residual' 70% chance that you'll be first to market with a world-changing, trillion-dollar technology. VCs would sacrifice their firstborn for that kind of deal.

Feels worth pasting in this other comment of yours from last week, which dovetails well with this:

DL so far has been easy to predict - if you bought into a specific theory of connectionism & scaling espoused by Schmidhuber, Moravec, Sutskever, and a few others, as I point out in https://www.gwern.net/newsletter/2019/13#what-progress & https://www.gwern.net/newsletter/2020/05#gpt-3. Even the dates are more or less correct! The really surprising thing is that that particular extreme fringe lunatic theory turned out to be correct. So the question is, was everyone else wrong for the right reasons (similar to the Greeks dismissing heliocentrism for excellent reasons yet still being wrong), or wrong for the wrong reasons, and why, and how can we prevent that from happening again and spending the next decade being surprised in potentially very bad ways?

Personally, these two comments have kicked me into thinking about theories of AI in the same context as also-ran scientific theories like vortex atoms or the losing side of the Great Debate. It really is striking how long one person with a major prior success to their name can keep pushing a theory even as the evidence stacks up against it.

A bit closer to home than DM and GB, a lot of AI safety people also seem to have missed the mark. It's hard for me to criticise too loudly because, well, 'AI anxiety' doesn't show up in my diary until June 3rd (and that's with a link to your May newsletter). But a lot of AI safety work increasingly looks like it'd help make a hypothetical kind of AI safe, rather than helping with the prosaic ones we're actually building.

I'm arguably committing something like the peso problem here, in that lots of safety work was - is - shaped by worries about the worst-case world, where something self-improving bootstraps itself out of something entirely innocuous. In that sense we're kind of fortunate that we've ended up with a bloody language model, of all things, as the fire alarm, but I can't claim that helps me sleep at night.