Is it something like the AI-box argument? "If I share my AI breakout strategy, people will think 'I just won't fall for that strategy' instead of noticing the general problem that there are strategies they didn't think of"?  I'm not a huge fan of that idea, but I won't argue it further.

I'm not expecting a complete explanation, but I'd like to see a story that doesn't skip directly to "AI can reformat reality at will" without at least one intermediate step.  Like, this is the third time I've seen an author pull this trick and I'm starting to wonder if the real AI-safety strategy is "make sure nobody invents grey-goo nanotech."

If you have a ball of nanomachines that can take over the world faster than anyone can react to it, it doesn't really matter whether it's an AI or a human at the controls; as soon as it's invented, everyone dies.  It's not so much an AI-risk problem as a problem with technological progress in general.  (Fortunately, I think it's still up for debate whether grey-goo-style nanotech is even possible to create.)

This is well-written, but I feel like it falls into the same problem a lot of AI-risk stories do.  It follows this pattern:

  1. Plausible (or at least not impossible) near-future developments in AI that could happen if all our current predictions pan out.
  2. ???
  3. Nanotech-enabled fully-general superintelligence converts the universe into paperclips at a significant fraction of lightspeed.

And like, the Step 1 stuff is fascinating and a worthy sci-fi story on its own, but the big question everyone has about AI risk is "How does the AI get from Step 1 to Step 3?"

(This story does vaguely suggest why Step 2 is missing: there are so many companies building AIs for the stock market, big tech, or cyberwarfare that eventually one of them will stumble into self-improving AGI, which in turn will figure out nanotech. But implying the existence of an answer in-story is different from actually answering the question.)