Comments

Roko · 3mo · 30

Great post. Personally, I think the "computational social choice" angle is underexplored.

I think CSC can gradually morph into CEV (coherent extrapolated volition), and that's how we solve AI Goalcraft.
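
For context, computational social choice studies algorithms that aggregate many agents' preferences into a collective choice. As a purely illustrative sketch (nothing here is from the comment itself), the Borda count is one of the simplest such aggregation rules:

```python
from collections import defaultdict

def borda_count(rankings):
    """Aggregate ranked preferences with the Borda rule.

    Each ranking lists candidates from most to least preferred; a
    candidate ranked i-th out of n earns n - 1 - i points. The
    candidate with the most total points wins.
    """
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for i, candidate in enumerate(ranking):
            scores[candidate] += n - 1 - i
    return max(scores, key=scores.get)

# Three agents rank three candidate outcomes:
ballots = [
    ["A", "B", "C"],
    ["B", "A", "C"],
    ["A", "C", "B"],
]
print(borda_count(ballots))  # -> "A"
```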

Roko · 6mo · 112

"If the world were unified around the priority of minimizing global catastrophic risk, I think that we could reduce risk significantly further by implementing a global, long-lasting, and effectively enforced pause on frontier AI development—including a moratorium on the development and production of some types of computing hardware"

This really needs to be shouted from the rooftops. In the public sphere, people will hear "responsible scaling policy" as "It's maximally safe to keep pushing ahead with AI" rather than "We are taking on huge risks because politicians can't be bothered to coordinate".

Roko · 4y · 10

It seems to me that using a combination of execution time, memory use, and program length mostly kills this set of arguments.

Something like a game-of-life initial configuration that leads to the eventual evolution of intelligent game-of-life aliens, who then strategically feed outputs into GoL in order to manipulate you, may score very well on program length, but both its speed and its memory use are going to be pretty awful. The fixed cost in memory and execution steps of essentially simulating an entire universe is huge.
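
To make the comparison concrete, here is a hypothetical sketch (not anything the comment specifies) of a prior that charges hypotheses linearly for execution steps and memory cells on top of program length. Under such a penalty a short "simulated universe" hypothesis is dominated by a longer but cheap direct model; note that a merely logarithmic Levin-style time penalty would be far too gentle to have this effect. All numbers below are made up for illustration.

```python
def resource_penalized_score(length_bits, steps, memory_cells,
                             time_cost=1e-6, memory_cost=1e-6):
    """Lower is better. The per-step and per-cell costs are hypothetical
    knobs; a pure complexity prior corresponds to setting both to zero."""
    return length_bits + time_cost * steps + memory_cost * memory_cells

# A longer program that models the data directly (illustrative numbers):
direct = resource_penalized_score(length_bits=10_000,
                                  steps=10**6, memory_cells=10**4)

# A very short program that simulates a whole universe until intelligent
# aliens evolve and emit the data (illustrative numbers):
aliens = resource_penalized_score(length_bits=500,
                                  steps=10**40, memory_cells=10**30)

print(direct < aliens)  # True: the simulated-universe hypothesis loses badly
```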

But yes, the pure complexity prior certainly has some perverse and unsettling properties.

EDIT: This is really a special case of Mesa-Optimizers being dangerous. (See, e.g., https://www.lesswrong.com/posts/XWPJfgBymBbL3jdFd/an-58-mesa-optimization-what-it-is-and-why-we-should-care). The set of dangerous Mesa-Optimizers is obviously bigger than just "simulated aliens", and even time- and space-efficient algorithms might run into them.