From section 3.1.2:

C. The EU passes such a law. 90%

...

M. There’s nowhere that Jurgen Schmidhuber (currently in Saudi Arabia!) wants to move where he’s allowed to work on dangerously advanced AI, or he retires before he can make it. 50%

These credences feel borderline contradictory to me. M implies you believe that, conditional on no laws making such work illegal anywhere he'd consider moving, Jurgen Schmidhuber in particular has a >50% chance of building dangerously advanced AI within 20 years or so. Since you also put a 90% chance on the EU passing such a law before dangerously advanced AI is created, this implies you believe the EU has a >80% chance of outlawing the creation of dangerously advanced AI within 20 years or so. In fact, if we assume a uniform distribution over when JS builds dangerously advanced AI (such that the cumulative probability reaches 50% at the 20-year mark), that requires us to be nearly certain the EU passes such a law within the first 10 years, conditional on JS not having succeeded before then. Where does such high confidence come from?
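To make the implied constraint concrete, here is a minimal Monte Carlo sketch. The model is entirely my own construction (JS's unregulated success time uniform over 40 years so it's cumulatively 50% at year 20, and a law guaranteed to pass by some deadline), not anything stated in the post; it just illustrates why requiring a ~90% chance that the law precedes JS's success pushes the law's arrival into roughly the first decade.

```python
import random

# Toy model of the C/M tension (my own assumptions, not the post's):
#   - Absent any law, JS's success time is uniform on [0, 40] years, so
#     the cumulative probability of success reaches 50% at year 20
#     (one reading of prediction M).
#   - The EU law's passage time is uniform on [0, deadline] years, i.e.
#     the law is certain to have passed by `deadline`.
#   - Prediction C (90%) requires the law to precede JS's success.

def p_law_first(deadline: float, trials: int = 200_000) -> float:
    """Estimate P(EU law passes before JS succeeds) by Monte Carlo."""
    hits = 0
    for _ in range(trials):
        t_ai = random.uniform(0, 40)         # when JS would succeed
        t_law = random.uniform(0, deadline)  # when the law passes
        if t_law < t_ai:
            hits += 1
    return hits / trials

for deadline in (5, 8, 10, 20, 40):
    print(f"law certain by year {deadline:>2}: "
          f"P(law first) ≈ {p_law_first(deadline):.2f}")

# Approximate output: 0.94, 0.90, 0.88, 0.75, 0.50 -- only a law that
# is essentially guaranteed within ~8-10 years keeps the probability
# at the 0.9 level that C and M jointly seem to require.
```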

(Meta: I'm also not convinced it's good policy, in serious discussions of AGI x-risk, to "name names" of AGI researchers who are relatively unconcerned about the risks, since this could provoke a defensive response, "doubling down", etc.)

[This comment is no longer endorsed by its author]