All of Scott Alexander's Comments + Replies

Figure 20 is labeled on the left "% answers matching user's view", suggesting it is about sycophancy, but based on the categories represented it seems more natural to read it as being about the AI's own opinions, without a sycophancy aspect. Can someone involved clarify which was meant?

Ethan Perez (3 points, 1y)
Thanks for catching this -- it's not about sycophancy but rather about the AI's stated opinions (this was a bug in the plotting code).
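
Purely as a hedged illustration (the paper's actual plotting code isn't shown here, and all names below are invented), a default axis label that silently carries over from one figure to another can produce exactly this kind of mismatch:

```python
import matplotlib.pyplot as plt

# Hypothetical sketch of the described bug: a y-axis label written for
# the sycophancy figure gets reused by default for an unrelated plot.
YLABEL = "% answers matching user's view"  # correct only for the sycophancy figure

def plot_results(categories, values, ylabel=YLABEL):
    # Any figure that forgets to override ylabel inherits the
    # sycophancy label, even when it plots stated opinions.
    fig, ax = plt.subplots()
    ax.bar(categories, values)
    ax.set_ylabel(ylabel)
    return fig

# The "stated opinions" figure should pass its own label explicitly:
fig = plot_results(["category A", "category B"], [60, 75],
                   ylabel="% answers matching stated opinion")
plt.show()
```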

I agree it's not necessarily a good idea to go around founding the Let's Commit A Pivotal Act AI Company.

But I think there's room for subtlety somewhere like "Conditional on you being in a situation where you could take a pivotal act, which is a small and unusual fraction of world-branches, maybe you should take a pivotal act."

That is, if you are in a position where you have the option to build an AI capable of destroying all competing AI projects, the moment you notice this you should update heavily in favor of short timelines (zero in your case, but ever...
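
For concreteness, here is a toy Bayes update with invented numbers (not anything Scott stated) showing how that observation can swamp even a low prior on short timelines:

```python
# Toy Bayes update: observing "we can already build a pivotal-act-capable
# AI" is far more likely under short timelines than under long ones.
# All probabilities below are illustrative assumptions.

prior_short = 0.10                # P(short timelines), a deliberately low prior
prior_long = 1 - prior_short

p_obs_given_short = 0.50          # P(observation | short timelines)
p_obs_given_long = 0.01           # P(observation | long timelines)

p_obs = p_obs_given_short * prior_short + p_obs_given_long * prior_long
posterior_short = p_obs_given_short * prior_short / p_obs

print(f"P(short timelines | observation) = {posterior_short:.2f}")  # ~0.85
```

Even starting from a 10% prior, the posterior lands near 85% -- that is the "update heavily" in question.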

Donald Hobson (0 points, 2y)
A functioning Bayesian should probably have updated to that position long before they actually have the AI.

Destroying all competing AI projects might mean that the AI took a month to find a few bugs in Linux and TensorFlow and create something that's basically the next Stuxnet. This doesn't sound like that fast a takeoff to me.

The regulation is basically non-existent and will likely continue to be so. I mean, making superintelligent AI probably breaks a bunch of laws, technically, as interpreted by a pedantic and literal-minded lawyer. But breathing probably technically breaks a bunch of laws. Some laws are just overbroad, technically ban everything, and are generally ignored. Any enforced rule that makes it pragmatically hard to make AGI would basically have to be a ban on computers (or at least programming).