Averting doom by not building the doom machine
If you fear that someone will build a machine that will seize control of the world and annihilate humanity, then one kind of response is to try to build further machines that will seize control of the world even earlier without destroying...
(Crossposted from AI Impacts Blog) This is going to be a list of holes I see in the basic argument for existential risk from superhuman AI systems[1]. To start, here’s an outline of what I take to be the basic case[2]: I. If superhuman AI systems are built, any given...
[Epistemic status: my current view, but I haven’t read all the stuff on this topic even in the LessWrong community, let alone more broadly.] There is a line of thought that says that advanced AI will tend to be ‘goal-directed’—that is, consistently doing whatever makes certain favored outcomes more likely—and...
Crossposted from world spirit sock puppet. AI-related disasters are often categorized as involving misaligned AI, or misuse, or accident. Where:
* misuse means the bad outcomes were wanted by the people involved,
* misalignment means the bad outcomes were wanted by AI (and not by its human creators), and...
This is a post about my own confusions. It seems likely that other people have discussed these issues at length somewhere, and that I am not up with current thoughts on them, because I don’t keep good track of even everything great that everyone writes. I welcome anyone kindly directing...