This was written as part of the first Refine blog post day.
A sneaking suspicion I've found difficult to shake while following AI risk discussion is that concerns about superintelligent AI, while clearly valid and important, may well be jumping the gun: not-at-all-unlikely nearer-term AI risks are looming and could wipe us out long before we reach that point, especially if we do very little to mitigate them.
One reason behind this intuition comes from extending an occasionally repeated idea about human intelligence: that humans are nearly the dumbest possible creature capable of developing...
I'm fairly agnostic about how dumb we're talking, that is, about what kinds of acts or confluences of events are actually likely to be effective, complete x-risks, particularly at relatively low levels of intelligence/capability. But that's beside the point in some ways: wherever someone places the threshold for x-risk-capable AI, as long as you assume that greater intelligence is harder to produce (an assumption that doesn't necessarily hold, as I acknowledged), I think that suggests we will be killed by something not much above that threshold once it is first reached.
...