This is a linkpost for https://arxiv.org/pdf/2006.04948.pdf
Andrew Critch's (Academian) and David Krueger's review of 29 AI (existential) safety research directions. Each direction comes with an illustrative analogy, examples of current work, potential synergies with other research directions, and a discussion of ways the research approach might lower (or raise) existential risk.
Wow, this is long, and seems pretty detailed and interesting. I'd love to see someone write a selection of key quotes or a summary.
Highlighted in AN #103 with a summary, though it didn't go into the research directions (because it would have become too long, and I thought the intro + categorization was more important on average).
There is now also an interview with Critch here: https://futureoflife.org/2020/09/15/andrew-critch-on-ai-research-considerations-for-human-existential-safety/
I listened to this yesterday! It was quite interesting; I'm glad I did.