AI ALIGNMENT FORUM
AI Research Considerations for Human Existential Safety (ARCHES)

by Oliver Habryka
9th Jul 2020
This is a linkpost for https://arxiv.org/pdf/2006.04948.pdf
5 comments
Ben Pace · 5y

Wow, this is long, and seems pretty detailed and interesting. I'd love to see someone write a selection of key quotes or a summary.

Rohin Shah · 5y

Highlighted in AN #103 with a summary, though the summary didn't go into the research directions (it would have become too long, and I thought the intro + categorization was more important on average).

Ben Pace · 5y

Thank you!

David Scott Krueger · 5y

There is now also an interview with Critch here: https://futureoflife.org/2020/09/15/andrew-critch-on-ai-research-considerations-for-human-existential-safety/

Ben Pace · 5y

I listened to this yesterday! It was quite interesting; I'm glad I did.

Andrew Critch's (Academian) and David Krueger's review of 29 AI (existential) safety research directions, each with an illustrative analogy, examples of current work, potential synergies between research directions, and discussion of ways the research approach might lower (or raise) existential risk.