Risks of Astronomical Suffering (S-risks)

(Astronomical) suffering risks, also known as s-risks, are risks of the creation of intense suffering in the far future on an astronomical scale, vastly exceeding all suffering that has existed on Earth so far.

|  | extinction risk | non-extinction risk |
|---|---|---|
| suffering risk | Misaligned AGI wipes out humans, simulates many suffering alien civilizations. | Misaligned AGI tiles the universe with experiences of severe suffering. |
| non-suffering risk | Misaligned AGI wipes out humans. | Misaligned AGI keeps humans as "pets," limiting growth but not causing immense suffering. |

A related concept is hyperexistential risk, the risk of "fates worse than death" on an astronomical scale. It is not clear whether all hyperexistential risks are s-risks per se, but arguably all s-risks are hyperexistential, since "tiling the universe with experiences of severe suffering" would very likely be a fate worse than death.

There are two EA organizations with s-risk prevention research as their primary focus: the Center on Long-Term Risk (CLR) and the Center for Reducing Suffering. Much of CLR's work is on suffering-focused AI safety and crucial considerations. The Machine Intelligence Research Institute and the Future of Humanity Institute have also investigated strategies to prevent s-risks, though to a much lesser extent.

Another approach to reducing s-risk is to "expand the moral circle", together with raising concern for suffering, so that future (post)human civilizations and AI are less likely to instrumentally cause suffering to non-human minds such as animals or digital sentience. Sentience Institute works on this value-spreading problem.
