There are two EA organizations with s-risk prevention research as their primary focus: the Center on Long-Term Risk (CLR) and the Center for Reducing Suffering. Much of CLR's work is on suffering-focused AI safety and crucial considerations. The Machine Intelligence Research Institute and the Future of Humanity Institute have also investigated strategies to prevent s-risks, although to a much lesser extent.
Another approach to reducing s-risk is to "expand the moral circle", together with raising concern for suffering, so that future (post)human civilizations and AI are less likely to instrumentally cause suffering to non-human minds such as animals or digital sentience. Sentience Institute works on this value-spreading problem.
(Astronomical) suffering risks, also known as s-risks, are risks of the creation of intense suffering in the far future on an astronomical scale, vastly exceeding all suffering that has existed on Earth so far.
A related concept is hyperexistential risk, the risk of "fates worse than death" on an astronomical scale. It is not clear whether all hyperexistential risks are s-risks per se. But arguably all s-risks are hyperexistential, since "tiling the universe with experiences of severe suffering" would likely be a worse fate than death.