Risks of Astronomical Suffering (S-risks)

Edited by ignoranceprior, Rob Bensinger, and eFish; last updated 25th Apr 2021

(Astronomical) suffering risks, also known as s-risks, are risks of the creation of intense suffering in the far future on an astronomical scale, vastly exceeding all suffering that has existed on Earth so far.

S-risks are an example of existential risks (also known as x-risks) according to Nick Bostrom's original definition, as they threaten to "permanently and drastically curtail [Earth-originating intelligent life's] potential". Most existential risks are of the form "event E happens which drastically reduces the number of conscious experiences in the future". S-risks therefore serve as a useful reminder that some x-risks are scary because they cause bad experiences, and not just because they prevent good ones.

Within the space of x-risks, we can distinguish x-risks that are s-risks, x-risks that involve human extinction, x-risks that involve both immense suffering and human extinction, and x-risks that involve neither. For example:

  • Suffering risk, extinction risk: Misaligned AGI wipes out humans and simulates many suffering alien civilizations.
  • Suffering risk, non-extinction risk: Misaligned AGI tiles the universe with experiences of severe suffering.
  • Non-suffering risk, extinction risk: Misaligned AGI wipes out humans.
  • Non-suffering risk, non-extinction risk: Misaligned AGI keeps humans as "pets," limiting growth but not causing immense suffering.

A related concept is hyperexistential risk, the risk of "fates worse than death" on an astronomical scale. It is not clear whether all hyperexistential risks are s-risks per se. But arguably all s-risks are hyperexistential, since "tiling the universe with experiences of severe suffering" would likely be worse than death.

There are two EA organizations with s-risk prevention research as their primary focus: the Center on Long-Term Risk (CLR) and the Center for Reducing Suffering. Much of CLR's work is on suffering-focused AI safety and crucial considerations. The Machine Intelligence Research Institute and the Future of Humanity Institute have also investigated strategies to prevent s-risks, though to a much lesser extent.

Another approach to reducing s-risk is to "expand the moral circle" while raising concern for suffering, so that future (post)human civilizations and AIs are less likely to cause suffering to non-human minds such as animals or digital sentience. The Sentience Institute works on this value-spreading problem.

 

See also

  • Mind crime
  • Existential risk
  • Abolitionism, Hedonism, Utilitarianism

 

External links

  • Reducing Risks of Astronomical Suffering: A Neglected Global Priority (FRI)
  • Introductory talk on s-risks (FRI)
  • Risks of Astronomical Future Suffering (FRI)
  • Suffering-focused AI safety: Why "fail-safe" measures might be our top intervention (PDF) (FRI)
  • Artificial Intelligence and Its Implications for Future Suffering (FRI)
  • Expanding our moral circle to reduce suffering in the far future (Sentience Politics)
  • The Importance of the Far Future (Sentience Politics)

Posts tagged Risks of Astronomical Suffering (S-risks)

  • Preface to CLR's Research Agenda on Cooperation, Conflict, and TAI (JesseClifton)
  • Sections 1 & 2: Introduction, Strategy and Governance (JesseClifton)
  • Sections 5 & 6: Contemporary Architectures, Humans in the Loop (JesseClifton)
  • Sections 3 & 4: Credibility, Peaceful Bargaining Mechanisms (JesseClifton)
  • Section 7: Foundations of Rational Agency (JesseClifton)
  • The Waluigi Effect (mega-post) (Cleo Nardo)
  • Siren worlds and the perils of over-optimised search (Stuart Armstrong)
  • CLR's recent work on multi-agent systems (JesseClifton)