Risks of Astronomical Suffering (S-risks)

(Astronomical) suffering risks, also known as s-risks, are risks of the creation of suffering in the far future on an astronomical scale, vastly exceeding all suffering that has existed on Earth so far.

S-risks are an example of existential risk (also known as x-risks) according to Nick Bostrom's original definition, as they threaten to "permanently and drastically curtail [Earth-originating intelligent life's] potential". Most existential risks are of the form "event E happens which drastically reduces the number of conscious experiences in the future". S-risks therefore serve as a useful reminder that some x-risks are scary because they cause bad experiences, and not just because they prevent good ones.

Within the space of x-risks, we can distinguish between x-risks that are s-risks, x-risks involving human extinction, x-risks that involve both immense suffering and human extinction, and x-risks that involve neither. For example:

|  | extinction risk | non-extinction risk |
| --- | --- | --- |
| **suffering risk** | Misaligned AGI wipes out humans, simulates many suffering alien civilizations. | Misaligned AGI tiles the universe with unhappy human experiences. |
| **non-suffering risk** | Misaligned AGI wipes out humans. | Misaligned AGI keeps humans as "pets," limiting growth but not causing immense suffering. |

A related concept is hyperexistential risk, the risk of "fates worse than death" on an astronomical scale. It is not clear whether all hyperexistential risks are s-risks per se. It is clear that not all s-risks are hyperexistential, since "tiling the universe with mildly unhappy experiences" would be an s-risk but very likely wouldn't be a worse fate than death.

Although the Machine Intelligence Research Institute and Future of Humanity Institute have investigated strategies to prevent s-risks, the only EA organization with s-risk prevention research as its primary focus is the Foundational Research Institute. Much of FRI's work is on suffering-focused AI safety and crucial considerations. Another approach to reducing s-risk is to "expand the moral circle", so that future (post)human civilizations and AI are less likely to instrumentally cause suffering to non-human minds such as animals or digital sentience. Sentience Institute works on this value-spreading problem.
