An existential risk (or x-risk) is a risk that poses astronomically large negative consequences for humanity, such as human extinction or permanent global totalitarianism.

Nick Bostrom introduced the term "existential risk" in his seminal 2002 paper "Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards."1 In the paper, Bostrom defined an existential risk as:

One where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.

The Oxford Future of Humanity Institute (FHI) was founded by Bostrom in 2005, in part to study existential risks. Other institutions with a generalist focus on existential risk include the Centre for the Study of Existential Risk.

FHI's existential-risk.org FAQ notes regarding the definition of "existential risk":

An existential risk is one that threatens the entire future of humanity. [...]

"Humanity", in this context, does not mean "the biological species Homo sapiens". If we humans were to evolve into another species, or merge or replace ourselves with intelligent machines, this would not necessarily mean that an existential catastrophe had occurred — although it might if the quality of life enjoyed by those new life forms turns out to be far inferior to that enjoyed by humans.

The total negative impact of an existential risk is one of the greatest negative impacts conceived of. Such an event could not only annihilate life as we value it from Earth, but would also severely damage the potential of all Earth-originating intelligent life.
Bostrom 2 proposes a series of classifications for existential risks: bangs, in which Earth-originating intelligent life goes extinct in a relatively sudden disaster; crunches, in which humanity survives but its potential to develop further is permanently destroyed; shrieks, in which some form of posthumanity is attained but falls drastically short of what was possible and desirable; and whimpers, in which a posthuman civilization arises but evolves in a direction that gradually erodes everything we value.
The total negative results of an existential risk could amount to all potential future lives going unrealized. A rough and conservative calculation3 gives a total of 10^54 potential future human lives – smarter, happier, and kinder than we are. Hence, almost no other task would have as much positive impact as existential risk reduction.
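To illustrate the scale of that claim, consider a back-of-the-envelope expected-value calculation (a sketch assuming the 10^54 figure above, not a number taken from the cited sources). Reducing the probability of existential catastrophe by just one millionth of one percentage point, i.e. by 10^{-8}, would be worth, in expectation,

$$10^{54} \times 10^{-8} = 10^{46} \text{ future lives},$$

many orders of magnitude more than the roughly 10^{10} people alive today.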
Existential risks also present a unique challenge because of their irreversible nature. By definition, we will never experience and survive an extinction event4, and so cannot learn from our mistakes. They are also subject to strong observational selection effects5: one cannot estimate their future probability from the past record, because, in Bayesian terms, the conditional probability of a past existential catastrophe given our present existence is always 0, no matter how high the probability of an existential catastrophe really is. Instead, indirect estimates must be used, such as the rate of possible existential catastrophes elsewhere. A high extinction probability could also function as a Great Filter, explaining why we see no evidence of space colonization.
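The observation-selection point can be put in symbols (a brief sketch, with C standing for "an existential catastrophe occurred in our past" and E for "we exist now"):

$$P(C \mid E) = 0 \quad \text{for any prior } P(C),$$

so our unbroken survival record is compatible with any underlying level of risk and carries no evidential weight about it.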
Another related idea is that of a suffering risk (or s-risk).
The focus on existential risks on LessWrong dates back to Nick Bostrom's 2003 paper Astronomical Waste: The Opportunity Cost of Delayed Technological Development. It argues that "the chief goal for utilitarians should be to reduce existential risk". Bostrom writes: