Center on Long-Term Risk (CLR)

The Center on Long-Term Risk (CLR), formerly the Foundational Research Institute (FRI), is an effective altruist organization focused on preventing risks of astronomical suffering (s-risks) from emerging technologies, particularly artificial intelligence. Such risks include not only (post-)human suffering but also the suffering of non-human animals and potential digital sentience. CLR's research is interdisciplinary, drawing on insights from artificial intelligence, anthropic reasoning, international relations, sociology, philosophy, and other fields; its research agenda focuses on encouraging cooperative behavior in and avoiding conflict between transformative AI systems.

See also

  • Suffering risk
  • Abolitionism

External links

  • CLR website
  • Effective Altruism Wiki article on FRI (archived)