Center on Long-Term Risk (CLR)

Edited by ignoranceprior last updated 8th Dec 2023

The Center on Long-Term Risk (CLR), formerly known as the Foundational Research Institute (FRI), is a research group affiliated with the Swiss/German Effective Altruism Foundation. It investigates cooperative strategies to reduce risks of astronomical suffering in humanity's future (s-risks). This includes not only (post-)human suffering but also the suffering of non-human animals and potential digital sentience. Its research is interdisciplinary, drawing on insights from artificial intelligence, anthropic reasoning, international relations, sociology, philosophy, and other fields.

See also

  • Suffering risk
  • Abolitionism

External links

  • Foundational Research Institute homepage
  • Effective Altruism Wiki article on FRI
Posts tagged Center on Long-Term Risk (CLR)
  • Preface to CLR's Research Agenda on Cooperation, Conflict, and TAI (JesseClifton)
  • Sections 1 & 2: Introduction, Strategy and Governance (JesseClifton)
  • Sections 5 & 6: Contemporary Architectures, Humans in the Loop (JesseClifton)
  • Sections 3 & 4: Credibility, Peaceful Bargaining Mechanisms (JesseClifton)
  • Section 7: Foundations of Rational Agency (JesseClifton)
  • Birds, Brains, Planes, and AI: Against Appeals to the Complexity/Mysteriousness/Efficiency of the Brain (Daniel Kokotajlo)
  • Against GDP as a metric for timelines and takeoff speeds (Daniel Kokotajlo)
  • 2018 AI Alignment Literature Review and Charity Comparison (Larks)
  • 2019 AI Alignment Literature Review and Charity Comparison (Larks)
  • Responses to apparent rationalist confusions about game / decision theory (Anthony DiGiovanni)
  • Open-minded updatelessness (Nicolas Macé, JesseClifton, SMK)
  • CLR's recent work on multi-agent systems (JesseClifton)
  • When does technical work to reduce AGI conflict make a difference?: Introduction (JesseClifton, Sammy Martin, Anthony DiGiovanni)
  • When would AGIs engage in conflict? (JesseClifton, Sammy Martin, Anthony DiGiovanni)
  • When is intent alignment sufficient or necessary to reduce AGI conflict? (JesseClifton, Sammy Martin, Anthony DiGiovanni)