Center on Long-Term Risk (CLR)

Edited by Ruby, Jonas V, Kaj_Sotala, et al. last updated 8th Dec 2023

The Center on Long-Term Risk (CLR), formerly the Foundational Research Institute, is a research group that investigates cooperative strategies to reduce risks of astronomical suffering (s-risks). This includes not only (post-)human suffering but also the suffering of potentially sentient digital minds. Its research is interdisciplinary, drawing on insights from artificial intelligence, anthropic reasoning, international relations, philosophy, and other fields. Its research agenda focuses on encouraging cooperative behavior among, and avoiding conflict between, transformative AI systems.

See also

  • Suffering risk
  • Mindcrime

External links

  • CLR website
Posts tagged Center on Long-Term Risk (CLR)
  • 19 · Preface to CLR's Research Agenda on Cooperation, Conflict, and TAI (JesseClifton, 6y, 7 comments)
  • 9 · Sections 1 & 2: Introduction, Strategy and Governance (JesseClifton, 6y, 2 comments)
  • 8 · Sections 5 & 6: Contemporary Architectures, Humans in the Loop (JesseClifton, 6y, 0 comments)
  • 6 · Sections 3 & 4: Credibility, Peaceful Bargaining Mechanisms (JesseClifton, 6y, 0 comments)
  • 7 · Section 7: Foundations of Rational Agency (JesseClifton, 6y, 1 comment)
  • 54 · Birds, Brains, Planes, and AI: Against Appeals to the Complexity/Mysteriousness/Efficiency of the Brain (Daniel Kokotajlo, 5y, 33 comments)
  • 53 · Against GDP as a metric for timelines and takeoff speeds (Daniel Kokotajlo, 5y, 10 comments)
  • 40 · 2018 AI Alignment Literature Review and Charity Comparison (Larks, 7y, 4 comments)
  • 39 · 2019 AI Alignment Literature Review and Charity Comparison (Larks, 6y, 8 comments)
  • 54 · Responses to apparent rationalist confusions about game / decision theory (Anthony DiGiovanni, 2y, 0 comments)
  • 23 · Open-minded updatelessness (Nicolas Macé, JesseClifton, SMK, 2y, 2 comments)
  • 25 · CLR's recent work on multi-agent systems (JesseClifton, 5y, 0 comments)
  • 16 · When does technical work to reduce AGI conflict make a difference?: Introduction (JesseClifton, Sammy Martin, Anthony DiGiovanni, 3y, 0 comments)
  • 11 · When would AGIs engage in conflict? (JesseClifton, Sammy Martin, Anthony DiGiovanni, 3y, 0 comments)
  • 10 · When is intent alignment sufficient or necessary to reduce AGI conflict? (JesseClifton, Sammy Martin, Anthony DiGiovanni, 3y, 0 comments)