
Center on Long-Term Risk (CLR)

Edited by Kaj_Sotala and ignoranceprior; last updated 8th Dec 2023

The Center on Long-Term Risk (CLR), formerly the Foundational Research Institute, is an effective altruism organization focused on preventing risks of astronomical suffering (s-risks) from emerging technologies, particularly artificial intelligence.

Posts tagged Center on Long-Term Risk (CLR)
Preface to CLR's Research Agenda on Cooperation, Conflict, and TAI (JesseClifton, 6y, 19 points, 7 comments)
Sections 1 & 2: Introduction, Strategy and Governance (JesseClifton, 6y, 9 points, 2 comments)
Sections 5 & 6: Contemporary Architectures, Humans in the Loop (JesseClifton, 6y, 8 points, 0 comments)
Sections 3 & 4: Credibility, Peaceful Bargaining Mechanisms (JesseClifton, 6y, 6 points, 0 comments)
Section 7: Foundations of Rational Agency (JesseClifton, 6y, 7 points, 1 comment)
Birds, Brains, Planes, and AI: Against Appeals to the Complexity/Mysteriousness/Efficiency of the Brain (Daniel Kokotajlo, 5y, 54 points, 33 comments)
Against GDP as a metric for timelines and takeoff speeds (Daniel Kokotajlo, 5y, 53 points, 10 comments)
2018 AI Alignment Literature Review and Charity Comparison (Larks, 7y, 40 points, 4 comments)
2019 AI Alignment Literature Review and Charity Comparison (Larks, 6y, 39 points, 8 comments)
Responses to apparent rationalist confusions about game / decision theory (Anthony DiGiovanni, 2y, 54 points, 0 comments)
Open-minded updatelessness (Nicolas Macé, JesseClifton, SMK, 2y, 23 points, 2 comments)
CLR's recent work on multi-agent systems (JesseClifton, 5y, 25 points, 0 comments)
When does technical work to reduce AGI conflict make a difference?: Introduction (JesseClifton, Sammy Martin, Anthony DiGiovanni, 3y, 16 points, 0 comments)
When would AGIs engage in conflict? (JesseClifton, Sammy Martin, Anthony DiGiovanni, 3y, 11 points, 0 comments)
When is intent alignment sufficient or necessary to reduce AGI conflict? (JesseClifton, Sammy Martin, Anthony DiGiovanni, 3y, 10 points, 0 comments)