AI ALIGNMENT FORUM
Alignment & Agency
Karma | Title | Author | Posted | Comments
51 | An Orthodox Case Against Utility Functions | Abram Demski | 3y | 33
50 | The Pointers Problem: Human Values Are A Function Of Humans' Latent Variables | johnswentworth | 2y | 32
61 | Alignment By Default | johnswentworth | 3y | 72
67 | An overview of 11 proposals for building safe advanced AI | Evan Hubinger | 3y | 25
86 | The ground of optimization | Alex Flint | 3y | 43
32 | Search versus design | Alex Flint | 3y | 30
57 | Inner Alignment: Explain like I'm 12 Edition | Rafael Harth | 3y | 12
45 | Inaccessible information | Paul Christiano | 3y | 9
39 | AGI safety from first principles: Introduction | Richard Ngo | 2y | 15
40 | Is Success the Enemy of Freedom? (Full) | alkjash | 2y | 0