AI ALIGNMENT FORUM
Books of LessWrong
Alignment & Agency
- An Orthodox Case Against Utility Functions (Abram Demski, 4y) · 60 points, 45 comments
- The Pointers Problem: Human Values Are A Function Of Humans' Latent Variables (johnswentworth, 3y) · 52 points, 34 comments
- Alignment By Default (johnswentworth, 4y) · 63 points, 72 comments
- An overview of 11 proposals for building safe advanced AI (Evan Hubinger, 4y) · 66 points, 31 comments
- The ground of optimization (Alex Flint, 4y) · 93 points, 50 comments
- Search versus design (Alex Flint, 4y) · 32 points, 30 comments
- Inner Alignment: Explain like I'm 12 Edition (Rafael Harth, 4y) · 58 points, 12 comments
- Inaccessible information (Paul Christiano, 4y) · 44 points, 9 comments
- AGI safety from first principles: Introduction (Richard Ngo, 3y) · 39 points, 15 comments
- Is Success the Enemy of Freedom? (Full) (alkjash, 3y) · 37 points, 0 comments