AI ALIGNMENT FORUM
Alignment & Agency
Karma | Title | Author | Posted | Comments
60 | An Orthodox Case Against Utility Functions | Abram Demski | 3y | 33
52 | The Pointers Problem: Human Values Are A Function Of Humans' Latent Variables | johnswentworth | 3y | 32
63 | Alignment By Default | johnswentworth | 3y | 72
68 | An overview of 11 proposals for building safe advanced AI | Evan Hubinger | 3y | 25
89 | The ground of optimization | Alex Flint | 3y | 43
32 | Search versus design | Alex Flint | 3y | 30
57 | Inner Alignment: Explain like I'm 12 Edition | Rafael Harth | 3y | 12
45 | Inaccessible information | Paul Christiano | 3y | 9
39 | AGI safety from first principles: Introduction | Richard Ngo | 3y | 15
37 | Is Success the Enemy of Freedom? (Full) | alkjash | 3y | 0