Books of LessWrong: Alignment & Agency (442 karma)

An Orthodox Case Against Utility Functions · abramdemski · 6y · 61 karma · 45 comments
The Pointers Problem: Human Values Are A Function Of Humans' Latent Variables · johnswentworth · 5y · 53 karma · 34 comments
Alignment By Default · johnswentworth · 5y · 64 karma · 72 comments
An overview of 11 proposals for building safe advanced AI · evhub · 5y · 72 karma · 32 comments
The ground of optimization · Alex Flint · 5y · 93 karma · 50 comments
Search versus design · Alex Flint · 5y · 34 karma · 30 comments
Inner Alignment: Explain like I'm 12 Edition · Rafael Harth · 5y · 59 karma · 12 comments
Inaccessible information · paulfchristiano · 5y · 44 karma · 9 comments
AGI safety from first principles: Introduction · Richard_Ngo · 5y · 39 karma · 15 comments
Is Success the Enemy of Freedom? (Full) · alkjash · 5y · 41 karma · 0 comments