AI ALIGNMENT FORUM
Alternate Alignment Ideas

May 15, 2019 by abramdemski

These are 'brainstorming' posts around the theme of what it means for a system to be helpful to a human.

1. Stable Pointers to Value: An Agent Embedded in Its Own Utility Function — abramdemski (16 points, 9 comments)
2. Stable Pointers to Value II: Environmental Goals — abramdemski (12 points, 0 comments)
3. Stable Pointers to Value III: Recursive Quantilization — abramdemski (9 points, 0 comments)
4. Policy Alignment — abramdemski (23 points, 13 comments)
5. Non-Consequentialist Cooperation? — abramdemski (21 points, 4 comments)