AI ALIGNMENT FORUM

Alternate Alignment Ideas

May 15, 2019 by Abram Demski

These are 'brainstorming' posts on the theme of what it means for a system to be helpful to a human.

Stable Pointers to Value: An Agent Embedded in Its Own Utility Function
Abram Demski · 9 karma · 9 comments

Stable Pointers to Value II: Environmental Goals
Abram Demski · 12 karma · 0 comments

Stable Pointers to Value III: Recursive Quantilization
Abram Demski · 9 karma · 0 comments

Policy Alignment
Abram Demski · 23 karma · 13 comments

Non-Consequentialist Cooperation?
Abram Demski · 21 karma · 4 comments