Human Values

Written by plex, last updated 16th Sep 2021

Human Values are the things we care about and would want an aligned superintelligence to look after and support. True human values are suspected to be highly complex, and they could be extrapolated into a wide variety of forms.

Posts tagged Human Values
Karma · Title · Author(s) · Age · Comments

74 · The shard theory of human values · Quintin Pope, Alex Turner · 3y · 33
42 · Human values & biases are inaccessible to the genome · Alex Turner · 3y · 38
17 · Multi-agent predictive minds and AI alignment · Jan_Kulveit · 7y · 0
16 · Requirements for a Basin of Attraction to Alignment · Roger Dearnaley · 1y · 0
9 · 5. Moral Value for Sentient Animals? Alas, Not Yet · Roger Dearnaley · 1y · 0
7 · 3. Uploading · Roger Dearnaley · 2y · 0
6 · 6. The Mutable Values Problem in Value Learning and CEV · Roger Dearnaley · 2y · 0
11 · How Would an Utopia-Maximizer Look Like? · Thane Ruthenis · 2y · 9
5 · 4. A Moral Case for Evolved-Sapience-Chauvinism · Roger Dearnaley · 2y · 0
45 · Shard Theory: An Overview · David Udell · 3y · 2
2 · Brain-over-body biases, and the embodied value problem in AI alignment · Geoffrey Miller · 3y · 0
59 · Humans provide an untapped wealth of evidence about alignment · Alex Turner, Quintin Pope · 3y · 42
42 · A broad basin of attraction around human values? · Wei Dai · 3y · 10
28 · The Computational Anatomy of Human Values · Beren Millidge · 2y · 8
18 · A "Bitter Lesson" Approach to Aligning AGI and ASI · Roger Dearnaley · 1y · 0
Showing 15 of 39 tagged posts.