AI ALIGNMENT FORUM

Human Values

Edited by plex, last updated 16th Sep 2021

Human Values are the things we care about and would want an aligned superintelligence to look after and support. True human values are suspected to be highly complex, and they could be extrapolated into a wide variety of forms.

Posts tagged Human Values
Karma · Title · Author(s) · Age · Comments
74 · The shard theory of human values · Quintin Pope, TurnTrout · 3y · 33
42 · Human values & biases are inaccessible to the genome · TurnTrout · 3y · 38
17 · Multi-agent predictive minds and AI alignment · Jan_Kulveit · 7y · 0
16 · Requirements for a Basin of Attraction to Alignment · RogerDearnaley · 2y · 0
9 · 5. Moral Value for Sentient Animals? Alas, Not Yet · RogerDearnaley · 2y · 0
7 · 3. Uploading · RogerDearnaley · 2y · 0
6 · 6. The Mutable Values Problem in Value Learning and CEV · RogerDearnaley · 2y · 0
11 · How Would an Utopia-Maximizer Look Like? · Thane Ruthenis · 2y · 9
5 · 4. A Moral Case for Evolved-Sapience-Chauvinism · RogerDearnaley · 2y · 0
45 · Shard Theory: An Overview · David Udell · 3y · 2
2 · Brain-over-body biases, and the embodied value problem in AI alignment · geoffreymiller · 3y · 0
59 · Humans provide an untapped wealth of evidence about alignment · TurnTrout, Quintin Pope · 3y · 42
42 · A broad basin of attraction around human values? · Wei Dai · 3y · 10
28 · The Computational Anatomy of Human Values · beren · 2y · 8
18 · A "Bitter Lesson" Approach to Aligning AGI and ASI · RogerDearnaley · 1y · 0
(Showing 15 of 39 tagged posts.)