AI ALIGNMENT FORUM
Quantified Humanism

Aug 24, 2017 by Eliezer Yudkowsky

On the tricky question of how we should apply such theories to our ordinary moral intuitions and decision-making.

One Life Against the World
The Allais Paradox
Zut Allais!
Feeling Moral
The "Intuitions" Behind "Utilitarianism"
Ends Don't Justify Means (Among Humans)
Ethical Injunctions
Something to Protect
When (Not) To Use Probabilities
Newcomb's Problem and Regret of Rationality
Interlude
Twelve Virtues of Rationality