Alignment Hot Take Advent Calendar

Dec 01, 2022 by Charlie Steiner
Take 1: We're not going to reverse-engineer the AI.
Take 2: Building tools to help build FAI is a legitimate strategy, but it's dual-use.
Take 3: No indescribable heavenworlds.
Take 4: One problem with natural abstractions is there's too many of them.
Take 5: Another problem for natural abstractions is laziness.
Take 6: CAIS is actually Orwellian.
Take 7: You should talk about "the human's utility function" less.
Take 8: Queer the inner/outer alignment dichotomy.
Take 9: No, RLHF/IDA/debate doesn't solve outer alignment.
Take 10: Fine-tuning with RLHF is aesthetically unsatisfying.
Take 11: "Aligning language models" should be weirder.
Take 12: RLHF's use is evidence that orgs will jam RL at real-world problems.
Take 13: RLHF bad, conditioning good.
Take 14: Corrigibility isn't that great.