Ought

Written by Ben Pace, last updated 22nd July 2020

Ought is an AI alignment research non-profit focused on the problem of Factored Cognition.

Posts tagged Ought
- Ought: why it matters and ways to help, by Paul Christiano (24 karma, 6y, 1 comment)
- Factored Cognition, by Andreas Stuhlmüller (16 karma, 7y, 1 comment)
- Supervise Process, not Outcomes, by Andreas Stuhlmüller and jungofthewon (69 karma, 3y, 2 comments)
- Update on Ought's experiments on factored evaluation of arguments, by Owain Evans (10 karma, 5y, 1 comment)
- [AN #86]: Improving debate and factored cognition through human experiments, by Rohin Shah (10 karma, 5y, 0 comments)
- Rant on Problem Factorization for Alignment, by johnswentworth (43 karma, 3y, 32 comments)
- Elicit: Language Models as Research Assistants, by Andreas Stuhlmüller and jungofthewon (26 karma, 3y, 4 comments)
- Prize for Alignment Research Tasks, by Andreas Stuhlmüller and William Saunders (29 karma, 3y, 6 comments)
- Ought will host a factored cognition "Lab Meeting", by jungofthewon and Andreas Stuhlmüller (15 karma, 3y, 1 comment)