Ought
Edited by Ben Pace, last updated 22nd Jul 2020
Ought is an AI alignment research non-profit focused on the problem of Factored Cognition.
Posts tagged Ought
Karma | Title | Author(s) | Posted | Comments
24 | Ought: why it matters and ways to help | paulfchristiano | 6y | 1
16 | Factored Cognition | stuhlmueller | 7y | 1
69 | Supervise Process, not Outcomes | stuhlmueller, jungofthewon | 3y | 2
10 | Update on Ought's experiments on factored evaluation of arguments | Owain_Evans | 6y | 1
10 | [AN #86]: Improving debate and factored cognition through human experiments | Rohin Shah | 6y | 0
43 | Rant on Problem Factorization for Alignment | johnswentworth | 3y | 32
26 | Elicit: Language Models as Research Assistants | stuhlmueller, jungofthewon | 3y | 4
29 | Prize for Alignment Research Tasks | stuhlmueller, William_S | 3y | 6
15 | Ought will host a factored cognition “Lab Meeting” | jungofthewon, stuhlmueller | 3y | 1