Ought — AI Alignment Forum
Ought

Edited by Ben Pace; revision 1.0.0, last updated 22nd Jul 2020.
Ought is an AI alignment research non-profit focused on the problem of Factored Cognition.
Posts tagged Ought (sorted by relevance):
| Relevance | Karma | Title | Author(s) | Posted | Comments |
|---|---|---|---|---|---|
| 3 | 24 | Ought: why it matters and ways to help | paulfchristiano | 6y | 1 |
| 3 | 16 | Factored Cognition | stuhlmueller | 7y | 1 |
| 1 | 69 | Supervise Process, not Outcomes | stuhlmueller, jungofthewon | 4y | 2 |
| 1 | 10 | Update on Ought's experiments on factored evaluation of arguments | Owain_Evans | 6y | 1 |
| 1 | 10 | [AN #86]: Improving debate and factored cognition through human experiments | Rohin Shah | 6y | 0 |
| 0 | 45 | Rant on Problem Factorization for Alignment | johnswentworth | 3y | 32 |
| 1 | 26 | Elicit: Language Models as Research Assistants | stuhlmueller, jungofthewon | 4y | 4 |
| 1 | 29 | Prize for Alignment Research Tasks | stuhlmueller, William_S | 4y | 6 |
| 1 | 15 | Ought will host a factored cognition “Lab Meeting” | jungofthewon, stuhlmueller | 3y | 1 |