AI ALIGNMENT FORUM
Ought
Applied to:
- Ought will host a factored cognition “Lab Meeting” (by jungofthewon, 1y ago)
- Rant on Problem Factorization for Alignment (by Multicore, 1y ago)
- Prize for Alignment Research Tasks (by Andreas Stuhlmüller, 1y ago)
- Elicit: Language Models as Research Assistants (by Andreas Stuhlmüller, 1y ago)
- Supervise Process, not Outcomes (by Andreas Stuhlmüller, 1y ago)
- Forecasting Thread: Existential Risk (by MichaelA, 2y ago)
- GPT-3 and the future of knowledge work (by plex, 3y ago)
- Beta test GPT-3 based research assistant (by Multicore, 3y ago)
- Automating reasoning about the future at Ought (by Ben Pace, 3y ago)
- The Majority Is Always Wrong (by Raymond Arnold, 3y ago)
- Current AI Safety Roles for Software Engineers (by Multicore, 3y ago)
- Factored Cognition (by Ben Pace, 3y ago)
- Solving Math Problems by Relay (by Ben Pace, 3y ago)
- The Stack Overflow of Factored Cognition (by Ben Pace, 3y ago)
- [AN #86]: Improving debate and factored cognition through human experiments (by Ben Pace, 3y ago)
- Update on Ought's experiments on factored evaluation of arguments (by Ben Pace, 3y ago)
- Ought: why it matters and ways to help (by Ben Pace, 3y ago)
Revision v1.0.0 by Ben Pace, Jul 22nd 2020 (+90)
Ought is an AI alignment research non-profit focused on the problem of Factored Cognition.
Created by Ben Pace, 3y ago