AI ALIGNMENT FORUM
Top Questions
27 · What convincing warning shot could help prevent extinction from AI? — Charbel-Raphael Segerie, Diego Dorn · 1mo · 0 answers
45 · why assume AGIs will optimize for fixed goals? — nostalgebraist, Rob Bensinger · 2y · 3 answers
40 · Forecasting Thread: AI Timelines — Amanda Ngo, Daniel Kokotajlo, Ben Pace, datscilly · 4y · 33 answers
33 · Why The Focus on Expected Utility Maximisers? — Cinera Verinia, Scott Garrabrant · 1y · 1 answer
27 · How to talk about reasons why AGI might not be near? — Kaj Sotala, leogao · 8mo · 9 answers
Recent Activity
27 · What convincing warning shot could help prevent extinction from AI? — Charbel-Raphael Segerie, Diego Dorn · 1mo · 0 answers
7 · Is CIRL a promising agenda? — Chris_Leong · 2y · 0 answers
45 · why assume AGIs will optimize for fixed goals? — nostalgebraist, Rob Bensinger · 2y · 3 answers
7 · What evidence is there of LLMs containing world models? — Chris_Leong · 8mo · 0 answers
40 · Forecasting Thread: AI Timelines — Amanda Ngo, Daniel Kokotajlo, Ben Pace, datscilly · 4y · 33 answers
33 · Why The Focus on Expected Utility Maximisers? — Cinera Verinia, Scott Garrabrant · 1y · 1 answer
1 · Can we isolate neurons that recognize features vs. those which have some other role? — Joshua Clancy · 7mo · 0 answers
0 · Training an RL Model with Continuous State & Action Space in a Real-World Scenario — Alexander Ries · 7mo · 0 answers
27 · How to talk about reasons why AGI might not be near? — Kaj Sotala, leogao · 8mo · 9 answers
5 · What is wrong with this "utility switch button problem" approach? — Donald Hobson · 8mo · 2 answers
23 · Where might I direct promising-to-me researchers to apply for alignment jobs/grants? — Abram Demski · 8mo · 3 answers
6 · Why do we care about agency for alignment? — Chris_Leong · 1y · 6 answers