AI ALIGNMENT FORUM
Top Questions
[45] why assume AGIs will optimize for fixed goals? (nostalgebraist, Rob Bensinger · 2y · 3 answers)
[40] Forecasting Thread: AI Timelines (Amanda Ngo, Daniel Kokotajlo, Ben Pace, datscilly · 4y · 33 answers)
[33] Why The Focus on Expected Utility Maximisers? (Cinera Verinia, Scott Garrabrant · 1y · 1 answer)
[27] How to talk about reasons why AGI might not be near? (Kaj Sotala, leogao · 6mo · 9 answers)
[23] Where might I direct promising-to-me researchers to apply for alignment jobs/grants? (Abram Demski · 6mo · 3 answers)
Recent Activity
[45] why assume AGIs will optimize for fixed goals? (nostalgebraist, Rob Bensinger · 2y · 3 answers)
[7] What evidence is there of LLM's containing world models? (Chris_Leong · 5mo · 0 answers)
[40] Forecasting Thread: AI Timelines (Amanda Ngo, Daniel Kokotajlo, Ben Pace, datscilly · 4y · 33 answers)
[33] Why The Focus on Expected Utility Maximisers? (Cinera Verinia, Scott Garrabrant · 1y · 1 answer)
[1] Can we isolate neurons that recognize features vs. those which have some other role? (Joshua Clancy · 5mo · 0 answers)
[0] Training a RL Model with Continuous State & Action Space in a Real-World Scenario (Alexander Ries · 5mo · 0 answers)
[27] How to talk about reasons why AGI might not be near? (Kaj Sotala, leogao · 6mo · 9 answers)
[5] What is wrong with this "utility switch button problem" approach? (Donald Hobson · 6mo · 2 answers)
[23] Where might I direct promising-to-me researchers to apply for alignment jobs/grants? (Abram Demski · 6mo · 3 answers)
[6] Why do we care about agency for alignment? (Chris_Leong · 1y · 6 answers)
[10] What does the launch of x.ai mean for AI Safety? (Chris_Leong · 8mo · 2 answers)
[7] Is CIRL a promising agenda? (Chris_Leong · 2y · 0 answers)