AI ALIGNMENT FORUM
Top Questions
33 · Why The Focus on Expected Utility Maximisers? · Cinera Verinia, Scott Garrabrant · 1y · 1 answer
27 · How to talk about reasons why AGI might not be near? · Kaj Sotala, leogao · 3mo · 9 answers
23 · Where might I direct promising-to-me researchers to apply for alignment jobs/grants? · Abram Demski · 3mo · 3 answers
40 · Seriously, what goes wrong with "reward the agent when it makes you smile"? · Alex Turner, johnswentworth · 1y · 13 answers
31 · Does Agent-like Behavior Imply Agent-like Architecture? · Scott Garrabrant · 4y · 3 answers
Recent Activity
33 · Why The Focus on Expected Utility Maximisers? · Cinera Verinia, Scott Garrabrant · 1y · 1 answer
1 · Can we isolate neurons that recognize features vs. those which have some other role? · Joshua Clancy · 2mo · 0 answers
0 · Training a RL Model with Continuous State & Action Space in a Real-World Scenario · Alexander Ries · 2mo · 0 answers
7 · What evidence is there of LLM's containing world models? · Chris_Leong · 2mo · 0 answers
27 · How to talk about reasons why AGI might not be near? · Kaj Sotala, leogao · 3mo · 9 answers
5 · What is wrong with this "utility switch button problem" approach? · Donald Hobson · 3mo · 2 answers
23 · Where might I direct promising-to-me researchers to apply for alignment jobs/grants? · Abram Demski · 3mo · 3 answers
6 · Why do we care about agency for alignment? · Chris_Leong · 8mo · 6 answers
10 · What does the launch of x.ai mean for AI Safety? · Chris_Leong · 5mo · 2 answers
7 · Is CIRL a promising agenda? · Chris_Leong · 1y · 0 answers
40 · Seriously, what goes wrong with "reward the agent when it makes you smile"? · Alex Turner, johnswentworth · 1y · 13 answers
8 · Why don't quantilizers also cut off the upper end of the distribution? · Alex_Altair, Jeremy Gillen · 7mo · 1 answer