AI ALIGNMENT FORUM
Top Questions
23 · Where might I direct promising-to-me researchers to apply for alignment jobs/grants? · Abram Demski · 11d · 3
26 · How to talk about reasons why AGI might not be near? · Kaj Sotala, leogao · 13d · 4
30 · Why The Focus on Expected Utility Maximisers? · Cinera Verinia, Scott Garrabrant · 9mo · 0
39 · Seriously, what goes wrong with "reward the agent when it makes you smile"? · Alex Turner, johnswentworth · 1y · 13
31 · Does Agent-like Behavior Imply Agent-like Architecture? · Scott Garrabrant · 4y · 2
Recent Activity
5 · What is wrong with this "utility switch button problem" approach? · Donald Hobson · 4d · 2
23 · Where might I direct promising-to-me researchers to apply for alignment jobs/grants? · Abram Demski · 11d · 3
26 · How to talk about reasons why AGI might not be near? · Kaj Sotala, leogao · 13d · 4
6 · Why do we care about agency for alignment? · Chris_Leong · 5mo · 6
10 · What does the launch of x.ai mean for AI Safety? · Chris_Leong · 3mo · 2
30 · Why The Focus on Expected Utility Maximisers? · Cinera Verinia, Scott Garrabrant · 9mo · 0
39 · Seriously, what goes wrong with "reward the agent when it makes you smile"? · Alex Turner, johnswentworth · 1y · 13
8 · Why don't quantilizers also cut off the upper end of the distribution? · Alex_Altair, Jeremy Gillen · 4mo · 1
31 · Does Agent-like Behavior Imply Agent-like Architecture? · Scott Garrabrant · 4y · 2
27 · How Many Bits Of Optimization Can One Bit Of Observation Unlock? · johnswentworth · 5mo · 3
5 · How much do personal biases in risk assessment affect assessment of AI risks? · G Gordon Worley III · 5mo · 1
33 · Concave Utility Question · Scott Garrabrant · 5mo · 36