User Profile

rohinmshah's Posts

sorted by new

Alignment Newsletter #42 (9 min read)
Following human norms (4 min read)
Reward uncertainty (5 min read)
Alignment Newsletter #41 (10 min read)
Human-AI Interaction (4 min read)
AI safety without goal-directed behavior (3 min read)
Will humans build goal-directed agents? (5 min read)
What is narrow value learning? (1 min read)
Alignment Newsletter #40 (4 min read)

rohinmshah's Comments