AI ALIGNMENT FORUM
ozhang — AI Alignment Forum
Posts (sorted by newest first; each entry lists score, title, age, and comment count)
- 24 · Announcing the Introduction to ML Safety course · 3y · 3 comments
- 20 · $20K In Bounties for AI Safety Public Materials · 3y · 0 comments
- 30 · Introducing the ML Safety Scholars Program · 4y · 0 comments
- 25 · SERI ML Alignment Theory Scholars Program 2022 · 4y · 0 comments
- 19 · [$20K in Prizes] AI Safety Arguments Competition · 4y · 9 comments
- 34 · ML Alignment Theory Program under Evan Hubinger · 4y · 2 comments