AI ALIGNMENT FORUM
ozhang
Posts
Sorted by New
- 24 karma · Announcing the Introduction to ML Safety course · 1y · 3 comments
- 19 karma · $20K In Bounties for AI Safety Public Materials · 1y · 0 comments
- 30 karma · Introducing the ML Safety Scholars Program · 2y · 0 comments
- 25 karma · SERI ML Alignment Theory Scholars Program 2022 · 2y · 0 comments
- 19 karma · [$20K in Prizes] AI Safety Arguments Competition · 2y · 9 comments
- 34 karma · ML Alignment Theory Program under Evan Hubinger · 2y · 2 comments