AI ALIGNMENT FORUM
AI Safety Public Materials

Written by Thane Ruthenis last updated 26th Aug 2022

AI Safety Public Materials are posts optimized for conveying information about AI risk to audiences outside the AI alignment community — whether ML specialists, policy-makers, or the general public.

Posts tagged AI Safety Public Materials
39 · AGI safety from first principles: Introduction (Richard Ngo, 5y, 15 comments)
46 · When discussing AI risks, talk about capabilities, not intelligence (Victoria Krakovna, 2y, 3 comments)
16 · The Importance of AI Alignment, explained in 5 points (Daniel_Eth, 2y, 2 comments)
4 · Distribution Shifts and The Importance of AI Safety (Leon Lang, 3y, 2 comments)
46 · “The Era of Experience” has an unsolved technical alignment problem (Steve Byrnes, 2mo, 24 comments)
29 · An artificially structured argument for expecting AGI ruin (Rob Bensinger, 2y, 2 comments)
30 · TASRA: A Taxonomy and Analysis of Societal-Scale Risks from AI (Andrew Critch, 2y, 0 comments)
24 · Response to Blake Richards: AGI, generality, alignment, & loss functions (Steve Byrnes, 3y, 9 comments)
13 · Excessive AI growth-rate yields little socio-economic benefit. (Cleo Nardo, 2y, 0 comments)
12 · Response to Dileep George: AGI safety warrants planning ahead (Steve Byrnes, 1y, 4 comments)
14 · Video & transcript: Challenges for Safe & Beneficial Brain-Like AGI (Steve Byrnes, 1mo, 0 comments)
19 · [$20K in Prizes] AI Safety Arguments Competition (Dan H, Kevin Liu, ozhang, TW123, Sidney Hough, 3y, 9 comments)
20 · $20K In Bounties for AI Safety Public Materials (Dan H, TW123, ozhang, 3y, 0 comments)