AI ALIGNMENT FORUM

AI Safety Public Materials

Edited by Thane Ruthenis, last updated 26th Aug 2022

AI Safety Public Materials are posts optimized for conveying information on AI Risk to audiences outside the AI Alignment community — be they ML specialists, policy-makers, or the general public.

Posts tagged AI Safety Public Materials
Score · Title · Author · Posted · Comments
39 · AGI safety from first principles: Introduction · Richard_Ngo · 5y · 15
46 · When discussing AI risks, talk about capabilities, not intelligence · Vika · 2y · 3
16 · The Importance of AI Alignment, explained in 5 points · Daniel_Eth · 3y · 2
4 · Distribution Shifts and The Importance of AI Safety · Leon Lang · 3y · 2
46 · “The Era of Experience” has an unsolved technical alignment problem · Steven Byrnes · 6mo · 24
29 · An artificially structured argument for expecting AGI ruin · Rob Bensinger · 2y · 2
30 · TASRA: A Taxonomy and Analysis of Societal-Scale Risks from AI · Andrew_Critch · 2y · 0
24 · Response to Blake Richards: AGI, generality, alignment, & loss functions · Steven Byrnes · 3y · 9
13 · Excessive AI growth-rate yields little socio-economic benefit. · Cleo Nardo · 3y · 0
12 · Response to Dileep George: AGI safety warrants planning ahead · Steven Byrnes · 1y · 4
14 · Video & transcript: Challenges for Safe & Beneficial Brain-Like AGI · Steven Byrnes · 5mo · 0
19 · [$20K in Prizes] AI Safety Arguments Competition · Dan H, Kevin Liu, ozhang, TW123, Sidney Hough · 3y · 9
20 · $20K In Bounties for AI Safety Public Materials · Dan H, TW123, ozhang · 3y · 0