Newsletters

Contributors: Rohin Shah (2), Multicore (1)

Newsletters are collected summaries of recent events, posts, and academic papers.

The most prolific newsletter on Less Wrong is Rohin Shah's weekly Alignment Newsletter.

Posts tagged Newsletters
Relevance | Karma | Title | Author(s) | Posted | Comments
5 | 26 | QAPR 4: Inductive biases | Quintin Pope | 1y | 0
1 | 10 | [MLSN #8] Mechanistic interpretability, using law to inform AI alignment, scaling laws for proxy gaming | Dan H, ThomasW | 7mo | 0
2 | 40 | Quintin's alignment papers roundup - week 1 | Quintin Pope | 1y | 0
1 | 19 | [AN #102]: Meta learning by GPT-3, and a list of full proposals for AI alignment | Rohin Shah | 3y | 0
1 | 13 | [AN #115]: AI safety research problems in the AI-GA framework | Rohin Shah | 3y | 2
1 | 24 | Quintin's alignment papers roundup - week 2 | Quintin Pope | 1y | 0
1 | 17 | QAPR 3: interpretability-guided training of neural nets | Quintin Pope | 1y | 0
1 | 26 | [AN #166]: Is it crazy to claim we're in the most important century? | Rohin Shah | 2y | 2
0 | 21 | [AN #173] Recent language model results from DeepMind | Rohin Shah | 1y | 0
1 | 13 | [MLSN #6]: Transparency survey, provable robustness, ML models that predict the future | Dan H | 1y | 0
1 | 12 | [AN #112]: Engineering a Safer World | Rohin Shah | 3y | 2
1 | 15 | [AN #170]: Analyzing the argument for risk from power-seeking AI | Rohin Shah | 2y | 0
1 | 10 | Alignment Newsletter #36 | Rohin Shah | 5y | 0
1 | 12 | [AN #145]: Our three year anniversary! | Rohin Shah | 2y | 0
1 | 16 | [AN #167]: Concrete ML safety problems and their relevance to x-risk | Rohin Shah | 2y | 4
Showing 15 of 138 posts tagged Newsletters.