AI ALIGNMENT FORUM

Edited by Multicore, DanielFilan, et al. Last updated 30th Dec 2024.

AXRP (the AI X-risk Research Podcast) is a podcast hosted by Daniel Filan.

See also: Audio, Interviews

Posts tagged AXRP
(score · title · author · posted · comments)

37 · AXRP Episode 31 - Singular Learning Theory with Daniel Murfet · DanielFilan · 1y · 0
38 · AXRP Episode 27 - AI Control with Buck Shlegeris and Ryan Greenblatt · DanielFilan · 1y · 6
19 · AXRP Episode 24 - Superalignment with Jan Leike · DanielFilan · 2y · 3
30 · AXRP Episode 22 - Shard Theory with Quintin Pope · DanielFilan · 2y · 4
25 · AXRP Episode 19 - Mechanistic Interpretability with Neel Nanda · DanielFilan · 3y · 0
17 · AXRP Episode 25 - Cooperative AI with Caspar Oesterheld · DanielFilan · 2y · 0
25 · AXRP Episode 39 - Evan Hubinger on Model Organisms of Misalignment · DanielFilan · 9mo · 0
20 · AXRP Episode 33 - RLHF Problems with Scott Emmons · DanielFilan · 1y · 0
18 · AXRP Episode 15 - Natural Abstractions with John Wentworth · DanielFilan · 3y · 0
20 · AXRP Episode 38.2 - Jesse Hoogland on Singular Learning Theory · DanielFilan · 9mo · 0
15 · AXRP Episode 45 - Samuel Albanie on DeepMind's AGI Safety Approach · DanielFilan · 2mo · 0
17 · AXRP Episode 41 - Lee Sharkey on Attribution-based Parameter Decomposition · DanielFilan · 3mo · 0
13 · AXRP Episode 40 - Jason Gross on Compact Proofs and Interpretability · DanielFilan · 5mo · 0
15 · AXRP Episode 14 - Infra-Bayesian Physicalism with Vanessa Kosoy · DanielFilan · 3y · 10
15 · AXRP Episode 13 - First Principles of AGI Safety with Richard Ngo · DanielFilan · 3y · 1
(Showing 15 of 50 tagged posts.)