AI ALIGNMENT FORUM
Conjecture (org)
• Applied to My guess at Conjecture's vision: triggering a narrative bifurcation by Alexandre Variengien 2mo ago
• Applied to Barriers to Mechanistic Interpretability for AGI Safety by Connor Leahy 8mo ago
• Applied to Introducing EffiSciences’ AI Safety Unit by Charbel-Raphael Segerie 10mo ago
• Applied to Conjecture: A standing offer for public debates on AI by Andrea_Miotti 10mo ago
• Applied to Critiques of prominent AI safety labs: Conjecture by Anonymous Omega 10mo ago
• Applied to Conjecture internal survey: AGI timelines and probability of human extinction from advanced AI by Maris Sala 11mo ago
• Applied to A technical note on bilinear layers for interpretability by Lee Sharkey 1y ago
• Applied to Shah (DeepMind) and Leahy (Conjecture) Discuss Alignment Cruxes by Olivia 1y ago
• Applied to A response to Conjecture's CoEm proposal by Kristian Freed 1y ago
• Applied to Japan AI Alignment Conference Postmortem by Chris Scammell 1y ago
• Applied to A couple of questions about Conjecture's Cognitive Emulation proposal by Igor Ivanov 1y ago
• Applied to Japan AI Alignment Conference by Andrea_Miotti 1y ago
• Applied to Questions about Conjecture's CoEm proposal by Akash 1y ago
• Applied to Cognitive Emulation: A Naive AI Safety Proposal by Connor Leahy 1y ago
• Applied to Christiano (ARC) and GA (Conjecture) Discuss Alignment Cruxes by Andrea_Miotti 1y ago
• Applied to Retrospective on the 2022 Conjecture AI Discussions by Andrea_Miotti 1y ago
• Applied to Basic facts about language models during training by Beren Millidge 1y ago