In February 2023, researchers from several top industry AI labs (OpenAI, DeepMind, Anthropic) and universities (Cambridge, NYU) co-organized a two-day workshop on the problem of AI alignment, attended by 80 of the world's leading machine learning researchers. We're now making recordings and transcripts of the talks available online. The content ranged from very concrete to highly speculative, and the recordings include the many questions, interjections, and debates that arose throughout.

If you're a machine learning researcher interested in attending follow-up workshops similar to the San Francisco alignment workshop, you can fill out this form.

Main talks

Ilya Sutskever - Opening Remarks: Confronting the Possibility of AGI
Jacob Steinhardt - Aligning Massive Models: Current and Future Challenges
Ajeya Cotra - “Situational Awareness” Makes Measuring Safety Tricky
Paul Christiano - How Misalignment Could Lead to Takeover
Jan Leike - Scaling Reinforcement Learning from Human Feedback
Chris Olah - Looking Inside Neural Networks with Mechanistic Interpretability
Dan Hendrycks - Surveying Safety Research Directions

Lightning talks (Day 1)

Jason Wei - Emergent abilities of language models
Martin Wattenberg - Emergent world models and instrumenting AI systems
Been Kim - Alignment, setbacks and beyond alignment
Jascha Sohl-Dickstein - More intelligent agents behave less coherently
Ethan Perez - Model-written evals
Daniel Brown - Challenges and progress towards efficient and causal preference-based reward learning
Boaz Barak - For both alignment and utility: focus on the medium term
Ellie Pavlick - Comparing neural networks' conceptual representations to humans’
Percy Liang - Transparency and standards for language model evaluation

Lightning talks (Day 2)

Sam Bowman - Measuring progress on scalable oversight for large language models
Zico Kolter - "Safe Mode": the case for (manually) verifying the output of LLMs
Roger Grosse - Understanding LLM generalization using influence functions
Scott Niekum - Models of human preferences for learning reward functions
Aleksander Madry - Faster datamodels as a new approach to alignment
Andreas Stuhlmuller - Iterated decomposition: improving science Q&A by supervising reasoning processes
Paul Christiano - Mechanistic anomaly detection
Lionel Levine - Social dynamics of reinforcement learners
Vincent Conitzer - Foundations of cooperative AI lab
Scott Aaronson - Cryptographic backdoors in large language models
