I Tested LLM Agents on Simple Safety Rules. They Failed in Surprising and Informative Ways. — AI Alignment Forum