Each December, the LessWrong community reflects on the best blogposts of yesteryear to decide which posts stood the test of time. In this post, I aim to: * Explain some subtleties of what I'm hoping we get out of the Annual Review * Make some explicit asks (even of busy...
I've had many conversations over the last few years about the health of the AI Alignment field, and one of the things that has come up most frequently (including in conversations with Rohin, Buck, and various Open Phil people) is that many people wish there were more of a review...
A fictionalized/paraphrased version of a real dialogue between me and John Wentworth. Fictionalized Me: So, in the Eliezer/Richard dialogues, Eliezer is trying to get across the idea that consequentialism deeply permeates optimization, that this is important, and that it's one[1] reason why Alignment is Hard. But something about it is confusing and...
After a few months of open beta, the AI Alignment Forum is ready to launch. It is a new website built by the team behind LessWrong 2.0 to help create a hub for technical AI Alignment research and discussion. This is an in-progress FAQ about the new Forum. What...
We've just launched the beta for AlignmentForum.org. Much of the value of LessWrong has come from the development of technical research on AI Alignment. In particular, having those discussions happen in an accessible place has allowed newcomers to get up to speed and get involved. But the alignment research community has...
This is an experiment in short-form content on LW2.0. I'll be using the comment section of this post as a repository for short, sometimes-half-baked posts that either: 1. don't feel ready to be written up as full posts, or 2. I think the process of writing them up might make...