I agree with this post.
May be useful to include in the review alongside some of the comments, or with a postmortem and analysis by Ben (or someone). I don't think the discussion stands well on its own, but it may be helpful in that context.
Seems like a good starting point for discussion. Researchers need some picture of what AI alignment is "for" in order to think about which research directions look most promising.
I want to see more attempts to answer this question. It's also related to another post I nominated: https://www.lesswrong.com/posts/PKy8NuNPknenkDY74/soft-takeoff-can-still-lead-to-decisive-strategic-advantage
I'm not a slow-takeoff proponent, and I don't agree with everything in this post, but I think it asks a lot of the right questions and introduces some useful framings.
I've added the section-2 definitions above to https://www.lesswrong.com/posts/kLLu387fiwbis3otQ/cartesian-frames-definitions.
And now I've made a LW post collecting most of the definitions in the sequence so far, so they're easier to find: https://www.lesswrong.com/posts/kLLu387fiwbis3otQ/cartesian-frames-definitions
I'm collecting most of the definitions from this sequence on one page, for easier reference: https://www.lesswrong.com/posts/kLLu387fiwbis3otQ/cartesian-frames-definitions
For my personal use when I was helping review Scott's drafts, I made some mnemonics (complete with silly emojis to keep track of the small Cartesian frames and operations) here: https://docs.google.com/drawings/d/1bveBk5Pta_tml_4ezJ0oWiq-qudzgnsRlfbGJgZ1qv4/.
(It also includes my crude visualizations of morphism composition and homotopy equivalence, to help those concepts stick better in my brain.)