Rob Bensinger

Communications lead at MIRI. Unless otherwise indicated, my posts and comments here reflect my own views, and not necessarily my employer's.


Debate on Instrumental Convergence between LeCun, Russell, Bengio, Zador, and More

May be useful to include in the review with some of the comments, or with a postmortem and analysis by Ben (or someone).

I don't think the discussion stands well on its own, but it may be helpful for:

  • people familiar with AI alignment who want to better understand some human factors behind 'the field isn't coordinating or converging on safety'.
  • people new to AI alignment who want to use the views of leaders in the field to help them orient.

AI Safety "Success Stories"

Seems like a good starting point for discussion. Researchers need to have some picture of what AI alignment is "for," in order to think about what research directions look most promising.

Soft takeoff can still lead to decisive strategic advantage

I'm not a slow-takeoff proponent, and I don't agree with everything in this post; but I think it's asking a lot of the right questions and introducing some useful framings.

Multiplicative Operations on Cartesian Frames

And now I've made a LW post collecting most of the definitions in the sequence so far, so they're easier to find: 

Additive and Multiplicative Subagents

I'm collecting most of the definitions from this sequence on one page, for easier reference: 

Multiplicative Operations on Cartesian Frames

For my personal use when I was helping review Scott's drafts, I made some mnemonics (complete with silly emojis to keep track of the small Cartesian frames and operations) here:

(Also includes my crude visualizations of morphism composition and homotopy equivalence to help those concepts stick better in my brain.)
