Now that Softmax—my favorite new AI company—is public, I can finally share this. They’ve funded my research and I’m very excited about what they’re doing!
> A new AI alignment player has entered the arena.
>
> Emmett Shear, Adam Goldstein and David Bloomin have set up shop in San Francisco with a 10-person start-up called Softmax. The company is part research lab and part aspiring money maker and aimed at figuring out how to...
Chipmonk: As the Conceptual Boundaries Workshop (website) is coming up, and now that we're also planning a Mathematical Boundaries Workshop in April, I want to get more clarity on what exactly it is that you want out of «boundaries»/membranes. So I just want to check: Is your goal with boundaries just...
It could be the case that these two goals are separable and independent:

* “AI safety”: avoiding existential risk, s-risk, and actively negative outcomes
* “AI getting-everything-we-want” (CEV)

This is what Davidad calls the Deontic Sufficiency Hypothesis. If the hypothesis is true, it should be possible to de-pessimize and mitigate...