Dave O · 6mo

I think there are two paths, roughly, that RSPs could send us down. 

  1. RSPs are a good starting point. Over time we make them more concrete, build out the technical infrastructure to measure risk, and enshrine them in regulation or binding agreements between AI companies. They reduce risk substantially, and provide a mechanism whereby we can institute a global pause if necessary, which seems otherwise infeasible right now.
     
  2. RSPs are a type of safety-washing. They provide the illusion of a plan, but as written they are so vague as to be meaningless. They let companies claim they take safety seriously without meaningfully reducing risk, and may even increase risk by letting companies skate by on paper commitments rather than forcing them to act responsibly, which would mean not developing a dangerous, uncontrollable technology at all.

If you think that Anthropic and the other labs adopting these policies are fundamentally well-meaning and trying to do the right thing, you'll assume that we are by default heading down path #1. If you are more cynical about how these companies act, then #2 may seem more plausible.

My feeling is that Anthropic et al. are clearly trying to do the right thing, and that it's on us to ensure we stay on the good path here: by working to deliver the concrete pieces we need, by keeping the pressure on AI labs to take these ideas seriously, and by asking regulators to take concrete steps that give RSPs teeth and enforce the right outcomes.

But I also suspect that people on the more cynical side won't be persuaded by a post like this. If you think that companies are pretending to care about safety while really just racing to make $$, there's probably not much to say at this point other than: let's see what happens next.