In many industries, but especially those with a potentially adversarial relationship to society, such as advertising and arms, self-regulatory organizations (SROs) exist to provide voluntary regulation of industry actors and to assure society of their good intentions. For example:

  • TrustArc (formerly TRUSTe) has long provided voluntary certification services that help web companies assure the public they follow basic practices allowing consumers to protect their privacy. It has been successful enough that, outside the EU, governments have largely refrained from regulating online businesses.
  • The US Green Building Council offers multiple levels of LEED certification, giving real estate developers both targets to aim for and a way to prove to the public that they are protecting environmental commons.
  • The American Medical Association, the American Bar Association, and the National Association of Realtors are SROs that function as de facto official regulators of their industries despite being non-governmental organizations (NGOs).
  • Financial regulation was formerly, and in some cases still is, handled via SROs, although governments have progressively taken a stronger hand in the industry over the last 100 years.

AI, especially AGI, is an area where there are many incentives to violate societal preferences and damage the commons, and it is currently unregulated except where it comes into contact with existing regulations in its areas of application. Consequently, there may be reason to form an AGI SRO. Some reasons in favor:

  • An SRO could offer certification of safety and alignment efforts being taken by AGI researchers.
  • An SRO may be well positioned to reduce the risk of an AI race by coordinating efforts that would otherwise result in competition.
  • An SRO could encourage AI safety in industry and academia while being politically neutral (not tied to a single university or company).
  • An SRO may allow AI safety experts to manage the industry rather than letting it fall to other actors who may be less qualified, or whose priorities place less weight on preventing existential risks.
  • An SRO could act as a "clearinghouse" for AI safety research funding.
  • An SRO could give greater legitimacy to prioritizing AI safety efforts among capabilities researchers.

Some reasons against:

  • An SRO might form a de facto "guild" and keep out qualified researchers.
  • An SRO could create the appearance that more is being done than really is.
  • Relatedly, an SRO could promote the wrong incentives and actually result in less safe AI.
  • An SRO might divert funding and effort from technical research in AI safety.

I'm just beginning to consider the idea of assembling an SRO for AI safety, and am especially interested in discussing the idea further to see if it's worth pursuing. Feedback very welcome!
