Mesa-Optimization

Mesa-Optimization is the situation that occurs when a learned model (such as a neural network) is itself an optimizer. In this situation, a base optimizer creates a second optimizer, called a mesa-optimizer. The primary reference work for this concept is Hubinger et al.'s "Risks from Learned Optimization in Advanced Machine Learning Systems".
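
To make the two levels concrete, here is a minimal toy sketch of the idea (an invented illustration, not code from "Risks from Learned Optimization" or any other work cited on this page; all names, the quadratic scoring rule, and the numbers are assumptions made up for the example). The base optimizer is a simple evolutionary search over parameters; the model it produces is itself an optimizer, because its forward pass explicitly searches over candidate actions for whichever scores best under an internal objective. In this toy the inner search is built into the model's architecture for clarity, whereas the actual concern is about inner search that emerges from training.

```python
import numpy as np

rng = np.random.default_rng(0)
CANDIDATE_ACTIONS = np.linspace(-1.0, 1.0, 41)

def mesa_forward(theta, obs):
    # Forward pass of the learned model: an explicit search (argmax) over
    # candidate actions, scored by an internal (mesa-)objective whose shape
    # is determined by the learned parameters theta.
    scores = theta[0] * CANDIDATE_ACTIONS * obs - theta[1] * CANDIDATE_ACTIONS ** 2
    return CANDIDATE_ACTIONS[np.argmax(scores)]

def base_loss(theta, observations, targets):
    # The base objective: mean squared error between the model's chosen
    # actions and the behavior the developers intended.
    actions = np.array([mesa_forward(theta, o) for o in observations])
    return float(np.mean((actions - targets) ** 2))

observations = rng.uniform(-1.0, 1.0, size=100)
targets = 0.5 * observations  # intended behavior: output half the observation

# The base optimizer: random-mutation hill climbing over theta, standing in
# for gradient descent or natural selection. It only ever evaluates the base
# loss; the action-level search happens inside the model it is shaping.
theta = np.array([0.1, 1.0])
best = base_loss(theta, observations, targets)
for _ in range(2000):
    candidate = theta + rng.normal(scale=0.05, size=2)
    loss = base_loss(candidate, observations, targets)
    if loss < best:
        theta, best = candidate, loss

print(f"theta after training: {theta}, base loss: {best:.5f}")
```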

Examples

Natural selection is an optimization process that optimizes for reproductive fitness. Natural selection produced humans, who are capable of pursuing goals that no longer correlate reliably with reproductive fitness, and who are themselves optimizers. Humans are therefore mesa-optimizers of natural selection.

In the context of AI alignment, the concern is that a base optimizer (e.g., a gradient descent process) may produce a learned model that is itself an optimizer, and that has unexpected and undesirable properties. Even if the gradient descent process is in some sense "trying" to do exactly what human developers want, the resultant mesa-optimizer will not typically be trying to do the exact same thing.[1]
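
The worry can be made numeric with another toy sketch (again a hypothetical setup invented for illustration, not taken from the cited works): the mesa-objective below is a proxy that is tightly correlated with the base objective on the training distribution, so optimizing it looks aligned during training, and the correlation comes apart at deployment.

```python
import numpy as np

rng = np.random.default_rng(1)

def base_objective(state):
    # What the developers want maximized: the first feature of a state.
    return state[0]

def mesa_objective(state):
    # What the learned optimizer actually pursues: a proxy (second feature)
    # that happened to correlate with the base objective during training.
    return state[1]

def choose(options, objective):
    # The mesa-optimizer picks whichever option scores highest under its
    # own internal objective.
    return max(options, key=objective)

def average_base_score(sample_options, trials=2000):
    # How well the base objective is served when the mesa-objective chooses.
    return float(np.mean([
        base_objective(choose(sample_options(), mesa_objective))
        for _ in range(trials)
    ]))

def train_options():
    # Training distribution: the proxy tracks the base objective almost
    # exactly, so optimizing the proxy looks aligned.
    xs = rng.uniform(0.0, 1.0, size=5)
    return [np.array([x, x + rng.normal(scale=0.01)]) for x in xs]

def deploy_options():
    # Deployment distribution: the two features are independent, so the
    # proxy no longer tracks the base objective.
    return [rng.uniform(0.0, 1.0, size=2) for _ in range(5)]

print("training:  ", average_base_score(train_options))    # ~0.83, near the max of 5 draws
print("deployment:", average_base_score(deploy_options))   # ~0.50, no better than chance
```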


History

Previously, work on this concept went under the names Inner Optimizer or Optimization Daemons.

Wei Dai brings up a similar idea in an SL4 thread.[2]

The optimization daemons article on Arbital was probably published in 2016.[1]

Jessica Taylor wrote two posts about daemons while at MIRI:

See also

References

  1. "Optimization daemons". Arbital.
  2. Wei Dai. '"friendly" humans?' December 31, 2003.

External links

Some posts that reference optimization daemons:

  • "Cause prioritization for downside-focused value systems": "Alternatively, perhaps goal preservation becomes more difficult the base-optimizermore capable AI systems become, in which case the future might be controlled by unstable goal functions taking turns over the steering wheel"
  • "Techniques for optimizing worst-case performance": "The difficulty of evolution.

    optimizing worst-case performance is one of the most likely reasons that I think prosaic AI alignment might turn out to be impossible (if combined with an unlucky empirical situation)." (the phrase "unlucky empirical situation" links to the optimization daemons page on Arbital)
