AI ALIGNMENT FORUM

Optimization

Edited by Ruby, et al. last updated 30th Dec 2024

Optimization is any kind of process that systematically produces solutions better than those that came before. More technically, an optimization process moves the world into a specific and otherwise improbable set of states by searching through a large space of possibilities and hitting small, low-probability targets. When such a process is gradually guided by an agent toward some specific state, by searching for specific targets, we can say the agent prefers that state.

The best way to illustrate an optimization process is with a simple example: Eliezer Yudkowsky suggests natural selection is such a process. Guided by an implicit preference – better replicators – natural selection searches the vast space of the genetic landscape and hits small targets: efficient mutations.

Consider the human being: a highly complex object, vanishingly unlikely to have been assembled by chance. Natural selection, however, over millions of years, built up the infrastructure needed to produce such a functioning body. This body, like those of other organisms, had the chance to develop (was selected) because it is itself a rather efficient replicator, well suited to the environment in which it arose.
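The cumulative character of this search can be sketched with a toy selection loop, in the spirit of Dawkins' "weasel" program (the target string, alphabet, and mutation rate below are illustrative assumptions, not anything from the text):

```python
import random

random.seed(0)  # deterministic run for reproducibility

TARGET = "replicator"  # hypothetical target "genome" for illustration
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def mutate(genome, rate=0.1):
    """Copy the genome, re-rolling each letter with probability `rate`."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in genome)

def fitness(genome):
    """How many positions already match the target (a stand-in for
    'better replicator')."""
    return sum(a == b for a, b in zip(genome, TARGET))

genome = "".join(random.choice(ALPHABET) for _ in TARGET)
generations = 0
while genome != TARGET and generations < 1_000_000:
    child = mutate(genome)
    if fitness(child) > fitness(genome):  # selection keeps strict improvements
        genome = child
    generations += 1

# Blind chance would need about 26**len(TARGET) tries (~10**14 for 10
# letters); cumulative selection typically hits the target in a few
# thousand generations.
```

The point of the sketch is the gap between the two search regimes: random sampling almost never hits the tiny target, while selection that retains partial successes reaches it quickly.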

Or consider the famous chess-playing computer Deep Blue. Outside the narrow domain of selecting moves for chess games, it can't do anything impressive; but as a chess player, it was massively more effective than virtually all humans. It has high optimization power in the chess domain but almost none in any other field. Humans, or evolution, are more domain-general optimization processes than Deep Blue, but that doesn't mean they're more effective at chess specifically. (Note, though, where this optimization-process abstraction is useful and where it breaks down: it's not obvious what it would mean for "evolution" to play chess, and yet it is useful to talk about the optimization power of natural selection, or of Deep Blue.)

Measuring Optimization Power

One way to think mathematically about optimization, as with evidence, is in information-theoretic bits. The optimization power of a process is the amount of surprise we would have at its result if no optimization were present: the base-two logarithm of the reciprocal of the result's probability. A one-in-a-million solution (a solution so good relative to your preference ordering that it would take a million random tries to find something that good or better) can be said to embody log_2(1,000,000) ≈ 19.9 bits of optimization. Compared to a random configuration of matter, any artifact you see is going to be far more optimized than this. The math describes only laws and general principles for reasoning about optimization; as with probability theory, you often can't apply it directly.
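Under this definition the calculation is a one-liner; the function name below is my own, but the formula is exactly the one from the paragraph above:

```python
import math

def optimization_power_bits(p):
    """Bits of optimization: log2 of the reciprocal of the probability
    that a result at least this good would occur with no optimization
    process present."""
    return math.log2(1 / p)

# A one-in-a-million solution:
print(round(optimization_power_bits(1e-6), 1))  # prints 19.9
```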

Further Reading & References

  • Optimization and the Singularity by Eliezer Yudkowsky
  • Measuring Optimization Power by Eliezer Yudkowsky

See also

  • Preference
  • Really powerful optimization process
  • Control theory
Posts tagged Optimization

  • The ground of optimization by Alex Flint
  • Optimization Amplifies by Scott Garrabrant
  • Selection vs Control by abramdemski
  • Risks from Learned Optimization: Introduction by evhub, Chris van Merwijk, Vlad Mikulik, Joar Skalse, Scott Garrabrant
  • Optimality is the tiger, and agents are its teeth by Veedrac
  • Bottle Caps Aren't Optimisers by DanielFilan
  • Beren's "Deconfusing Direct vs Amortised Optimisation" by DragonGod
  • Optimization Concepts in the Game of Life by Vika, Ramana Kumar
  • Towards Measures of Optimisation by mattmacdermott, Alexander Gietelink Oldenziel
  • Requirements for a STEM-capable AGI Value Learner (my Case for Less Doom) by RogerDearnaley
  • Goodhart's Curse and Limitations on AI Alignment by Gordon Seidoh Worley
  • Utility Maximization = Description Length Minimization by johnswentworth
  • Deconfusing Direct vs Amortised Optimization by beren
  • Ngo and Yudkowsky on AI capability gains by Eliezer Yudkowsky, Richard_Ngo
  • Search versus design by Alex Flint