When I imagine the beginnings of life on earth, I imagine a handful of molecules which just so happen to catalyze reactions that produce more of those same molecules. The more of molecule X there is, the more molecule X is produced. The chemical kinetic equations contain a positive feedback loop (a.k.a. an instability).

There might also be other molecules which catalyze their own production. If Y catalyzes its own production more efficiently than X, then we eventually expect to see more Y than X. Still at the level of chemical kinetics, we’d say that the equations contain multiple positive feedback loops, and the feedback loop for Y has a faster doubling time than that for X.
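To make this concrete, here’s a minimal sketch (the rate constants `k_x` and `k_y` are invented for illustration, not taken from any real chemistry): two species, each catalyzing its own production, with Y’s loop slightly faster. Even with no interaction between them, Y’s share of the mixture tends toward 1.

```python
import numpy as np

# Two autocatalytic species: dX/dt = k_x * X, dY/dt = k_y * Y.
# Rate constants are illustrative only.
k_x, k_y = 0.9, 1.0          # Y's feedback loop has a faster doubling time
x, y = 1.0, 1.0              # equal initial concentrations
dt, steps = 0.01, 3000       # simple Euler integration out to t = 30

for _ in range(steps):
    x += k_x * x * dt
    y += k_y * y * dt

print(f"doubling time of X: {np.log(2) / k_x:.2f}")
print(f"doubling time of Y: {np.log(2) / k_y:.2f}")
print(f"Y's share of the mixture: {y / (x + y):.3f}")  # -> 1 as t grows
```

Note that “Y wins” falls out of the raw kinetics; nothing in the code mentions fitness.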

We haven’t used the word “fitness” here at all. We’re quite literally talking about eigenvalues of a matrix (i.e. the Jacobian of the kinetic equations of the chemical system) - those are what determine the relevant doubling times, at least close to ambient steady-state concentrations. As we move away from ambient concentrations, the math will get a bit more complicated, but the qualitative idea remains: we’re talking about positive feedback loops, without any explicit mention of fitness or optimization.
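As a sketch of the eigenvalue claim (the toy kinetics below, linear autocatalysis with a quadratic loss term, is invented for illustration): linearize the kinetics at the ambient steady state, and read the doubling times off the positive eigenvalues of the Jacobian.

```python
import numpy as np

def kinetics(c):
    """Toy kinetics for concentrations c = [X, Y]: linear autocatalysis
    minus a quadratic loss term. Purely illustrative."""
    x, y = c
    return np.array([0.9 * x - 0.1 * x**2,
                     1.0 * y - 0.1 * y**2])

def jacobian(f, c, eps=1e-6):
    """Numerical Jacobian of f at c, via central differences."""
    n = len(c)
    J = np.zeros((n, n))
    for j in range(n):
        dc = np.zeros(n)
        dc[j] = eps
        J[:, j] = (f(c + dc) - f(c - dc)) / (2 * eps)
    return J

c0 = np.zeros(2)  # the "ambient" steady state: neither molecule present
for lam in np.real(np.linalg.eigvals(jacobian(kinetics, c0))):
    if lam > 0:   # each positive eigenvalue is a positive feedback loop
        print(f"growth rate {lam:.2f}, doubling time {np.log(2) / lam:.2f}")
```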

But it sure seems like there should be a higher level of abstraction, at which the positive feedback loops for X and Y are competing, Y eventually wins out due to higher fitness, and there’s some meaningful sense in which the system is optimizing for fitness.

More generally, whenever there’s a dynamical system containing multiple instabilities (i.e. positive feedback loops), it seems like there should be a canonical way to interpret that system as multiple competing subsystems, under selection, optimizing for some kind of fitness function. I’d like a way to take a dynamical system containing positive feedback loops, and say both (a) what the competing subsystems are, and (b) what fitness function it’s implicitly maximizing.

Something like this would likely be useful in a number of areas:

  • Alignment: notice implicit optimization by looking for dynamic instabilities (e.g. instabilities in imperfect search).
  • Agent foundations: formulate “agents” as self-reinforcing feedback loops in dynamical systems. Tying effective self-reinforcement to world-models would probably be a key piece (e.g. along these lines).
  • Biology: generalize evolutionary theory.
  • Economics: ground economic theory in selection effects (e.g. along these lines) rather than ideal agents, allowing it to apply much more broadly.
Comments:

Consider the differential equation $\dot{x} = Ax$, where $A$ has many positive eigenvalues. This is the simplest case of “a dynamical system containing multiple instabilities (i.e. positive feedback loops)”.

Where is the selection? It isn't there. You have multiple independent exponential growth rates.
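A quick numerical illustration of this point (the matrix is arbitrary): with $A = \mathrm{diag}(2, 1)$, both modes blow up, each at its own rate, and neither affects the other.

```python
import numpy as np
from scipy.linalg import expm

A = np.diag([2.0, 1.0])       # two positive eigenvalues, chosen arbitrarily
x0 = np.array([1.0, 1.0])

for t in [0.0, 1.0, 2.0, 3.0]:
    x = expm(A * t) @ x0      # exact solution: x(t) = exp(At) x0
    print(f"t = {t}: x = {x}")
# Each component grows as exp(lambda_i * t), completely independently;
# there is no interaction, hence nothing that looks like selection.
```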

Consider a chaotic system $\dot{x} = f(x)$, like a double pendulum. Fix $x_0(t)$ to a particular typical solution.

Consider $\dot{y} = f(x_0(t) + y) - f(x_0(t))$ as a differential equation in $y$. Here $y$ represents the difference between $x_0$ and some other solution to $\dot{x} = f(x)$. If you start at $y = 0$ then $y$ stays at $0$. However, small variations will grow exponentially. After a while, you just get the difference between 2 arbitrary chaotic paths.
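A hedged numerical version of this (using the Lorenz system as a stand-in for “a chaotic system like a double pendulum”): integrate $x_0$ and a nearby solution, and watch $|y|$ grow exponentially and then saturate.

```python
import numpy as np
from scipy.integrate import solve_ivp

def f(t, x):
    """Lorenz system, standing in for a generic chaotic system."""
    sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
    return [sigma * (x[1] - x[0]),
            x[0] * (rho - x[2]) - x[1],
            x[0] * x[1] - beta * x[2]]

x_init = [1.0, 1.0, 1.0]
x_pert = [1.0 + 1e-8, 1.0, 1.0]          # start with a tiny y(0)
ts = np.linspace(0, 25, 6)

sol_a = solve_ivp(f, (0, 25), x_init, t_eval=ts, rtol=1e-10, atol=1e-12)
sol_b = solve_ivp(f, (0, 25), x_pert, t_eval=ts, rtol=1e-10, atol=1e-12)

for t, xa, xb in zip(ts, sol_a.y.T, sol_b.y.T):
    print(f"t = {t:5.1f}   |y| = {np.linalg.norm(xa - xb):.3e}")
# |y| grows roughly exponentially, then saturates at the attractor's
# diameter: just the difference between two arbitrary chaotic paths.
```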

I can’t see a way of meaningfully describing these as optimizing processes with competing subagents. Arguably $\dot{x} = Ax$ could be optimising $|x|$. However, this doesn’t seem canonical: the substitution $z = Bx$, for any invertible $B$, gives $\dot{z} = BAB^{-1}z$ and describes an exactly isomorphic system, but doesn’t preserve the modulus. This isomorphism does preserve the eigenvalues of $A$. That could be the thing being optimised.
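That claim is easy to check numerically (with an arbitrary $A$ and a random, almost-surely-invertible $B$): the similarity transform preserves the spectrum but not the norm of the state.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.diag([2.0, 1.0])                   # arbitrary A
B = rng.normal(size=(2, 2))               # random B, almost surely invertible
A_iso = B @ A @ np.linalg.inv(B)          # generator of the isomorphic system

x = np.array([1.0, 1.0])
print(np.sort(np.linalg.eigvals(A).real))      # eigenvalues of A ...
print(np.sort(np.linalg.eigvals(A_iso).real))  # ... equal those of B A B^-1
print(np.linalg.norm(x), np.linalg.norm(B @ x))  # but |x| != |Bx| in general
```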

+1. The multiple feedback loops have to be competing in some important sense; it's just not true that "whenever there’s a dynamical system containing multiple instabilities (i.e. positive feedback loops) ... there should be a canonical way to interpret that system as multiple competing subsystems..."

In the OP's case study, the molecules are competing for scarce resources. More abstractly, perhaps we can say that there are multiple feedback loops such that when the system has travelled far enough in the direction pushed by one feedback loop, it destroys or otherwise seriously inhibits movement in the directions pushed by the other feedback loops.
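One hedged way to cash this out in the OP’s chemical picture (all constants invented): couple the two autocatalytic loops through a shared resource R. The faster loop drives R below the level at which the slower loop can break even, so only the faster loop survives.

```python
# X and Y are autocatalytic but consume a shared resource R:
#   dX/dt = (k_x R - d) X,   dY/dt = (k_y R - d) Y,
#   dR/dt = s - R (k_x X + k_y Y)
# All constants are invented for illustration.
k_x, k_y, d, s = 0.9, 1.0, 0.5, 5.0
x, y, r = 0.1, 0.1, 5.0
dt = 0.001

for _ in range(500_000):                  # Euler integration out to t = 500
    dx = (k_x * r - d) * x
    dy = (k_y * r - d) * y
    dr = s - r * (k_x * x + k_y * y)
    x, y, r = x + dx * dt, y + dy * dt, r + dr * dt

print(f"X = {x:.2e}, Y = {y:.3f}, R = {r:.3f}")
# Y drives R down to d/k_y = 0.5, where X's net growth rate (k_x R - d)
# is negative, so X dies out: competitive exclusion.
```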

Consider a pencil balanced on its point. It has multiple positive feedback loops (different directions to fall in), and falling far in one direction prevents falling in the others. But once it has fallen, it just sits there. That said, evolution can settle into a strong local minimum and just sit there.

Mmm, good point. My hasty generalization was perhaps too hasty. Perhaps we need some sort of robust-to-different-initial-conditions criterion.