This is a special post for quick takes by Leon Lang. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.


Zeta Functions in Singular Learning Theory

In this shortform, I very briefly explain my understanding of how zeta functions play a role in the derivation of the free energy in singular learning theory. This is entirely based on slide 14 of the SLT low 4 talk of the recent summit on SLT and Alignment, so feel free to ignore this shortform and simply watch the video.

The story is this: we have a prior $\varphi(w)$ over parameters $w \in W$, a model $p(x \mid w)$, and there is an unknown true distribution $q(x)$. For model selection, we are interested in the evidence of our model for a data set $D_n = (x_1, \dots, x_n)$, which is given by

$$Z_n = \int_W \varphi(w) \prod_{i=1}^n p(x_i \mid w) \, dw = \Big( \prod_{i=1}^n q(x_i) \Big) \cdot \int_W \varphi(w) \, e^{-n K_n(w)} \, dw,$$

where $K_n(w) = \frac{1}{n} \sum_{i=1}^n \log \frac{q(x_i)}{p(x_i \mid w)}$ is the empirical KL divergence. In fact, we are interested in selecting the model that maximizes the average of this quantity over all data sets. The average is then given by

$$\bar{Z}_n = \int_W \varphi(w) \, e^{-n K(w)} \, dw,$$

where $K(w) = \int q(x) \log \frac{q(x)}{p(x \mid w)} \, dx$ is the Kullback-Leibler divergence.

But now we have a problem: how do we compute this integral? Computing this integral is what the free energy formula is about.

The answer: by computing a different integral. So now, I'll explain the connections we can draw to different integrals.

Let

$$v(t) := \int_W \delta(t - K(w)) \, \varphi(w) \, dw,$$

which is called the state density function. Here, $\delta$ is the Dirac delta function. For different $t$, it measures the density of states (= parameter vectors $w$) that have $K(w) = t$. It is thus a measure for the "size" of different level sets. This state density function is connected to two different things.

Laplace Transform to the Evidence

First of all, it is connected to the evidence above. Namely, let $\mathcal{L}(v)$ be the Laplace transform of $v$. It is a function $\mathcal{L}(v): (0, \infty) \to \mathbb{R}$ given by

$$\mathcal{L}(v)(n) = \int_0^\infty e^{-nt} \, v(t) \, dt = \int_W \varphi(w) \int_0^\infty e^{-nt} \, \delta(t - K(w)) \, dt \, dw = \int_W \varphi(w) \, e^{-n K(w)} \, dw.$$

In the first step, we changed the order of integration, and in the second step we used the defining property of the Dirac delta. Great, so this tells us that $\mathcal{L}(v)(n) = \bar{Z}_n$! So this means we essentially just need to understand $v$.
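To see this concretely, here is a minimal numerical sketch (my own toy setup, not from the talk): for a one-dimensional model with $K(w) = w^2$ and a uniform prior on $W = [0, 1]$, a prior-weighted histogram of the $K$-values gives a discretized $v(t)$, and summing $e^{-nt}$ against it matches the direct computation of $\int_W \varphi(w) e^{-n K(w)} \, dw$ up to discretization error.

```python
# Toy check (not from the talk): the Laplace transform of the state density
# v(t) reproduces the averaged evidence Z_n = ∫ φ(w) exp(-n K(w)) dw
# for K(w) = w^2 and a uniform prior on W = [0, 1].
import numpy as np

n = 50.0                                   # the n appearing in exp(-n K(w))
w = np.linspace(0.0, 1.0, 200_001)         # grid on the parameter space W
dw = w[1] - w[0]
phi = np.ones_like(w)                      # uniform prior density φ(w) = 1
K = w**2                                   # toy KL divergence K(w)

# Direct route: Z_n = ∫ φ(w) exp(-n K(w)) dw by a Riemann sum.
Z_direct = np.sum(phi * np.exp(-n * K)) * dw

# State-density route: a prior-weighted histogram of K(w) discretizes v(t);
# summing exp(-n t) against it approximates ∫ exp(-n t) v(t) dt.
t_edges = np.linspace(0.0, K.max(), 20_001)
mass, _ = np.histogram(K, bins=t_edges, weights=phi * dw)
t_mid = 0.5 * (t_edges[:-1] + t_edges[1:])
Z_laplace = np.sum(np.exp(-n * t_mid) * mass)

print(Z_direct, Z_laplace)                 # the two values agree closely (≈ 0.125)
```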

Mellin Transform to the Zeta Function

But how do we compute $v$? By using another transform. Let $\zeta = \mathcal{M}(v)$ be the Mellin transform of $v$. It is a function $\zeta: \mathbb{C} \to \mathbb{C}$ (or maybe only defined on part of $\mathbb{C}$?) given by

$$\zeta(z) = \int_0^\infty t^{z} \, v(t) \, dt = \int_W \varphi(w) \int_0^\infty t^{z} \, \delta(t - K(w)) \, dt \, dw = \int_W K(w)^{z} \, \varphi(w) \, dw.$$

Again, we used a change in the order of integration and then the defining property of the Dirac delta. This is called a zeta function.
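To get a feel for this object, here is a toy example of my own (not from the talk): take $W = [0, 1]$ with uniform prior $\varphi(w) = 1$ and $K(w) = w^{2k}$ for some integer $k \geq 1$. Then

$$\zeta(z) = \int_0^1 w^{2kz} \, dw = \frac{1}{2kz + 1},$$

which extends to a meromorphic function with a single pole at $z = -\tfrac{1}{2k}$. This pole is exactly the kind of information about $K$ that the free energy formula extracts from the zeta function.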

What's this useful for?

The Mellin transform has an inverse. Thus, if we can compute the zeta function, we can also compute the original evidence as

$$\bar{Z}_n = \mathcal{L}\big(\mathcal{M}^{-1}(\zeta)\big)(n).$$

Thus, we essentially changed our problem to the problem of studying the zeta function $\zeta(z) = \int_W K(w)^z \, \varphi(w) \, dw$. To compute the integral of the zeta function, it is then useful to perform blowups to resolve the singularities in the set of minima of $K$, which is where algebraic geometry enters the picture. For more on all of this, I refer, again, to the excellent SLT low 4 talk of the recent summit on singular learning theory.
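To see the whole pipeline in the toy example from above (again my own, not from the talk): inverting the Mellin transform of $\zeta(z) = \frac{1}{2kz + 1}$ gives $v(t) = \frac{1}{2k} \, t^{\frac{1}{2k} - 1}$ on $(0, 1)$, and applying the Laplace transform then yields, for large $n$,

$$\bar{Z}_n = \int_0^1 e^{-nt} \, \frac{1}{2k} \, t^{\frac{1}{2k} - 1} \, dt \approx \frac{\Gamma\!\left(\frac{1}{2k}\right)}{2k} \, n^{-\frac{1}{2k}},$$

so that $-\log \bar{Z}_n \approx \frac{1}{2k} \log n + \mathrm{const}$. The pole of the zeta function at $-\frac{1}{2k}$ reappears as the coefficient of the $\log n$ term in the free energy.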

This is my first comment on my own, i.e., Leon Lang's, shortform. It doesn't have any content; I just want to test the functionality.

Unfortunately not, as far as my interface goes, if you wanted to comment here.

Yes, it seems like both creating a "New Shortform" when hovering over my user name and commenting on "Leon Lang's Shortform" will do the exact same thing. But I can also reply to the comments.

Edit: This is now obsolete with our NAH distillation.

Making the Telephone Theorem and Its Proof Precise

This shortform distills the Telephone Theorem and its proof. It will not be "intuitive" at all; the only goal is to be mathematically precise at every step.

Let $X_1, \dots, X_n$ be jointly distributed finite random variables, meaning they are all functions

$$X_i : \Omega \to \mathcal{X}_i, \quad i = 1, \dots, n,$$

starting from the same finite sample space $\Omega$ with a given probability distribution $P$ and into respective finite value spaces $\mathcal{X}_1, \dots, \mathcal{X}_n$. Additionally, assume that these random variables form a Markov chain

$$X_1 \to X_2 \to \dots \to X_n,$$

meaning that $X_{i+1}$ is conditionally independent of $X_1, \dots, X_{i-1}$ given $X_i$, for all $i$.

Lemma: For a Markov chain $X \to Y \to Z$, the following two statements are equivalent:

(a) $I(X; Y) = I(X; Z)$

(b) For all $y \in \mathcal{Y}$ and $z \in \mathcal{Z}$ with $P(y, z) > 0$:

$$P(X = x \mid Y = y) = P(X = x \mid Z = z) \quad \text{for all } x \in \mathcal{X}.$$

Proof:

Assume (a): Inspecting an information diagram of $X, Y, Z$ will immediately result in us also observing the Markov chain $X \to Z \to Y$ (the assumption forces $I(X; Y \mid Z) = 0$). Markov chains can be turned around, thus we get the two chains

$$Z \to Y \to X \quad \text{and} \quad Y \to Z \to X.$$

Factorizing along these two chains, we obtain:

$$P(x \mid y, z) = P(x \mid y) \quad \text{and} \quad P(x \mid y, z) = P(x \mid z),$$

and thus, for all $y, z$ with $P(y, z) > 0$: $P(x \mid y) = P(x \mid z)$ for all $x$. That proves (b).

Assume (b): We have

$$P(x, y \mid z) = P(x \mid y, z) \, P(y \mid z) = P(x \mid y) \, P(y \mid z) = P(x \mid z) \, P(y \mid z),$$

where, in the second step, we used the Markov chain $X \to Y \to Z$ and in the third step, we used assumption (b) (for pairs with $P(y \mid z) = 0$, both sides vanish anyway). This independence gives us the vanishing of conditional mutual information:

$$I(X; Y \mid Z) = 0.$$

Together with the Markov chain $X \to Y \to Z$, this results, by inspecting an information diagram, in the equality $I(X; Y) = I(X; Z)$.
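As a sanity check, here is a small numerical example of my own (not part of the original distillation): a toy Markov chain $X \to Y \to Z$ in which condition (b) holds, and for which $I(X; Y)$ and $I(X; Z)$ indeed coincide.

```python
# Toy Markov chain X -> Y -> Z (my own example): X is uniform on {0,1,2,3},
# Y = (X mod 2, fair coin), Z = X mod 2. Condition (b) of the lemma holds,
# and the two mutual informations I(X;Y) and I(X;Z) both equal 1 bit.
import itertools
import numpy as np

p_xyz = np.zeros((4, 4, 2))          # joint over (x, y, z); y encoded as 2*(x % 2) + coin
for x, coin in itertools.product(range(4), range(2)):
    p_xyz[x, 2 * (x % 2) + coin, x % 2] += 1 / 8

def mutual_information(p_ab):
    """I(A;B) in bits, computed from a joint probability table p_ab."""
    pa = p_ab.sum(axis=1, keepdims=True)
    pb = p_ab.sum(axis=0, keepdims=True)
    mask = p_ab > 0
    return float(np.sum(p_ab[mask] * np.log2(p_ab[mask] / (pa @ pb)[mask])))

p_xy = p_xyz.sum(axis=2)
p_xz = p_xyz.sum(axis=1)
print(mutual_information(p_xy), mutual_information(p_xz))   # both print 1.0
```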

 

Theorem: Let $n \geq 3$. The following are equivalent:

(a) $I(X_1; X_{n-1}) = I(X_1; X_n)$

(b) There are functions $f_{n-1}, f_n$ defined on $\mathcal{X}_{n-1}, \mathcal{X}_n$, respectively, such that:

  • $f_{n-1}(X_{n-1}) = f_n(X_n)$ with probability $1$, i.e., the measure of all $\omega \in \Omega$ such that the equality doesn't hold is zero.
  • For all $x_1 \in \mathcal{X}_1$, we have the equality $P(X_1 = x_1 \mid X_{n-1}) = P(X_1 = x_1 \mid f_{n-1}(X_{n-1}))$, and the same for $X_n$.

Proof: The Markov chain immediately also gives us a Markov chain $X_1 \to X_{n-1} \to X_n$, meaning we can without loss of generality assume that $n = 3$. So let's consider the simple Markov chain $X \to Y \to Z$.

Assume (a): By the lemma, this gives us for all $y \in \mathcal{Y}$ and $z \in \mathcal{Z}$ with $P(y, z) > 0$:

$$P(X = x \mid Y = y) = P(X = x \mid Z = z) \quad \text{for all } x \in \mathcal{X}.$$

Define the two functions $f: \mathcal{Y} \to \Delta(\mathcal{X})$ and $g: \mathcal{Z} \to \Delta(\mathcal{X})$ by:

$$f(y) := P(X \mid Y = y), \qquad g(z) := P(X \mid Z = z).$$

Then we have $f(Y) = g(Z)$ with probability 1[1], giving us the first condition we wanted to prove.

For the second condition, we use a trick from Probability as Minimal Map: set $\Gamma := f(Y) = P(X \mid Y)$, which is a probability distribution. We get

$$P(X = x \mid f(Y)) = P(X = x \mid \Gamma) = \Gamma(x) = P(X = x \mid Y),$$

and the same reasoning applies to $g(Z)$. That proves (b).

Assume (b): For the other direction, let $y \in \mathcal{Y}$ and $z \in \mathcal{Z}$ be given with $P(y, z) > 0$. Let $\omega \in \Omega$ be such that $Y(\omega) = y$ and $Z(\omega) = z$ and with $f(Y(\omega)) = g(Z(\omega))$, which exists since $P(y, z) > 0$ and $f(Y) = g(Z)$ with probability 1. We have

$$f(y) = f(Y(\omega)) = g(Z(\omega)) = g(z)$$

and thus

$$P(X = x \mid Y = y) = P\big(X = x \mid f(Y) = f(y)\big) = P\big(X = x \mid g(Z) = g(z)\big) = P(X = x \mid Z = z).$$
The result follows from the Lemma.[2]
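To make the construction in the proof concrete, here is a small self-contained numerical sketch of my own (not part of the original distillation): for the same toy chain as in the example after the lemma, it builds $f(y) = P(X \mid Y = y)$ and $g(z) = P(X \mid Z = z)$ and checks that $f(Y) = g(Z)$ with probability 1.

```python
# Toy chain X -> Y -> Z again (my own example): X uniform on {0,1,2,3},
# Y = (X mod 2, fair coin), Z = X mod 2. We build f(y) = P(X|Y=y) and
# g(z) = P(X|Z=z) as in the proof and check f(Y) = g(Z) with probability 1.
import itertools
import numpy as np

p_xyz = np.zeros((4, 4, 2))
for x, coin in itertools.product(range(4), range(2)):
    p_xyz[x, 2 * (x % 2) + coin, x % 2] += 1 / 8

p_xy, p_xz, p_yz = p_xyz.sum(axis=2), p_xyz.sum(axis=1), p_xyz.sum(axis=0)
p_y, p_z = p_yz.sum(axis=1), p_yz.sum(axis=0)

f = {y: p_xy[:, y] / p_y[y] for y in range(4) if p_y[y] > 0}   # f(y) = P(X | Y = y)
g = {z: p_xz[:, z] / p_z[z] for z in range(2) if p_z[z] > 0}   # g(z) = P(X | Z = z)

# Every (y, z) pair of positive probability yields the same conditional
# distribution over X, i.e., f(Y) = g(Z) with probability 1.
for y, z in itertools.product(range(4), range(2)):
    if p_yz[y, z] > 0:
        assert np.allclose(f[y], g[z])
print("f(Y) = g(Z) holds with probability 1 on this toy chain")
```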

  1. ^

    Somehow, my brain didn't find this obvious. Here is an explanation: the event $\{f(Y) \neq g(Z)\}$ is contained in the union of the events $\{Y = y, Z = z\}$ over the finitely many pairs $(y, z)$ with $f(y) \neq g(z)$. By the displayed equality above, any such pair must have $P(y, z) = 0$, and a finite union of null events is a null event.

  2. ^

    There is some subtlety about whether the random variable $f(Y)$ can be replaced by $g(Z)$ in that equation. But given that they are "almost" the same random variables, I think this is valid inside the probability equation.

These are rough notes trying (but not really succeeding) to deconfuse me about Alex Turner's diamond proposal. The main thing I wanted to clarify: what's the idea here for how the agent remains motivated by diamonds even while doing very non-diamond related things like "solving mazes" that are required for general intelligence?

  • Summarizing Alex's summary:
    • Multimodal SSL initialization
    • recurrent state, action head
    • imitation learning on humans in simulation, + sim2real
      • low sample complexity
      • Humans move toward diamonds
    • policy-gradient RL: reward the AI for getting near diamonds
      • the recurrent state retains long-term information
    • After each task completion: the AI is near diamonds
  • SSL will make sure the diamond abstraction exists
  • Proto Diamond shard:
    • There is a diamond abstraction that will be active once a diamond is seen. Imagine this as being a neuron.
    • Then, hook up the "move forward" action to this neuron being active. Give reward for being near diamonds. Voila, you get an agent which obtains reward! This is very easy to learn, more easily than other reward-obtaining computations.
    • Also, other such computations may be reinforced, like "if shiny object seen, move towards it" --- do adversarial training to rule those out
    • This is all about prototypical diamonds. Thus, the AI may not learn to create a diamond as large as a sun, but that's also not what the post is about.
  • Preserving diamond abstraction/shard:
    • In Proto planning, the AI primarily thinks about how to achieve diamonds. Such thinking is active across basically all contexts, due to early RL training. 
    • Then, we will give the AI other types of tasks, like "maze solving" or "chess playing" or anything else, from very easy to very hard.
      • At the end of each task, there will be a diamond and reward.
      • By default, at the start of this new training process, the diamond shard will be active since training so far ensures it is active in most contexts. It will bid for actions before the reward is reached, and therefore, its computations will be reinforced and shaped. Also, other shards will be reinforced (ones that plan how to solve a maze, since they also steer toward the reinforcement event), but the diamond shard is ALWAYS reinforced.
        • The idea here is that the diamond shard is contextually activated BY EVERY CONTEXT, and so it is basically one huge circuit thinking about how to reach diamonds that simply gets extended with more sub-computations for how to reach diamonds.
          • Caveat: another shard may be better at planning toward the end of a maze than the diamond shard, which "isn't specialized". And if that's the case, then reinforcement events may make the diamond shard continuously less active in maze-solving contexts until it doesn't activate anymore at start-of-maze contexts. It's unclear to me what the hypothesis is for how to prevent this. 
          • Possibly the hypothesis is captured in this paragraph of Alex, but I don't understand it: 
            "In particular, even though online self-supervised learning continues to develop the world model and create more advanced concepts, the reward events also keep pinging the invocation of the diamond-abstraction as responsible for reward (because insofar as the agent's diamond-shard guides its decisions, then the diamond-shard's diamond-abstraction is in fact responsible for the agent getting reward). The diamond-abstraction gradient starves the AI from exclusively acting on the basis of possible advanced "alien" abstractions which would otherwise have replaced the diamond abstraction. The diamond shard already gets reward effectively, integrating with the rest of the agent's world model and recurrent state, and therefore provides "job security" for the diamond-abstraction. (And once the agent is smart enough, it will want to preserve its diamond abstraction, insofar as that is necessary for the agent to keep achieving its current goals which involve prototypical-diamonds.)"
          • I don't understand what it means to "ping the invocation of the diamond-abstraction as responsible for reward". I can imagine what it means to have subcircuits whose activation is strengthened on certain inputs, or whose computations (if they were active in the context) are changed in response to reinforcement. And so, I imagine the shard itself to be shaped by reward. But I'm not sure what exactly is meant by pinging the invocation of the diamond abstraction as responsible for reward. 

what's the idea here for how the agent remains motivated by diamonds even while doing very non-diamond related things like "solving mazes" that are required for general intelligence?

I think that the agent probably learns a bunch of values, many related to gaining knowledge and solving games and such. (People are also like this; notice that raising a community-oriented child does not require a proposal for how the kid will only care about their community, even as they go through school and such.)

Also, other shards will be reinforced (ones that plan how to solve a maze, since they also steer toward the reinforcement event), but the diamond shard is ALWAYS reinforced.

  • The idea here is that the diamond shard is contextually activated BY EVERY CONTEXT, and so it is basically one huge circuit thinking about how to reach diamonds that simply gets extended with more sub-computations for how to reach diamonds.

I think this is way stronger of a claim than necessary. I think it's fine if the agent learns some maze-/game-playing shards which do activate while the diamond-shard doesn't -- it's a quantitative question, ultimately. I think an agent which cares about playing games and making diamonds and some other things too, still ends up making diamonds.

I don't understand what it means to "ping the invocation of the diamond-abstraction as responsible for reward".

Credit assignment (AKA policy gradient) credits the diamond-recognizing circuit as responsible for reward, thereby retaining this diamond abstraction in the weights of the network.

Thanks for your answer! 

Credit assignment (AKA policy gradient) credits the diamond-recognizing circuit as responsible for reward, thereby retaining this diamond abstraction in the weights of the network.

This is different from how I imagine the situation. In my mind, the diamond-circuit remains simply because it is a good abstraction for making predictions about the world. Its existence is, in my imagination, not related to an RL update process. 

Other than that, I think the rest of your comment doesn't quite answer my concern, so I try to formalize it more. Let's work in the simple setting that the policy network has no world model and is simply a non-recurrent function $f: X \to \Delta(A)$ mapping from observations to probability distributions over actions. I imagine a simple version of shard theory to claim that $f$ decomposes as follows:

$$f(x) = \mathrm{SM}\Big(\sum_i \alpha_i(x) \cdot b_i(x)\Big),$$

where $i$ is an index for enumerating shards, $\alpha_i(x)$ is the contextual strength of activation of the $i$-th shard (maybe with $\sum_i \alpha_i(x) = 1$), and $b_i(x)$ is the action-bid of the $i$-th shard, i.e., the vector of log-probabilities it would like to see for different actions. Then $\mathrm{SM}$ is the softmax function, producing the final probabilities.
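For concreteness, here is a minimal sketch of this decomposition (my own toy formalization; the sigmoid for $\alpha_i$, the random parameterizations, and all shapes are illustrative assumptions, not anything from shard theory):

```python
# Toy policy f(x) = SM(sum_i alpha_i(x) * b_i(x)): alpha_i is the contextual
# activation strength of shard i, and b_i its action bid (log-probability
# preferences over actions). Parameterizations here are placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_shards, obs_dim, n_actions = 3, 8, 4

W_alpha = rng.normal(size=(n_shards, obs_dim))             # parameters of alpha_i
W_bid = rng.normal(size=(n_shards, n_actions, obs_dim))    # parameters of b_i

def policy(x):
    alpha = 1.0 / (1.0 + np.exp(-W_alpha @ x))      # alpha_i(x) in [0, 1]
    bids = W_bid @ x                                # b_i(x), one bid vector per shard
    logits = (alpha[:, None] * bids).sum(axis=0)    # sum_i alpha_i(x) * b_i(x)
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()                          # SM: softmax over actions

x = rng.normal(size=obs_dim)                        # an observation / context
print(policy(x))                                    # probability distribution over actions
```

In this toy picture, the scenario below is about whether gradient updates for a new task end up changing $b_0$ or instead the strengths and bids of other shards.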

In your story, the diamond shard starts out as very strong. Let's say it's indexed by $i = 0$, that $\alpha_0(x) \approx 1$ for most inputs $x$, and that $b_0$ has a large "capacity" at its disposal so that it will in principle be able to represent behaviors for many different tasks. 

Now, if a new task pops up, like solving a maze, in a specific context $x$, I imagine that two things could happen to make this possible:

  • $b_0$ could get updated to also represent this new behavior
  • The strength $\alpha_0$ could get weighed down and some other shard could learn to represent this new behavior.

One reason why the latter may happen is that $b_0$ possibly becomes so complicated that it's "hard to attach more behavior to it"; maybe it's just simpler to create an entirely new module that solves this task and doesn't care about diamonds. If something like this happens often enough, then eventually, the diamond shard may lose all its influence. 

One reason why the latter may happen is that $b_0$ possibly becomes so complicated that it's "hard to attach more behavior to it"; maybe it's just simpler to create an entirely new module that solves this task and doesn't care about diamonds. If something like this happens often enough, then eventually, the diamond shard may lose all its influence. 

I don't currently share your intuitions for this particular technical phenomenon being plausible, but I imagine there are other possible reasons this could happen, so sure? I agree that there are some ways the diamond-shard could lose influence. But mostly, again, I expect this to be a quantitative question, and I think experience with people suggests that trying a fun new activity won't wipe away your other important values.

This is my first short form. It doesn't have any content; I just want to test the functionality.