This post is the result of joint work with Kate Woolverton. Thanks to Paul Christiano for useful comments and feedback.
The basic circuit prior setup
We’ll start with the basic setup that we’re trying to improve upon, which is trying to solve ELK via the use of a Boolean circuit size prior. Previously, I summarized Paul, Mark, and Ajeya’s argument for why this might work as follows:
As in the ELK report, there is a plausible argument for why there exists a speed prior that would prefer the direct translator to the human imitator. Naively, the problem with the speed prior here is that the computation required for the human imitator is proportional to the size of the human's Bayes net, whereas the computation required for the direct translator is proportional to the size of the model's Bayes net—and in the superhuman limit we should expect the latter to be substantially larger than the former.
The argument in the ELK report, however, is that while this argument is valid in the limit, there's reason to believe it might be invalid for all the finite cases that we care about. That's because perfect inference in either Bayes net, and thus perfect loss, shouldn't be possible in any finite case. Thus, the performance of the ontology mapping function, and thus its loss, should be proportional to how much computation it puts into its inference task—for which the direct translator has a big advantage, since it gets to reuse the computation performed by the model.
The obvious response here, and the response that is given in the ELK report, is that the above argument is very fragile—it relies on inference in the human's Bayes net being too hard to always get right on the training distribution, which is a strong assumption both about the difficulty of inference and the complexity of the training data.
Furthermore, as the ELK report also notes, it's not enough for the direct translator to just be more efficient than the human imitator: the direct translator has to be a cost-effective improvement (in terms of how much loss/computation it saves per increase in description complexity) compared to all other possible mappings.
Overall, however, despite the issues with this approach, I agree with Paul, Mark, and Ajeya’s conclusion that it is the most promising approach currently on offer:
In order to ensure we learned the direct translator, we would need to change the training strategy to ensure that it contains sufficiently challenging inference problems, and that doing direct translation was a cost-effective way to improve speed (i.e. that there aren’t other changes to the human simulator that would save even more time).
Compared to all our previous counterexamples, this one offers much more hope. We can’t rule out the possibility of a clever dataset where the direct translator has a large enough computational advantage to be preferred, and we leave it as an avenue for further research. In Section: penalizing inconsistencies we discuss an additional ingredient that we think makes computation-complexity-based approaches more plausible.
Furthermore, I agree that, as Paul, Mark, and Ajeya argue, adding consistency might improve this approach as well:
If our training process looks at the relationship between different inputs, then a bad reporter might also need to consider lots of alternative inputs before making its decision. Moreover, if we choose sets of inputs randomly, then a bad reporter might not know which other inputs it needs to think about, and there is an exponentially large space of possible situations it potentially needs to worry about. So we could imagine getting to a situation where “just answer honestly” is the computationally easiest way to solve the problem.
Rather than merely imitating human answers, a bad reporter must now play the game “look consistent according to the checks that a human can apply.”
This raises two key questions:
- Does there exist a “cheating” strategy in the “look consistent game”?
- If so, is it easy for a bad reporter to execute? Otherwise, playing this game might force the reporter to use much more computation than the direct translator.
If the answers to both questions are “yes” then we have a problem.
In Appendix: inconsistency across inputs we discuss these questions in detail for a particular concrete family of consistency checks. For now we just give some general heuristic arguments:
- We suspect that there is always some cheating strategy. Otherwise the consistency checks suffice to determine the truth all on their own. But in fact it seems like inductive biases (e.g. “simple models that have worked in the past are likely to work in the future”) seem extremely important for distinguishing between multiple consistent stories.
- If there is a cheating strategy, we think there are likely to be possible worlds where it can be found quickly, based on the same argument made in Section: speed. The “look consistent game” mostly depends on the human model, and so it may be possible to play the game well enough to beat direct translation using an amount of effort that doesn’t scale with the complexity of the predictor’s Bayes net. If that happens, then for a sufficiently complicated predictor it will be easier to play the consistency game than to directly translate.
That said, we think that this argument is very far from watertight, and that it is quite possible there exists some consistency check that provides a large computational advantage for direct translation.
Overall, I think I am currently less optimistic about adding consistency helping very much, as I don’t think it’s too hard to just use human imitation for questions humans can answer and use some consistent world model for the rest (resulting in the simplest/fastest consistent world rather than the true world). That being said, I do think consistency strictly helps, since forcing the human imitator to do such hallucination only hurts it in comparison to the direct translator. I’ll call the circuit prior + consistency the “consistent circuit prior.”
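To make the object under discussion concrete, here is a minimal sketch (my own toy formalization, not anything from the ELK report) of scoring reporters under such a consistent circuit prior: negative log probability equals circuit size, and reporters failing any consistency check are excluded entirely. The `size` and `consistent` fields are hypothetical stand-ins for a real circuit-size measure and a real consistency check.

```python
import math

# Toy sketch of a "consistent circuit prior": -log P(reporter) is the
# reporter's Boolean circuit size, and reporters failing any consistency
# check get probability zero. All names and fields here are hypothetical.

def neg_log_prior(reporter, circuit_size, consistency_checks):
    """Score a reporter: smaller means more preferred by the prior."""
    if not all(check(reporter) for check in consistency_checks):
        return math.inf  # inconsistent reporters are ruled out entirely
    return circuit_size(reporter)

# Stand-ins: a reporter is a dict, "size" plays the role of circuit size,
# and the check just reads a precomputed "consistent" flag.
size = lambda r: r["size"]
passes_checks = lambda r: r["consistent"]

honest = {"size": 50, "consistent": True}
cheater = {"size": 30, "consistent": False}
```

Under this scoring, an inconsistent reporter loses no matter how small its circuit is, which is the sense in which consistency "strictly helps" above.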
Penalizing reporters that work with many different predictors
Another strategy proposed by Paul, Mark, and Ajeya is to penalize ontology mapping functions that work with many different underlying predictor models, the intuition being that the direct translator only works for the original model, whereas the human imitator works for essentially any model.
As formulated in the ELK report, I think this approach suffers from some pretty debilitating counterexamples—though, as I will demonstrate later, I think there is a way for us to rehabilitate it. Let’s start, though, with the counterexamples.
Possible counterexample: Basic model checking
In order to game this objective, a bad reporter could “memorize” what the predictor is supposed to look like, and then decide to answer questions in a nonsensical way if the predictor is different from what it expects. If the predictor is exactly like it expects, then it can answer questions just like the human simulator.
This bad reporter would only work for exactly one predictor, and therefore it gets the maximum possible reward for “working with fewer predictors.” Moreover, this bad reporter didn’t have to do very much extra computation.
So the only way that we will end up preferring direct translation is if this bad reporter ends up requiring more parameters. This is superficially plausible, because the bad reporter needed to memorize the intended predictor.
Unfortunately, this doesn’t help very much if the predictor’s Bayes net is very simple to describe. And it seems possible to have a Bayes net which is simple to describe (meeting the criteria of this counterexample), but which has a huge number of nodes and arrows (meeting the criteria needed to counterexample the complexity and computation penalties).
For example, consider a case where the predictor’s Bayes net is a low-level model of physics. Although there are many atoms, they all behave according to the same simple laws. So we can have a situation where the predictor’s Bayes net is very simple to describe but still has a large number of nodes and arrows. We present a concrete example of this phenomenon in Appendix: game of life.
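The Game of Life case can be made concrete with a toy calculation (mine, not the appendix's): the update rule fits in a few lines, but any Boolean circuit that actually simulates it must repeat that rule once per cell per step, so circuit size scales with the simulated world even though description length does not. The `GATES_PER_CELL` constant is a made-up stand-in for the gate cost of one cell update.

```python
from collections import Counter

def step(live):
    """One Game of Life update on a set of live (row, col) cells.
    The entire 'law of physics' fits in these few lines."""
    neighbor_counts = Counter(
        (r + dr, c + dc)
        for (r, c) in live
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    return {
        cell for cell, n in neighbor_counts.items()
        if n == 3 or (n == 2 and cell in live)
    }

GATES_PER_CELL = 30  # made-up cost of one cell update in Boolean gates

def unrolled_circuit_size(n, steps):
    """Gate count for a circuit computing `steps` updates of an n x n
    board: the simple rule must be duplicated once per cell per step."""
    return n * n * steps * GATES_PER_CELL
```

The rule's description stays constant while `unrolled_circuit_size(1000, 1000)` is in the tens of billions of gates: simple to describe, huge to instantiate.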
As Paul, Mark, and Ajeya argue, such a counterexample seems pretty difficult to overcome outright via this approach—especially with speed or simplicity as the underlying prior. However, such a counterexample is still consistent with this approach leading to a strict benefit for the direct translator compared to the human imitator, specifically a benefit proportional to the complexity of the predictor on the underlying prior.
Possible counterexample: Length/norm checking
Unfortunately, I think there is an even worse counterexample that removes that strict benefit. Suppose that, rather than memorize the entire predictor, the human imitator just memorizes the predictor’s length/norm and then outputs garbage if the predictor it’s given is smaller than that.
This costs substantially less complexity than memorizing the entire predictor, but results in exactly the same loss as memorizing the entire predictor. Fundamentally, that’s because the loss given in the ELK report just subtracts the norm of the lowest norm alternative predictor that works with the ontology mapping function. And the norm of the lowest norm predictor that works with such a norm-checking mapping will be the same as the norm of the original predictor, since that’s precisely what the ontology mapping function in such a case is checking.
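A toy calculation (all numbers invented) makes the problem vivid. Model the ELK report's penalty as rewarding a reporter by the norm of the lowest-norm predictor it still works with: full memorization and mere norm checking then earn identical rewards, at wildly different description costs.

```python
M_NORM = 1_000_000  # norm of the intended predictor (made-up number)

def reward(lowest_working_norm):
    """ELK-report-style bonus: the norm of the lowest-norm predictor
    that still works with the reporter (higher is better)."""
    return lowest_working_norm

# Memorizing all of M: only M itself passes the check, so the lowest
# working norm is M's -- but storing the check costs on the order of M.
memorize = {"reward": reward(M_NORM), "extra_cost": M_NORM}

# Norm checking: output garbage for any predictor with norm < |M|; the
# lowest-norm predictor that passes also has norm |M|, so the reward is
# identical -- but the check only has to store a single number.
norm_check = {"reward": reward(M_NORM), "extra_cost": 20}
```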
The circuit distillation prior
In this section, I will propose a way to strictly improve the consistent circuit prior’s performance on ELK by leveraging a “fixed” version of the “penalizing reporters that work with many different models” approach.
First, some preliminaries:
- Let $M$ be some predictive model. Then, as in ELK, our goal will be to find some map $m$ which uses $M$’s latent knowledge to answer questions.
- Let $D$ be our dataset of $(x, y)$ tuples. We’ll assume that $M$ is trained to predict $y$ from $x$ on this dataset using a simplicity prior.
- Let $m^+$ be the direct translator and $m^-$ the human imitator plus some hallucinatory consistent world model.
- Let $|\cdot|$ be Boolean circuit size complexity and $P$ be the circuit prior such that $P(x) \propto 2^{-|x|}$. We’ll assume $|\cdot|$ and $P$ can be applied equally well to $M$ and $m$.
- Let $C$ be some consistency check over some dataset $D_C$.
- Let $\delta$ be the indicator function such that $\delta(\mathrm{true}) = 1$ and $\delta(\mathrm{false}) = 0$.
- Let $|x \mid y|$ be shorthand for $-\log_2 P(x \mid y)$.
Then, we’ll define our new prior as
$$P_{\text{distill}}(m) \propto \delta(C(m, D_C)) \cdot 2^{-(|m| + |M \mid m|)}.$$
Conceptually, the goal of $P_{\text{distill}}$ is to capture the intuition that what makes $m^+$ better than $m^-$ is that $m^+$ compresses $M$ in a way that $m^-$ doesn’t: $m^+$ tells you way more about the structure of the world model that it’s translating from than $m^-$ does ($m^-$ just tells you that $M$ has to contain a human somewhere). In my opinion, this captures the fundamental reason why $m^+$ “works for fewer predictors” than $m^-$ and is closer to the fundamental justification behind why an approach like this might work.
Going forward, I’ll be referring to this prior as the “circuit distillation prior,” based on the idea that the $|M \mid m|$ term is essentially asking for the difficulty of “distilling” $M$ into a small circuit conditional on $m$. If you wanted to try to implement this prior in a concrete machine learning setup, such a distillation process would also be the place to start—though I’m putting such implementation details outside of the scope of this post for now.
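As a toy illustration of how the two terms of such a prior trade off (all gate counts are invented for illustration), approximate the conditional term by how much of $M$ the reporter pins down: $|M|$ minus the size of the smallest predictor the reporter still works with. The direct translator pays more in its own size but earns it back on the conditional term; the human imitator only ever pins down the human model.

```python
M_SIZE = 1_000_000  # |M|: made-up circuit size of the predictor
H_SIZE = 10_000     # |H|: made-up size of M's human-model component

def distillation_score(reporter_size, smallest_working_predictor):
    """-log P_distill(m) up to a constant: |m| + |M given m|, with the
    conditional term approximated as |M| - min{|M'| : m works with M'}."""
    return reporter_size + (M_SIZE - smallest_working_predictor)

# Direct translator: a larger map, but it only works with predictors
# shaped exactly like M, so the smallest working predictor is M itself.
direct = distillation_score(50_000, M_SIZE)

# Human imitator: a smaller map, but it only depends on M through the
# human model, so a predictor containing just H already makes it work.
imitator = distillation_score(5_000, H_SIZE)
```

With these made-up sizes the imitator’s distillation penalty dwarfs the translator’s larger circuit, which is the behavior the prior is designed to produce.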
Possible counterexample: Length/norm checking
To start with, I want to explore why length/norm checking doesn’t work as a counterexample to the distillation circuit prior. Note that, since our underlying prior is a circuit prior, the relevant norm here is circuit length, so I’ll just be calling this counterexample length checking.
To do that, consider the following approximation to the complexity of $M$, conditioned on $m$:
$$|M \mid m| \approx |M| - \min_{M'} \{\, |M'| : m \text{ works with } M' \,\}$$
where “$m$ works with $M'$” means that $m$ still correctly answers questions when given $M'$ in place of $M$.
For the rest of this post, I’ll mostly be ignoring the $\delta(C(m, D_C)) = 1$ condition, since we’ll only be considering $m$s which satisfy it. Furthermore, for the rest of this post, I’ll also just be accepting the
$$|M \mid m| \approx |M| - \min_{M'} \{\, |M'| : m \text{ works with } M' \,\}$$
assumption—but for this section, the validity or invalidity of that assumption is critical to why length checking doesn’t work, so we’ll be focusing on it here.
Let $m_{\text{len}}$ be defined as
$$m_{\text{len}}(M') = \begin{cases} m^-(M') & \text{if } |M'| \geq \ell \\ \text{garbage} & \text{otherwise} \end{cases}$$
where $\ell = |M|$. In the next section, I’ll explore whether the usage of $m^-$ on its own might constrain $M$, but for right now, since I just want to focus on the length-checking, I’ll assume that $m^-$ doesn’t depend on $M$ at all (e.g. it hardcodes its human model).
Now, let’s consider $|M \mid m_{\text{len}}|$ without using the above approximation:
$$|M \mid m_{\text{len}}| = -\log_2 P(M \mid m_{\text{len}}) \approx -\log_2 2^{-|M|} = |M|$$
This works because $M$ is only one of $\sim 2^{|M|}$ possible models of size $|M|$ and the circuit prior selects uniformly from among models of the same size.
Furthermore, since we’re assuming $m^-$ doesn’t constrain $M$ at all, we also have $|M \mid m^-| = |M|$, which means we have $|M \mid m_{\text{len}}| = |M \mid m^-|$ and thus length-checking does not help at all under this prior.
However, if we naively do the same analysis using the above approximation, we get
$$|M \mid m_{\text{len}}| \approx |M| - \min_{M'} \{\, |M'| : |M'| \geq |M| \,\} = |M| - |M| = 0$$
which seems to make the $|M|$ term disappear! The problem here, I think, is that this approximation is only valid when we’re thinking of $M$ as an infinite bitstring. Concretely, if we add random extra gates to any $M'$ smaller than $M$ before that $M'$ is fed into $m$—thus forcing $m$ to treat $M'$ as an infinite bitstring—length-checking should be impossible. Since I think this approximation does work if we make that assumption, I will be continuing to use this approximation throughout this post, just under the assumption that we’re treating everything as infinite bitstrings.
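The padding fix can be sketched concretely (hypothetical representation: a circuit is just a list of gates): pad every predictor with random filler gates up to a common size before handing it to the reporter, so that size itself carries no information and a length check can no longer separate predictors.

```python
import random

TARGET_SIZE = 1_000  # pad every predictor description to this many gates

def pad(circuit, rng=None):
    """Append random filler gates so all circuits look the same size."""
    rng = rng or random.Random(0)
    filler = [("noop", rng.getrandbits(8))
              for _ in range(TARGET_SIZE - len(circuit))]
    return circuit + filler

def length_check(circuit, threshold):
    """The length-checking reporter's test: is the predictor big enough?"""
    return len(circuit) >= threshold

small = [("and", 0, 1)] * 10    # a small candidate predictor
large = [("and", 0, 1)] * 900   # a large one
```

Before padding, `length_check` separates `small` from `large`; after padding, every circuit passes, so the check conveys nothing about the predictor’s contents.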
However, I think that the fact that this problem only occurs at all for the approximated version further demonstrates why the original distillation term is the conceptually correct form for this term to take. This also helps shed light on what went wrong with the ELK report’s version, which is that they started from the approximated version rather than the full distillation term.
Possible counterexample: The human imitator
Now, let’s compare $m^+$ to $m^-$. We’ll address here the concern we raised previously of $m^-$ on its own significantly constraining $M$. Since we’re conditioning on $\delta(C(m, D_C)) = 1$, we’ll assume that $m^-$ includes some sort of hallucinatory consistent world model for answering questions that the human gets wrong.
Fundamentally, we have to compare $m^+$ and $m^-$ on two terms: $|m|$ and $|M \mid m|$. All of the analysis from the basic circuit prior section above is about whether the $|m|$ term might just work on its own, which remains unclear—but regardless that analysis won’t change here. All we want to show is that adding the $|M \mid m|$ term strictly helps in terms of advantaging $m^+$ more than $m^-$.
So, fundamentally, we need to ask what the smallest $M'$ is that causes each $m$ to correctly answer questions.
For $m^+$, $M'$ seemingly has to be the size of $M$, since $m^+$, as a direct translation map, does essentially none of the work of actually computing the answers to the questions, so all of those answers must already exist in $M'$, and must exist in the same form as in $M$ so that $m^+$ knows how to look for them.
However, there is a potential way that $M'$ for $m^+$ might be able to be smaller than $M$. Most notably, you could have $M'$ just try to use $M$’s human model to produce answers to questions, then backtranslate those answers into the model’s ontology, such that $M' = t \circ H$ where $H$ is the human model component of $M$ and $t$ is the backtranslation map. For such an $M'$ to be smaller than $M$, however—assuming $|M'| \approx |t| + |H|$—you would need $|t| < |M| - |H|$. In that case, however, we now have a tight enough bound on $|t|$ that the overall problem becomes substantially easier. In particular, if we assume that the performance of $m^+$ and $m^-$ are approximately proportional to the amount of compute they spend on their respective inference tasks, then the fact that $m^+$ gets to use $M$’s inference for free means that $H$’s inference would need to be at least comparable to $M$’s for such an $M'$ to answer questions equally well—but in that case, the $|m|$ term would just give us $m^+$, meaning that the distillation circuit prior would just fully solve the problem. Thus, for the rest of this post, I’ll assume that $|t| \geq |M| - |H|$ such that we don’t get a smaller $M'$ this way and thus the smallest $M'$ for $m^+$ is just $M$.
Now, let’s consider what the smallest $M'$ that makes $m^-$ work might look like. In particular, we need to know what exactly $m^-$ depends on in $M$. It seems that there is really only one way in which $m^-$ can make use of $M$, which is for $m^-$ to extract its human model from $M$. The only other thing that $m^-$ needs is the camera inputs, but it gets those for free from $x$, so it has no need to extract them. $m^-$ does also then need to do its own inference in the human model given those camera inputs—but that’s not something that’s available in $M$ anywhere. Given that $m^-$ depends on $M$ just through $M$’s model of the human, it becomes clear that the smallest $M'$ that causes $m^-$ to work is an $M'$ that includes only the human model.
Thus, to get $m^+$, we need
$$|m^+| + |M \mid m^+| \leq |m^-| + |M \mid m^-|$$
$$|m^+| + (|M| - |M|) \leq |m^-| + (|M| - |H|)$$
$$|m^+| \leq |m^-| + |W|$$
where $H$ and $W$ are the human model and non-human-model components of $M$, respectively (so that $|W| = |M| - |H|$). As is clear from the final line, adding the distillation term has improved the default ask of $|m^+| \leq |m^-|$ into the easier ask of $|m^+| \leq |m^-| + |W|$, essentially giving $m^+$ a leg up proportional to $|W|$.
That being said, though the circuit distillation prior is clearly a strict improvement here, since the leg up is proportional to $|W|$, the circuit distillation prior fails to give us $m^+$ if the circuit size of $W$ is very small relative to the circuit size of $m^+$, as in the basic model checking counterexample to the penalizing reporters that work with many models proposal from the ELK report. Though I think this is much less likely under a circuit prior than a simplicity prior, I won’t try to argue for it here—for now, the point is just that regardless the circuit distillation prior is still a strict improvement.
Possible counterexample: Basic model checking
One potential problem with the above analysis is that, though $m^-$ doesn’t need to depend on anything other than the human model in $M$, it could simply force a dependence through directly checking certain parts of $M'$ against some hardcoded gates from $M$. We’ll call this counterexample $m_{\text{check}}$, which is implemented by first checking some of the gates in $M'$ against a hardcoded list of desired gates, then running $m^-$ if the checks pass and outputting garbage if the checks fail. Concretely, let
$$m_{\text{check}}(M') = \begin{cases} m^-(M') & \text{if } \mathrm{check}(M', G) \\ \text{garbage} & \text{otherwise} \end{cases}$$
where $\mathrm{check}$ compares the gates of the circuits passed into it and $G$ is some set of hardcoded gates.
Here’s the trick, however: for each gate that is specified in $G$, both $|m_{\text{check}}|$ and the smallest working $|M'|$ are made larger by exactly one gate. That’s because checking one gate in $M'$ takes one gate in $m_{\text{check}}$ (specifically one equality/XNOR gate) and simultaneously for each gate checked the smallest $M'$ goes up by only one gate, since now it just has to add that gate.
Now, analyzing the complexity $|m_{\text{check}}| + |M \mid m_{\text{check}}|$:
$$|m_{\text{check}}| + |M \mid m_{\text{check}}| \approx (|m^-| + |G| + c) + (|M| - |H| - |G|) = |m^-| + |M \mid m^-| + c$$
where $c > 0$ is the fixed overhead of $m_{\text{check}}$’s check-and-branch logic. Therefore $m_{\text{check}}$ is strictly worse than $m^-$, which means—unlike for the penalizing reporters that work with many predictors proposal in the ELK report—basic model checking doesn’t function as a counterexample to the distillation circuit prior at all.
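The per-gate accounting can be checked with toy numbers (all invented): each checked gate adds one gate to the reporter and simultaneously shrinks the conditional term by one gate, so the two effects cancel exactly and the checker is left behind by whatever fixed overhead its branch-and-garbage logic costs.

```python
M_SIZE = 1_000_000      # |M| (made-up)
H_SIZE = 10_000         # |H|, the human-model component (made-up)
IMITATOR_SIZE = 5_000   # size of the plain imitator map (made-up)
BRANCH_OVERHEAD = 10    # fixed cost of the "run or output garbage" logic

def total_cost(reporter_size, smallest_working_predictor):
    """|m| + |M given m|, with the smallest-working-predictor
    approximation for the conditional term."""
    return reporter_size + (M_SIZE - smallest_working_predictor)

plain_imitator = total_cost(IMITATOR_SIZE, H_SIZE)

def gate_checker(k):
    """Imitator that additionally XNOR-checks k hardcoded gates."""
    reporter_size = IMITATOR_SIZE + BRANCH_OVERHEAD + k  # one gate/check
    smallest_working = H_SIZE + k  # smallest predictor must add the gates
    return total_cost(reporter_size, smallest_working)
```

However many gates it checks, `gate_checker` never beats `plain_imitator`: the `k` terms cancel and only the constant overhead remains.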
Possible counterexample: Efficient model checking
In the previous case, we showed that simply checking gates in $M'$ against gates in $M$ doesn’t work as a counterexample. But what if there’s a more efficient way of checking for $M$?
Consider $m_{\text{eff}}$ defined as
$$m_{\text{eff}}(M') = \begin{cases} m^-(M') & \text{if } f(M') \\ \text{garbage} & \text{otherwise} \end{cases}$$
where $f$ is some efficient way of checking whether $M'$ is $M$ (we only need to check the non-human-model part $W$ since $m^-$’s use of the human model should pin down the rest). We’ll formalize such efficiency by postulating that
$$|f| \ll |W|.$$
To start with, there’s a simple argument for why such an $f$ seemingly can’t exist: $f$ should have to look at each gate in $M'$ up to the size of $W$, otherwise it shouldn’t be able to constrain them. Looking at a gate should require at least one gate. Thus, it should be that $|f| \geq |W|$, which contradicts our efficiency assumption. However, there is a potential flaw in this argument, which is that $f$ might be able to check the results of certain computations in $M'$ in such a way that constrains the gates producing them without having to look at those gates directly. Thus, to start with, we get the condition that, for $f$ to exist, it has to somehow cleverly look at only small portions of $M'$ while still constraining the rest of $M'$.
Furthermore, note that, if the bits in $M$ were sampled randomly, such an $f$ would be mathematically impossible for simple information-theoretic reasons. Of course, they’re not random, but they’re pretty close: they’re simplicity distributed, since we postulated that $M$ was trained using a simplicity prior.
This leads us to an additional fact that we know about $f$: it can’t be simpler on a simplicity prior than $W$, otherwise $M$ wouldn’t be the simplest world model that fits the data, since a simpler world model would be available as
$$\operatorname{argmin}_{M'} \{\, S(M') : f(M') \,\}$$
(where $S$ is simplicity complexity), assuming that we pay essentially no additional complexity for the argmin, which should be true for a simplicity prior.
Thus, $f$ needs to be some uniquely-$W$-identifying function that only checks small parts of $M'$, has a lower circuit complexity than $W$, but is more complex on a simplicity prior than $W$. That’s a very strong set of conditions that makes it quite hard to produce such an $f$, though I can’t completely rule out its existence.
However, even if such an $f$ is possible, the circuit distillation prior would still be strictly better than the consistent circuit prior alone, since, to get $m^+$, we need
$$|m^+| + |M \mid m^+| \leq |m_{\text{eff}}| + |M \mid m_{\text{eff}}| \iff |m^+| \leq |m^-| + |f|$$
which still gives $m^+$ a leg up, just now proportional to $|f|$ rather than $|W|$.
Possible counterexample: Lookup tables
Another potential counterexample that I think is worth looking at here is the use of hard-coding/inlining in $m$. As I have discussed previously, I think this sort of thing can be hugely problematic for all types of speed priors, including Boolean circuit size.
Thus, I think it’s worth pointing out that the circuit distillation prior makes this problem strictly less bad: anything that $m$ hardcodes that it could get by looking it up in $M$ reduces $m$’s dependence on $M$ and thus decreases the complexity of the smallest $M'$ that makes $m$ work, increasing the $|M \mid m|$ penalty. As a result, I suspect that a circuit distillation prior would be substantially better at generalization and thus more performance competitive than a consistent circuit prior.
You could try to get around this by looking for the smallest predictor model that’s consistent with the ontology mapping and also actually produces good predictions—but then you would always just get the same predictor model, since by assumption the predictor model is the simplest model that makes good predictions. ↩︎
If you want, you can also require $m$ to be able to predict $y$ from $x$, e.g. with the hope of forcing $m$ to have a world model that is as good as that of $M$. However, since $m$ can just reuse $M$ to make such predictions, this doesn’t really change anything. ↩︎
If we don’t make this assumption, we get basically the same analysis but with $|M| - |H|$ instead of $|M|$. ↩︎
We’re also assuming that $|m_{\text{len}}| = |m^-|$ here, which is fine in this case since we just want to demonstrate that length-checking doesn’t help—if we relax that assumption, we get that length-checking not only doesn’t help, but actually hurts, as we show with the basic model checking counterexample later. ↩︎
This is because, assuming $m^-$ doesn’t constrain $M$ at all, $P(M \mid m^-) = P(M)$ and thus $|M \mid m^-| = -\log_2 P(M) = |M|$. ↩︎
This is because you would need $|t| + |H| < |M|$, which requires $|t| < |M| - |H|$. ↩︎
This isn’t completely trivial, since we need to also verify that $m^-$ in this case doesn’t perform better than $m^+$ on the distillation term—but in fact even in this case $m^+$ still does better than $m^-$, since the smallest $M'$ for $m^-$ only needs $H$, whereas the $M'$ given here needs $H$ as well as $t$. Furthermore, this fact ensures that, even if the above argument is wrong and we can have $|t| < |M| - |H|$ without the problem becoming easy, the distillation circuit prior is still a strict improvement, just now proportional to $|t|$ rather than the $|W|$ that we’ll derive later. ↩︎
We can also compare $m^+$ to a version of $m^-$ that hardcodes the human model, but such an $m^-$ just does strictly worse, so it’s not really worth considering here. ↩︎
$m_{\text{check}}$ can also try hashing $M'$ and checking the hash instead of checking the gates directly, but that doesn’t help—each bit of the hash that you check has the same two-sided effect. ↩︎
It’s worth pointing out, however, that this is largely because we’ve moved the effect entirely into the distillation term—the advantage for $m^+$ over $m^-$ is still proportional to $|W|$, as we pointed out previously. ↩︎
For example, $f$ might try to specify some good prior on $M'$—based on facts like $M'$ being simplicity-distributed, the world being consistent, $M'$ being consistent with the human model, etc.—and then check for high probability on that prior. ↩︎
Note that, if we use a distillation simplicity prior rather than a distillation circuit prior, this leads to a blatant contradiction. Of course, we then lose all the aforementioned benefits of using a circuit prior. ↩︎
Note that $P$ is not usable for such an $f$, since we’re assuming everything is treated as an infinite bitstring. ↩︎