This is a short post on a simple point that I get asked about a lot and want a canonical reference for.

Which of the following two options is more likely to be true?

  1. AIs will internally be running explicit search processes.
  2. AIs will internally be doing something weirder and more complicated than explicit search.

In my opinion, whenever you're faced with a question like this, it's always weirder than you think, and you should pick option (2)—or the equivalent—every single time. The problem, though, is that while option (2) is substantially more likely to be correct, it's not at all predictive—it's effectively just the “not (1)” hypothesis, which gets a lot of probability mass because it covers a lot of the space, but, precisely because it covers so much of the space, it is extremely difficult to operationalize into any concrete predictions about what your AI will actually do.

The aphorism here is “All models are wrong, but some are useful.” Not having a model at all and just betting on the “something else” hypothesis is always going to be more likely than any specific model, but having specific models is nevertheless highly useful in a way that the “something else” hypothesis just isn't.

Thus, I strongly believe that we should try our best to make lots of specific statements about internal structures, even when we know those statements are likely to be wrong, because when we let ourselves make specific, structural, mechanistic models, we can get real, concrete predictions. And even if a model is literally false, to the extent that it has some plausible relationship to reality, the predictions it makes can still be quite accurate.

Furthermore, one of my favorite strategies here is to come up with many different, independent mechanistic models and then see if they all converge: if you get the same prediction from lots of different mechanistic models, that adds a lot of credence to that prediction being quite robust. An example of this in the setting of modeling inductive biases is my “How likely is deceptive alignment?” post, where I take the two relatively independent—but both probably wrong—stories of high and low path-dependence and get the result that they both seem to imply a similar prediction about deceptive alignment, which I think lends a lot of credence to that prediction even if the specific models of inductive biases presented are unlikely to be literally correct.

Going back to the original question about explicit search, this is essentially how I like to think about the arguments in “Risks from Learned Optimization”: we argue that explicit search is a plausible model and explore what its predictions are. While I think the response “literal explicit search is unlikely” is potentially correct (depending on exactly how broadly or narrowly you construe explicit search), it's not very constructive—my response is usually, “okay, so what's a better mechanistic model, then?” That's not to say that I don't think there are any better mechanistic models than explicit search for what a powerful AI might be doing—but it is to say that coming up with some alternative mechanistic model is a necessary step in trying to improve on existing mechanistic models.

6 comments

> In my opinion, whenever you're faced with a question like this, it's always more messy than you think

I think this is exactly wrong. I think that mainly because I personally went into biology research, twelve years ago, expecting systems to be fundamentally messy and uninterpretable, and it turned out that biological systems are far less messy than I expected.

We've also seen the same, in recent years, with neural nets. Early on, lots of people expected that the sort of interpretable structure found by Chris Olah & co wouldn't exist. And yet, whenever we actually delve into these systems, it turns out that there's a ton of ultimately-relatively-simple internal structure.

That said, it is a pattern that the simple interpretable structure of complex systems often does not match what humans studying them hypothesized a priori.

> And yet, whenever we actually delve into these systems, it turns out that there's a ton of ultimately-relatively-simple internal structure.

I'm not sure exactly what you mean by "ton of ultimately-relatively-simple internal structure".

I'll suppose you mean "a high percentage of what models use parameters for is ultimately simple to humans" (where by "simple to humans" we mean something like short description length in the prior of human knowledge, e.g., natural language).

If so, this hasn't been my experience doing interp work or from the interp work I've seen (though it's hard to tell: perhaps there exists a short explanation that hasn't been found?). Beyond this, I don't think you can/should make a large update (in either direction) from Olah et al.'s prior work. That work should down-weight the probability of complete uninterpretability or of extreme easiness.

As such, I expect (and observe) that views about the tractability of humans understanding models come down largely to priors or evidence from other domains.

In the spirit of Evan's original post, here's a (half-baked) simple model:

Simplicity claims are claims about how many bits (in the human prior) it takes to explain[1] some amount of performance in the NN prior.

E.g., suppose we train a model which gets 2 nats of loss with 100 billion parameters, and we can explain this model achieving 2.5 nats using a 300 KB human-understandable manual (we might run into issues with irreducible complexity such that making a useful manual is hard, but let's put that aside for now).
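To make the comparison concrete, here's a minimal sketch in Python using the made-up numbers above; the bits-per-parameter figure and the per-token reading of the loss are my own assumptions, not part of the original claim:

```python
import math

# Hypothetical numbers from the example above; bits-per-parameter and the
# per-token interpretation of the loss are assumptions for illustration.
nn_loss_nats = 2.0          # loss the trained network actually achieves
explained_loss_nats = 2.5   # loss level the human-readable manual accounts for
n_parameters = 100e9        # 100 billion parameters
bits_per_parameter = 16     # e.g., fp16 storage

manual_bits = 300e3 * 8     # 300 KB manual, in bits
network_bits = n_parameters * bits_per_parameter

print(f"manual description length:  {manual_bits:.2e} bits")
print(f"network description length: {network_bits:.2e} bits")
print(f"manual is ~{network_bits / manual_bits:,.0f}x shorter than the raw parameters")

# The price of the shorter description: the manual only explains performance
# down to 2.5 nats, leaving a gap relative to the network's actual 2.0 nats.
gap_bits = (explained_loss_nats - nn_loss_nats) / math.log(2)
print(f"unexplained loss gap: {gap_bits:.2f} bits per token")
```

On these made-up numbers the manual is hundreds of thousands of times shorter than the raw parameterization, at the cost of leaving roughly 0.7 bits per token of performance unexplained—which is the sense in which this kind of 'simplicity' is a claim about relative description length in the two priors.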

So, 'simplicity' of this sort is lower bounded by the relative parameter efficiency of neural networks in practice vs the human prior.

In practice, you do worse than this insofar as NNs express things which are anti-natural in the human prior (in terms of parameter efficiency).

We can also reason about how 'compressible' the explanation is in a naive prior (e.g., a formal framework for expressing explanations which doesn't utilize cleverer reasoning technology than NNs themselves). I don't quite mean compressible, since that presumably ends up getting you insane stuff, as compression usually does.


  1. by explain, I mean something like the idea of heuristic arguments from ARC. ↩︎

That's fair—perhaps “messy” is the wrong word there. Maybe “it's always weirder than you think”?

(Edited the post to “weirder.”)

Sounds closer. Maybe "there's always surprises"? Or "your pre-existing models/tools/frames are always missing something"? Or "there are organizing principles, but you're not going to guess all of them ahead of time"?

Agreed. It's the same principle by which people are advised to engage in plan-making even if any specific plan they will invent will break on contact with reality; the same principle that underlies "do the math, then burn the math and go with your gut".

While any specific model is likely to be wrong, trying to derive a consistent model gives you valuable insights into what a consistent model would even look like, and it builds model-building skills. What specific externally-visible features of the system do you need to explain? How much complexity is required to do so? How does the process that created the system you're modeling interact with its internals? How does the former influence the relative probabilities of different internal designs? How would you be able to distinguish one internal structure from another?

Thinking about concrete models forces you to, well, solidify your understanding of the subject matter into a concrete model — and that's non-trivial in itself.

I'd done that exercise with a detailed story of AI agency development a few months ago, and while that model seems quite naive and uninformed to me now, having built it significantly improved my ability to understand others' models, to see where they connect and what they're meant to explain.

(Separately, this is why I agree with e.g. Eliezer that people should have a concrete, detailed plan not just for technical alignment, but for how they'll get the friendly AGI all the way to deployment and AI-risk amelioration under realistic sociopolitical conditions. These plans won't work as written, but they'll orient you and give you an idea of what it even looks like to be succeeding at this task vs. failing.)