1. The Principle

Suppose you have some difficult cognitive problem you want to solve. What is the difference between (1) making progress on the problem by thinking about it for an hour and (2) solving a well-defined subproblem whose solution is useful for the entire problem?

(Finding a good characterization of the 'subproblem' category is important for Factored Cognition, but for this post (minus the last chapter), you can think of it purely as a problem of epistemic rationality and human thinking.)

I expect most people to share the intuition that there is a difference. However, the question appears ill-defined at second glance. 'Making progress' has to cash out as learning things you didn't know before, and it's unclear how that differs from 'solving subproblems': whatever you learned could probably be considered the solution to some problem.

If we accept this, then both (1) and (2) technically involve solving subproblems. Nonetheless, we would intuitively talk about subproblems in (2) and not in (1). Can we characterize this difference formally? Is there a well-defined, low-level quantity such that our intuition as to whether we would call a bundle of cognitive work a 'subproblem' corresponds to the size of this quantity? I think there is. If you want, take a minute to think about it yourself; I've put my proposed solution into spoilers.

I think the quantity is the length of the subproblem's solution, where by "solution", I mean "the information about the subproblem relevant for solving the entire problem".

As an example, suppose the entire problem is "figure out the best next move in a chess game". Let's contrast (1) and (2):

  • (1) was someone thinking about this for an hour. The 'solution' here consists of everything she learns throughout that time, which may include many different ideas and insights about possible moves, as well as resolved confusions about the game state. There is probably no way to summarize all that information briefly.

  • (2) was solving a well-defined subproblem. An example here is "figure out how good Be5 is".[1] If the other side can force a checkmate in four turns given that move, then the entire solution to this subproblem is the three-word statement "Be5 is terrible".

2. The Software Analogy

Before we get to why I think the principle matters, let's try to understand it better. I think the analogy to software design is helpful here.

Suppose a company wants to design some big project that will take about 900k (i.e., 900000) lines of code. How difficult is this? Here is a naive calculation:

An amateur programmer working in Python can write a 50-line procedure without bugs in an hour, which suggests a total time requirement of 18k hours. Thus, a hundred amateur programmers working 30 hours a week can write the project in six weeks.
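Spelled out, the back-of-envelope arithmetic from the quote above looks like this (nothing beyond the numbers already assumed there):

```python
# The naive back-of-envelope estimate from above.
total_lines = 900_000
lines_per_hour = 50          # bug-free lines an amateur writes per hour
programmers = 100
hours_per_week = 30

total_hours = total_lines / lines_per_hour              # 18,000 hours
weeks = total_hours / (programmers * hours_per_week)    # 6 weeks
print(f"{total_hours:,.0f} person-hours, about {weeks:.0f} weeks")
```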

I'm not sure how far off this calculation is, but I think it's at least a factor of 20. This suggests that linear extrapolation doesn't work, and the reason is simple: as the size of the project goes up, not only is there more code to implement, but every piece of code becomes harder to write because the entire project is more complex. There are more dependencies, more sources of error, and so forth.

This is where decompositions come in. Suppose the entire project can be visualized like this, where black boxes denote components (corresponding to pieces of code) and edges dependencies between components.

This naturally factors into three parts. Imagine you're head of the team tasked with implementing the bottom-left part. You can look at your job like this:

(An 'interface' is purely a specification of the relationship, so the ellipses are each less than one black box.)

Your team still has to implement 300k lines of code, but regardless of how difficult this is, it's only marginally harder than implementing a project that consists entirely of 300k lines. In the step from 300k to 900k, the cost actually does scale almost linearly.[2]


As I said at the outset, I'm talking about this not to make a point about software design but as an analogy to the topic of better and worse decompositions. In the analogy, the entire problem is coding the 900k line system, the subproblems are coding the three parts, and the solutions to the second and third part are the interfaces.

I think this illustrates both why the mechanism is important and how exactly it works.

For the 'why', imagine the decomposition were a lot worse. In this case, there's a higher overhead for each team, ergo higher overall cost. This has a direct analog in the case where a person is thinking about a problem on her own: the more complex the solutions to subproblems are, the harder it becomes for her to apply them to the entire problem. We are heavily bottlenecked by our ability to think about several things at once, so this can make a massive difference.

For the 'how', notice that, while the complexity of the entire system trivially grows with its size, the task of programming it can ideally be kept simple (as in the case above), and this is done by hiding complexity. From the perspective of your team (previous picture), almost the entire complexity of the remaining project is hidden: it's been reduced to two simple, well-defined interfaces.
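To make the interface idea concrete, here is a minimal Python sketch (all names are invented for illustration, not taken from any real project): from your team's perspective, another team's 300k-line component collapses to a single method signature and its contract.

```python
from typing import Protocol

class BillingService(Protocol):
    """Interface to another team's component (hypothetical example).

    Everything behind this signature -- potentially hundreds of
    thousands of lines -- is hidden from the code that uses it.
    """
    def charge(self, customer_id: str, amount_cents: int) -> bool:
        """Return True if the charge succeeded."""
        ...

def close_order(billing: BillingService, customer_id: str, total_cents: int) -> str:
    # Our code depends only on the interface, never on the hidden internals.
    return "confirmed" if billing.charge(customer_id, total_cents) else "payment failed"
```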

This mechanism is the same in the case where someone is working on a problem by herself: if she can carve out subproblems, and if those subproblems have short solutions, it dramatically reduces the perceived complexity of the entire problem. In both cases, we can think of the quality of a decomposition as the total amount of complexity it hides.[3]

3. Human Learning

I've come to view human learning primarily under the lens of hiding complexity. The world is extremely complicated; the only way to navigate it is to view it on many different layers of abstraction, such that each layer describes reality in a way that hides 99%+ of what's really going on. Something as complex as going grocery shopping is commonly reduced to an interface that only models time requirement and results.

Abstractly, here is the principled argument as to why we know this is happening:

  1. Thinking about a lot of things at once feels hard.
  2. Any topic you understand well feels easy.
  3. Therefore, any topic you understand well doesn't depend on a lot of things in your internal representation (i.e., in whatever structure your brain uses to store information).
  4. However, many topics do, in fact, depend on a lot of things.
  5. This implies your internal representation is hiding complexity.

For a more elaborate concrete example, consider the task "create a presentation about X", where X is something relatively simple:

  • At the highest level, you might think solely about the amount of time you have left to do it; the complexity of how to do it is hidden.
  • One level lower, you might think about (1) creating the slides and (2) practicing the speaking part; the complexity of how to do either is hidden.
  • One level lower, you might think about (1) what points you want to make throughout your presentation and (2) in what order you want to make them; the complexity of how to turn a point into a set of slides is hidden.
  • One level lower, you might think about what slides you want for each major point; the complexity of how to create each individual slide is hidden.
  • Et cetera.

In absolute terms, preparing a presentation is hard. It requires many different actions that must be carried out with a lot of precision for them to work. Nonetheless, the process of preparing it probably feels easy all the way because every level hides a ton of complexity. This works because you understand the process well: you know what levels of abstraction to use, and how and when to transition between them.
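As a toy sketch of those layers (the function names are invented, and preparing a real presentation obviously isn't code), each level reads as a couple of calls, and the complexity behind each call is hidden one level further down:

```python
def prepare_presentation(topic: str) -> None:
    # Highest level: two steps; how to do either is hidden below.
    slides = create_slides(topic)
    practice_talk(slides)

def create_slides(topic: str) -> list[str]:
    # One level lower: points and their order; how a point becomes
    # slides is hidden another level down.
    points = choose_key_points(topic)
    return [slide for point in points for slide in slides_for_point(point)]

def choose_key_points(topic: str) -> list[str]:
    return [f"{topic}: motivation", f"{topic}: main idea", f"{topic}: takeaways"]

def slides_for_point(point: str) -> list[str]:
    return [f"Slide about '{point}'"]

def practice_talk(slides: list[str]) -> None:
    for _slide in slides:
        pass  # rehearsing each slide hides yet more detail

prepare_presentation("hiding complexity")
```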

The extreme version of this view (which I'm not arguing for) is that learning is almost entirely about hiding complexity. When you first hear of some new concept, it sounds complicated, like it has lots of moving parts. When you have successfully learned it, the complexity is hidden; and when the complexity is hidden, you have learned it. Given that humans can only think about a few things at the same time, this process only bottoms out at exceedingly simple tasks. Thus, under the extreme view, it's not turtles all the way down, but it is turtles pretty far down. For the most part, learning just is representing concepts such that complexity is hidden.


I once wrote a tiny post titled 'We tend to forget complicated things'. The observation was that, if you stop studying a subject when it feels like you barely understand it, you will almost certainly forget about it in time (and my conclusion was that you should always study until you think it's easy). This agrees with the hiding complexity view: if something feels complicated, it's a sign that you haven't yet decomposed it such that complexity is hidden at every level, and hence haven't learned it properly. Under this view, 'learning complicated things' is almost an oxymoron: proper learning must involve making things feel not-complicated.

It's worth noting that this principle appears to apply even for memorizing random data, at least to some extent, even though you might expect pure memorization to be a counter-example.

There is also this lovely pie chart, which makes the same observation for mathematics:

That is, math is not inherently complicated; only the parts that you haven't yet represented in a nice, complexity-hiding manner feel complicated. Once you have mastered a field, it feels wonderfully simple.

4. Factored Cognition

As mentioned at the outset, characterizing subproblems is important for Factored Cognition. Very briefly, Factored Cognition is about decomposing a problem into smaller problems. In one setting, a human has access to a model that is similar to herself, except (1) slightly dumber and (2) much faster (i.e., it can answer questions almost instantly).

The hope is that this combined system (of the human who is allowed to use the model as often as she likes) is more capable than either the human or the model by themselves, and the idea is that the human can amplify performance by decomposing big problems into smaller problems, letting the model solve the small problems, and using its answers to solve the big problem.

There are a ton of details to this, but most of them don't matter for our purposes.[4] What does matter is that the model has no memory and can only give short answers. This means that the human can't just tell it 'make progress on the problem', 'make more progress on the problem' and so on, but instead has to choose subproblems whose solutions can be described in a short message.
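A highly simplified sketch of that setting (the names and the exact word limit are my own placeholders, not part of any actual Factored Cognition scheme):

```python
MAX_ANSWER_WORDS = 20  # placeholder: the model may only return short answers

def factored_solve(problem, decompose, model_answer, combine):
    """One round of the decompose-and-delegate pattern described above.

    decompose(problem)        -> list of subproblems with short solutions
    model_answer(subproblem)  -> short answer; the model sees only this one
                                 question (it has no memory of the rest)
    combine(problem, answers) -> the human's solution to the big problem
    """
    subproblems = decompose(problem)
    answers = []
    for sub in subproblems:
        ans = model_answer(sub)
        assert len(ans.split()) <= MAX_ANSWER_WORDS, "answers must stay short"
        answers.append(ans)
    return combine(problem, answers)
```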

An unexpected takeaway from thinking about this is that I now view Factored Cognition as intimately related to learning in general, the reason being that both share the goal of choosing subproblems whose solutions are as short as possible:

  • In the setting I've described for Factored Cognition, this is immediate from the fact that the model can't give long answers.
  • For learning, this is what I've argued in this post. (Note that optimizing subproblems to minimize the length of their solutions is synonymous with optimizing them to maximize their hidden complexity.)

In other words, Factored Cognition primarily asks you to do something that you want to do anyway when learning about a subject. I've found that better understanding the relationship between the two has changed my thinking about both of them.


(This post has been the second of two prologue posts for an upcoming sequence on Factored Cognition. I've posted them as stand-alone because they make points that go beyond that topic. This won't be true for the remaining sequence, which will be narrowly focused on Factored Cognition and its relevance for Iterated Amplification and Debate.)


  1. Be5 is "move the bishop to square e5". ↩︎

  2. One reason why this doesn't reflect reality is that real decompositions will seldom be as good; another is that coming up with the decomposition is part of the work (and, by extension, part of the cost). Note that, even in this case, the three parts all need to be decomposed further, which may not work as well as the first decomposition did. ↩︎

  3. In software design, the term 'modularity' describes something similar, but it is not a perfect match. Wikipedia defines it as "a logical partitioning of the 'software design' that allows complex software to be manageable for the purpose of implementation and maintenance". ↩︎

  4. After all, this is a post about hiding complexity! ↩︎
