Book review: Why Everyone (Else) Is a Hypocrite: Evolution and the Modular Mind, by Robert Kurzban.

Minds are Modular

Many people explain minds by positing that they're composed of parts:

  • the id, ego, and super-ego
  • the left side and the right side of the brain
  • System 1 and System 2
  • the triune brain
  • Marvin Minsky's Society of Mind

Minsky's proposal is the only one of these that resembles Kurzban's notion of modularity enough to earn his respect. The modules Kurzban talks about are much more numerous, and more specialized, than most people are willing to imagine.

Here's Kurzban's favorite Minsky quote:

The mind is a community of "agents." Each has limited powers and can communicate only with certain others. The powers of mind emerge from their interactions, for none of the agents, by itself, has significant intelligence. [...] Everyone knows what it feels like to be engaged in a conversation with oneself. In this book, we will develop the idea that these discussions really happen, and that the participants really "exist." In our picture of the mind we will imagine many "sub-persons", or "internal agents", interacting with one another. Solving the simplest problem (seeing a picture) or remembering the experience of seeing it might involve a dozen or more (perhaps very many more) of these agents playing different roles. Some of them bear useful knowledge, some of them bear strategies for dealing with other agents, some of them carry warnings or encouragements about how the work of others is proceeding. And some of them are concerned with discipline, prohibiting or "censoring" others from thinking forbidden thoughts.

Let's take the US government as a metaphor. Instead of saying it's composed of the legislative, executive, and judicial modules, Kurzban would describe it as being made up of modules such as a White House press secretary, Anthony Fauci, a Speaker of the House, more generals than I can name, even more park rangers, etc.

In What Is It Like to Be a Bat?, Nagel says "our own mental activity is the only unquestionable fact of our experience". In contrast, Kurzban denies that we know more than a tiny fraction of our mental activity. We don't ask "what is it like to be an edge detector?", because there was no evolutionary pressure to enable us to answer that question. It could be that most human experience is as mysterious to our conscious minds as bat experiences are. Most of our introspection involves examining a mental model that we construct for propaganda purposes.

Is Self-Deception Mysterious?

There's been a good deal of confusion about self-deception and self-control. Kurzban attributes the confusion to attempts at modeling the mind as a unitary agent. If there's a single homunculus in charge of all of the mind's decisions, then it's genuinely hard to explain phenomena that look like conflicts between agents.

With a sufficiently modular model of minds, the confusion mostly vanishes.

A good deal of what gets called self-deception is better described as being strategically wrong.

For example, when President Trump had COVID, the White House press secretary had a strong incentive not to be aware of any evidence that Trump's health was worse than expected, in order to reassure voters without being clearly dishonest, whereas the White House doctor had some reason to err a bit on the side of overestimating Trump's risk of dying. So it shouldn't surprise us if they had rather different beliefs. I don't describe that situation as "the US government is deceiving itself", but I'd be confused about whether to describe it that way if I could only imagine the government as a unitary agent.

Minds work much the same way. Consider the cancer patient who buys space on a cruise that his doctor says he won't live to enjoy (presumably to persuade allies that he'll be around long enough to be worth allying with), while still following the doctor's advice about how to treat the cancer. A modular model of the mind isn't surprised that his mind holds inconsistent beliefs about how serious the cancer is. The patient's press-secretary-like modules are pursuing a strategy of getting friends to make long-term plans to support the patient. They want accurate enough knowledge of the patient's health to sound credible. Why would they want to be more accurate than that?
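
To make "strategically wrong" concrete, here's a toy sketch of my own (nothing like it appears in the book; the functions and numbers are invented for illustration). Two modules report on the same quantity, but each is tuned for a different job, so their reports diverge with no central deceiver anywhere in the system:

```python
# Toy illustration (not Kurzban's model): modules reporting on the
# same fact can diverge when each is tuned for a different objective.

def press_secretary_report(evidence: float) -> float:
    """Survival odds as reported to potential allies.

    Shades the estimate upward to help recruit long-term support,
    but stays close enough to the evidence to remain credible."""
    return min(1.0, evidence + 0.2)

def treatment_planner_report(evidence: float) -> float:
    """Survival odds as used for medical decisions.

    Tracks the evidence as closely as possible, since treatment
    choices pay off only if they're based on accurate beliefs."""
    return evidence

# The doctor's estimate that the patient lives long enough for the cruise.
evidence = 0.3

print(f"Tells friends:         {press_secretary_report(evidence):.0%}")
print(f"Plans treatment as if: {treatment_planner_report(evidence):.0%}")
# Both outputs coexist in one mind. Neither module lies, and there is
# no single canonical belief for either of them to contradict.
```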

Self-Control

Kurzban sees less value in the concept of a self than do most Buddhists. As he puts it:

almost any time you come across a theory with the word "self" in it, you should check your wallet.

Self-control raises problems similar to those raised by self-deception, and apparent failures of self-control are best thought of as conflicts between modules.

We should expect context-sensitive influences on which modules exert the most influence on decisions. E.g. we should expect a calorie-acquiring module to have more influence when a marshmallow is in view than when a path to curing cancer is in view. That makes it hard for a mind to have a stable preference between eating a marshmallow and curing cancer.
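
As a toy illustration of that context sensitivity (my sketch, not Kurzban's; the contexts, weights, and values are made up), imagine each module bidding for control of a decision, with its bid scaled by a context-dependent weight:

```python
# Toy sketch (mine, not from the book): which module wins a decision
# depends on context-dependent weights, so the mind as a whole has no
# stable, context-free ranking of marshmallows versus cancer research.

# context -> (calorie-acquiring module weight, cancer-research module weight)
CONTEXT_WEIGHTS = {
    "marshmallow in view":    (0.9, 0.4),
    "path to a cure in view": (0.2, 0.8),
}

def decision(context: str) -> str:
    calorie_w, research_w = CONTEXT_WEIGHTS[context]
    # Each module bids its contextual weight times the value it
    # assigns to its own goal (both goals valued at 1.0 here).
    bids = {
        "eat the marshmallow": calorie_w * 1.0,
        "work on the cure":    research_w * 1.0,
    }
    return max(bids, key=bids.get)

for context in CONTEXT_WEIGHTS:
    print(f"{context}: {decision(context)}")
# Same modules, same goals, different winner in each context. That
# looks like a self-control failure only if you expected one module
# (the press secretary, say) to be in charge all the time.
```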

If I think I see a path to curing cancer that is certain to succeed, my cancer-research modules ought to get more attention than my calorie-acquiring modules. I'm pretty sure that's what would happen if I had good evidence that I'm about to cure cancer. But a more likely situation is that my press-secretary-like modules say I'll succeed, and some less eloquent modules say I'll fail. That will look like a self-control problem to those who want the press secretary to be in charge, and look more like politics to those who take Kurzban's view.

I could identify some of my brain's modules as part of my "self", and say that self-control refers to those modules overcoming the influence of the non-self parts of my brain. But the more I think like Kurzban, the more arbitrary it seems to treat some modules as more privileged than others.

The Rest

Along the way, Kurzban makes fun of the literature on self-esteem, and of models that say self-control is a function of resources.

The book consists mostly of easy-to-read polemics for ideas that ought to be obvious, but which our culture resists.

Warning: you should skip the chapter titled Morality and Contradictions. Kurzban co-authored a great paper called A Solution to the Mysteries of Morality. But in this book, his controversial examples of hypocrisy will distract most readers from the rather unremarkable wisdom that those examples illustrate.

Comments

Kurzban sees less value in the concept of a self than do most Buddhists.

Given that he brings it up in order to criticize it, and feels that the concept is misleading enough to spend significant time dissecting, it feels like he actually has exactly the same notion of its value (and largely the same goal for discussing it) as most Buddhists do. Both think the folk notion of the self is incoherent and does a lot of harm, and want to show that to people.

Along the way, Kurzban makes fun of the literature on self-esteem, and of models that say self-control is a function of resources.

Worth noting that, as I recall, Kurzban called the resource models of self-control nonsense somewhat before they started getting empirically disproven.

I made the comparison in part to point out the similarities, and I guess I should have been clearer about that.

My understanding of Buddhism is fairly shallow. I think I checked this Wikipedia page, which says there's some disagreement. But now that I reread it, it does suggest that many Buddhists agree with Kurzban.

Yes, it was published in 2010.

Thanks for the review here. I found this book highly interesting and relevant. I've been surprised at how much it seems to have been basically ignored. 

I did a few posts on it back in the day, but yeah it hasn't been noticed much.

Let's take the US government as a metaphor. Instead of saying it's composed of the legislative, executive, and judicial modules, Kurzban would describe it as being made up of modules such as a White House press secretary

Both are useful models of different levels of the US government. Is the claim here that there is no useful model of the brain as a few big powerful modules that aggregate sub-modules? Or is it merely that others posit only a few large modules, whereas Kurzban thinks we must model both small and large agents at once?

We don't ask "what is it like to be an edge detector?", because there was no evolutionary pressure to enable us to answer that question. It could be most human experience is as mysterious to our conscious minds as bat experiences.

If "human experience" includes the experience of an edge detector, I have to ask for a definition of "human experience". Is he saying an edge detector is conscious or sentient? What does it mean to talk of the experience of such a relatively small and simple part of the brain? Why should we care what its experience is like, however we define it?

Kurzban doesn't directly address the question of whether it's ever useful to model the mind as made of a few big parts. I presume he would admit they can sometimes be reasonable models to use. He's mostly focused on showing that those big parts don't act like very unified agents. That seems consistent with sometimes using simpler, less accurate models.

He certainly didn't convince me to stop using the concepts of system 1 and system 2. I took his arguments as a reminder that those concepts were half-assed approximations.

He's saying that it's extremely hard to answer those questions about edge detectors. We have little agreement on whether we should be concerned about the experiences of bats or insects, and it's similarly unobvious whether we should worry about the suffering of edge detectors.

Being concerned implies that 1) something has experiences, 2) those experiences can be negative or disliked in a meaningful way, and 3) we morally care about that.

I'd like to ask about the first condition: what is the set of things that might have experience, things whose experiences we might try to understand? Is there a principled or at least reasonable and consistent definition? Is there a reason to privilege edge detectors made from neurons over, say, a simple edge detector program made from code? Could other (complex, input-processing) tissues and organs have experience, or only those made from neurons?

Could the brain be logically divided in N different ways, such that we'd worry about the experience of a certain sub-network under division A, but not worry about a different sub-network under division B, even though they're composed mostly of the same neurons and we've just modeled them differently?

We talk about edge detectors mostly because they're simple and "stand-alone" enough that we've located and modeled them in the brain. There are many more complex and less isolated parts of the brain that we haven't located or modeled well yet; should that make us more or less concerned that they (or parts of them) have relevant experiences?

Finally, if very high-level parts of my brain ("I") have a good experience, while a theory leads us to think that lots of edge detectors inside my brain are having a bad experience ("I can't decide if that's an edge or not, help!"), what would a moral theory look like that resolves or trades off these experiences against each other?