Anthropic Decision Theory (ADT) replaces anthropic probabilities (SIA and SSA) with a decision theory that doesn't need them to function. And, roughly speaking, ADT shows that total utilitarians will behave as if they were using SIA, while average utilitarians will behave as if they were using SSA.

That means that the various paradoxes of SIA and SSA can be translated into ADT format. This post will do that, and show how the paradoxes feel a lot less counter-intuitive under ADT. Some of these have been presented before, but I wanted to gather them in one location. The paradoxes examined are:

  1. The Doomsday Argument.
  2. The Adam and Eve problem.
  3. The UN++ problem.
  4. The Presumptuous Philosopher.
  5. Katja Grace's SIA doomsday argument.

The first three are paradoxes of SSA (which increases the probability of "small" universes with few observers), while the last three are paradoxes of SIA (which increases the probability of "large" universes with many observers).

No Doomsday, just a different weighting of rewards

The famous Doomsday Argument claims that, because of SSA's preferences for small numbers of observers, the end of the human species is closer than we might otherwise think.

How can we translate that into ADT? I've found it's generally harder to translate SSA paradoxes into ADT than SIA ones, because average utilitarianism is a bit more finicky to work with.

But here is a possible formulation: a disaster may happen 10 years from now, with 50% probability, and will end humanity with a total of $N_1$ humans. If humans survive the disaster, there will be $N_2$ humans total.

The agent has the option of consuming $X$ resources now, or consuming $Y$ resources in 20 years' time. A narrow-minded selfish agent would consume early if $X > Y$, and late if $Y > X$.

However, if the agent is an average utilitarian, the expected utility they derive from consuming early is $\frac{X}{2}\left(\frac{1}{N_1} + \frac{1}{N_2}\right)$ (the expected average utility of $X$, averaged over survival and doom), while the expected utility for consuming late is $\frac{Y}{2 N_2}$ (since consuming late means survival).

This means that the breakeven point for the ADT average utilitarian is when:

  • $Y = X\left(1 + \frac{N_2}{N_1}\right)$.

If $N_2$ is much larger than $N_1$, then the ADT agent will only delay consumption if $Y$ is similarly larger than $X$.

This looks like a narrow-minded selfish agent that is convinced that doom is almost certain. But it's only because of the weird features of average utilitarianism.
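To make this concrete, here is a minimal Python sketch of the breakeven computation; the population sizes and resource amounts are purely illustrative, only the 50% doom probability comes from the setup above.

```python
# Minimal sketch of the ADT Doomsday breakeven; N1, N2 and X are illustrative.
N1 = 10**11   # total humans ever, if doom strikes in 10 years
N2 = 10**14   # total humans ever, if humanity survives
X = 1.0       # resources consumed now

# Consuming now yields X either way, but the average divides by the eventual
# population: N1 with probability 1/2 (doom), N2 with probability 1/2 (survival).
early = 0.5 * (X / N1 + X / N2)

# Consuming Y in 20 years only happens if humanity survives (probability 1/2),
# and is then averaged over the larger population N2.
def late(Y):
    return 0.5 * Y / N2

# Breakeven: Y = X * (1 + N2 / N1), i.e. roughly a thousand-fold premium here.
Y_breakeven = X * (1 + N2 / N1)
print(early, late(Y_breakeven))  # both expected average utilities coincide
```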

Adam and Eve and differentially pleasurable sex and pregnancy

In the Adam and Eve thought experiment, the pair of humans want to sleep together, but don't want to get pregnant. The snake reassures them that because a pregnancy would lead to billions of descendants, SSA's preferences for small universes means that this is almost impossibly unlikely, so, time to get frisky.

There are two utilities to compare here: the positive utility of sex ($X_s$), and the negative utility of pregnancy ($-X_p$). Assume a probability $p$ of pregnancy from having sex, and a subsequent $N$ descendants.

Given an average utilitarian ADT couple, the utility derived from sex is $X_s\left(\frac{p}{N+2} + \frac{1-p}{2}\right)$ (the population is $N+2$ if pregnancy happens, and just $2$ otherwise), while the disutility from pregnancy is $-\frac{p X_p}{N+2}$. For large enough $N$, those terms will be approximately $\frac{(1-p)X_s}{2}$ and $0$.

So the disutility of pregnancy is buried in the much larger population.
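A short numeric sketch of that calculation, with made-up values for $p$, $N$, $X_s$ and $X_p$:

```python
# Made-up numbers for p, N, X_s, X_p; only the structure matters.
p = 0.5        # chance of pregnancy
N = 10**9      # descendants if pregnancy occurs
X_s = 1.0      # positive utility of sex
X_p = 100.0    # magnitude of the disutility of pregnancy

# Average utility of sex: X_s divided by the final population,
# N + 2 with probability p (pregnancy), 2 with probability 1 - p.
sex_utility = X_s * (p / (N + 2) + (1 - p) / 2)

# The cost X_p only arises with pregnancy, and is then spread over N + 2 people.
pregnancy_disutility = -p * X_p / (N + 2)

print(sex_utility)           # ~ X_s * (1 - p) / 2 = 0.25
print(pregnancy_disutility)  # ~ 0 for large N
```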

There are more extreme versions of the Adam and Eve problem, but they are closely related to the next paradox.

UN++: more people to dilute the sorrow

In the UN++ thought experiment, a future world government seeks to prevent damaging but non-fatal gamma ray bursts by committing to create many, many more humans if the bursts happen. The paradox is that SSA implies that this commitment should lower the probability of the bursts.

In ADT, this behaviour is perfectly rational: if we assume that the gamma ray bursts will cause pain to the current population, then creating a lot of new humans (of the same baseline happiness) will dilute this pain, by averaging it out over a larger population.
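A toy calculation can show the dilution; the population sizes, baseline happiness and pain values below are invented for illustration:

```python
# Toy dilution example; all numbers are invented.
current_pop = 10**10
baseline = 1.0     # baseline happiness per person
burst_pain = 0.5   # pain each current human suffers if the bursts happen

def average_utility(extra_people):
    """Average utility in the burst scenario, if extra_people new humans
    at baseline happiness are created afterwards."""
    total = current_pop * (baseline - burst_pain) + extra_people * baseline
    return total / (current_pop + extra_people)

print(average_utility(0))       # 0.5    : the pain fully shows up in the average
print(average_utility(10**13))  # ~0.9995: the pain is averaged away
```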

So in ADT, the SSA paradoxes just seem to be artefacts of the weirdness of average utilitarianism.

Philosopher: not presumptuous, but gambling for high rewards

We turn now to SIA, replacing our average utilitarian ADT agent with a total utilitarian one.

In the Presumptuous Philosopher thought experiment, there are only two possible theories about the universe: $T_1$ and $T_2$. Both posit large universes, but $T_2$ posits a much larger universe than $T_1$, with trillions of times more observers.

Physicists are about to do an experiment to see which theory is true, but the SIA-using Presumptuous Philosopher (PP) interrupts them, saying that $T_2$ is almost certain because of SIA. Indeed, they are willing to bet on $T_2$ at odds of up to a trillion-to-one.

With that betting idea, the problem is quite easy to formulate in ADT. Assume that all the PPs are total utilitarians towards each other, and will all reach the same decision. Then there are a trillion times more PPs in $T_2$ than in $T_1$, which means that winning a bet in $T_2$ is a trillion times more valuable than winning it in $T_1$.

Thus, under ADT, the Presumptuous Philosopher will indeed bet on $T_2$ at odds of up to a trillion-to-one, but the behaviour is simple to explain: they are simply going for a low-probability, high-utility bet with higher expected utility than the opposite bet. There does not seem to be any paradox remaining.
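Here is a rough sketch of that expected-utility comparison; the trillion-to-one ratio of philosophers comes from the setup above, but the other numbers and the bet structure are purely illustrative:

```python
# Illustrative bet for the total utilitarian Presumptuous Philosopher.
n_T1 = 10**6            # PPs if T1 is true (arbitrary)
n_T2 = n_T1 * 10**12    # a trillion times more PPs if T2 is true
p_T2 = 0.5              # objective probability of T2, before the experiment

def expected_total_gain(stake, payout):
    """Expected total utility, summed over all PPs (who all bet the same way),
    of staking `stake` on T2 to win `payout`."""
    return p_T2 * n_T2 * payout - (1 - p_T2) * n_T1 * stake

# The bet remains positive up to roughly trillion-to-one odds against T2:
print(expected_total_gain(stake=0.9e12, payout=1.0) > 0)  # True
print(expected_total_gain(stake=1.1e12, payout=1.0) > 0)  # False
```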

SIA Doomsday: care more about mosquito nets in large universes

Staying with SIA: the SIA Doomsday Argument, somewhat simplified, says that since SIA leads us to expect a lot of observers like ourselves, it is more likely that the Fermi paradox is explained by a late Great Filter (which kills civilizations that are more advanced than us) than an early Great Filter (which kills life at an earlier stage or stops it from evolving in the first place). The reason is that, obviously, there are more observers like us under a late Great Filter than under an early one.

To analyse this in decision theory, use the same setup as for the standard Doomsday Argument: choosing between consuming $X$ now (or donating it to AMF, or similar), or $Y$ in twenty years, with a risk of human extinction in ten years.

To complete the model, assume that if the Great Filter is early, there will be no human extinction, while if it is late, there is a chance $q$ of extinction. If the Great Filter is late, there are $\Omega$ advanced civilizations across the universe, while if it is early, there are only $\omega \ll \Omega$. Assume that the agent currently estimates late-vs-early Great Filters as 50-50.

With the usual ADT agent assuming that all their almost-copies reach the same decision in every civilization, the utility from early consumption is $\frac{X}{2}(\Omega + \omega)$ (total utility averaged over late vs early Great Filters), while the utility from late consumption is $\frac{Y}{2}\left((1-q)\Omega + \omega\right)$.

For large $\Omega$, these approximate to $\frac{X\Omega}{2}$ and $\frac{Y(1-q)\Omega}{2}$.

So a total utilitarian ADT agent will be more likely to go for early consumption than the objective odds would imply. And the more devastating the late Great Filter (the larger $q$), the stronger this effect.
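A small numeric sketch of this comparison, with made-up values for $\Omega$, $\omega$, $q$ and the consumption amounts:

```python
# Sketch with made-up values for Omega, omega, q and the consumption amounts.
Omega = 10**9   # advanced civilizations if the Great Filter is late
omega = 10**3   # advanced civilizations if it is early
q = 0.9         # extinction chance under a late Great Filter
X = 1.0         # utility of consuming now, per civilization

# Every civilization contains an almost-copy making the same choice, so the
# total utilitarian sums over civilizations, averaging late/early at 50-50.
early = 0.5 * X * (Omega + omega)

def late_consumption(Y):
    return 0.5 * Y * ((1 - q) * Omega + omega)

# Breakeven late payoff: about X / (1 - q) for large Omega, versus
# X / (1 - q/2) for an agent who just used the objective 50-50 odds.
Y_adt = X * (Omega + omega) / ((1 - q) * Omega + omega)
Y_objective = X / (1 - 0.5 * q)
print(Y_adt, Y_objective)   # ~10.0 vs ~1.82: ADT demands a much bigger premium
```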
