I love the Sleeping Beauty problem, but I think it leads people astray by making them think that anthropic reasoning is about complicated situations with multiple duplicates. That leads to people saying erroneous things, like "SIA implies there are a lot of observers".

But anthropic reasoning isn't that complicated; SIA, especially, is mostly just Bayesian updating.

Specifically, if there are no exact duplicates of yourself in your universe, SIA is just Bayesian updating on the fact that you exist; this is the same update that an outside observer would make, if informed of your existence. So, if theory $T$ has prior probability $P(T)$ and gives you a probability $P(\text{I exist} \mid T)$ of existing, then SIA updates $T$'s probability to $P(T) \cdot P(\text{I exist} \mid T)$ (and then renormalises everything):

$$P(T \mid \text{I exist}) = \frac{P(T)\, P(\text{I exist} \mid T)}{\sum_{T'} P(T')\, P(\text{I exist} \mid T')}.$$
This result is easy to see: since SIA is independent of reference class, just restrict the reference class to exact copies of you. If there is only one such copy in the universe, then the update rule follows.
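To make this concrete, here's a minimal numerical sketch of that update; the theories, priors, and existence probabilities are all made up for illustration:

```python
# Minimal sketch of the SIA update: posterior(T) is proportional to
# prior(T) * P(I exist | T), then everything is renormalised.
# The theories and numbers below are purely illustrative.

priors = {"T1": 0.5, "T2": 0.3, "T3": 0.2}    # prior probabilities P(T)
p_exist = {"T1": 0.9, "T2": 0.5, "T3": 0.1}   # P(I exist | T) under each theory

unnormalised = {T: priors[T] * p_exist[T] for T in priors}
total = sum(unnormalised.values())
posteriors = {T: w / total for T, w in unnormalised.items()}

for T in priors:
    print(f"{T}: prior={priors[T]:.2f} -> SIA posterior={posteriors[T]:.2f}")
```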

Even if there are multiple exact copies of you, you can still mostly see SIA as Bayesian updating over your future observations. See this footnote[1] for more details.

Indirect effect on population size

So, what does this mean for the number of observers in the universe? Well, SIA can have an indirect effect on population size. If, for instance, a theory $T$ posits that life is likely to happen, then our existence is more likely under $T$, so $T$ gets a relative boost by SIA compared with most other theories.

So, SIA's boosting of other observers' existence is only an indirect effect of its boosting of our existence. The more independent our existence is of theirs, or the more independent we suspect it might be, the less impact SIA has on them.
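As an illustration of that indirectness (again with made-up numbers): if two theories give the same probability that we exist, SIA doesn't care how many other observers they posit; the boost only appears when the mechanism producing the other observers also makes our existence more likely.

```python
# Illustrative sketch: SIA reacts only to P(I exist | T), not to the raw
# number of other observers a theory posits. All numbers are made up.

def sia_posteriors(priors, p_exist):
    """Posterior(T) proportional to prior(T) * P(I exist | T), renormalised."""
    weights = {T: priors[T] * p_exist[T] for T in priors}
    total = sum(weights.values())
    return {T: w / total for T, w in weights.items()}

priors = {"many_observers": 0.5, "few_observers": 0.5}

# Case 1: the extra observers are independent of us -- both theories give the
# same probability that we exist, so SIA leaves the priors untouched.
independent = {"many_observers": 0.5, "few_observers": 0.5}
print(sia_posteriors(priors, independent))  # both stay at 0.5

# Case 2: the same mechanism that produces many observers also makes our own
# existence more likely -- now the populous theory gets an indirect boost.
correlated = {"many_observers": 0.9, "few_observers": 0.3}
print(sia_posteriors(priors, correlated))   # many_observers: 0.75, few_observers: 0.25
```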


  1. Suppose that there are $N$ exact copies of you, and that they are each going to make one of $M$ possible observations, independently at random. Then as soon as $M$ is much bigger than $N$, you can expect that each copy will make a different observation; so, ultimately, you expect there to be only one exact future copy of you (see the sketch after this footnote).

    So if you Bayesianly update for each possible future copy (weighted by the probability of that future observation), you will get SIA. This is the trick that full non-indexical conditioning uses.

    This can be seen as a partial solution to the Boltzmann brain problem: Boltzmann brains won't diverge, because they won't have future experiences. Personally, I prefer to address the issue by mixing in a bit of decision theory; my decisions are only relevant if I'm not a Boltzmann brain, so I'll start with "I exist and am not a Boltzmann brain" as an initial assumption. ↩︎
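Here is a quick Monte Carlo sketch of the footnote's counting claim; the values of $N$ and $M$, and the assumption that observations are uniformly distributed, are just for illustration:

```python
# Monte Carlo sketch for the footnote: N exact copies each make one of M
# equally likely observations; estimate how often they all end up distinct
# (i.e. only one exact future copy of "you" remains). Numbers are illustrative.
import random

def prob_all_distinct(n_copies, n_observations, trials=20_000):
    hits = 0
    for _ in range(trials):
        draws = [random.randrange(n_observations) for _ in range(n_copies)]
        if len(set(draws)) == n_copies:
            hits += 1
    return hits / trials

N = 10
for M in (20, 100, 1_000, 100_000):
    print(f"N={N}, M={M}: P(all copies distinct) ~= {prob_all_distinct(N, M):.3f}")
```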
