In this post, I'll defend these claims:
By the "common understanding", I mean something like:
Sometimes "can't conclude anything" is weakened to allow some weak updating.
Now A1 sounds reasonable. But consider instead:
Formally, the two arguments have the same structure. Now, people might start objecting that the difference between an observer and no observers is not the same thing as the difference between an observer seeing a loss and one not seeing it. And then I might respond by slicing into the definition of observer, creating "half-observers", and moving smoothly between observer and non-observer...
But that's the wrong response, on both of our parts (shame on you, hypothetical strawman, for reasoning like that!). The key question is not "can we justify that A1 and A2 might be different?" Because we can always justify something like that if we work on it hard enough.
Instead we should be asking "1) Why do we find A1 convincing?", and "2) Do we have reasons to believe A1 is wrong?"
My answer to 2) is "yes, of course; A2 is clearly wrong, and A1 is formally structured the same way, so there must be a paradox lurking there" (spoiler: there is a paradox lurking there).
For 1), I introspected on why I had been led astray for so long, and here are some of the reasons why we might believe A1 (or at least think that anthropic reasoning is hard):
The desire not to go overboard is the easiest to understand: to those who might say "so, it turns out we were safe after all!", we can correctly answer "not necessarily; we might just have got lucky". And indeed we might have got lucky. But our survival is also some evidence, at least, that maybe we were safer than we thought.
Let's go back to the idea that started this all: the anthropic principle. Looking at the Wikipedia article on it, there seems to be a bunch of different principles; here's my attempt at putting them in a table:
| Principle | Statement |
| --- | --- |
| The basic anthropic principle | P( the universe allows life \| we live ) ≈ 1 |
| Weak AP (Carter) | P( our space-time location allows life \| our space-time location has observers ) ≈ 1 |
| Weak AP (Barrow and Tipler) | P( the universe allows life \| carbon-based life exists ) ≈ 1 |
| Strong AP (Carter) | P( the universe allows life \| observers exist ) ≈ 1 |
| Strong AP (Barrow and Tipler) | P( the universe/multiverse allows life ) = 1 |
The Barrow and Tipler Strong AP is in my view wrong (I think they're mixing frequentist and Bayesian probability, if they have to posit an actual multiverse). But the other ones seem trivially true, just as a matter of conditional probability. And the differences between them are unimportant: whether we look at the whole universe or our space-time location, and at observers in general, carbon-based life, or just ourselves. All of these are equally true, and it seems to me that people arguing about different versions of the AP just haven't seen them written down as they are here, where it's clear that they are all of a similar format.
Now look back at A1. It looks similar, but it isn't; the conditionals are used a bit differently. What A1 says is "conditional on us surviving, the probability of an existential catastrophe having happened is zero. And this probability is independent of whether the world is safe or not. Hence we can't deduce whether the world is safe or not".
All the mischief is in that word "hence". Conditional probabilities are tricky and counterintuitive; to pick an example from logical uncertainty, P( "0=1" | "0=0" ) = 0 while P( "0=0" | "0=1" ) = 1. And, in general, you can't move "is independent of" from one side of the conditional to the other.
So these probabilities have to be computed explicitly - though you can get a hint of the potential mistake by considering "conditional on us seeing ourselves lose the lottery, the probability of us winning the lottery is zero. And this probability is independent of the odds of the lottery. Hence we can't say anything about the odds of the lottery".
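The lottery version can be made concrete with a tiny Bayesian calculation. This is a minimal sketch with made-up numbers (a "generous" and a "stingy" lottery, equal priors), not anything from the post itself:

```python
# Two hypotheses about a lottery's per-ticket odds (made-up numbers):
# a "generous" lottery with a 10% win chance, a "stingy" one with 1%.
prior = {"generous": 0.5, "stingy": 0.5}
p_win = {"generous": 0.10, "stingy": 0.01}

# Likelihood of the observation "we saw ourselves lose" under each hypothesis:
likelihood = {h: 1 - p_win[h] for h in prior}

# Bayes: posterior is proportional to prior times likelihood
unnorm = {h: prior[h] * likelihood[h] for h in prior}
total = sum(unnorm.values())
posterior = {h: unnorm[h] / total for h in prior}

print(posterior)  # losing shifts belief toward the stingy lottery
```

So observing a loss does tell us something about the odds of the lottery, contrary to the "hence" in the quoted argument.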
I have actually computed the odds explicitly, using Bayesian reasoning, to show that statements like A1 are wrong. But let's invert the problem: if we assumed A1 was true, what would that imply?
Imagine that the world is either safe (low risk of existential catastrophe) or dangerous (high risk of existential catastrophe). Then A1 would argue that P( safe | we survived ) is the same as P( safe ): our survival provides no evidence of the world being safe. Then applying almighty Bayes:
P( we survived | safe ) = P( safe | we survived ) × P( we survived ) / P( safe ) = P( we survived ).
The same reasoning shows P( we survived )=P( we survived | dangerous ). Therefore A1 would force us to conclude that the safe and the dangerous worlds have exactly the same level of risk!
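To see concretely that survival is evidence of safety, here is the same Bayes calculation with illustrative numbers of my own choosing (a 99% survival chance in a safe world, 50% in a dangerous one, equal priors):

```python
# Illustrative numbers (assumptions for this sketch, not from the post):
prior_safe = 0.5
p_survive = {"safe": 0.99, "dangerous": 0.50}

# P( we survived ), by the law of total probability
p_survived = (p_survive["safe"] * prior_safe
              + p_survive["dangerous"] * (1 - prior_safe))

# Bayes: P( safe | we survived )
posterior_safe = p_survive["safe"] * prior_safe / p_survived

print(posterior_safe)  # greater than the 0.5 prior: survival is evidence of safety
```

Any choice of numbers where the two worlds genuinely differ in risk gives the same qualitative result, which is exactly what A1 denies.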
Similar problems arise if we try and use weaker versions of A1 - maybe our survival is some evidence, just not strong evidence. But Bayes will still hit us, and force us to change our values of terms like P( we survived | dangerous ). There are simply not enough degrees of freedom in the system for anthropic updating to be done any way other than in the normal way.
There are clearly issues of selection bias and survivorship bias in anthropic reasoning. We can't conclude from seeing all the life around us, that the universe is full of life.
But that doesn't stop us from updating normally, it just means we have to update on exactly what we know: not on the information that we observe, but on the fact that we observe it.
Take a classical example of survivorship bias: hedge fund success. We see a lot of successful hedge funds, and we therefore conclude that hedge funds are generally successful. But that conclusion is mistaken, because the least successful hedge funds tend to go bankrupt, leaving us with a skewed sample. So if we noticed "most hedge funds I can see are successful", concluded "most hedge funds are successful", and updated on that... then we'd be wrong.
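A quick simulation makes the bias vivid. This is a toy model with made-up numbers (a 40% true success rate, failures vanishing from view), purely for illustration:

```python
import random

random.seed(0)

# Toy model (made-up numbers): each fund has a true 40% chance of a
# "successful" year; unsuccessful funds go bankrupt and vanish from view.
n_funds = 100_000
p_success = 0.4

outcomes = [random.random() < p_success for _ in range(n_funds)]
visible = [o for o in outcomes if o]  # only the survivors remain observable

# Naive update from what we see: "most funds I can see are successful"
observed_rate = sum(visible) / len(visible)   # 1.0, by construction

# Correct update, conditioning on the fact that we only see survivors:
true_rate = sum(outcomes) / len(outcomes)     # close to 0.4

print(observed_rate, true_rate)
```

The visible sample says 100% success; the underlying rate is 40%. The fix is not to abandon updating, but to update on "I can only observe the funds that survived".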
Similarly, if we noticed a lot of life around us, concluded "life is common", and updated on that, we'd be wrong. If, however, we instead concluded "life is common on at least one planet" and updated on that, then we would be correct.
Notice how specific the update requirements can be. Suppose we had three theories. Theory T1 gives a 25% probability to life existing on any given planet. Theory T2 gives a 50% probability for life existing on any Earth-like planet, and 0% for other planets. While theory T3 gives a 100% probability to life existing on Earth, specifically, and 0% to life existing anywhere else.
Now, the different Ti might have different priors. But updating them on the fact of our existence will increase the probability of T3 twice as much as T2, which itself is twice as much as T1. Even though T1 posits a universe filled with life and T3 a universe almost empty of life, our existence is evidence for T3 over T1.
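The update in the previous paragraph can be checked directly. Assuming equal priors for illustration (the post only says the priors might differ), the posteriors are proportional to the likelihoods each theory assigns to "life exists here, on Earth":

```python
# The three theories from the text, as likelihoods of the one fact we
# can legitimately update on: "life exists here, on Earth".
likelihood = {"T1": 0.25, "T2": 0.50, "T3": 1.00}

# With equal priors (an assumption for this sketch), posteriors are
# proportional to the likelihoods:
prior = 1 / 3
unnorm = {t: prior * lik for t, lik in likelihood.items()}
total = sum(unnorm.values())
posterior = {t: u / total for t, u in unnorm.items()}

print(posterior)  # T3 is boosted twice as much as T2, which is twice as much as T1
```

The factor-of-two ratios between the posteriors hold whatever the (shared) prior, since updating multiplies each prior by its likelihood.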
So, when updating on anthropic evidence, we have to update on what we see (and the fact that we see it), and not assume we are drawing from a random sample of possible observations about the universe. But, with those caveats, anthropic updating works just as normal updating.
There's a final reason that anthropic reasoning can seem daunting. I've shown above that the update process of anthropic probability is the normal update process. But what about the initial probabilities? There are a plethora of anthropic probability theories - SIA, SSA, FNC - and some people (ie me) arguing that probabilities don't even exist, and that you have to use decision theory instead.
But in this section I'll show that, if you make some reasonable assumptions about the size of the universe (or at least the size of the part of the universe you're willing to consider), then all those probabilities collapse into the same thing, which is pretty much just normal probability for the universe in which you exist. If we make those assumptions, we can then do anthropic probabilities in an easy way, at least for problems without explicit duplication.
Let's talk about how unique you are. From human to human, there are typically 20 million base pairs of variation. Our brains process 50 bits per second, or 2.2 billion bits in a lifetime. A lot of this information will be highly redundant, but not all of it.
The Hubble volume is roughly 10^31 cubic light years, or roughly 10^31 × (10^16)^3 = 10^79 m^3 in volume. In bits, this is log_2(10^79) ≈ 262. So if we packed our Hubble volume with humans, and those humans were initially identical but had had about ten seconds to diverge, then we would not expect to find two copies of the same human anywhere.
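Spelling out that arithmetic (taking a light year as roughly 10^16 m, and the text's 50 bits per second of processing):

```python
from math import log2

# The Hubble volume in cubic metres and in bits of "address space":
cubic_light_years = 1e31
m_per_light_year = 1e16                                  # rough order of magnitude
hubble_m3 = cubic_light_years * m_per_light_year ** 3    # 1e79 m^3
address_bits = log2(hubble_m3)                           # about 262 bits

# Divergence of an initially identical human after ~10 seconds,
# at 50 bits per second:
divergence_bits = 50 * 10                                # 500 bits, far more than 262

print(round(address_bits), divergence_bits)
```

With 500 bits of divergence and only ~262 bits of possible locations, two exact duplicates in one Hubble volume would be a staggering coincidence.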
Of course, humans are not packed anywhere near that density, and humans diverge a lot more than that. So we expect to go a great great great ... great great great distance before finding identical copies of ourselves.
So I define a medium sized universe as one larger than our own, but where we'd still expect to find only a single copy of ourselves. These universes can, of course, be very big - a universe 10^300 times bigger than the Hubble volume would count as a very small example of a medium sized universe.
This might seem controversial; after all, doesn't the universe appear to be infinite? Well, probability theories have problems with infinity, anthropic probability theories even more so. In most areas, we are fine with ignoring the infinity and just soldiering on in our local area; I'm suggesting that we do that for most anthropic reasoning as well. By "most" I mean "reasoning about situations that don't involve infinities, deliberate duplication, or simulations". Though you can't shove that many simulations into a medium sized universe, so avoiding simulations may be unnecessary (it does tend to make the rest of the reasoning much easier, though).
Different theories of anthropic probability are trying to answer subtly different questions about the universe and ourselves. But they only really differ if there are multiple copies of the same person.
Take SIA. We know that SIA is independent of reference class, so we may as well take the reference class Rs consisting of the agents subjectively indistinguishable from a given human (e.g. ourselves). Because there are almost certainly no duplicates in this universe, this reduces to a single copy, at most. So if P_SIA^Rs is the probability function for SIA with this reference class, then it is almost exactly equal to P( · | we exist ) for P the non-anthropic probability distribution over universes.
And P(⋅| we exist ) is just the Full Non-indexical conditioning version of anthropic probability. Now, I know that FNC is inconsistent; still, in medium sized universes, it's very close to being consistent (and very close to being SIA).
If we use SSA with the reference class Rs or the consistent class Rst, we get a similar almost-equality: P_SSA^Rs( · ) ≈ P( · | we exist ). And "≈" understates how nearly identical these probabilities are.
Now, there is one anthropic probability theory that is different: SSA with a significantly larger reference class (say the class of all humans, all sentient beings, or all "observers"). But this post argues against those larger reference classes, claiming they belong more to decision theory and morality than to probability. And remember, the definition of the reference class for SSA is contained in the question we are asking. Only for questions where "we could have been person X", in a specific sense, does SSA with larger reference classes make sense.
Another reason to restrict to Rs is that in medium sized universes, the anthropic probabilities are essentially free from all the usual paradoxes.
Notice that in using Rs, we haven't had to formally define what an "observer" is, or what would qualify an agent to get that rank. Instead we're just looking at agents that are subjectively indistinguishable from each other, a narrow and reasonably well-defined class.
So, here's how to proceed with anthropics in most situations:
And that should suffice for most non-specialised work in the area.