tl;dr: there is no real Doomsday argument in ADT. Average utilitarians discount the future more heavily than total utilitarians do, but ADT can either increase or decrease this effect. The SIA Doomsday argument can also be constructed in ADT, but it is simply a consequence of total utilitarian preferences, not of an increased probability of doom.

I've been having a lot of trouble formulating a proper version of the Doomsday argument for Anthropic Decision Theory (ADT). ADT mimics SIA-like decisions (for total utilitarians, those with a population-independent utility function, and certain types of selfish agents), and SSA-like decisions (for average utilitarians, and a different type of selfish agent). So all the paradoxes of SIA and SSA should be expressible in it. And that is indeed the case for the presumptuous philosopher and the Adam and Eve paradox. But I haven't found a good formulation of the Doomsday argument.

And I think I know why now. It's because the Doomsday-argument-like effects come from the preferences of the average utilitarian agents themselves. Adding anthropic effects does not make the Doomsday argument stronger! It's a non-anthropic effect of those preferences. ADT may allow certain selfish agents to make acausal contracts that make them behave like average utilitarian agents, but it doesn't add any further effect.

 

Doomsday decisions

Since ADT is based on decisions rather than probabilities, we need to formulate the Doomsday argument in decision form. The most obvious approach is a decision that affects the survival chances of future generations.

But those decisions are dominated by whether the agent desires future generations or not! For an average utilitarian, future generations happier than the current average are desired, while less happy ones are not. This effect dominates their decisions, making it hard to formulate a decision that addresses 'risk of doom' in isolation. There is one way of doing this, though: looking at how agents discount the future.

 

Discounting the future

Consider the following simple model. If humanity survives for n generations, there will have been a total of Gq^n humans who ever lived, for some constant G and growth factor q > 1. At each generation, humanity independently survives to the next generation with probability p, with pq < 1 (so the expected total population is finite). At each generation, there is also an (independent) choice between consuming a resource to get X utilities, or investing it for the next generation, who will automatically consume it for rX utilities.

Assume we are now at generation n. From the total utilitarian perspective, consuming the resource now gives X with certainty, while delaying gives rX with probability p (the probability that the next generation exists to consume it). So the total utilitarian will delay consumption iff pr > 1.

The average utilitarian must divide utility by the total population that will ever exist. Let C be the current expected reciprocal of that population. Consuming now gives an expected XC utilities. Conditional on humanity surviving to the next generation (an event of probability p), the distribution over final populations is the same as before except that every outcome is q times larger, so the expected reciprocal of the population becomes C/q. If humanity doesn't survive, there is no delayed consumption. So the expected utility of delaying consumption is prXC/q, and the average utilitarian will delay consumption iff pr/q > 1.

So the average utilitarian acts as if they discounted the future by a factor of p/q per generation, while the total utilitarian discounts it by a factor of p. In a sense, the average utilitarian seems to fear the future more.
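As a sanity check on these two thresholds, here is a minimal Monte Carlo sketch in Python. The parameter values and the starting generation are illustrative assumptions of mine, not part of the model above; they are chosen so that pq < 1 and 1 < pr < q, which is exactly the regime where the two agents disagree.

```python
# A Monte Carlo sketch of the one-generation consume/delay decision.
# Parameter values and starting generation are illustrative assumptions.
import random

def simulate(p, q, r, X=1.0, G=1.0, n=5, trials=200_000):
    """Estimate expected utilities of 'consume now' vs 'delay' at generation n
    for a total utilitarian and an average utilitarian."""
    tot_now = tot_delay = avg_now = avg_delay = 0.0
    for _ in range(trials):
        # Sample the final generation m >= n: survive each step with probability p.
        m = n
        while random.random() < p:
            m += 1
        N = G * q**m                        # total number of humans who ever live
        delayed = r * X if m > n else 0.0   # delayed consumption needs generation n+1
        tot_now += X
        tot_delay += delayed
        avg_now += X / N                    # the average utilitarian divides by N
        avg_delay += delayed / N
    t = trials
    return tot_now / t, tot_delay / t, avg_now / t, avg_delay / t

p, q, r = 0.4, 2.0, 3.0                     # pq < 1 and 1 < pr < q
tn, td, an, ad = simulate(p, q, r)
print(f"total utilitarian:   now={tn:.3f}  delay={td:.3f}  (delay iff pr > 1)")
print(f"average utilitarian: now={an:.4f} delay={ad:.4f} (delay iff pr/q > 1)")
```

With these numbers the total utilitarian's expected delay payoff (pr = 1.2) beats immediate consumption, while the average utilitarian's delay payoff is only about pr/q = 0.6 of immediate consumption, matching the two thresholds above.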

But where's the ADT in this? I've derived this result just by considering what an average utilitarian would do at any given generation n. Ah, but that's because of the particular choice I've made for population growth and risk rate. A proper ADT average utilitarian would compute the different p_i and q_i for all generation steps and consider the overall value of the linked "consume now" decisions. In general, this could result in discounting that is either higher or lower than that of the myopic (one-generation-only) average utilitarian. The easy way to see this is to imagine that p is as above (and small), as are almost all the q_i - except for q_n. Then the ADT average utilitarian's discount factor is still roughly p/q, while the myopic average utilitarian's discount factor at generation n is p/q_n, which could be anything.

So the "Doomsday argument" effect - the higher discounting of the future - is an artefact of average utilitarianism, while the anthropic effects of ADT can either increase or decrease this effect.

 

SIA Doomsday

LessWronger turchin reminded me of Katja Grace's SIA doomsday argument. To simplify greatly: since SIA favours worlds with many people in them (especially many people "like us"), it increases the probability that there are/were/will be many civilizations at our level of development. Hence the Great Filter - the process that stops the universe from being filled with life - most likely lies in the future of civilizations like ours, and hence the probability of doom is higher.

How does this work, translated into ADT format? Well, imagine there are two possibilities: either the great filter is in the distant evolutionary past, or it is in the future. The objective uncertainty is 50-50 between them. If the great filter is in the future, your civilization has a probability p of succumbing to it (thus an overall probability of p/2 of your civilization being destroyed by a future great filter). You have the option of paying a cost C to avoid the great filter entirely for your civilization. You derive a benefit B from your civilization surviving.

Then you will pay C iff C < Bp/2. But now imagine that you are a total utilitarian: you also care about the costs and benefits accruing to other civilizations, and you consider your decision to be linked with theirs via ADT. If the great filter is early, let's assume that your civilization is the only one still in existence; if the great filter is late, there are Ω civilizations still around.

Therefore if the great filter is early, the total cost is C (your civilization, the only one around, pays C, but gets no benefit as there is no late great filter). However, if the great filter is late, the total cost is ΩC and the total benefit is ΩBp (each of the Ω civilizations pays C and gains B in the probability-p cases where it would otherwise have succumbed). So the expected utility gain from paying is (ΩBp - (Ω+1)C)/2, and you will pay the cost iff C < BpΩ/(Ω+1).
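As a check on this arithmetic, here is a small Python sketch. The values of B, C, p and Ω below are illustrative assumptions; the factor of 1/2 from the 50-50 odds is kept explicitly, which changes neither decision criterion.

```python
# Expected gain from paying the cost C to avoid a possible late great filter.
# p is the probability of succumbing to a late filter, B the benefit of your
# civilization surviving, omega the number of civilizations if the filter is
# late.  All numerical values are illustrative assumptions.

def solo_gain(B, C, p):
    """Gain from paying, counting only your own civilization."""
    # Early filter (prob 1/2): you pay C and gain nothing.
    # Late filter (prob 1/2): you pay C and gain B in the probability-p cases
    # where your civilization would otherwise have succumbed.
    return 0.5 * (-C) + 0.5 * (p * B - C)          # > 0 iff C < B*p/2

def linked_gain(B, C, p, omega):
    """Gain when the decision is linked across civilizations (ADT, total
    utilitarian): if the filter is late, all omega civilizations pay and
    benefit; if it is early, only yours exists and pays."""
    return 0.5 * (-C) + 0.5 * omega * (p * B - C)  # > 0 iff C < B*p*omega/(omega+1)

B, p, omega = 1.0, 0.4, 1000
print(f"solo threshold    Bp/2              = {B * p / 2:.3f}")
print(f"linked threshold  Bp*omega/(omega+1) = {B * p * omega / (omega + 1):.3f}")
for C in (0.15, 0.30):
    print(f"C={C}: solo gain={solo_gain(B, C, p):+.3f}, "
          f"linked gain={linked_gain(B, C, p, omega):+.2f}")
```

For C = 0.3 the solo agent declines to pay (0.3 > Bp/2 = 0.2) while the ADT-linked total utilitarian pays (0.3 < BpΩ/(Ω+1) ≈ 0.4), even though both assign the same 50% probability to a late filter.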

To an outsider, this looks as if you believe the probability of a late great filter is Ω/(Ω+1), rather than 0.5. However, this is simply a consequence of your total utilitarian preferences, and doesn't reflect an objectively larger chance of death.

Comments

Do you mean that the DA does not matter, because we still have to invest in x-risk prevention if we are selfless, or in partying if we are selfish? But I could suggest a class of DA with doom very soon, within 20 years. If it is true, do I have to invest more in preventing it, as it would mean my death? (This version of the DA is the one where the reference class of observers from which I am randomly chosen consists only of observers who know about the DA. This class appeared only in 1983 and will probably end in the 21st century.)

ADT basically doesn't have a doomsday argument; just as with SIA, you can't formulate one successfully. So the risk of death is the objective risk we see, not adjusted by various DAs.

But Katja Grace showed that SIA has its own DA, namely that the Great Filter is more likely to be ahead of us. I don't understand how ADT prevents it.

Here's the SIA doomsday argument in ADT: suppose you are a total utilitarian towards all beings at your level of civilizational development. Suppose someone offers you a bet on whether the Great Filter is early or late. Suppose you assume that such bets are commonly made, and commonly made by rich selfish people to altruists (that last clause is just to say that there is nothing wrong with winning the bet).

Then if you bet on "early great filter", only a few other people win when you win. But if you bet on "late great filter", a lot of people win when you do.

This produces the SIA doomsday argument, but illustrates clearly that it has nothing to do with the probability of doom. I'll add this to the top post.

I think that the size of the bet depends on my estimate of the probabilities of the different outcomes (maybe not in this case), so we can't completely exclude probability estimation.

But in general I agree with your theory. It is useful for estimating x-risks. We don't need exact probabilities of the different risks; we need information about how to use our limited resources to prevent them. These are our bets on our ability to prevent them. But to make such bets we need some idea of the order of magnitude of the risks and their ordering in time.

What people often don't notice about SIA is that it implies 100% certainty that there are an infinite number of people.

This is not 100% certain, so SIA is false.

I think it is true, as we live in a universe that is infinite on many levels, where all possible people exist. But this exhausts what SIA can do: it becomes non-informative and thus does not cancel SSA in the DA.

We might live in an infinite universe, but this does not have a probability of 100%.

The probability of it is high. There are several of Tegmark's levels of universe infinity, which are mutually independent (quantum multiverse, cosmological inflation, independent universes, eternal existence).

Also, under SIA, my own existence is an argument for an almost infinite universe.

And as humans are finite, we don't need an infinite universe for all possible humans to exist, just a very large one.