TL;DR: "crazy" predictions based on anthropic reasoning seem crazy only because they contradict our exaggerated expectations of an overly good future.

Let’s look at the two statements:

  1. I am in the middle of the set of all humans ordered by birth rank, and therefore a large human civilization will exist for only a few more millennia at most.
  2. I am at the very beginning of human history, and humanity will exist for billions of years and colonize the whole galaxy.

The first statement is the Doomsday argument (DA) in a nutshell, and it is generally regarded as wrong. It could be wrong in two ways: either its conclusion is false, or its logic is flawed.

A common argument is that the DA's logic must be flawed because its conclusion is false, and that therefore the optimistic statement 2 is true. That is: we will survive for billions of years; therefore we will not die out in the next few millennia; thus the logic of the DA is wrong.

However, the optimistic statement 2 seems true only to a person who is deep into transhumanism, nanotech, etc., but who ignores x-risks. For many people the doomy statement 1 is more probable, especially for those who are focused on climate change, nuclear war risks, and the like.

For a techno-optimist, the conclusion of the DA is wrong not because it is inherently flawed, but because it contradicts our best hopes for a great future, so hating the DA is wishful thinking.

Everything adds up to normality. Since the DA uses mediocrity reasoning, its result is “normal” by definition, in a tautological sense: I am typical, therefore I am in the middle, therefore the end is as far away as the beginning. The DA doesn’t say that the end is very near.

But the end becomes surprisingly near if we use birth rank for the calculation while using real clock time for the timing of the end. As I said in “Each reference class has its own end”, we should define “the end” in the same terms as we define the reference class. Being in the middle of the birth-rank ordering is therefore neither surprising nor especially bad: it only says that there will be tens of billions of births in the future. It doesn’t even say that there will be a catastrophe at the end.

The DA prediction that I am in the middle of the birth-rank ordering becomes bad and surprising only when we combine it with the expected exponential growth of the population (or a very high plateau). In that case, all those billions of births will happen within the next few millennia, which suggests an abrupt end to the exponential growth, and that is interpreted as a global catastrophe. But there are obviously other possible population scenarios: the population could slowly decline without extinction, or everyone could become immortal while the birth rate declines (as is now happening in rich countries).
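
To make the contrast concrete, here is a minimal sketch (Python, with illustrative numbers only: roughly 100 billion future births, matching the birth-rank reading above, and made-up birth rates for each scenario). It shows how the same bound on future births translates into very different calendar dates for "the end".

```python
# The same DA bound on *future births* maps to very different *calendar
# times* depending on the population scenario. All rates below are
# illustrative assumptions, not claims from the post.

FUTURE_BIRTHS = 100e9  # roughly equal to past births ("middle of birth rank")

scenarios = {
    "exponential growth / high plateau (~130M births/yr)": 130e6,
    "slow population decline (~10M births/yr on average)": 10e6,
    "immortality with a low birth rate (~1M births/yr)": 1e6,
}

for name, births_per_year in scenarios.items():
    years = FUTURE_BIRTHS / births_per_year
    print(f"{name}: ~{years:,.0f} years of future births")
```

Under the high-plateau scenario the births run out within roughly a millennium, while under slow decline or near-immortality the same birth-rank bound stretches over tens or hundreds of millennia.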

Anyway, the DA becomes surprising only when it is combined with our optimistic expectation that the human population will continue to be very high.

Comments

I'm reminded of a recent discussion about Boltzmann brains; one of the easiest ways to disbelieve in them is to believe the universe is probably finite and not-extremely-large due to anthropic arguments of the form that also imply the Doomsday paradox.

Perhaps a bigger problem than the Doomsday paradox for SSA is probability-pumping, e.g. Nick Bostrom's Adam and Eve thought experiment.

My default anthropic theory to go with is something like SIA but based on computational density of observers rather than number of observers in the universe (similar to Tomasik's PSA).

(related: How the Simulation Argument Dampens Future Fanaticism)

I think he is describing the paradox of supernatural predictive power suggested by the doomsday argument, and by SSA in general: it boosts the probability of scenarios with a smaller reference class. For example, in the Sleeping Beauty problem, SSA suggests the probability of heads is 2/3 after learning that today is Monday, even though the toss has yet to happen.
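
For reference, a minimal sketch of the SSA bookkeeping behind that 2/3 figure, assuming the standard setup (one awakening on Heads, two on Tails; under SSA the single Heads awakening is certainly on Monday, while a Tails awakening is equally likely to be Monday or Tuesday):

$$P(\text{Heads}\mid\text{Monday}) = \frac{P(\text{Heads}\wedge\text{Monday})}{P(\text{Heads}\wedge\text{Monday})+P(\text{Tails}\wedge\text{Monday})} = \frac{1/2}{1/2+1/4} = \frac{2}{3}$$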

Following similar logic, the astronaut can boost his survival chance by limiting the number of people saved. He can form this intention: select and reheat the passengers one by one, and as soon as he finds that he has been reheated, halt the entire process and let all the remaining astronauts die. This links his survival to a smaller reference class, which can boost its probability. How much it helps depends on the "correct" reference class. If the correct reference class contains only the astronauts, the boost would be very significant. If the correct reference class includes all "observers" in the universe, the increase would be marginal, almost zero. But nonetheless, it would be greater than 50%.

This conclusion is very counterintuitive. E.g., if I have already been reheated, should I stick to the intention of letting the remaining astronauts die? How can that still affect my chance of survival? It seems like retro-causation.

I consider this a counterargument against SSA and the doomsday argument. But I like the thought experiment: it shows that in order to actually conduct a sampling process among a group of agents that includes the first person, one has to forfeit the first-person perspective and reason from the viewpoint of an impartial outsider, in this case the security cameras.

Maybe this sounds more like he is preventing possible futures in which he doesn't exist: if I rig a world-destroying bomb to go off when I die, then a larger percentage of possible futures will contain an older me.

The problem with the Doomsday Argument is not that it's too pessimistic about the future. One can be as optimistic or pessimistic about humanity's future as one likes. But according to the DA, that prior belief must inevitably get much bleaker once one considers one's own birth rank.

One's own birth rank is information about which physical person the first person is: "out of all the human beings, which one is me?" It is perspective-specific, yet it is used to draw conclusions about something perspective-independent: the total number of human beings that will ever exist. If the DA's logic is correct, then I can have supernatural predictive power over anything related to the size of the reference class. E.g., in the Sleeping Beauty problem I can predict the probability of heads for a fair coin yet to be tossed to be 2/3. Or, as jessicata pointed out earlier, consider Bostrom's Adam and Eve problem.

I don't support SIA either, because it uses the existence of the first person as evidence. That boosts the expected size of the reference class, which counters the DA, but it also biases us toward theories with a bigger reference class. I constantly see the claim that SIA is independent of how you define the reference class. That is not true. Are brains-in-vats in the reference class? What about programs simulating humans, or Boltzmann brains? Whether they are considered valid "observers", i.e. in the same reference class as me, will greatly change the judgments on the related theories once SIA is applied.

I think the problem lies in treating the first-person perspective as an observation selection effect, e.g. considering "I" a random sample from all observers. There is no basis for such an assumption. From a first-person perspective, one naturally knows who "I" is; it is primitively clear, with no need to explain it by a selection process. On the other hand, if we wish not to take any specific perspective, i.e. to reason "objectively", then there shouldn't be an "I" in the logic at all.

Check this thought experiment illustrating my view.

Not sure what you mean. The Doomsday argument is about how to think about the information that the first-person "I" is a particular physical person. It suggests treating it the same way as if a random sampling process had selected said physical person. SIA agrees with using a sampling process, but disagrees about the range it is sampled from.

The actual DA reference class is not "all of humanity" but "all of humanity who think about DA". Another solution to DA is that there are lots of future humans, but they don't think about DA.

That is the position of some DA supporters. Not all. I would even hesitate to call it mainstream. 

Anyway, let's say that is the dominant take on the DA. Is avturchin committing genocide of future generations by discussing it on an open forum, making more people aware of the doomsday argument?

I think that in most futures where we succeed at realizing our cosmic potential, we become competent enough that we stop thinking about doomsday arguments (or at least leave such thoughts to superhuman AIs). But yes, I do think we should discuss the DA less often.

Interest in the DA peaked in the 2000s and is now declining.

I don't think there is a real risk here, as almost everyone who reads this post already knew about the DA.

Is there anybody who had never heard about it and became interested?

But Carter was afraid to publish on the DA for 10 years, probably for similar reasons.

Since few people have thought of the Doomsday argument independently, and there is lots of discussion of it, why should one look at individual people? Shouldn't the reference class be "civilizations" or "intellectual communities"?  And then it's not at all clear that doom is coming anytime soon.

Really, though, the whole idea of picking a "reference class" is just arbitrary, indicating that the whole line of reasoning has no sound basis.

As I said in "Each reference class has its own end", the reference class problem is not a problem, because each class has its own type of end.

In your example, our civilization started to think about the doomsday argument around 1973, almost 50 years ago, and in around 50 years from now we will stop thinking about it. That is not necessarily a global catastrophe; maybe we will just lose interest. But combined with other forms of the DA (birth rank) and other non-DA ideas, like x-risks, it looks like a plausible explanation.

This actually demonstrates the problem further. If we use "civilization" as the reference class, then, as you said, humans would stop thinking about the DA in about 50 years, since it started about 50 years ago. But what if we use "people thinking about the DA" as the reference class? Because the internet has given it more exposure, there are a lot more people thinking about the DA now than in the 80s and 90s. If I am in the middle of all these people, then we would likely stop thinking about the DA a lot sooner.

Similarly, human civilization has existed for about 5000 years, so it would exist for another 5000. But for much of history the global population was way less than a billion; we are likely around the 100-billionth human ever born. So if we use each person as the reference class, then with the population boom the end should arrive much sooner.

The forecast of the future changes drastically depending on which reference class is used. So for the DA to be valid there must be a single "correct" reference class. But they all seem arbitrary.

I agree with your prediction: a complex civilization capable of thinking about the DA will collapse soon, in a few decades, but some form of medieval civilization could exist for a few millennia. That is a completely normal and typical outcome, if we ignore hopes for space exploration.

This staged-collapse prediction is what follows from the idea that “each reference class has its own end”: for the reference class of DA thinkers the end is nigh; for written civilization it is a few thousand years away.

I would regard the world in 1900 as a "complex civilization capable of thinking about the DA"; it's just that nobody bothered to think about it or publish their thoughts. So shouldn't we expect our society to retain that capability for another 120 years? Yet at the same time we expect everyone to stop thinking about the DA in 50 years, because the DA has only been discussed for 50 years so far?

For any choice of reference class, the resulting prediction of the future is effectively a mirror image of that class's past.

BTW, what is your opinion about the mediocrity principle, that is, the idea that you, me, and the Earth are typical?

I think the very idea of "I am a typical observer" is misguided, because "observer" is a target drawn around where the arrow is. The arrow is the first-person "I" in this analogy.

Everyone knows who the first-person "I" refers to, since the only subjective experience felt is due to that particular physical body. We then put physical systems similar to this body into a category and give it a name. But which shared feature is chosen to perform this grouping is arbitrary. From my personal perspective, such groups can be middle-aged men, things that can do simple arithmetic, synapsids, carbon-based lifeforms, macroscopic physical systems, etc. It would be rather absurd to think the first-person "I" is typical for all these groups.

Furthermore, what does "typical" within a group really mean? If we look at the feature that defines a category, then of course I am similar to everything else in it, since the grouping is based on my having that feature in the first place. This gives a false sense of mediocrity. But why would I be typical in terms of other features? E.g., for macroscopic physical systems the defining feature is scale; why should I expect to exist at a typical time for this group? There is no reason for it. Various anthropic camps try to support this by regarding "I" as a random sample of some sort, but that just adds ad hoc assumptions.

It is not a coincidence that most anthropic theories have trouble defining what "observer" really means, which in turn messes up the reference class (this is not exclusive to SSA; SIA and FNC are plagued by it too). "Observer" has no hard definition; it is just a circle drawn around the first-person "I" with a radius of anyone's choosing.

Many think "observer" can be conclusively defined as someone/something that is conscious. But what is consciousness in the first place? The only consciousness that anyone has access to is that of the first person. "I know I am conscious, and can never be sure if you are just auto-piloting philosophical zombies." I guess other people/animals/programs might also be conscious only because of their similarity to myself. 

All in all, I feel that people who hold "I am a typical observer" as an indisputable truth haven't taken a hard look at what the words "I", "observer", and "typical" really mean.

I think that what you said here and elsewhere could boil down to two different views:

  1. Going from the 1 position to the 3 position in the probability sense is ontologically impossible, period. No meaningful probability updates.
  2. We need to take a hard look at what "I", "observer", and "typical" are, and only after we clearly define them can we say something meaningful about probabilities.

I tend to agree with the second view here, and I have explored different aspects of it in some of my posts.

I'm not sure what the 1 position and the 3 position mean here. I would summarize my argument as: the first-person perspective is based on subjective experience. It is a primitive notion that cannot be logically analyzed, just as in Euclidean geometry we can't analyze the axioms. Take them as given; that's it.

All the rest (no self-locating probability, perspective disagreement, rejection of the doomsday argument and the presumptuous philosopher, double-halving in Sleeping Beauty, and rejection of the fine-tuned-universe argument) are just conclusions based on that.

1 position = first-person perspective, 3 position = third-person perspective

Well, in that case, yes. The third-person perspective is just shorthand for a god's-eye view. We should not switch perspectives halfway through any given analysis.

To get more credible estimates at 90 per cent confidence, it is better to take just the order of magnitude. In that case the apparently strange overconfidence of DA predictions disappears, as does its mirror structure.

So we can say that both the ability to think about the DA and the actual thinking about it will exist for several decades.

(Note also that Laplace seems to have been the first to come close to the DA, back in 1801.)

It is like the Laplace sunrise problem: every day the sun rises is a small bit of evidence that it is more likely to rise again. In the same way, if the world didn't end today, that is a small piece of evidence that lets us extend our expected doomsday date.

I've read your linked post, and it doesn't convince me.  The reasoning doesn't seem rooted in any defensible principles, but is rather just using plausible-sounding heuristics which there is no reason to think will produce consistent results.

The example of the person placed on the grid of unknown size has a perfectly satisfactory solution using standard Bayesian inference: you have a prior for the number of cells R in the row. After observing that you're in cell n, the likelihood of there being R cells is zero for R less than n, and 1/R for R greater than or equal to n. You multiply the likelihood by the prior and normalize to get a posterior distribution for R. Observing that you're in cell 1 does increase the probability of small values of R, but not necessarily in the exact way you might expect from a heuristic about needing to be "typical".
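
For concreteness, here is a minimal sketch of that calculation, assuming (purely for illustration) a uniform prior on R from 1 to 100:

```python
# Posterior over the number of cells R, given that you observe yourself in
# cell n. The prior and the range of R are illustrative assumptions.
import numpy as np

R_MAX = 100
R = np.arange(1, R_MAX + 1)
prior = np.full(R_MAX, 1.0 / R_MAX)      # uniform prior over R = 1..100

def posterior(n: int) -> np.ndarray:
    """P(R | you are in cell n): likelihood is 0 for R < n, else 1/R."""
    likelihood = np.where(R >= n, 1.0 / R, 0.0)
    unnorm = likelihood * prior
    return unnorm / unnorm.sum()

post = posterior(1)                       # observe that you are in cell 1
print(post[:5])                           # P(R = 1..5): small R is favored
print((R * post).sum())                   # posterior mean of R
```

Observing cell 1 shifts weight toward small R, but exactly how much depends on the prior rather than on any "typicality" heuristic.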

To illustrate the inconsistencies of that heuristic, consider that for as long as humans don't go extinct, we'll probably be using controlled fire, the wheel, and lenses.  But fire was controlled hundreds of thousands of years ago, the wheel was invented thousands of years ago, and lenses were invented hundreds of years ago.  Depending on which invention you focus on, you get completely different predictions of when humans will go extinct, based on wanting us to be "typical" in the time span of the invention.  I think none of these predictions have any validity.

“End of the reference class” is not extinction; the class could end differently. For any question we ask, we simultaneously define the reference class and what we mean by its ending.

In your example of fire, wheels and lenses: imagine that humanity experiences a very long period of civilizational decline. Lenses will disappear first, wheels second, and fire will be the last to go, in a million years. It is a boring but plausible apocalypse.

Possible, sure.  But the implication of inference from these reference classes is that this future with a long period of civilizational decline is the only likely one - that some catastrophic end in the near future is pretty much ruled out.  Much as I'd like to believe that, I don't think one can actually infer that from the history of fire, wheels, and lenses.

I agree with you. The correct reference class is only those who think about the DA, and this implies the end is very soon, in a few decades.

But again, this is not surprising news of the kind that should trigger our intuition. Several x-risks have a high probability of happening in this timeframe. Complex societies with high populations are unstable, and the DA is just another way to say that.

imho the correct reference class is non-genetically-modified humans. After this ("everyone becomes immortal but the birth rate declines") happens to the class, it won't matter who thought about the DA earlier.

You are at a bus stop, and have been waiting for a bus for 5 min. The "doomsday logic" says that you should expect to wait another 5 min. Five minutes later, with no bus, you should expect to wait another 10 min. If you look at the reference class of all bus-stop waits, some of them have a bus coming in the next minute, some in 10, some in an hour, some the next day, some never (because the route changed). You can't even estimate the expected value of the bus wait time until you narrow the reference class to a subset where "expected value" is even meaningful, let alone finite. To do that, you need extra data other than the time passed. Without it you get literally ZERO information about when the bus is coming. You are stuck in Knightian uncertainty. So it's best not to fret about the Doomsday argument as is, and to focus on collecting extra data: what x-risks there are, what the resolution to the Fermi paradox might be, etc.

What you describe here is the Laplace sunrise problem: if the sun has risen 5000 times, what are the chances that it will rise tomorrow? Laplace solved the problem and got almost the same equation as Gott's Doomsday Argument: a 1-in-5002 chance of non-rise tomorrow, which gives around a 50 per cent chance of a non-rise within the next 5000 days. But every day his estimate can be updated on the data that the sun has risen again.
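
For reference, a minimal sketch of that arithmetic under the standard rule-of-succession setup (a uniform prior on the unknown daily sunrise probability, updated on n consecutive sunrises; the n = 5000 figure is from the comment above):

```python
# Laplace's rule of succession: after n sunrises with no failures,
# P(rise tomorrow) = (n+1)/(n+2), and the chance that the sun rises on
# all of the next n days is (n+1)/(2n+1), i.e. roughly one half.
from fractions import Fraction

n = 5000
p_no_rise_tomorrow = 1 - Fraction(n + 1, n + 2)
print(p_no_rise_tomorrow)                 # 1/5002

p_all_rise_next_n = Fraction(n + 1, 2 * n + 1)
print(float(1 - p_all_rise_next_n))       # ~0.5 chance of a non-rise within 5000 days
```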

But he didn't use anthropic reasoning; instead he summed over all possible hypotheses about the sunrise probability consistent with the observations. Anyway, he must have made some assumptions about how the hypotheses are distributed.

He didn't "solve" it, not in any meaningful sense of the term "solve". He probably implicitly assumed a certain distribution and did the calculation for the next day only. To solve it would mean to gather all possible data about the reasons the sun might not rise, and define what "sun not rising" even means.

While the sunrise-problem setup is somewhat crazy, the bus-waiting problem is ubiquitous. For example, I am waiting for some process on my computer to terminate, or for a file to start downloading. The rule of thumb is that if it hasn't terminated within a few minutes, it will not terminate soon, and it is better to kill the process.

Leslie, in "The End of the World", suggested a version of the DA which is independent of assumptions about the probability distributions of events. He suggested that if we assume a deterministic universe without world-branching, then any process has a duration T that is unknown to us but fixed. For example, the time from the previous bus arrival to the next bus arrival is Tb. It is not a random variable; it has a fixed value for today and for this bus, and Omega may know it. Say it is 15 minutes. It doesn't depend on how the arrivals of other buses are distributed: whether they are regular, normally distributed, etc. It is only about this bus.

Now you arrive at the bus station. You know only two things: the time since the last arrival and the fact that you came at a random moment relative to the bus arrivals. In that case, you can estimate the time until the next bus's arrival according to doomsday-argument logic: it will be around the same as the time since the previous arrival.
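
A minimal Monte Carlo sketch of that claim, assuming (for illustration) a single fixed gap T between buses and a uniformly random arrival moment within it:

```python
# If the gap between buses has a fixed length T and you arrive at a uniformly
# random moment within it, then on average the remaining wait equals the
# elapsed wait. T and the sample size are illustrative assumptions.
import random

T = 15.0                # fixed gap between buses, in minutes (unknown to you)
N = 100_000             # number of simulated arrivals

elapsed, remaining = [], []
for _ in range(N):
    a = random.uniform(0, T)     # your random arrival moment inside the gap
    elapsed.append(a)            # time since the previous bus
    remaining.append(T - a)      # time until the next bus

print(sum(elapsed) / N, sum(remaining) / N)   # both ~T/2 = 7.5 minutes
```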


My intuition says: you cannot predict, or gain any evidence about the future, based on anthropic arguments like this.

The general way I’ve thought of the DA is that it’s probably correct reasoning, but it’s not the only relevant evidence. Even if DA gives us a billion-to-one prior against being in the first billionth of humanity, we could easily find strong enough evidence to overcome that prior. (cf https://www.lesswrong.com/posts/JD7fwtRQ27yc8NoqS/strong-evidence-is-common)
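
For scale, a one-line sketch of how much evidence that would take (simple log-odds arithmetic; the billion-to-one figure is taken from the comment above):

```python
# Bits of evidence needed to overcome a billion-to-one prior.
import math

prior_odds = 1e-9                         # a billion to one against
print(math.log2(1 / prior_odds))          # ~29.9 bits
```

Roughly 30 bits, which seems to be the scale of update the linked post argues is commonplace.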

What could it be? An alien supercivilization?

It mostly comes down to outlook on x-risk. If we align an AI, then we’re probably good for the future.

But the chances of alignment are small? Like less than 10 per cent? Also, AI could solve the DA once and for all.

I don't particularly think that we currently have strong evidence against the doomsday argument being accurate.