The Sleeping Beauty problem is a classic conundrum in the philosophy of self-locating uncertainty. From Elga (2000):

Sleeping Beauty. Some researchers are going to put you to sleep. During the two days that your sleep will last, they will briefly wake you up either once or twice, depending on the toss of a fair coin (Heads: once; Tails: twice). After each waking, they will put you back to sleep with a drug that makes you forget that waking. When you are first awakened, to what degree ought you believe that the outcome of the coin toss is Heads?

Here are some variants of the problem, not to be taken too seriously.


Sleeping Logic

Sleeping Logic-1. Some researchers are going to put you to sleep. During the two days that your sleep will last, they will briefly wake you up either once or twice, depending on the parity of the 1,000th digit of π (Even: once; Odd: twice). After each waking, they will put you back to sleep with a drug that makes you forget that waking. When you are awakened, to what degree ought you believe that the 1,000th digit of π is Even?

 

Sleeping Logic-2. Some researchers are going to put you to sleep. During the two days that your sleep will last, they will briefly wake you up either once or twice, depending on the parity of the third digit of π (Even: once; Odd: twice). After each waking, they will put you back to sleep with a drug that makes you forget that waking. When you are awakened, to what degree ought you believe that the third digit of π is Even?

 

Sleeping Riemann. A group of researchers (some of whom are mathematicians from an extremely advanced civilization) are going to put you to sleep. During the two days that your sleep will last, they will briefly wake you up either once or twice, depending on the truth of the Riemann hypothesis (True: once; False: twice). After each waking, they will put you back to sleep with a drug that makes you forget that waking. When you are awakened, to what degree ought you believe that the Riemann hypothesis is true?

Sleeping Counterfact

Sleeping Counterfact. A group of researchers, spearheaded by a perfect predictor, are going to put you to sleep. During the two days that your sleep will last, they will briefly wake you up either once or twice, depending on the toss of a fair coin (Heads: once; Tails: twice). During each waking in the Tails branch, you are asked to give the researchers $100, and during the waking in the Heads branch you are given $250 if and only if they predicted you would pay up twice in the Tails branch. After each waking, they will put you back to sleep with a drug that makes you forget that waking. When you are awakened and asked to pay up, should you?



Sleeping Nested

Sleeping Nested. Some researchers are going to put you to sleep. During the two days that your sleep will last, they will briefly wake you up either twice, or once and then immediately make you face the problem of Sleeping Nested again, depending on the toss of a fair coin (Heads: once and Sleeping Nested; Tails: twice). After each waking, they will put you back to sleep with a drug that makes you forget that waking. When you are awakened, to what degree ought you believe that the outcome of the coin toss is Heads?

Solution

The expected number of wakings $w$ satisfies

$$w = \tfrac{1}{2}(1 + w) + \tfrac{1}{2} \cdot 2,$$

so $w = 3$, which implies that $w_H = 2 w_T$, where $w_H = \mathbb{E}[\text{wakings} \mid \text{Heads}] = 1 + w = 4$ and $w_T = \mathbb{E}[\text{wakings} \mid \text{Tails}] = 2$. This means that the problem at hand is (a reversed version of) the standard Sleeping Beauty.
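For a quick numerical sanity check, here is a minimal simulation sketch (Python; the setup and function names are ours, not part of the problem). It estimates the expected number of wakings overall and conditional on the first coin toss:

```python
import random

def run():
    """One full run of Sleeping Nested: returns (first toss was Heads, total wakings)."""
    first_heads = None
    total = 0
    while True:
        heads = random.random() < 0.5
        if first_heads is None:
            first_heads = heads
        if heads:
            total += 1                     # Heads: one waking, then face Sleeping Nested again
        else:
            return first_heads, total + 2  # Tails: two wakings, and the experiment ends

results = [run() for _ in range(200_000)]
print(sum(t for _, t in results) / len(results))                # ~3
given_heads = [t for h, t in results if h]
given_tails = [t for h, t in results if not h]
print(sum(given_heads) / len(given_heads))   # ~4: twice the expected wakings...
print(sum(given_tails) / len(given_tails))   # ~2: ...of Tails, reversing the usual 1:2 ratio
```

On the Thirder way of counting awakenings, this 2:1 expected-waking ratio is exactly what mirrors the original problem.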

Sleeping Newcomb

Sleeping Newcomb-1. Some researchers, led by the infamous superintelligence Omega, are going to put you to sleep. During the two days that your sleep will last, they will briefly wake you up either once or twice, depending on the toss of a biased coin (Heads: once; Tails: twice). After each waking, they will put you back to sleep with a drug that makes you forget that waking. The weight of the coin is determined by what the superintelligence predicts that you would say when you are awakened and asked to what degree you ought to believe that the outcome of the coin toss is Heads. Specifically, if the superintelligence predicted that you would have a degree of belief $p$ in Heads, then they will have weighted the coin such that the 'objective chance' of Heads is $p$. So, when you are awakened, to what degree ought you believe that the outcome of the coin toss is Heads?

Solution.

Deliberational epistemic EDT
Thirder/SIA: fixed points $p = 0$ and $p = 1$.
Halfer/SSA: fixed point: every $p \in [0, 1]$.[1]

Epistemic CDT (with a uniform prior)
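As a sanity check on the EDT fixed points, here is a small sketch (ours; it assumes the reading above, on which Omega sets the chance of Heads equal to the predicted credence $p$):

```python
def thirder(p):
    """Thirder/SIA credence in Heads on waking, when the chance of Heads is p:
    p / (p + 2(1 - p)), since Tails produces two awakenings."""
    return p / (p + 2 * (1 - p))

def halfer(p):
    """Halfer/SSA credence in Heads on waking: just the chance of Heads."""
    return p

grid = [i / 1000 for i in range(1001)]
print([p for p in grid if abs(thirder(p) - p) < 1e-12])   # [0.0, 1.0]
print(all(abs(halfer(p) - p) < 1e-12 for p in grid))      # True: every credence is a fixed point
```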

 

Sleeping Newcomb-2. Some researchers, led by the superintelligence Omega, are going to put you to sleep. During the two days that your sleep will last, they will briefly wake you up either once or twice, depending on the toss of a biased coin (Heads: once; Tails: twice). After each waking, they will put you back to sleep with a drug that makes you forget that waking. The weight of the coin is determined by what the superintelligence predicts your response would be when you are awakened and asked to what degree you ought to believe that the outcome of the coin toss is Heads. Specifically, if Omega predicted that you would have a degree of belief $p$ in Heads, then they will have weighted the coin such that the 'objective chance' of Heads is $1 - p$. Then: when you are in fact awakened, to what degree ought you believe that the outcome of the coin toss is Heads?

Solution.

Deliberational epistemic EDT
Thirder/SIA: fixed point: $p = \sqrt{2} - 1$.[2][3]
Halfer/SSA: fixed point: $p = 1/2$.

Epistemic CDT (with a uniform prior)
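Numerically, one can recover the Thirder fixed point by damping the update map; this is our own stand-in for deliberational dynamics, since undamped iteration just oscillates forever (the map $p \mapsto \frac{1-p}{1+p}$ is an involution). A sketch, on the reading where Omega sets the chance of Heads to $1 - p$:

```python
import math

def thirder(p):
    """Thirder/SIA credence in Heads on waking when the chance of Heads is 1 - p."""
    return (1 - p) / ((1 - p) + 2 * p)   # = (1 - p) / (1 + p)

p = 0.5
for _ in range(60):
    p = (p + thirder(p)) / 2             # move halfway toward the map's output

print(p)                  # 0.41421356...
print(math.sqrt(2) - 1)   # the irrational fixed point from footnote [3]
```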

Sleeping Past

Sleeping Past. Some researchers, led by a superintelligence, are going to put you to sleep. You will be awakened one time, and one time only. However, depending on the parity of the 1,000th digit of π, the superintelligence (who knows the 1,000th digit of π) will have conducted an experiment on you last week (Even: once, and no experiment last week; Odd: once, and there was an experiment last week). This past experiment consisted in the superintelligence sneaking into your house during the night and inducing a perfectly realistic dream of the future situation in which you are awoken in the standard part of this problem. After the potential dream and each waking, you will be put back to sleep with a drug that makes you forget that dream or waking. When you are awakened, to what degree ought you believe that the 1,000th digit of π is Even?

Sleeping Loop

Sleeping Loop. God is deciding which of two worlds to create, and decides to use a fair divine coin to make the decision. If he creates the ‘Tails world’, you will be woken up once on Monday and once on Tuesday. If he creates the ‘Heads world’, you will inhabit a closed timelike curve in which you are awoken once on Monday, put back to sleep, sent back to earlier in the day on Monday, woken up once more, and so on. After each waking, you will be put back to sleep with a drug that makes you forget that waking. When you are awakened, to what degree ought you believe that the outcome of the coin toss is Heads?

Sleeping Mary

Sleeping Mary. Some researchers are going to put you to sleep (in a very colorful room). Before the experiment begins, you are given a drug which gives you all the relevant scientific knowledge there is about color, color perception, and so on. During the two days that your sleep will last, they will briefly wake you up either once or twice, depending on the toss of a fair coin (Heads: once; Tails: twice). Additionally, just before the potential second waking, yet another drug will be given to you which makes you colorblind. After each waking, they will put you back to sleep with a third drug that makes you forget that waking and also reverses the effects of the colorblindness drug. (After the experiment is over, you are given a fourth drug—MDMA—as a reward for your participation.) When you are awakened, to what degree ought you believe that the outcome of the coin toss is Heads?

Sleeping Zombie

Sleeping p-zombie. A group of researchers (some of whom are eminent metaphysicians) are going to put you to sleep. During the two days that your sleep will last, they will briefly wake you up either once or twice, depending on the toss of a fair coin (Heads: once; Tails: twice). After each waking, they will put you back to sleep with a drug that makes you forget that waking. However, just before the potential second awakening (in the case of Tails), the researchers will give you a powerful drug—made in the metaphysics lab—that turns you into a philosophical zombie for the five minutes that you are awake. When you are awakened, to what degree ought you believe that the outcome of the coin toss is Heads?

 

Sleeping l-zombie. A group of experimental metaphysicians are going to put you to sleep. During the two days that your sleep will last, they will briefly wake you up either once or twice, depending on the toss of a fair coin (Heads: once; Tails: twice). After each waking, they will put you back to sleep with a drug that makes you forget that waking. However, just before the potential second awakening (in the case of Tails), the researchers will give you a drug that taps into your metaphysics and turns you into a logical zombie for the five minutes that you are awake. When you are awakened, to what degree ought you believe that the outcome of the coin toss is Heads?

Sleeping Parfit

Sleeping Parfit. Two research groups are working together on an experiment which you have signed up for: it begins with one of the groups putting you to sleep. During the two days that your sleep will last (Monday and Tuesday), they will briefly wake you up either once or twice in particular ways, depending on the toss of a fair coin (Heads: once; Tails: twice). Here is the setup: on Sunday evening, you enter a teleporter (which works by recreating a perfect copy of you in the other research group’s lab and destroying the original copy instantaneously). If the coin lands Tails, you are teleported to the other lab and awoken twice, on Monday and Tuesday. If the coin lands Heads, you are teleported to the other lab but your original copy is not destroyed, and both you and your copy are awoken once on Monday. After each waking, they will put you back to sleep with a drug that makes you forget that waking. When you are awakened, to what degree ought you believe that the outcome of the coin toss is Heads?

Sleeping Collapse

Sleeping Collapse. Some researchers are going to put you to sleep. During the two days (Monday and Tuesday) that your sleep will last, they will briefly wake you up either once or twice, depending on the toss of a fair quantum coin (Heads: once; Tails: twice). After each waking, they will put you back to sleep with a drug that makes you forget that waking. Suppose that you believe the von Neumann–Wigner interpretation of quantum mechanics, in which consciousness causes wave function collapse. If the quantum coin lands Heads, you will be awoken at 11am on Monday; and if the quantum coin lands Tails, you will be awoken at noon on Monday (and at some point on Tuesday). Assume that there are no other conscious observers in the universe besides you when you are awake. When you are awakened, to what degree ought you believe that the outcome of the coin toss is Heads?

Sleeping Cardinalities

Sleeping Cardinalities. Some researchers are going to put you to sleep for eternity, and they will briefly wake you up either a countably infinite number of times or an uncountably infinite number of times, depending on the toss of a fair coin (Heads: countable; Tails: uncountable). After each waking, they will put you back to sleep with a drug that makes you forget that waking. When you are awakened, to what degree ought you believe that the outcome of the coin toss is Heads?

Sleeping Grim Wakers

Sleeping Grim Wakers. Some researchers are going to put you to sleep. During the two days (Monday and Tuesday) that your sleep will last, they will briefly wake you up in a particular manner, depending on the toss of a fair coin. If the coin lands Tails, you will be awoken twice, on Monday and Tuesday. If the coin lands Heads, a countably infinite number of Grim Wakers are hired. A Grim Waker is a special kind of research assistant with the following properties: it comes to work on Monday at a time between 11am and noon; if you're asleep when it arrives, it instantaneously wakes you, and if you're already awake, it does nothing. There are infinitely many Grim Wakers set to check in at 11:30am, 11:15am, 11:07.5am, 11:03.75am, and so on. After noon each day, the researchers will put you back to sleep with a drug that makes you forget the events of the day. When you are awakened, to what degree ought you believe that the outcome of the coin toss is Heads?

Sleeping Rosswood

Sleeping Rosswood. Some researchers are going to put you to sleep. During the infinite number of days that your sleep will last, they will briefly wake you up a different number of times depending on the toss of a fair coin. If Heads, you will be awoken one time on the first day. If Tails, the following will happen: you will be awakened ten times in a row on days 1 to 10, and a copy will be made of you which is then sent back in time to day 1 to prevent the first awakening. Next, you will be awakened ten times in a row on days 11 to 20, and a copy will be made of you which is then sent back in time to day 2 to prevent the second awakening. And so on… After each waking, they will put you back to sleep with a drug that makes you forget that waking. When you are awakened, to what degree ought you believe that the outcome of the coin toss is Heads?

Hint.

Suppose the coin lands Tails: consider any day $n$. Is Sleeping Beauty awoken on this day?
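To make the hint concrete, here is a sketch of the Tails bookkeeping (the staging below is our own reading of the procedure): stage $k$ schedules the awakenings on days $10(k-1)+1$ through $10k$ and then cancels the awakening on day $k$, so any fixed day's awakening is eventually cancelled.

```python
def awakened_on_day(n, stages):
    """Is the day-n awakening still scheduled after `stages` stages of the Tails procedure?"""
    scheduling_stage = (n - 1) // 10 + 1   # the stage that schedules day n's awakening
    cancelling_stage = n                   # the stage whose time-travelling copy cancels it
    return scheduling_stage <= stages and cancelling_stage > stages

# Day 7 is scheduled at stage 1 and cancelled at stage 7:
print([awakened_on_day(7, k) for k in range(1, 11)])
# [True, True, True, True, True, True, False, False, False, False]

# After 'all' the stages, no day's awakening survives:
print(any(awakened_on_day(n, stages=10**6) for n in range(1, 101)))  # False
```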


  1. ^

    Since every credence $p \in [0, 1]$ works here, one should arguably go with zero or one.

  2. ^

     Does rationality require you to have irrational credences?

  3. ^

    Proof. Suppose I say ‘$p$’. Then I have uncentred credences $P(\text{Heads}) = 1 - p$ and $P(\text{Tails}) = p$. So the Thirder Rule/SIA says that my centred credence in Heads should be $\frac{1-p}{(1-p) + 2p} = \frac{1-p}{1+p}$. If we now set $\frac{1-p}{1+p} = p$ (to find the fixed point), we get $p^2 + 2p - 1 = 0$, where $p = \sqrt{2} - 1$ is the only positive value. (See Briggs (2010) for details on the Thirder Rule/SIA.)

Comments

These are great. Though Sleeping Mary can tell that she's colourblind on any account of consciousness. Whether or not she learns a phenomenal fact when going from 'colourblind scientist' to 'scientist who sees colour', she does learn the propositional fact that she isn't colourblind.

So, if she sees no colour, she ought to believe that the outcome of the coin toss is Tails. If she does see colour, both SSA and SIA say P(Heads)=1/2.

Yeah great point, thanks. We tried but couldn't really get a set-up where she just learns a phenomenal fact. If you have a way of having the only difference in the 'Tails, Tuesday' case be that Mary learns a phenomenal fact, we will edit it in!

I did particularly like the "Sleeping Loop" version, which manages to even confuse the question of how many times you've been awakened: just once, or infinitely many times? Congratulations!

My follow-up question for almost all of them though, is based on use of the word "should" in the question. Since it presumably is not any moral version of "should", it's presumably a meaning in the direction of "best achieves a desired outcome".

What outcome am I trying to maximize, here? Am I trying to maximize some particular metric over prediction accuracy? In which case, which metric and how is it being applied? If I give the same answer twice based on the same information, is that scored differently from giving that answer once? If some p-zombie answers that same way that I would have if I were conscious, does that score count for my prediction or is it considered irrelevant? (Although this comment ends here, don't worry - I have a lot more questions!)

My follow-up question for almost all of them though, is based on use of the word "should" in the question. Since it presumably is not any moral version of "should", it's presumably a meaning in the direction of "best achieves a desired outcome".

The 'should' only designates what you think epistemic rationality requires of you in the situation. That might be something consequentialist (which is what I think you mean by "best achieves a desired outcome"), like maximizing accuracy[1], but it need not be; you could think there are other norms.[2]

To see why epistemic consequentialism might not be the whole story, consider the following case from Greaves (2013) where the agent seemingly maximises accuracy by ignoring evidence and believing an obviously false thing.

Imps. Emily is taking a walk through the Garden of Epistemic Imps. A child plays on the grass in front of her. In a nearby summerhouse are $n$ further children, each of whom may or may not come out to play in a minute. They are able to read Emily's mind, and their algorithm for deciding whether to play outdoors is as follows. If she forms degree of belief 0 that there is now a child before her, they will come out to play. If she forms degree of belief 1 that there is a child before her, they will roll a fair die, and come out to play iff the outcome is an even number. More generally, the summerhouse children will play with chance $1 - q/2$, where $q$ is the degree of belief Emily adopts in the proposition $P$ that there is now a child before her. Emily's epistemic decision is the choice of credences in the proposition $P$ that there is now a child before her, and, for each $j$, the proposition $Q_j$ that the $j$th summerhouse child will be outdoors in a few minutes' time.
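To see the pull of the case numerically: assuming the Brier score (as suggested in footnote [1]), say $n = 10$ summerhouse children, and that Emily matches her credence in each $Q_j$ to the induced chance $1 - q/2$, her expected inaccuracy is minimized at $q = 0$, i.e., full disbelief in something true. A sketch (the reduction to a one-variable problem is ours):

```python
def expected_brier_inaccuracy(q, n=10):
    """Expected Brier inaccuracy if Emily adopts credence q in the true proposition P,
    and the chance-matching credence 1 - q/2 in each of the n children playing."""
    chance = 1 - q / 2                            # chance each summerhouse child comes out
    penalty_p = (1 - q) ** 2                      # penalty on P, which is true
    penalty_children = n * chance * (1 - chance)  # expected penalty on each Q_j, summed
    return penalty_p + penalty_children

grid = [i / 100 for i in range(101)]
print(min(grid, key=expected_brier_inaccuracy))   # 0.0: certainty in the false ~P wins
print(expected_brier_inaccuracy(0.0))             # 1.0
print(expected_brier_inaccuracy(1.0))             # 2.5
```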

See Konek and Levinstein (2019) for a good discussion, though.

If I give the same answer twice based on the same information, is that scored differently from giving that answer once?

Once again, this depends on your preferred view of epistemic rationality, and specifically how you want to formulate the accuracy-first perspective. Whether you want to maximize individual, average or total accuracy is up to you! The problems formulated here are supposed to be agnostic with regard to such things; indeed, these are the types of discussions one wants to motivate by formulating philosophical dilemmas.

  1. ^

    This is plausibly cashed out by tying your epistemic utility function to a proper scoring rule, e.g. the Brier score.

  2. ^

    See e.g. Sylvan (2020) for a discussion of what non-consequentialism might look like in the general, non-anthropic, case.

Regarding Sleeping Counterfact: there seem to be two updates you could make, and thus there should be conceptual space for two interesting ways of being updateless in this problem; you could be 'anthropically updateless', i.e., not update on your existence in the standard Thirder way, and you could also be updateless with respect to the researchers asking for money (just as in counterfactual mugging). And it seems like these two variants will make different recommendations.

Suppose you make the first update, but not the second. Then the evidentialist value of paying up would plausibly be $\frac{1}{3} \cdot 250 + \frac{2}{3} \cdot (-100) = \frac{50}{3}$.

Suppose, on the other hand, that you are updateless with respect to both variables. Then the evidentialist value of paying up would be $\frac{1}{2} \cdot 250 + \frac{1}{2} \cdot (-200) = 25$.
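In code, the two computations above, under our bookkeeping assumptions: a fair coin, $100 paid per Tails waking, and the $250 Heads payment conditional on the predicted pay-up policy.

```python
# Thirder update on existence, but updateless about being asked:
# weight the three possible awakenings equally (1/3 Heads, 2/3 Tails).
ev_first_update_only = (1/3) * 250 + (2/3) * (-100)
print(ev_first_update_only)    # 16.666... = 50/3

# Updateless with respect to both:
# evaluate the policy 'always pay' from the ex ante standpoint.
ev_fully_updateless = (1/2) * 250 + (1/2) * (-200)
print(ev_fully_updateless)     # 25.0
```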

Does rationality require you to have irrational credences?

Truly wonderful

Interesting! Did thinking about those variants make you update your credences in SIA/SSA (or else)? 

(Btw, maybe it's worth adding the motivation for thinking about these problems in the intro of the post.) :)

Same as Sylvester, though my credence in consciousness-collapse interpretations of quantum mechanics has moved from 0.00001 to 0.000001.

Interesting! Did thinking about those variants make you update your credences in SIA/SSA (or else)?

No, not really! This was mostly just for fun.

I expected to see a Sleeping Beauty trolley problem:

One beauty is on the Monday track and five beauties are on the Tuesday track. All beauties are exact copies of each other. Should you change the direction of the trolley, given that none of them will ever love you?