Credit allocation: the structure of this estimate, and some of the numbers, are basically lifted from a talk Nuño Sempere gave, but most of the numbers aren’t.

Epistemic status: I spent maybe an hour putting this together. The parts I wrote more about are the parts I thought harder about. For everything but the prior, you might want to check out the Samotsvety group’s forecast here.

tl;dr: About 0.01% over the next month.

As I write this, there’s a lot of tension between the West and Russia over Russia’s invasion of Ukraine. If you’re living in a major city like London, you might wonder whether you should move to the countryside in case your city is the target of a nuclear strike. Here’s my estimate of the chances of that, as well as some context for how bad those odds are.

First of all: what are the prior odds of this sort of thing? Let’s say we’re talking about a NATO nuclear exchange with Russia. Since the US and Russia have co-existed as nuclear powers for about 70 years, you might think Laplace’s law of succession gives you about 70:1 odds against a nuclear strike in a normal year. That said, I think this is conservative: Laplace’s law assumes that in the absence of historical data, you’d have even odds on a nuclear strike. But if I didn’t know any history, I’d put the odds of intense conflict in a given year at something like 20:1 against, not 1:1, since countries don’t want this sort of conflict and have ways of avoiding it. The way to adjust for that is to add that 20 to the 70 years, giving prior odds of 90:1 against - see this Wikipedia page on pseudocounts.

I then think we should update down a bit because there’s a relatively wide class of intense NATO-Russia conflicts, only some of which would lead to nuclear war. I’ll call that a 1.5:1 update against (a mild update, because it’s easy to see how intense NATO-Russia conflict leads to a nuclear exchange), which gets us to 135:1 odds against, on priors. Note that this is a different kind of update than adding pseudocounts, and it produces different effects; I’m not totally sure it’s legit, but I’m going with it.
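For concreteness, here’s that prior arithmetic as a small Python sketch. The variable names are mine, and every input is one of the judgment calls from the two paragraphs above:

```python
# Prior odds against a NATO-Russia nuclear exchange in a given year,
# built from ~70 years of no exchange plus pseudocounts (all inputs are guesses).
years_without_exchange = 70   # rough span of US/Russia nuclear coexistence
pseudocount_years = 20        # my "20:1 against" no-history prior, as extra years
class_width_factor = 1.5      # only some intense NATO-Russia conflicts go nuclear

prior_odds_against = (years_without_exchange + pseudocount_years) * class_width_factor
print(prior_odds_against)     # 135.0, i.e. 135:1 against on priors
```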

Now: we should probably update on the fact that Russia’s invading Ukraine, and the West is sanctioning Russia over that. The question is, how big an update is that? Bayes’ rule says we multiply the odds by the likelihood ratio: that is, the ratio between the probability of something like the current conflict happening given nuclear escalation this year, and the probability of it happening given no nuclear escalation this year. (Since I’m tracking odds against a nuclear exchange, the ratio gets applied with the no-escalation probability on top.) I’ll treat those two probabilities separately.

So: what’s the probability of something like the Ukraine situation given a nuclear exchange this year? I’d actually expect a nuclear exchange to be precipitated by somewhat more direct conflict, rather than something more proxy-like. For instance, maybe we’d expect Russia to talk about how Estonia is rightfully theirs, and how it shouldn’t even be a big deal to NATO, rather than the current world where the focus has been on Ukraine specifically for a while. So I’d put this conditional probability at 1/3, which I’ll round to 3/10 to keep the arithmetic clean. [EDIT: in this EA forum comment, Lukas_Gloor points out that this neglects the relative likelihood of the various ways one could escalate to nuclear war. Since the current invasion is the sort of thing that (a) might lead to nuclear war and (b) Russia might do, this term should likely be higher than 1/3.]

What’s the probability of something like the Ukraine situation given no nuclear exchange this year? Luckily, we can estimate this empirically, by looking at all the years NATO and Russia haven’t had a nuclear exchange and seeing how many of them had something like the Ukraine situation. I’d count the NATO bombing of Yugoslavia, the initial 2014 invasion of Ukraine, the Cuban missile crisis, and the Soviet invasion of Afghanistan. Let’s say Yugoslavia counts for 1 year, the 2014 invasion counts for 1 year, Cuba counts for 1 year, and Afghanistan counts for 3 years (Wikipedia tells me that the invasion lasted about 10 years, but I’ve got to assume that for most of those years NATO and Russia had figured out they weren’t going to nuke each other). So that’s 6 years out of 70, but Laplace’s law says we should add pseudocounts to make that probability 7/71, which is about 1/10.
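In code, that count looks like this (the year weights are just the guesses above, and the extra pseudocount matches the 7/71 in the text):

```python
# How often have NATO and Russia had "something like the Ukraine situation"
# in a year without a nuclear exchange? Year weights are rough judgment calls.
ukraine_like_years = 1 + 1 + 1 + 3   # Yugoslavia, Ukraine 2014, Cuba, Afghanistan
total_years = 70

# Add a pseudocount so a finite sample doesn't give an overconfident frequency.
p_situation_given_no_exchange = (ukraine_like_years + 1) / (total_years + 1)
print(p_situation_given_no_exchange)  # ~0.099, call it 1/10
```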

So: the likelihood ratio is (1/10):(3/10) = 1:3. Multiplying our prior odds of 135:1 by that, we get 135:3 = 45:1 odds against a NATO-Russia nuclear exchange this year, which is a probability of about 2.2%. We can turn that into a monthly probability by dividing by 12 (this isn’t exact, but it’s nowhere near the biggest problem with this estimate), giving about 0.18% per month.
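Putting the update together, here’s the same arithmetic as a sketch; nothing in it beyond the numbers already stated:

```python
# Odds-form Bayes: posterior odds against = prior odds against * likelihood ratio,
# where the ratio is P(evidence | no exchange) : P(evidence | exchange).
prior_odds_against = 135
p_evidence_given_no_exchange = 1 / 10   # empirical-ish estimate above
p_evidence_given_exchange = 3 / 10      # my guess from two paragraphs up

posterior_odds_against = prior_odds_against * p_evidence_given_no_exchange / p_evidence_given_exchange
p_exchange_this_year = 1 / (posterior_odds_against + 1)
p_exchange_this_month = p_exchange_this_year / 12   # crude annual-to-monthly conversion

print(posterior_odds_against)   # 45.0, i.e. 45:1 against
print(p_exchange_this_year)     # ~0.022, about 2.2%
print(p_exchange_this_month)    # ~0.0018, about 0.18%
```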

Next: what’s the chance that such an exchange hits London and kills any given smart resident? Pulling numbers out of my butt, let’s say there’s a 1/4 chance a nuclear exchange would hit London (there are lots of other targets, maybe it would be contained, maybe the UK could shoot it down), a 1/2 chance that someone could get out before the actual bomb hit (e.g. by leaving once the UK gets more strongly involved), and a 1/2 chance that a random London resident would die conditional on a nuclear attack (nuclear attacks kill fewer people than you’d guess). That divides the probability by 16, and 0.18%/16 is about 0.01%.
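Same deal for this step; the three factors are just the butt-numbers above:

```python
# Chance a given Londoner dies this month from a NATO-Russia nuclear exchange.
p_exchange_this_month = 0.0018   # from the previous section
p_london_hit = 1 / 4             # the exchange reaches London at all
p_fail_to_escape = 1 / 2         # you don't get out before the bomb lands
p_die_if_present = 1 / 2         # you die, given you're there when it hits

p_die_this_month = p_exchange_this_month * p_london_hit * p_fail_to_escape * p_die_if_present
print(p_die_this_month)          # ~0.0001, i.e. about 0.01%, or ~100 micromorts
```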

So: I’ve estimated a 0.01% chance of dying from a nuclear attack on London in the next month, or 100 micromorts. How bad is that? Well, the average English person incurs 24 micromorts per day just by being alive (according to Wikipedia), but that risk is heavily concentrated among the elderly - micromorts.rip claims that 20-year-olds incur about 1 micromort per day. Another way to think about it: you incur 1 micromort for every 370 km you drive in a car (according to Wikipedia), and 120 micromorts if you give birth vaginally (also according to Wikipedia, which sources the number from a British book, so I’m assuming it’s a UK statistic).

Here’s another way to weigh that risk: suppose that you expect to live 50 more years. Then, 100 micromorts shortens your expected lifespan by 0.01% * 365 days/yr * 50 yrs ~= 2 days. So you should be willing to spend up to that much time to bring the risk down to 0, but no more.
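As a sketch:

```python
# Turn a one-off monthly risk into an expected-lifespan cost.
p_die_this_month = 0.0001    # ~100 micromorts
remaining_years = 50

expected_days_lost = p_die_this_month * remaining_years * 365
print(expected_days_lost)    # ~1.8, call it 2 days
```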

What if you think I’m too optimistic here? Let’s say you stick to 70:1 as your prior, think that this is exactly what you’d see in a world where nuclear bombs eventually got deployed (bringing the likelihood ratio up to 1:10), think there’s a 1/2 chance that London would be hit, think that there’s only a 1/4 chance you could get out, and think that 3/4 of the Londoners who stayed would die. Then you get a 1/8 annual chance of a nuclear exchange, a 1/96 monthly chance, and about a 0.3% chance of dying in the next month. At those odds, the reduction in expected lifespan, if you’d otherwise expect to live 50 more years, is about 53 days, and you should definitely leave London for safer ground. That said, I think those numbers are probably too pessimistic.
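The pessimistic version, with the same structure (again, just the stated numbers, nothing new):

```python
# Pessimistic inputs: stickier prior, stronger update, worse outcomes.
prior_odds_against = 70
likelihood_ratio_against = 1 / 10   # evidence 10x more likely given eventual nuclear use
posterior_odds_against = prior_odds_against * likelihood_ratio_against   # 7:1 against
p_exchange_this_month = (1 / (posterior_odds_against + 1)) / 12          # ~1/96

p_die_this_month = p_exchange_this_month * (1 / 2) * (3 / 4) * (3 / 4)
print(p_die_this_month)              # ~0.003, about 0.3%
print(p_die_this_month * 365 * 50)   # ~53 days of expected lifespan
```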

[EDIT: As mentioned, the Samotsvety group has its own forecast with a similar breakdown. Their prior strikes me as too low, as does their 0.18 probability of London being hit conditional on any nukes, but they also give the probability of smart residents being able to escape as 3/4 and the proportion of remaining residents dying as 3/4, which I would probably defer to. If I use those numbers, then you'd multiply the 0.18% chance of a nuclear strike by (1/4) * (1/4) * (3/4) = 3/64 rather than by 1/16. Luckily those numbers are close enough that it doesn't really change my answer.]

8 comments

About the 70:1 odds at the start of the calculation: shouldn't we include our knowledge from other potential conflicts as well to get to our prior? E.g. India - Pakistan, US - China, Russia/USSR - China?

Maybe, but they seem relatively closely linked, and the two biggest players seem like they're in their own reference class. That said, I basically buy that this should shade the prior down a bit, but just not hugely.

Doesn't anthropic bias affect the calculation, given that it takes into account not having seen nuclear war before?

fwiw I do think that it's a concern. But there is also an anti-inductiveness to close calls, where the more of them you've had in the past, the more measures tend to get implemented to avoid future close calls.

So e.g. updating upwards on the Cuban missile crisis doesn't feel right, because the red phone got implemented after that.

My sense is that anthropic bias isn't that big a deal here, and IIRC, in a recent Yudkonversation transcript, Carl Shulman said he also thought it didn't matter, which basically convinced me. If you think it's a big deal, I invite you to include it in the calculation.

> Now: we should probably update on the fact that Russia’s invading Ukraine, and the West is sanctioning Russia over that. The question is, how big an update is that? Bayes’ rule says we multiply the odds by the likelihood ratio: that is, the ratio between the probability of something like the current conflict happening given nuclear escalation this year, and the probability of something like the current conflict happening given no nuclear escalation this year. I’ll treat those two separately.

Is Bayes' rule even computationally useful for this kind of evidence? 🤔

I feel like usually with Bayes, the evidence is causally downstream from the hypothesis. This makes it sensible to compute the likelihoods, because one can "roll out" the dynamics from the hypothesis and count out the results. But in this case, it doesn't really make sense to "roll out" nuclear war or lack thereof, since it happens after the evidence, rather than before it.

Of course you attempt to do it anyway, so I should probably address that:

> So: what’s the probability of something like the Ukraine situation given a nuclear exchange this year? I’d actually expect a nuclear exchange to be precipitated by somewhat more direct conflict, rather than something more proxy-like. For instance, maybe we’d expect Russia to talk about how Estonia is rightfully theirs, and how it shouldn’t even be a big deal to NATO, rather than the current world where the focus has been on Ukraine specifically for a while. So I’d give this conditional probability as 1/3, which is about 3/10.

I'm not sure where these numbers come from. They look like they come from your general impression of this stuff, but what's the advantage of using Bayes' rule for this general impression, over just taking the general impression the opposite way, making up a number for the updates due to the current war?

> What’s the probability of something like the Ukraine situation given no nuclear exchange this year? Luckily, we can actually empirically estimate this, by looking at all the years NATO and Russia haven’t had a nuclear exchange, and seeing how many of them had something like this Ukraine situation. I’d count the NATO bombing of Yugoslavia, the initial invasion of Ukraine, the Cuban missile crisis, and the Russian invasion of Afghanistan. Let’s say Yugoslavia counts for 1 year, Ukraine 1 counts for 1 year, Cuba counts for 1 year, and Afghanistan counts for 3 years (Wikipedia tells me that the invasion lasted 10 years, but I’ve got to assume for most of those years NATO and Russia had figured out that they weren’t going to nuke each other). So, that’s 6 years out of 70, but Laplace’s law says we should add pseudocounts to make that probability 7/71, which is about 1/10.

I suppose this side of the calculation is more sensible, since you can sort of get historical data on it. But the historical data assumes that it doesn't change over time, which I'm not sure I buy.

> Is Bayes' rule even computationally useful for this kind of evidence? 🤔

It's at least valid, and I think I made use of it.

> I'm not sure where these numbers come from. They look like they come from your general impression of this stuff, but what's the advantage of using Bayes' rule for this general impression, over just taking the general impression the opposite way, making up a number for the updates due to the current war?

The advantage is that I didn't have to make up the other half of the likelihood ratio, which I would have had to do if I just made up the update.

> I suppose this side of the calculation is more sensible, since you can sort of get historical data on it. But the historical data assumes that it doesn't change over time, which I'm not sure I buy.

My sense is that incorporating possible changes over time would be significantly harder and wouldn't actually change the answer all that much.