Harry was wondering if he could even get a Bayesian calculation out of this. Of course, the point of a subjective Bayesian calculation wasn't that, after you made up a bunch of numbers, multiplying them out would give you an exactly right answer. The real point was that the process of making up numbers would force you to tally all the relevant facts and weigh all the relative probabilities. Like realizing, as soon as you actually thought about the probability of the Dark Mark not-fading if You-Know-Who was dead, that the probability wasn't low enough for the observation to count as strong evidence.

− HPMOR Chapter 86: Multiple Hypothesis Testing

I'm 28 years old and have never had a driver's license. At some point earlier in my life I decided that driving has a bad expected value (EV) due to the risk of death and the *massive* value of life, but that the EV isn't so bad that I'll go out of my way to avoid it. Since deciding this, the belief has become cached. However, various things have recently prompted me to reconsider it.

- I'm looking to buy a home and am leaning towards making it a requirement that the place is fully walkable (as opposed to mostly walkable but sometimes requiring a car).
- The topic of MIRI relocating came up and the badness of cars is relevant to that.
- I have a wedding coming up where I have to travel from Vegas to SF. I have the option of getting a ride from my girlfriend's mom, or taking a flight. I'm tempted to go by car so I don't have to pay the money for a flight, but I don't actually think that is the right decision. ("No thanks Sandra. I think that computers are going to take over the world and make us all immortal. You have a slightly higher chance of dying in a car than a plane, so I'd rather pay for a flight.")
- Covid has prompted me to explore the EV of doing things. Eg. looking at the chances of catching covid and dying, trying to put a dollar amount on that, and then asking whether such an activity is worth it. In doing so for covid, it naturally leads to asking the question for activities such as driving as well.

Perhaps this is a good starting point. As I mentioned in my comment on the MIRI relocation post, in 2018 there were 11.18 motor vehicle deaths per 100k people in the US, or a 0.01118% chance of dying in a given year. If you value life at $10M, `0.0001118 * $10,000,000 = $1,118`. Let's ballpark it at $1,000/year.
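For concreteness, here's that arithmetic as a few lines of Python (the $10M value of life and the 2018 fatality rate are the inputs from the text):

```python
# Rough EV cost of one year of driving risk, using the figures above.
deaths_per_100k = 11.18              # US motor vehicle deaths per 100k, 2018
p_death = deaths_per_100k / 100_000  # ~0.0001118 annual probability of dying
value_of_life = 10_000_000           # the standard $10M valuation

annual_cost = p_death * value_of_life
print(round(annual_cost))  # → 1118
```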

But you're a safer driver than average, right? I say that a little tongue in cheek, because of the cognitive bias where 93% of drivers rate themselves as above average. But to be conservative in this analysis, let's say you really are in the 95th percentile of driving ability/safety. How much does that reduce the risk?

This is a big unknown for me. I really hope it cuts the risk by a few orders of magnitude. On the one hand that seems plausible, because a lot of deaths happen due to things like drunk driving, drowsy driving, road rage, youth, old age, etc. Maybe if you drive really carefully under safe road conditions without any impairments, you can be 1000x safer than the baseline. On the other hand, after a cursory search, it looks like there's roughly a 2.5-to-1 ratio of non-alcohol-related to alcohol-related fatalities. But alcohol isn't the only thing you're avoiding by being in that 95th percentile. Maybe we can ballpark it and say that half of deaths are due to avoidable stuff. To be conservative, maybe we can add a little more buffer and say that you have 1/4 the risk of dying compared to baseline. Which brings us to $250 a year. And we'll ignore the potential for injury, for harming others directly, and for the devastation people feel when someone they love dies or gets hurt.
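Spelled out, with the 1/4 multiplier being the assumption from the paragraph above:

```python
# Discounting the baseline driving risk for a careful 95th-percentile driver.
baseline_annual_cost = 1_000   # ballparked from 0.0001118 * $10M
safe_driver_multiplier = 0.25  # assumed: 1/4 of the baseline fatality risk

safe_annual_cost = baseline_annual_cost * safe_driver_multiplier
print(round(safe_annual_cost))  # → 250
```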

$250 sounds like a very reasonable price to pay for the convenience of being able to drive. But here's the kicker: given the potential for living eg. hundreds of thousands of years, perhaps life should be valued *way* more than the standard $10M. You get to a $10M valuation if you value a year at $200k and expect to live another 50 years. But what if you think there's a 10% chance of living another 100k years? That alone means an expectation of roughly 10k more years instead of 50. And those are hopefully going to be some pretty awesome years to be a part of.

I'm not a futurist or an AI researcher, so I'm not in the best position to estimate this. Fortunately for me, this community seems to have a lot of people who know about this stuff, so please let me know what you all think! In brief, here are my own thoughts.

The Wait But Why article on AI had some good commentary on what AI experts think. This is a long quote, but the length seems appropriate.

In 2013, Vincent C. Müller and Nick Bostrom conducted a survey that asked hundreds of AI experts at a series of conferences the following question: “For the purposes of this question, assume that human scientific activity continues without major negative disruption. By what year would you see a (10% / 50% / 90%) probability for such HLMI to exist?” It asked them to name an optimistic year (one in which they believe there’s a 10% chance we’ll have AGI), a realistic guess (a year they believe there’s a 50% chance of AGI—i.e. after that year they think it’s more likely than not that we’ll have AGI), and a safe guess (the earliest year by which they can say with 90% certainty we’ll have AGI). Gathered together as one data set, here were the results:

- Median optimistic year (10% likelihood): 2022
- Median realistic year (50% likelihood): 2040
- Median pessimistic year (90% likelihood): 2075

So the median participant thinks it’s more likely than not that we’ll have AGI 25 years from now. The 90% median answer of 2075 means that if you’re a teenager right now, the median respondent, along with over half of the group of AI experts, is almost certain AGI will happen within your lifetime.

A separate study, conducted recently by author James Barrat at Ben Goertzel’s annual AGI Conference, did away with percentages and simply asked when participants thought AGI would be achieved—by 2030, by 2050, by 2100, after 2100, or never. The results:

- By 2030: 42% of respondents
- By 2050: 25%
- By 2100: 20%
- After 2100: 10%
- Never: 2%

Pretty similar to Müller and Bostrom’s outcomes. In Barrat’s survey, over two thirds of participants believe AGI will be here by 2050 and a little less than half predict AGI within the next 15 years. Also striking is that only 2% of those surveyed don’t think AGI is part of our future.

But AGI isn’t the tripwire, ASI is. So when do the experts think we’ll reach ASI?

Müller and Bostrom also asked the experts how likely they think it is that we’ll reach ASI A) within two years of reaching AGI (i.e. an almost-immediate intelligence explosion), and B) within 30 years. The results:

The median answer put a rapid (2 year) AGI → ASI transition at only a 10% likelihood, but a longer transition of 30 years or less at a 75% likelihood.

We don’t know from this data the length of this transition the median participant would have put at a 50% likelihood, but for ballpark purposes, based on the two answers above, let’s estimate that they’d have said 20 years. So the median opinion—the one right in the center of the world of AI experts—believes the most realistic guess for when we’ll hit the ASI tripwire is [the 2040 prediction for AGI + our estimated prediction of a 20-year transition from AGI to ASI] = 2060.

Of course, all of the above statistics are speculative, and they’re only representative of the center opinion of the AI expert community, but it tells us that a large portion of the people who know the most about this topic would agree that 2060 is a very reasonable estimate for the arrival of potentially world-altering ASI. Only 45 years from now.

Okay now how about the second part of the question above: When we hit the tripwire, which side of the beam will we fall to?

Superintelligence will yield tremendous power—the critical question for us is:

Who or what will be in control of that power, and what will their motivation be?

The answer to this will determine whether ASI is an unbelievably great development, an unfathomably terrible development, or something in between.

Of course, the expert community is again all over the board and in a heated debate about the answer to this question. Müller and Bostrom’s survey asked participants to assign a probability to the possible impacts AGI would have on humanity and found that the mean response was that there was a 52% chance that the outcome will be either good or extremely good and a 31% chance the outcome will be either bad or extremely bad. For a relatively neutral outcome, the mean probability was only 17%.

Let's try to ballpark this. How much is a post-singularity year worth? We said $200k as a rough estimate for a 21st century year. The article said a 52% chance that the outcome is good, and a 31% chance it is bad. Suppose a good year is worth $500k, a bad year costs $500k, and otherwise it is $0. `0.52 * $500k + 0.31 * -$500k = $105k`. Sounds fair enough to go with $100k. Although my mind doesn't want to go there, we may be dealing with *way* higher magnitudes here. Eg. more along the lines of heaven-and-hell levels of good/bad. In which case, if good and bad scale evenly, post-singularity life years become way more valuable in expectation. But if badness scales faster than goodness, those post-singularity life years start to have a negative EV. That is a hard attitude to adopt, though: "Humanity is doomed. I'm destined to die. May as well maximize the amount of fun I have now." Actually, I suppose a lot of people have that attitude, but for a certain type of personality that I sense is common here, it seems like a hard thing to accept.
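A minimal sketch of that expectation, using the survey probabilities and the ±$500k magnitudes assumed above:

```python
# EV of one post-singularity year. The good/bad outcome probabilities are
# the survey means; the +/-$500k magnitudes are this post's assumption,
# with a neutral outcome counting as $0.
p_good, p_bad = 0.52, 0.31
good_year, bad_year = 500_000, -500_000

ev_post_singularity_year = p_good * good_year + p_bad * bad_year
print(round(ev_post_singularity_year))  # → 105000
```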

Anyway, darkness aside, let's just go with $100k as the value of a post-singularity life year. As the article says:

a large portion of the people who know the most about this topic would agree that 2060 is a very reasonable estimate for the arrival of potentially world-altering ASI

This is another place where I could be wrong, but I would think that ASI basically implies that we would solve death. Right? I'm going to assume that. If I'm wrong, tell me in the comments.

The surveys seem pretty damn optimistic that ASI will happen at some point this century, but let's be conservative and say that it only has a 10% chance of happening. That's conservative, right?

How many years do you expect to live post-singularity? I would think it'd be a ton! Death is bad, ASI → we solve death, so an expectation of 100k years sounds totally plausible to me. A 10% chance of 100k years is an expectation of 10k years, and `10k years * $100k/year = $1B` as the value of life. That's 100x the $10M we used earlier, so the cost of $250/year becomes $25,000/year. And that probably crosses the line into "not worth it".
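Putting those conservative singularity assumptions together in code:

```python
# Value of life under the conservative singularity assumptions from the
# text, and the implied annual cost of driving for a careful driver.
p_singularity = 0.10            # assumed chance ASI arrives and death is solved
years_if_singularity = 100_000  # assumed post-singularity life expectancy
value_per_year = 100_000        # $ value of a post-singularity year

value_of_life = round(p_singularity * years_if_singularity * value_per_year)
print(value_of_life)  # → 1000000000, i.e. the $1B figure

# Careful driver: 1/4 of the baseline 0.0001118 annual death risk.
annual_driving_cost = round(0.0001118 / 4 * value_of_life)
print(annual_driving_cost)  # → 27950, which the text ballparks as $25,000
```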

However, there are a lot of places where my assumptions could be off by orders of magnitude. Maybe life expectancy post-singularity is only 1k years instead of 10k. That would bring the cost of driving way back down to "worth it" levels. On the other hand, I do feel like I've been pretty conservative in my assumptions, and it is plausible that there's something like a 50% chance of me living to ASI, and that life expectancy given ASI is something like 1M years. In that case, the value of life is something like `50% chance * 1M years * $100k/year = $50B`, and thus the cost of driving for a given year is `0.0001118 * $50B = $5,590,000`. It sounds crazy to say that driving costs north of $5M a year in expectation, but these are crazy big/small numbers, and technological growth is exponential. Humans are known to have terrible intuitions for both of those things, so perhaps it isn't worth putting too much weight on the commonsensical idea that $5M a year is ludicrous.
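Finally, the whole chain as a small sensitivity check; the function and both scenarios just restate the assumptions made earlier in the post:

```python
# Sensitivity of the annual driving cost to the singularity assumptions.
def annual_driving_cost(p_singularity, years, value_per_year,
                        p_death=0.0001118):
    """EV cost of one year of driving at baseline US fatality risk
    (before any safe-driver discount)."""
    value_of_life = p_singularity * years * value_per_year
    return round(p_death * value_of_life)

# Conservative scenario: 10% chance of ASI, 100k post-singularity years.
print(annual_driving_cost(0.10, 100_000, 100_000))    # → 111800
# Optimistic scenario: 50% chance of ASI, 1M post-singularity years.
print(annual_driving_cost(0.50, 1_000_000, 100_000))  # → 5590000
```

The conclusion swings by orders of magnitude with the inputs, which is the point: the decision is dominated by the singularity assumptions, not by the driving statistics.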