As of 2022, humans have a life expectancy of ~80 years and a hard limit of ~120. Most rationalists I know agree that dying is a bad thing and that, at minimum, we should have the option to live considerably longer and free of the "diseases of old age", if not indefinitely. It seems to me that this is exactly the kind of problem where rationality skills like "taking things seriously" and "seeing with fresh eyes", and awareness of time discounting and status quo bias, should help one to notice something is very very wrong and take action. Yet - with the exception of cryonics[1] and a few occasional posts on LW - this topic is largely ignored in the rationality community, with relatively few people doing the available interventions on the personal level, and almost nobody actively working on solving the problem for everyone.

I am genuinely confused: why is this happening? How is it possible that so many people who are equipped with the epistemological tools to understand that they and everyone they love are going to die, understand it's totally horrible, understand this problem is solvable in principle, can keep on doing nothing about it?


There are a number of potential answers to this question that I can think of, but none of them is satisfying, and I'm not posting them, to avoid priming.

[ETA: to be clear, I have spent a reasonable amount of time and effort making sure that the premise of the question - that rationalists are insufficiently concerned about mortality - is indeed the case, and my answer is an unequivocal "yes". In case you have evidence to the contrary, please feel free to post it as an answer.]

[1] It's an interesting question exactly how likely cryonics is to work, and I'm planning to publish my analysis of this at some point. But unless you assign a ridiculously optimistic probability to it working, the problem largely remains: even an 80% probability of success would mean your chances are worse than in Russian roulette! Besides, my impression is that only a minority of rationalists are signed up anyway.
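(A quick check of that comparison, as a minimal sketch:)

```python
# Even an optimistic 80%-success cryonics scenario implies a higher chance of
# death than one round of Russian roulette (1 loaded chamber in 6).
p_cryonics_works = 0.80                   # deliberately optimistic assumption
p_death_cryonics = 1 - p_cryonics_works   # 0.20
p_death_roulette = 1 / 6                  # ~0.167
print(p_death_cryonics > p_death_roulette)  # True
```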


10 Answers

Vladimir_Nesov

Feb 10, 2022


AGI is likely closer than an anti-aging intervention that adds decades, discovered without AGI. I used to believe that AGI results in either death or an approximately immediate, perfect cure for aging and other forms of mortality (depending on how AI alignment and judgement of morality work out), and that this is a reason to mostly ignore anti-aging. Recently I began to see less powerful/general (by design) AGI as a plausible way of controlling AI risk - one that isn't easy to safely make more generally useful. If that works out, an immediate cure for aging doesn't follow, even after AI risk is no longer imminent. This makes current anti-aging research not as pointless. (In one partial failure mode, with an anti-goodharting non-corrigible AI, straightforward AI development might even become permanently impossible, thwarted by the AGI that controls AI risk but can't be disabled. In that case any anti-aging must be developed "manually".)

Matthew Barnett

Feb 10, 2022


I can only speak for my personal experience, but I think there's a significant minority of rationalists who care about preventing their own personal deaths a lot. I know because I've met them during my own process of figuring out what to do about death.

Personally, I video record most of my life, plan to get cryopreserved (ideally via the best methods available), am interested in and currently trying to pursue evidence-based strategies to slow aging, and try to avoid excess exposure to risk of injury. There's not a lot more I can personally do to stop my own death besides these things, so oftentimes, I tend to just stop talking about it.

My impression is that this is more than most people do! [Full disclosure: I myself am signed up with CI and following what I believe is the right pattern of diet and exercise. I'll probably start some of the highest benefit/risk-ratio compounds (read: rapamycin and/or NAD+ stuff) in a year or two, when I'm past 30.]

But also, how do you feel about donating to the relevant orgs (e.g. SENS), working in a related or adjacent area, and advocating for this cause?

Matthew Barnett, 2y
I think of myself as having two parts to my utility function (really just, what I care about): a selfish part, and a non-selfish part.

As for the selfish component, I'm happy to pursue personal strategies to delay my aging and death. Indeed, I feel that my personal life extension strategies are extreme even by the standards of conventional life extension enthusiasts. I don't see a compelling selfish reason to donate to or work for life extension organizations. Even if I were a highly skilled biologist (and I'm not), the number of hours or days I could realistically hope to hasten the end of aging would be a low number. In that amount of time, I could have pursued better strategies aimed at helping myself alone. While delaying death by one day gives a combined sum of millions of years of extra life across everyone, to me it's just one day. That's hardly worth switching careers over.

On the other hand, the non-selfish part of my utility function prefers to do what's best for the world generally, and I don't find life extension research particularly competitive across this axis. In the past, I've contemplated volunteering to help life extension advocacy, but it was more of a personal emotional thing than what I thought would actually be effective.

I have considered whether life extension could turn out to be extremely important for non-selfish reasons in this post. Ultimately, I do not find the arguments very compelling. Not only am I skeptical that life extension is coming any time soon, but I suspect that by the time it arrives, something even more important (such as AGI) will be here already.
SurvivalBias, 2y
I personally believe exactly the right kind of advocacy may be extremely effective, but that's really a story for a post. Otherwise, yeah, AGI is probably higher impact for those who can and want to work there. However, in my observation the majority of rationalists do not in fact work on AGI, and IMO life extension and adjacent areas have a much wider range of opportunities and so could be a good fit for many of those people.

Ben Pace

Feb 10, 2022


I'm pretty concerned; I'm trying to prevent the AI catastrophe that will likely kill me.

Also, my rationalist housemate Daniel Filan often reminds me of his basic belief that doing 30 minutes of exercise a few times a week has an expected return of something like 10 hours of life, or whatever. (I forget the details.) It definitely comes up a bunch.

Also, right now I'm pretty excited about figuring out the micromorts I spend on different things, and getting used to calculating things with them (including diet and exercise, as well as things in the reference class of walking through shady places at night or driving without a seatbelt). Now that I've gotten lots of practice with microcovid estimates, I can do this sort of thing much more easily.
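For illustration, here's a minimal sketch of the kind of bookkeeping this implies; the activity risk values below are made-up placeholders, not researched estimates:

```python
# Micromort bookkeeping: 1 micromort = a one-in-a-million chance of death.
# Convert a one-off risk into expected minutes of remaining life lost,
# assuming ~40 years of remaining lifespan (an illustrative assumption).
REMAINING_MINUTES = 40 * 365.25 * 24 * 60  # ~21 million minutes

def expected_minutes_lost(micromorts: float) -> float:
    """Expected life lost, treating the risk as a chance of immediate death."""
    return micromorts * 1e-6 * REMAINING_MINUTES

# Placeholder values, NOT researched estimates:
for activity, mm in {
    "drive a long trip without a seatbelt": 20.0,
    "walk through a shady area at night": 0.5,
}.items():
    print(f"{activity}: ~{expected_minutes_lost(mm):.0f} expected minutes lost")
```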

>I'm pretty concerned; I'm trying to prevent the AI catastrophe that will likely kill me.

That was one of my top guesses, and I'm definitely not implying that longevity is a higher or equal priority to AI alignment - it's not. I'm just saying that, after AI alignment and maybe rationality itself, not dying [even if AGI doesn't come] seems like a pretty darn big deal to me. Is your position that AGI in our lifetime is so inevitable that other possibilities are irrelevant? Or that other possibilities are non-trivial (say, above 10%) but since AGI i...

[anonymous], 2y
I'll go out there and say it: longevity is a higher priority than AI alignment. I think this community got nerd sniped on AI alignment and it is simply against the social norms here to prioritize differently.
meedstrom, 2y
There's no need for rhetorical devices like "I'll go out there and say it". Please. Also, the force of norms looks weak to me in this place - it's a herd of cats - so that explanation makes little sense. Also, it's fine to state your understanding of a topic without describing everyone else as "nerd sniped"; no one will needle you for your conclusion. Also, there's little point to commenting if you only state your conclusion - the conclusion is uninteresting; we're looking to learn from the thought process behind it.
[anonymous], 2y
It's not a rhetorical device though? The OP wrote as if that were an open-and-shut case that needed no argumentation at all. I simply wrote that I am taking the other side.
Ben Pace, 2y
I mean, the field of AI has been around ~70 years, and it looks to me like we're more than halfway through the route to AGI. So even if we got full life extension today, it wouldn't have that much impact for that many people.
SurvivalBias, 2y
Well, about 55 million people die per year, most of them from aging, so solving it for everyone today rather than, say, 50-60 years later with AGI would save 2-3 billion potentially indefinitely long lives. That definitely counts as "much impact for many people" in my book. But also, what's the probability that we will indeed get AGI in the next 50 or 70 years? I mean, I know it's a hotly debated topic, so I'm asking for your personal best estimate.
Ben Pace, 2y
Sure, it's a lot compared to most activities, but it's not a lot compared to the total number of people who could live in the future lightcone. You have to be clear what you're comparing to when you say something is large. My estimate? Oh, I dunno. The future is hard to predict, and crazy shit happens by default. But currently I'd be more surprised if it didn't happen than if it did. So more than 50% for 50 years, and also more than 50% for 30 years. My guess is there are a lot of very scalable and valuable products to be made with ML, which will put all the smart people and smart money in the world into improving ML, which is a very powerful force. Shrug. I'd have to think more to try to pin it down further.

I'm pretty concerned; I'm trying to prevent the AI catastrophe that will likely kill me.

On a personal level, it seems quite unlikely that any individual can meaningfully alter the risk of an existential catastrophe enough for their own efforts to be justified selfishly. Put another way, I think it makes sense to focus on preventing existential risks, but not as a means of preventing one's own death.

One optimistic explanation is that rationalists care more about AI risk because it's an altruistic pursuit. That's one possible way of answering OP's question. 

TurnTrout, 2y
1. I decide both my actions and, to varying extents, the actions of people like me. On a gut level, I also refuse to live in a world where people like me do nothing about AI risk for your reason of low expected individual impact, because that feels cowardly. (TBC, this is a rebuke of that reason, not of you.)
2. A high enough P(death from AI) screens off the benefits of many other interventions. If I thought myself 90% likely to die to AI before age 50, then I wouldn't care much about living to 90 instead of 80.
Vaniver, 2y
I think this depends a lot on 1) time discounting, and 2) whether you think there will be anything like impact certificates / rewards for helping in the future. That is, it may be the case that increasing the chance of a positive singularity by 1/million is worth more than your natural lifespan in EV terms (while, of course, mattering very little for most discount rates). And if you think the existence of Earth is currently worth something like 2 quadrillion dollars (annual world GDP * 20), and you can increase the probability of survival by a millionth, and you'll be compensated something like a thousandth of the value you provided, then you're looking at $2M in present value.
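A quick check of that arithmetic, as a minimal sketch (the ~$100 trillion/year world GDP figure is my assumed input; the comment only specifies "annual world GDP * 20"):

```python
# Reproducing the back-of-envelope numbers above.
world_gdp_annual = 100e12             # ~$100 trillion/year (assumed)
earth_value = world_gdp_annual * 20   # ~$2 quadrillion
delta_p_survival = 1e-6               # you raise P(survival) by a millionth
compensation_rate = 1e-3              # you capture ~1/1000 of value provided
present_value = earth_value * delta_p_survival * compensation_rate
print(f"${present_value:,.0f}")       # $2,000,000
```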
[comment deleted], 2y

Dustin

Feb 10, 2022


Speculation here, but if we grant your premise, then the answer to your question might be something like:

Rationalists largely come from engineering backgrounds.  Rightly or wrongly, AI is mostly framed in an engineering context and mortality is mostly framed in the context of biologists and medical doctors.


That being said, I think it's really important to suss out whether the premise of your question is correct. If it is, then given the signals we are getting about AI risk organizations having almost too much cash, we should be directing some portion of our funding to organizations like SENS instead of AI risk.

Vaniver

Feb 12, 2022


I care about longevity; I donate to longevity research institutions. I also try to live healthily.

That said, I'm also in my early 30s. I just took an actuarial table and my rough probability distribution of when I expect transformative AI to be possible, calculated my probability of dying vs. my probability of seeing transformative AI, and ended up with 23% and 77%. So, like, even if I'm totally selfish, on my beliefs it seems roughly three times more important to do something about the Singularity than about all-cause mortality.

This is less true the older someone is, of course.
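For what it's worth, here is a minimal sketch of how such a race-between-death-and-TAI calculation could be structured. Both sampling functions below are placeholder distributions, not the actual actuarial table or timeline used above:

```python
# Monte Carlo estimate of P(die of natural causes before transformative AI).
import random

def sample_death_age(current_age: float) -> float:
    # Placeholder: a real version would sample from an actuarial life table,
    # conditioned on having survived to current_age.
    return max(current_age, random.gauss(82, 12))

def sample_tai_year() -> float:
    # Placeholder: a real version would sample from your own TAI timeline
    # distribution (here, lognormal years-from-now with median ~20 years).
    return 2022 + random.lognormvariate(3.0, 0.7)

def p_die_before_tai(current_age: float, birth_year: int, trials: int = 100_000) -> float:
    died_first = sum(
        birth_year + sample_death_age(current_age) < sample_tai_year()
        for _ in range(trials)
    )
    return died_first / trials

p = p_die_before_tai(current_age=32, birth_year=1990)
print(f"P(die before TAI) ~ {p:.0%}; P(see TAI first) ~ {1 - p:.0%}")
```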

Maybe I am misreading this, but when they say "using the mortality rates for 2019", I think they are assuming that there won't be increases in life expectancy. Like, we're currently observing that people born in the 1930s live ~80 years, and so we assume that people born in, e.g., the 1980s will also live ~80 years. But that seems like a very bad assumption to me.

ChristianKl

Feb 11, 2022


There are plenty of people whose AGI timelines suggest that either AGI will kill them before they would die of natural causes, or AGI will be powerful enough to prevent their natural deaths by that point.

Even without direct access to AGI, new machine learning advances in protein folding and protein design might be more central to longevity than the research that's billed as longevity research.

That said, I do agree that anti-aging is an important topic. One problem is that the people who propose to fight it often seem to be searching for the key under the streetlight.

The SENS paradigm seems insular to me. I don't have a charitable explanation of why fascia getting tenser as people age isn't on their list of aging damage. 

There are plenty of people whose AGI timelines suggest that either AGI will kill them before they would die of natural causes, or AGI will be powerful enough to prevent their natural deaths by that point.

True, but there are also plenty of people who think otherwise - other comments here being an example.

I'm not a biologist, but I'm reasonably sure that fascia getting tenser would be downstream of the hallmarks of aging, if that's what you're talking about. It's kinda like asking why "going to a boardgame party in San Francisco" isn't on th...

ChristianKl, 2y
I don't think there's a good reason to make that assumption. There are various factors that lead to fascia getting tense; substances like fibrin keep the fascia contracted and don't get automatically cleared. SENS is certainly not the only organization, and there are plenty of people who don't believe that aging is as easy as just curing the hallmarks.
SurvivalBias, 2y
I would be very surprised if inflammation or loss of proteostasis did not have any effect on fascia, if only because they have negative effects on ~everything. But more importantly, I don't think any significant number of people are dying from fascia stiffness? That's one of the main ideas behind the hallmarks of aging: you don't have to solve the entire problem in its every minuscule aspect at once. If you could just forestall all these hallmarks, or even just some of them, you could probably increase lifespan and healthspan significantly, thus buying more time to fix other problems (or to develop completely new approaches like mind uploading or regenerative medicine or whatever else).
[anonymous], 2y

Attributing magical capabilities to AGI seems to be a common cognitive failure mode :( is there not some way we can encourage people to be more grounded in their expectations?

Matthew Barnett, 2y
AI need not be magical for its development to have a profound effect on the progress of science and technology. It is worth understanding the mechanisms that some people have proposed. Here's a blog post series that explains one potential route.
[anonymous], 2y
Those posts are a prime example of the magical thinking I'm talking about: the assumption that scaling real-world processes is like Factorio. That kind of seamless scaling is only approached in the highly controlled world of software, and even then any software engineer worth their salt can tell you just how unreliable immature automation can be. The real world is extremely messy, stochastic, and disordered, and doesn't map well onto the type of problems that recent advances in AI have been good at solving. We may soon get to the point where an AGI is able to construct a monumental plan for developing nanotech capabilities… only for that plan to not survive its first contact with the real world.

At best we can hope for AI assistants helping to offload certain portions of the R&D effort, like we are currently seeing with AlphaFold. However, the problem domains where AI can be effective in finding such useful models are limited. And while I can think of some other areas that would benefit from the same AlphaFold treatment (better molecular dynamics codes, for example), it's not the kind of stuff that would lead to revolutionary super-exponential advances.

The singletarian thinking which pervades the AI x-risk crowd just isn't reflective of practical reality. AI development increases by constant factors the rate at which technology is advanced. That is good and valuable. But right now the rate at which molecular nanotechnology or longevity are being advanced is effectively nil, for reasons that have nothing to do with the technical capabilities AI would advance. So there is a strong argument to be made that attacking these problems head-on - like how Elon Musk attacked electric cars and space launch capability - would have more of an impact than the meta-level work on AI.
ChristianKl, 2y
The recent advances in AI have not produced AGIs. AlphaFold is essentially a tool; it's not a replacement for current scientists in the way that an AGI much smarter than current scientists would be.
[anonymous], 2y
You misunderstood the intent of that statement. I was saying that AGI wouldn't be smarter or more capable than current scientists at solving these particular problems for a very long time, even if architecturally it is able to attack the same problems more efficiently. It's not a constrained enough problem that a computer running in a box can replace the role of humans, at least not until it has human-level effectors to allow it to embody itself in the real world. AGI wouldn't be categorically different from present-day AI. It's just an AI for writing AI (hence, "general"), but the AIs it writes are still constrained in much the same way as the AIs we write today. If there is some reason to believe this wouldn't be the case, it is so far unstated.

Dagon

Feb 11, 2022


I think many MANY smart people realize something is very wrong.  There's been a LOT written about it, including much of the early LessWrong content.  

The way I see it, when we're talking about non-me humans, the vast majority of them will be replaced with people I probably like roughly the same amount, so my preference for longevity in general is mild.  There is a crisis coming in my own death, but I don't see much to do about it.

to notice something is very very wrong and take action.
...
understand this problem is solvable in principle

I do NOT think that the "and take action" part is trivial, nor that the problem is solvable in principle, certainly not with much likelihood of impacting current rationalists' lives.

In terms of "what can I do to increase the area under the curve of probability-weighted happiness and longevity", working on nearer-term issues has much higher expected value, IMO. 

The way I see it, when we're talking about non-me humans, the vast majority of them will be replaced with people I probably like roughly the same amount, so my preference for longevity in general is mild.


Am I reading this incorrectly or are you saying that you don't care about your friends and loved ones dying?

There are at least two currently ongoing clinical trials with the explicit goal of slowing aging in humans (TAME and PEARL); that's just the most salient example. At some point I'll definitely make a post with a detailed answer to the question of ...

Dagon, 2y
I care about my friends and loved ones. I even care about strangers. I'm a fan of life extension research. But I'm not dedicating much of my resources to it - in the big picture, one human's about as good as another, and in the small picture I don't expect to have much chance of success, and I don't want to reduce my enjoyment of my remaining time on a crazy longshot.

I have to say that neither of those trials looks particularly promising on the "ending aging" front. They may slightly delay some problems (and that's GREAT - living longer is, in fact, better), but that's not anywhere near solving it in principle. Mind uploading might be a solution eventually, but I think it's more likely that bio-brains will continue dying and the immortal will be digital from birth.
SurvivalBias, 2y
Of course they are not - that's not the point. The point is that they can add more time for us to discover more cures, on top of the few decades most rationalists already have, considering the age distribution. During that time, new approaches will likely be discovered, hopefully adding even more time, until we get to mind uploading, or nanobots constantly repairing the body, or some other complete solution. The concept is called longevity escape velocity.

Why would you think that?

And another question. Imagine you've found yourself with an incurable disease and 3 years to live. Moreover, it's infectious and it has infected everyone you love. Would you try experimental cures and encourage them to try as well, or would you just give up so as not to reduce your enjoyment of the remaining time?
Dagon, 2y
This is an obvious and common enough analogy that you don't need to frame it as a thought experiment. I understand that I have an incurable disease. It's longer than 3 years, I hope, but not by much more than an order of magnitude, and certainly nowhere near two. I'm not even doing everything I could in terms of lifestyle, exercise, and nutrition to extend it, let alone "experimental" cures. It's not infectious, fortunately - everyone already has it.

Friends I've lost to disease, accident, or suicide ALSO didn't universally commit to "experimental cures" - in all cases I know of, the cost of the long-shots (the non-monetary cost of side effects more than pure money, but some of that too) was higher than their perceived success rate.

As Pascal's Wager options go, giving up significant resources or happiness over the next decade for a VERY TINY chance of living longer seems to be among the less compelling formulations.
SurvivalBias, 2y
Equating high-risk/high-reward strategies with Pascal's Wager is an all-too-common failure mode, and putting numbers on your estimates helps guard against it. How much is VERY TINY, how much do you think the best available options really cost, and how much would you be willing to pay (assuming you have that kind of money) for a 50% chance of living to 300 years? To be clear, I'm not so much trying to convince you personally as trying to get a generally better sense of the inferential distances involved.
Dagon, 2y
I'd actually like to be convinced, but I suspect our priors differ by enough that it's unlikely.

I currently assign less than a 0.05% chance that I'll live another 50 years (which would put me over 100), and three orders of magnitude less that I'll live to 300. These are small enough that I don't have as much precision in my beliefs as that implies, of course.

Conditional on significant lifestyle changes, I can probably raise those chances by 10x, from vanishingly unlikely to ... vanishingly unlikely. Conditional on more money than I'm likely to have (and I'm already in the top few percent of humanity), maybe another 3x. I don't believe there are any tradeoffs I can make which would give me a 50% chance to live to 300 years.
SurvivalBias, 2y
That's, like, 99.95% certainty - one-in-two-thousand odds. You'd have two orders of magnitude higher chances of survival if you were to literally shoot yourself with a literal gun. I'm not sure you can forecast anything at all (about humans or technologies) with this degree of certainty decades into the future - definitely not that every single one of dozens of attempts in a technology you're not an expert in will fail, and that every single one of hundreds of attempts in another technology you're not an expert in (building aligned AGI) will fail. I don't believe it either; it's a thought experiment. I assumed that would be obvious, since it's a very common technique for estimating how much one should value low probabilities.
Dagon, 2y
I think we've found at least one important crux; I'm going to bow out now. I realize I misspoke earlier - I don't much care if I become convinced, but I very much hope you succeed in keeping me and you and others alive much longer.

Razied

Feb 10, 2022


The anti-aging field is going great as far as I can see: billion-dollar investments are happening regularly, clinical trials are ongoing, and the field as a whole has started to attract the attention it deserves. I think rationalists are not especially worried because they (or rather, I) believe that the problem is already well on its way to being solved. If we don't all die from misaligned AI / nuclear war / biological weapons in the next 20 years, I don't think we'll have to worry about aging too much.

[anonymous], 2y

I wish this were the case. However, those large-scale investments you speak of are mostly being put into things which address the symptoms of growing old, but not the underlying causes. There are very, very few researchers working on permanently ending aging, or at least on full rejuvenation, and they are chronically underfunded.

Matthew Barnett, 2y
I agree that the amount of funding that goes into explicitly anti-aging research is often greatly exaggerated. That said, as you may have heard, Altos Labs recently got started, and rumors indicate that it's being well funded by Jeff Bezos and maybe a few others. My general impression is that anti-aging researchers think this is a big deal. Karl Pfleger has tried to catalog companies that are trying to address aspects of aging, and his list is quite long, possibly a great deal longer than you might expect. Biological research in fields related to aging, especially stem cell research and cancer research, is not underfunded (at least, in my estimation).
[anonymous], 2y
It is a big deal. It is also not as big a deal as work towards full rejuvenation would be. Altos Labs, like Calico and others before it, is attempting to cure diseases of aging. They are not, to my knowledge, attempting to achieve full rejuvenation that would prevent age-related disease by means of eternally maintained youth. It is, in principle, easier to prevent cancer than to cure it. And the strategies you would use for each are different. There aren't many people outside of SENS who are working on the rejuvenation-as-prevention approach.

Thanks for the answer, that wasn't one of my top guesses! Based on your experience, do you think it's widely held in the community?


And I totally see how it kinda makes sense from a distance, because that's what the most vocal figures of the anti-aging community often claim. The problem is that the same was true 20 years ago - see the Methuselah Foundation's "make 90 the new 50 by 2030" - and probably 20 years before that. And, to the best of my understanding, while substantial progress has been made, there haven't been any revolutions comparable with...

Martin Randall

Feb 15, 2022


Mortality is a very old problem, and lots of smart people have spent lots of time thinking about it. Perhaps the best intervention anyone has come up with is harm reduction via acceptance. That's the approach I'm taking personally. Denial is popular, but isn't very rationalist and seems to lead to more overall suffering.

I'm not working on promoting this approach because it's literally thousands of years old, and because that's not a good personal fit. But I support and respect people who do.

Smallpox was also a very old problem, and lots of smart people had spent lots of time thinking about it - until they figured out a way to fix it. In theory, you could make an argument that no viable approaches exist today or in the foreseeable future, and so harm reduction is the best strategy (from a purely selfish standpoint; working on the problem would still help the people of the future in this scenario). However, I don't think that in practice it would be a very strong argument, and in any case you are not making it.

If you're, say, 60+, then yes, anti-agin...

Martin Randall, 2y
Yes, there are a number of interventions available that could delay death by a few years. For example, my copy of "Ageless: The New Science of Getting Older Without Getting Old", which is almost a year old, ends with a short list:

1. Don't smoke
2. Don't eat too much
3. Get some exercise
4. Get seven to eight hours of sleep
5. Get vaccinated and wash your hands
6. Take care of your teeth
7. Wear sunscreen
8. Monitor your heart rate and blood pressure
9. Don't bother with supplements
10. Don't bother with longevity drugs yet
11. Be a woman.

Do these count? When you say "relatively few people [are] doing the available interventions on the personal level", are these the interventions you're talking about?
SurvivalBias, 2y
Yes and no. 1-6 are obviously necessary but not sufficient - there's much more to diet and exercise than "not too much" and "some", respectively. 7 and 8 are kinda minor and of dubious utility except in some narrow circumstances, so whatever. And 9 and 10 are hotly debated, and that's exactly what you'd need rationality for, as well as for figuring out the right pattern of diet and exercise. And I mean right for each individual person, not in general; the same goes for supplements - a 60-year-old should have a much higher tolerance for the potential risks of a longevity treatment than a 25-year-old, since the latter has less to gain and more to lose.

methree

Feb 10, 2022


I'm not sure everyone thinks death is bad. I mean, it's been a "feature" of being human since before there were humans and it has worked quite well so far to have a process of death. Messing with a working system is always a dangerous proposition, so I, personally, wonder if it is wise to remove that feature. Therefore, I do nothing about it (maybe I should be more active in opposition? I don't know).

Dangerous proposition in what sense? Someone may die? Everyone may die? I have, um, not very good news for you...

Martin Randall, 2y
So many answers here. For example: maybe without death, reproduction rates fall off a cliff, society ages, culture shifts from "explore" to "exploit", we never leave Earth, we waste the vast majority of our potential as a species. Later, our sun dies, everyone survives the experience, we realize that we're in a philosophical hypothetical and the thought experiment ends in bathos.
SurvivalBias, 2y
Oh no, what if I and everyone I care about only got to live 5 billion years instead of 80. And all that only to find out it was a half-assed hypothetical.
Martin Randall, 2y
I would prefer to have this conversation without the sarcasm. Maybe I encouraged it with my "half-assed hypothetical". If so, please consider this an attempt to reset the tone.

I read this as a claim that it is impossible for the elimination of death, aging, or mortality to be dangerous because it can only decrease the danger of dying. I replied by pointing out that there are other dangers, such as the danger of astronomical waste. Another danger is suffering risk. The story in Surface Detail points in that direction. If I misread you then you were probably saying something I agree with.

I read this as a statement that you aren't concerned about astronomical waste. That's a completely reasonable response, many philosophers agree with you.
[anonymous], 2y

I can think of 100 billion reasons death is bad. I struggle to come up with a single reason why it is good that my grandma was forced to die. Are you sure you are not subject to motivated reasoning here?

40 comments

I'm skeptical of the premise of the question.

I do not think your stated basis for thinking rationalists are not concerned with mortality is sufficient to grant you that it is true.

I'd be happy to be proven wrong, and existence is generally much easier to prove than non-existence. Can you point to any notable rationality-adjacent organizations focused on longevity research? Bloggers or curated sequences? When was the last rationalist event focused on life extension (not counting cryonics - that was last Sunday)? Any major figures in the community focused on this area?

To be clear, I don't mean "concerned about a war in Ukraine" level, I mean "concerned about AI alignment" level. Since these are the two most likely ways for present-day community members to die, with the exact proportion between them depending on one's age and AI timeline estimates, I would expect a roughly comparable level of attention, and that is very much not what I observe. Am I looking in the wrong places?

Tags on LW: Longevity, Aging

The now-defunct Longevity Research Institute and Daphnia Labs were founded and run by Sarah Constantin. Geroscience magazine was run by someone at a rationalist house. SENS is adjacent.  At least one ACX grant went to support a longevity researcher. I also know of private projects that have never been announced publicly. 

It is not AI-level attention, but it is much more than is given to Ukraine. 

I agree, Ukraine was an exaggeration. I checked the tags and grants before asking the question, and I'm well aware of SENS, but I never thought of it as adjacent or heard of it being so. Is it? I also didn't know of the three defunct institutions, so I should raise my estimate somewhat.

I'm not arguing that you're wrong; I'm just saying that you seem to have assumed it was true without really setting out to prove it or line up convincing evidence. It just struck me that you seemed to be asking "why" before answering "if".

I'm also not sure that the answers to your questions in this comment are as revealing as they might seem at first glance. For example, more of the low hanging fruit might be picked WRT mortality... not as much left to be revealed. Maybe mortality is mostly about making ourselves do the right thing and akrasia-type stuff, which gets discussed plenty.

It might be that you're right, but if I were you I'd want to determine that first.

I have indeed spent a certain amount of time figuring out whether it's the case, and the answer I came to was "yep, definitely". I've edited the question to make that clearer. I didn't lay out the reasoning behind it, because I assumed anyone arguing in good faith would either accept the premise based on their own experience, or just point to the counterexamples (as Elizabeth, and in a certain stretched sense Ben Pace, did).


>low hanging fruit might be picked WRT mortality

I'm doubtful, but I can certainly see a strong argument for this! However, my point is that, like existential risks, it is a serious enough problem that it's worth focusing on even after the low hanging fruit has been picked.


>Maybe mortality is mostly about making ourselves do the right thing and akrasia type stuff

Hmm, can you elaborate on what you mean here? Are you talking about applying [non-drug] interventions? But the best interventions known today will give you 1-2 decades if you're lucky.

I assumed anyone arguing in good faith would either accept the premise based on their own experience, or just point to the counterexamples


Well, I'm not arguing in bad faith.  In fact, I'm almost not arguing at all!  If your premise is correct, I think it's a very good question to ask! 

To the extent I am arguing it's with the assumption behind the premise.  To me, it does not seem readily apparent that rationalists are less concerned with mortality than they are with AI risk.  At least not so readily apparent that it can just be glossed over.

I'm doubtful, but I can certainly see a strong argument for this!

To be clear, here I'm not actually making the low-hanging fruit argument; I'm just pointing out one of the things that came to mind that make your premise not so readily apparent to me. Another thing I thought about is that hardly anyone outside of the rationalist community is thinking, or has ever thought, about AI risk. Most people probably don't even acknowledge that AI risk is a thing. Mortality is thought about by everyone, forever. It's almost as if mortality-risk concern is a different reference class than AI-risk concern.

I think if I were to summarize my objection to just glossing over the premise of your question, it's that the relative amounts of rationalist activity surrounding mortality and AI risk are, to me, not sufficiently indicative of concern that you can just gloss over the basis for your question. If you are correct, I think it's very important, but it's not obvious to me that you are correct. And if you are correct, I think it's really important to make that argument rather than glossing it over.

I spend maybe 2 minutes per day ensuring my doors are locked and maybe an hour per day picking out clothes, getting dressed, washing my face, doing my hair, etc.  I don't think that means I'm less concerned about the physical security of my home relative to my physical appearance!

Hmm, can you elaborate on what you mean here? Are you talking about applying [non-drug] interventions? But the best interventions known today will give you 1-2 decades if you're lucky.

Yeah, I'm talking about exercise and "eating healthy" and all the stuff that everyone knows you should do but many don't because it's unpleasant and hard.  


Anyway, I also think it's likely that the questions I'd want answered are so adjacent to the question you want answered that a good answer to any of them will largely answer all of them.

Mortality is thought about by everyone, forever.

Technically probably yes, but the specific position of "this is something we can and should do something about right now" is unfortunately nearly as fringe as AI risk: a bunch of vocal advocates with a small following pushing for it, plus some experts in the broader field and some public figures tentatively flirting with it. So to me these are two really very comparable positions: very unconventional, but also very obvious if you reason from first principles and some basic background knowledge. Maybe that's why I sound a bit frustrated or negative - it feels like the people who clearly should be able to reach this conclusion, for some reason, don't. And that's why I'm basically asking this question: to understand why they don't, or what I'm missing, or whatever else is going on.


By the way, can you clarify what's your take on the premise of the question? I'm still not sure whether you think:

  • Rationalists are paying comparatively little attention to mortality and it is justified
  • Rationalists are paying comparatively little attention to mortality and it is not justified
  • Rationalists are paying comparatively a lot of attention to mortality and I'm just not looking in the right places
  • Something else


Yeah, I'm talking about exercise and "eating healthy" and all the stuff that everyone knows you should do but many don't because it's unpleasant and hard.

OK, in that case akrasia etc. debates are very relevant. But even so, not everybody knows. Maybe the facts that you should exercise and watch what you eat are themselves relatively uncontroversial (although I still remember the dark days when EY himself was arguing on Facebook that "calories in / calories out" is bullshit). But exactly what kinds of diet and exercise are optimal for longevity is a hugely controversial topic, and mainly not for lack of data but for lack of interpretation - i.e., something that we could well try to do on LessWrong. And I mean optimal for each individual person, not in general; the same goes for supplements. So it'd be cool to see more posts like this one.

By the way, can you clarify what's your take on the premise of the question?


I lean towards "little attention, and it is not justified", but I'm really just feeling around in the dark here - and thus my bit of frustration at jumping right past the step of determining whether this is actually the case.

I can imagine plausible arguments for each of the options you give (and more) and I'm not entirely convinced by any of them. 

[anonymous], 2y

Are you aware of SENS? There is massive overlap between them and the rationality community here in the Bay Area. They are, however, surprisingly underfunded and receive relatively little attention on sites like this compared with, say, AI alignment. So I see your point.

I'm well aware of it, but this comment section is the first time I've heard there's a non-trivial overlap! Are you saying many active rationalists are SENS supporters?

[anonymous], 2y

It is one of the most common charities donated to by effective altruists here. But what I'm also saying is that many of the people working at SENS have had some level of exposure to the less wrong / rationalist community.

Hmm that's interesting, I need to find those people.

Eternal youth is a tempting goal, and I hate hate hate getting old and eventually dying, probably more than anything, but... there is almost nothing I can do about it personally, and in my estimation the chance of any meaningful progress in the next couple of decades (i.e., reaching anything close to escape velocity) is negligible. Cryonics is a hail-Mary option, and I am not sure it's worth spending a sizable chunk of my savings (or income) on. The evaluation of the situation might be similar for others. So what may look like "not being concerned" is in reality giving up on a hopeless if tempting cause.

[anonymous], 2y

I find this viewpoint at odds with the evidence. People who are really attacking this issue, like the SENS research foundation, seem to think that longevity escape velocity is achievable within our lifetimes.

Robert Freitas, who knows more than anyone else alive about the medical applications of nanotechnology, believes that our limitations are due to tooling, and that if we had atomically precise manufacturing then all diseases of the body (including aging) would be trivial to solve. He and his partner Ralph Merkle believe that APM could be achieved in 10 years' time with proper funding.

Ray Kurzweil, for all his faults, plots some pretty accurate graphs. Those graphs show us achieving the necessary process technology to manipulate matter at the sub-nanometer scale within 20 years, max.

Are you pushing 80 years old? That's the only reason I can imagine you'd think this is beyond your lifetime. Both the SENS and nanotech approaches are constrained by a lack of resources, including people working on the problem. This is an area where you could make a difference, if you put in a lot of effort.

I've briefly looked into SENS, and it comes across as cultish and not very credible. Nanotech would be neat, but getting it working and usable as nanobot swarms in the human body without extreme adverse effects seems achievable, but with a timeline of half a century or so. Kurzweil has not had a great track record in forecasting. I think the best chance of extending the lifespan of someone alive today until the aging kinks are worked out is figuring out hibernation: slowing down metabolism 10-20 times and keeping the body in the fridge. But I don't see anyone working on that, though there is some discussion of it in the context of months-long interplanetary travel.

[anonymous], 2y

Kurzweil is completely inept at making predictions from his graphs; he is usually quite wrong, in a very naive way. For example, one of his core predictions of when we will achieve human-level AI is based on (IIRC) nothing more than when a computer with a number of transistors equal to the number of neurons in the human brain could be bought off the shelf for $1000. As if that line in the sand had anything at all to do with making AGI.

But his exponential chart of transistors/$ is simply raw data, and the extrapolation is a straightforward prediction that has held true. He has another chart on the topic of manipulable feature sizes using various approaches, and that also shows convergence on nanometer resolution in the 2035-2045 timeframe. I trust this in the same way that I trust his charts about Moore's law: it's not a law of nature, but I wouldn't bet against it either.

Cryonics is around 20 bucks a month if you get it through insurance, plus 120 to sign up.

With that out of the way, I think there is a substantial difference between "no LEV in 20 years" and "nothing can be done". For one thing, known interventions - diet, exercise, very likely some chemicals - can most likely increase your life expectancy by 10-30 years, depending on how right you get it, your age, health, and other factors. For another, even if working on the cause, donating to it, or advocating for it won't help you yourself, it can still help many people you know and love, not to mention everyone else. Finally, the whole point of epistemic rationality (arguably) is to work correctly with probabilities. How certain are you that there will be no LEV in 20 years? If there's a 10% chance, isn't it worth giving it a try and increasing it a bit? If you're ~100% certain, where do you get this information?

This seems like a good time to shamelessly plug a post I wrote: How much should we value life?. I'd love to hear anything that people think or have to say about it.